= testdays Infrastructure SOP

https://pagure.io/fedora-qa/testdays-web/[testdays] is an app developed
by Fedora QA to aid with managing testday events for the community.

== Contents

* <<_contact_information>>
* <<_file_locations>>
* <<_configuration>>
* <<_building_for_infra>>
* <<_upgrading>>
* <<_deployment_sanity_test>>
* <<_deployment_watchdog>>
* <<_components_of_deployment>>

== Contact Information

Owner::
Fedora QA Devel
Contact::
#fedora-qa
Persons::
jskladan, smukher
Servers::
* In OpenShift.
Purpose::
Hosting the https://pagure.io/fedora-qa/testdays-web/[testdays] app for QA and the community

== File Locations

`testdays/cli.py` - CLI for the testdays app

`resultsdb/cli.py` - CLI for ResultsDB

== Configuration

Configuration is loaded from the environment in the pod. The default configuration is
set in the playbook: `roles/openshift-apps/testdays/templates/deploymentconfig.yml`. Remember that the configuration needs
to be changed for both pods (testdays and resultsdb).

The possible values that can be set are listed in `testdays/config.py` and
`resultsdb/config.py` inside the `openshift_config` function.
Apart from that, secrets, tokens, and API keys are set
in the secrets Ansible repository.

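As a rough illustration of the pattern described above, a minimal sketch of an env-driven `openshift_config` helper is shown below. The variable names (`SECRET_KEY`, `SQLALCHEMY_DATABASE_URI`) and the function body are assumptions for illustration only; the real settings live in the `config.py` files mentioned above.

```python
import os

# Minimal sketch (assumed names, not the app's actual settings) of how an
# env-driven openshift_config() helper can override defaults using values
# injected into the pod environment by deploymentconfig.yml.

class Config:
    # defaults used outside OpenShift
    SECRET_KEY = "dev-secret"
    SQLALCHEMY_DATABASE_URI = "sqlite://"

def openshift_config(config):
    # pull overrides from the environment set in the deployment template
    for key in ("SECRET_KEY", "SQLALCHEMY_DATABASE_URI"):
        if key in os.environ:
            setattr(config, key, os.environ[key])
    return config

# simulate the pod environment
os.environ["SECRET_KEY"] = "value-from-deploymentconfig"
cfg = openshift_config(Config())
print(cfg.SECRET_KEY)  # -> value-from-deploymentconfig
```

Because only environment variables are read, the same image can run in prod and stg with different configuration.
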
== Building for Infra

The application leverages s2i (source-to-image) containers. Both the production
and staging instances of testdays track the `master`
branch of the testdays-web repository, while the resultsdb instance
tracks the `legacy_testdays` branch on both prod and stg.
Builds do not happen automatically; they need
to be triggered manually from the OpenShift web console.

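For orientation, the branch-tracking setup above corresponds roughly to a BuildConfig fragment like the following. This is an illustrative sketch only; the real BuildConfig is generated by the Ansible role, and the names used here are assumptions.

```yaml
# Illustrative sketch, not the deployed BuildConfig.
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: testdays
spec:
  source:
    type: Git
    git:
      uri: https://pagure.io/fedora-qa/testdays-web.git
      ref: master            # the resultsdb build tracks legacy_testdays instead
  strategy:
    type: Source             # s2i build
  triggers: []               # no automatic triggers: builds are started manually
```
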
== Upgrading

Testdays is currently configured through Ansible, and all
configuration changes need to be made through Ansible.

Pod initialization is set up so that all database upgrades
happen automatically on startup. That means extra care is needed,
and all deployments that make database changes need to happen on stg first.

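The upgrade-on-startup pattern can be sketched as a small CLI whose `upgrade_db` command applies any pending migrations before the web process starts. This is a hypothetical stand-in to show the shape of the flow; the migration names and function bodies are invented, not the app's real code.

```python
import argparse

# Hypothetical sketch of the startup pattern described above: schema
# migrations run as a CLI step before the server starts, so every
# (re)deployment upgrades the database automatically.

APPLIED = []  # stand-in for migration state normally kept in the database
MIGRATIONS = ["0001_initial", "0002_add_testday_table"]  # invented names

def upgrade_db():
    # apply any migrations not applied yet, in order
    for migration in MIGRATIONS:
        if migration not in APPLIED:
            APPLIED.append(migration)
    return APPLIED

def main(argv=None):
    parser = argparse.ArgumentParser(prog="cli.py")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("upgrade_db", help="upgrade the database schema")
    args = parser.parse_args(argv)
    if args.command == "upgrade_db":
        return upgrade_db()

result = main(["upgrade_db"])
print(result)  # -> ['0001_initial', '0002_add_testday_table']
```

Because the upgrade is unconditional on startup, a schema-changing deployment must be verified on stg before it reaches prod.
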
== Deployment sanity test

The deployment is configured to perform automatic sanity testing.
The first phase runs `cli.py upgrade_db`, and the second
phase consists of the cluster trying to get an HTTP response
from the container on port `8080` of the `testdays` pod.

If either of these fails, the cluster automatically reverts
to the previous build, and such a failure can be seen on the `Events` tab
in the DeploymentConfig details.

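The two phases above map roughly onto a DeploymentConfig pre-deployment hook plus a readiness probe, as in the sketch below. This fragment is illustrative; the actual values live in the Ansible template, and the paths and policies shown are assumptions.

```yaml
# Illustrative sketch, not the deployed DeploymentConfig.
spec:
  strategy:
    rollingParams:
      pre:                                  # phase one: run the DB upgrade
        failurePolicy: Abort                # a failed hook aborts the rollout
        execNewPod:
          containerName: testdays
          command: ["python", "cli.py", "upgrade_db"]
  template:
    spec:
      containers:
        - name: testdays
          readinessProbe:                   # phase two: HTTP check on port 8080
            httpGet:
              path: /
              port: 8080
```
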
== Deployment WatchDog

The deployment is configured to perform automatic liveness testing.
The first phase runs `cli.py upgrade_db`, and the second
phase consists of the cluster trying to get an HTTP response
from the container on port `8080` of the `testdays` and `resultsdb` pods.

If either of these fails, the cluster automatically reverts
to the previous build, and such a failure can be seen on the `Events` tab
in the DeploymentConfig details.

Apart from that, the cluster regularly polls the `testdays` and `resultsdb` pods
for liveness. If a poll fails or times out, the pod is restarted.
Such an event can be seen in the `Events` tab of the DeploymentConfig.

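The regular liveness polling described above corresponds to a liveness probe on each container, along the lines of the sketch below. The fragment is illustrative only; the real timings are set in the Ansible template, and the numbers here are assumptions.

```yaml
# Illustrative sketch; real values are set in the Ansible template.
livenessProbe:
  httpGet:
    path: /
    port: 8080
  periodSeconds: 60        # the cluster polls the pod regularly
  timeoutSeconds: 5        # a timed-out poll counts as a failure
  failureThreshold: 3      # after repeated failures the pod is restarted
```
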
== Components of Deployment

=== Testdays

The base testdays app, which provides both the backend and the frontend
inside a single deployment.

=== ResultsDB

A fork of the upstream ResultsDB with OpenShift changes
applied on top, while not introducing any of the other changes that
are in the upstream branch. Available on https://pagure.io/taskotron/resultsdb/tree/legacy_testdays[Pagure].