= Blockerbugs Infrastructure SOP

https://pagure.io/fedora-qa/blockerbugs[Blockerbugs] is an app developed
by Fedora QA to aid in tracking items related to release blocking and
freeze exception bugs in branched Fedora releases.

== Contents

* <<_contact_information>>
* <<_file_locations>>
* <<_configuration>>
* <<_building_for_infra>>
* <<_upgrading>>
* <<_deployment_watchdog>>
* <<_periodic_sync>>

== Contact Information

Owner::
Fedora QA Devel
Contact::
#fedora-qa
Persons::
jskladan, kparal
Servers::
* In OpenShift.
Purpose::
Hosting the https://pagure.io/fedora-qa/blockerbugs[blocker bug tracking application] for QA

== File Locations

`blockerbugs/cli.py` - CLI for the app
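
If needed, the CLI can be invoked inside a running pod. This is a sketch,
assuming `oc rsh` access to the deployment and that the script prints its
usage with `--help`:

[source,bash]
----
# Open a shell in a running blockerbugs pod
# (the DeploymentConfig name is an assumption)
oc rsh dc/blockerbugs

# Inside the pod, list the available CLI subcommands
python blockerbugs/cli.py --help
----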
== Configuration

Configuration is loaded from the environment in the pod. The default configuration
is set in the playbook: `roles/openshift-apps/blockerbugs/templates/deploymentconfig.yml`.

The values that can be set are listed in `blockerbugs/config.py`, inside
the `openshift_config` function. Apart from that, secrets, tokens, and API keys
are set in the Ansible secrets repository.
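
As a quick sketch (the DeploymentConfig name `blockerbugs` is an assumption,
check `oc get dc` for the real one), the effective environment can be
inspected, or a value temporarily overridden, with the `oc` client:

[source,bash]
----
# List the environment variables currently set on the deployment
oc set env dc/blockerbugs --list

# Override a single value for a quick test; the variable name here is
# illustrative, and any permanent change belongs in the Ansible playbook
oc set env dc/blockerbugs SOME_CONFIG_KEY=value
----

Note that ad-hoc overrides like this are reverted the next time the
Ansible playbook is applied, since the playbook owns the template.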
== Building for Infra

The application leverages source-to-image (s2i) containers. The production
instance tracks the `master` branch of the blockerbugs repository; the staging
instance tracks the `develop` branch. Builds don't happen automatically, but
need to be triggered manually from the OpenShift web console.
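
A build can also be triggered from the command line; the BuildConfig name
below is an assumption, use `oc get bc` to find the real one:

[source,bash]
----
# List the BuildConfigs in the current project
oc get bc

# Trigger a new s2i build and follow the build log
oc start-build blockerbugs --follow
----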
== Upgrading

Blockerbugs is currently configured through Ansible, and all
configuration changes need to be made through Ansible.

Pod initialization is set up so that all database upgrades
happen automatically on startup. That means extra care is needed:
any deployment that makes database changes must be rolled out on stg first.
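
A minimal sketch of verifying an upgrade on stg before touching production
(the resource and project names are assumptions):

[source,bash]
----
# Watch the staging rollout complete (or fail and roll back)
oc rollout status dc/blockerbugs -n blockerbugs-stg

# Check the pod log to confirm the automatic database upgrade ran cleanly
oc logs dc/blockerbugs -n blockerbugs-stg | grep -i upgrade
----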
== Deployment WatchDog

The deployment is configured to perform automatic liveness testing.
The first phase runs `cli.py upgrade_db`; the second
phase consists of the cluster trying to get an HTTP response
from the container on port `8080` of the pod.

If either of these fails, the cluster automatically reverts
to the previous build; such a failure can be seen on the `Events` tab
in the DeploymentConfig details.
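
The same information is available from the command line, and a revert can
be forced manually if the automatic one does not fire (the resource name
is an assumption):

[source,bash]
----
# Show recent events, including failed rollouts and probe failures
oc get events --sort-by=.lastTimestamp

# Manually roll back to the previous deployment
oc rollout undo dc/blockerbugs
----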

Apart from that, the cluster regularly polls the pod
for liveness. If the probe fails or times out, the pod is restarted.
Such an event can be seen in the `Events` tab of the DeploymentConfig.
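
Restart counts and the configured probe can also be checked from the
command line:

[source,bash]
----
# The RESTARTS column shows pods that have been restarted by the probe
oc get pods

# Show the configured liveness probe for a given pod
oc describe pod <pod-name> | grep -A2 'Liveness'
----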
== Periodic sync

The Blockerbugs deployment consists of two pods. One serves as both backend and
frontend; the other is spawned every 30 minutes to run `cli.py sync`,
which synchronizes data from Bugzilla and Pagure into the blockerbugs database.
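
Assuming the periodic pod is driven by an OpenShift CronJob (the name below
is illustrative, use `oc get cronjobs` to find the real one), the schedule
can be checked and a one-off sync spawned manually:

[source,bash]
----
# Check the schedule and the last run of the periodic sync
oc get cronjobs

# Spawn a one-off sync run from the CronJob definition
oc create job --from=cronjob/blockerbugs-sync manual-sync-1
----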