The messaging bridge queues have a very specific setup; we can't use
the rabbit/queue role because it binds all queues to both amq.topic
and zmq.topic, and we don't want that for the bridges.
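As a rough sketch of the manual setup this implies (the queue,
exchange, and vhost names here are hypothetical, and this assumes the
community.rabbitmq collection is available):

    # Declare a bridge queue and bind it to amq.topic only, rather
    # than the rabbit/queue role's bind-to-both behavior.
    - community.rabbitmq.rabbitmq_queue:
        name: bridge_outgoing        # hypothetical queue name
        vhost: /pubsub               # hypothetical vhost
        durable: true
        state: present

    - community.rabbitmq.rabbitmq_binding:
        name: amq.topic              # source exchange: amq.topic only
        destination: bridge_outgoing
        destination_type: queue
        routing_key: "#"
        vhost: /pubsub
        state: present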
This reverts commit 649eec104d.
The fedocal cron jobs fail in staging because they try to send to
'localhost' as the SMTP server. We could redirect them to use bastion,
but then people would get a bunch of reminders from both prod and
staging and get confused. Ideally, fedocal would have a way to just
print emails to stdout instead of sending them to an SMTP server, for
testing in staging; but until we have that, just disable the cron job
in staging.
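A minimal sketch of what disabling it could look like (the job name,
schedule, command path, and the env variable are assumptions, not the
actual playbook):

    # Keep the cron entry managed, but comment it out in staging.
    # ansible.builtin.cron's "disabled" flag only takes effect with
    # state=present.
    - name: fedocal reminders cron
      ansible.builtin.cron:
        name: fedocal-reminders
        minute: "0"
        hour: "7"
        job: /usr/bin/fedocal_cron    # hypothetical command
        disabled: "{{ env == 'staging' }}"
        state: present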
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
I like alerts. Do you like alerts?
I like getting them so I can fix things.
So, adding myself here to all these apps so I can tell when pods are
crashing or builds are failing or whatever. :)
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Before we enable any monitoring, we should clean up app owners some,
so we do not spam people who aren't around anymore and no longer care
about the app. ;)
If I removed anyone here who is still around and does care, we can
easily add you back in.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
The previous attempt to have staging Bodhi use grouped JSON
critical path data didn't work because we did it in the backend
(i.e. the composer), not the openshift pods where the web UI
and the consumers run.
We need at least the web UI and consumer pods to have the
critpath data, as both those pods may create updates (Bodhi
decides if the update is critical path at creation time). This
attempts to handle that with a daily openshift cron job that runs a
simple container with the necessary packages in it, checks out the
script, and runs it. The script writes its output to a persistent
storage volume which is also mounted by all the Bodhi pods at the
place where Bodhi will look for the data.
The cron job frequency is temporarily set to once an hour; this
is so it will run soon after initial deployment. Once it has
run once we can cut it back to once a day.
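Roughly, the shape of the job (a sketch only; the names, image, script
location, and mount path are illustrative assumptions, not the exact
deployed config):

    # CronJob that regenerates the critpath data onto a shared PVC;
    # the Bodhi web and consumer pods mount the same claim at the
    # path where Bodhi looks for the data.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: critpath-update           # hypothetical name
    spec:
      schedule: "0 * * * *"           # hourly for now, daily later
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: critpath
                  image: quay.io/fedora/fedora:latest   # stand-in image
                  command: ["/bin/sh", "-c"]
                  args:
                    - >-
                      git clone https://example.com/critpath-scripts.git /tmp/scripts &&
                      /tmp/scripts/generate-critpath.sh /mnt/critpath
                  volumeMounts:
                    - name: critpath-data
                      mountPath: /mnt/critpath    # where Bodhi looks
              volumes:
                - name: critpath-data
                  persistentVolumeClaim:
                    claimName: critpath-pvc       # shared with Bodhi pods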
Signed-off-by: Adam Williamson <awilliam@redhat.com>
It may be that having this on some of the proxies is causing problems
because it's trying to ping the old openshift 3.11 cluster and filling
up apache slots with it. We do not need this stuff anymore, so remove
it.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Every time we run the playbook, a new build kicks off even though the
app was just restarted. So we end up with the app getting started
twice (once when the deploymentconfig gets updated and once when the
build finishes). This could be bad if the app has startup steps that
must not be interrupted.
Let's just trigger builds manually, since we have the permissions to
do that in the web interface and via the CLI.
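Roughly, the BuildConfig would then carry no automatic triggers, so
applying the playbook's objects doesn't fire a build by itself; builds
get started with oc start-build or from the console when we actually
want one. A sketch, with hypothetical names:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: myapp                     # hypothetical app name
    spec:
      triggers: []                    # no ConfigChange/ImageChange triggers
      source:
        git:
          uri: https://example.com/myapp.git   # stand-in repo
      strategy:
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: python-39:latest    # stand-in builder image
      output:
        to:
          kind: ImageStreamTag
          name: myapp:latest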
With the release of 1.6.0 we can now remove the poetry-specific
changes for staging and use the same configuration for staging and
production.
Signed-off-by: Michal Konečný <mkonecny@redhat.com>
We're debugging something right now and need no new builds to be
created, so we don't lose existing images that we need to go back to
for reference/debugging.