The upstream of these two was changed to use `dumb-init` so that
defunct processes get reaped in the container [1]; let's change the
commented-out sleep commands to do the same.
[1] 9d5618eace
linux-system-roles does a fine job configuring networking on our
systems, but without this it only writes the configuration and doesn't
bring things 'live' until a 'nmcli c up eth0'. Set this so the role
restarts connections and the network reflects what we want right after
the playbook runs.
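For context, the manual steps this setting is meant to make unnecessary
look roughly like this (connection name `eth0` as in the example above):

```shell
# Reload profiles written by the role, then (re)activate the connection
# so the running state matches the new configuration.
nmcli connection reload
nmcli connection up eth0
```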
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
This was previously set to one day, but according to
https://github.com/fedora-infra/anitya/issues/952 that's not enough. Let's
extend the session time to one week.
Signed-off-by: Michal Konečný <mkonecny@redhat.com>
Before we enable any monitoring, we should clean up app owners some so
we do not spam people who aren't around anymore and no longer care
about the app. ;)
If I removed anyone here who is still around and does care, we can
easily add you back in.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Bodhi wants critpath components by SRPM name, not binary RPM name.
The script was already being called with `--srpm` when used to
update the PDC data; we just forgot to do the same here.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
The previous attempt to have staging Bodhi use grouped JSON
critical path data didn't work because we did it in the backend
(i.e. the composer), not the openshift pods where the web UI
and the consumers run.
We need at least the web UI and consumer pods to have the
critpath data, as both those pods may create updates (Bodhi
decides if the update is critical path at creation time). This
attempts to handle that by having a daily openshift cron job
that runs a simple container with the necessary packages in it,
checks out the script, and runs it. The output goes to a persistent
storage volume which is also mounted by all the Bodhi pods at
the place where Bodhi will look for the data.
The cron job frequency is temporarily set to once an hour; this
is so it will run soon after initial deployment. Once it has
run once we can cut it back to once a day.
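Cutting it back later is just a schedule change on the CronJob; a
sketch, assuming a hypothetical CronJob name of `critpath-update`:

```shell
# Hourly while bootstrapping ("0 * * * *"); once the data exists,
# drop back to daily ("0 0 * * *"). The CronJob name is a placeholder.
oc patch cronjob critpath-update -p '{"spec":{"schedule":"0 0 * * *"}}'
```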
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This doesn't work - it puts the critical path data on the
'backend', which is not where we need it to be. We need that
data in the openshift pods; there's another commit alongside
this one which tries to do that.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
With the release of Anitya 1.7.0, the special cases for production are
no longer needed. Let's remove them.
Signed-off-by: Michal Konečný <mkonecny@redhat.com>
`content` is "undefined" when using variables, and you can't put a
newline in it, so just move this to a simple template.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Right now it adds the cert without a newline at the end, but it also
expects the cert to be at the top and the intermediate below it. So,
swap them around and try putting a newline between them.
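The intended ordering, as a minimal sketch (file names here are
placeholders, not the actual playbook paths):

```shell
# Placeholder PEM contents standing in for the real files.
printf '%s' '-----LEAF CERT-----' > cert.pem
printf '%s' '-----INTERMEDIATE-----' > intermediate.pem

# Leaf cert first, a newline separator, then the intermediate below it.
{ cat cert.pem; echo; cat intermediate.pem; } > combined.pem
```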
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Justin Flory is the new Fedora Community Architect (formerly FCAIC)
Also, hopefully future-proof these aliases a bit by using the fcaic
alias instead of naming the specific person. Fewer edits to make the
next time the role turns over.
Signed-off-by: Ben Cotton <bcotton@fedoraproject.org>
The proxies seem to be hitting file limits, so try increasing them.
Also, set httpd to restart on failure; this should help mask the
problem if it persists with the higher limit.
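The combination amounts to a systemd drop-in along these lines (the
limit value and drop-in name are illustrative, not the exact ones used):

```shell
# Hypothetical drop-in: raise the open-file limit and auto-restart httpd.
mkdir -p /etc/systemd/system/httpd.service.d
cat > /etc/systemd/system/httpd.service.d/override.conf <<'EOF'
[Service]
LimitNOFILE=65536
Restart=on-failure
EOF
systemctl daemon-reload
```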
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Right now releng-bot has a fas address of 'releng-bot@fedoraproject.org'
which is... confusing. The alias overrides this and sends email to
admin, but it results in a duplicate, causing the cron job to send mail
about the duplicate every time newaliases runs.
So, instead drop the alias here and switch the user in fas to
admin+relengbot. Mail will still go to admin, we won't run into
problems with the address already being in use in fas, and newaliases
should stop complaining.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
It seems to be struggling with memory exhaustion ATM, and I
think it's causing tests to run slower.
Signed-off-by: Adam Williamson <awilliam@redhat.com>