This domain is already in the dns repo (unsigned).
So, this adds it to named.conf and adds it as an alias on the
fedoraproject.org site for now.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
The fedocal cron jobs fail in staging because they try to send to
'localhost' as the SMTP server. We could redirect them to use bastion, but
then people would get a bunch of reminders from both prod and staging and
get confused. Ideally, fedocal would have a way to just print emails
to stdout instead of sending them to an SMTP server, for testing in
staging, but until we have that, just disable the cron job in
staging.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
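The staging-only disable described above could be expressed as a conditional Ansible cron task. This is a hedged sketch: the task name, job path, schedule, and `env` variable are illustrative placeholders, not the repo's actual names.

```yaml
# Sketch only: install the fedocal reminder cron job everywhere
# except staging. Names and paths are hypothetical.
- name: fedocal reminder cron job (skipped in staging)
  cron:
    name: fedocal-reminders
    user: fedocal
    minute: "0"
    hour: "7"
    job: "/usr/bin/fedocal_cron"
  when: env != 'staging'
```

Using `when:` keeps the prod behavior unchanged while simply never creating the job on staging hosts.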
I like alerts. Do you like alerts?
I like getting them so I can fix things.
So, adding myself here to all these apps so I can tell when pods are
crashing or builds are failing or whatever. :)
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Set MAILTO for the particular cron file to the email address
of the Linux system roles community so any output from the log
pruning job going to stderr is reported to them. Send stdout to
/dev/null since it is not important.
Signed-off-by: Jiri Kucera <jkucera@redhat.com>
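The MAILTO change above might look roughly like the following pair of Ansible cron tasks. The cron file name, script path, and address are placeholders, not the real values from the role.

```yaml
# Sketch: set MAILTO in the pruning job's cron file so stderr is
# mailed to the community, and discard stdout. Values are illustrative.
- name: set MAILTO for the log pruning cron file
  cron:
    name: MAILTO
    env: yes
    value: systemroles@example.com
    cron_file: prune-logs
    user: root

- name: prune logs daily, keeping only stderr output
  cron:
    name: prune-logs
    cron_file: prune-logs
    user: root
    special_time: daily
    job: "/usr/local/bin/prune-logs.sh > /dev/null"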
Before we enable any monitoring, we should clean up app owners some so
we do not spam people who aren't around anymore and no longer care about
the app. ;)
If I removed anyone here who is still around and does care, we can
easily add you back in.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
The previous attempt to have staging Bodhi use grouped JSON
critical path data didn't work because we did it in the backend
(i.e. the composer), not the openshift pods where the web UI
and the consumers run.
We need at least the web UI and consumer pods to have the
critpath data, as both those pods may create updates (Bodhi
decides if the update is critical path at creation time). This
attempts to handle that by having a daily openshift cron job
that runs a simple container with the necessary packages in it,
checks out the script, and runs it. It's run on a persistent
storage volume which is also mounted by all the Bodhi pods in
the place where Bodhi will look for the data.
The cron job frequency is temporarily set to once an hour; this
is so it will run soon after initial deployment. Once it has
run once we can cut it back to once a day.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
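The CronJob shape described above might look something like this. Everything here is a hedged sketch: the object names, image, script URL, schedule, and PVC claim name are placeholders, not the actual manifest.

```yaml
# Illustrative OpenShift/Kubernetes CronJob: run a small container
# that fetches the script and writes grouped critpath data onto a
# persistent volume shared with the Bodhi pods. All names are
# hypothetical.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: critpath-update
spec:
  schedule: "0 3 * * *"   # daily; the commit notes it starts hourly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: critpath
              image: quay.io/example/critpath-tools:latest
              command:
                - /bin/sh
                - -c
                - >
                  git clone https://example.com/critpath-scripts /tmp/scripts &&
                  /tmp/scripts/update-critpath.sh /data
              volumeMounts:
                - name: critpath-data
                  mountPath: /data
          volumes:
            - name: critpath-data
              persistentVolumeClaim:
                claimName: critpath-data
```

Mounting the same PVC into the Bodhi web and consumer pods at the path where Bodhi looks for the data is what lets those pods see the refreshed files without a redeploy.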
This doesn't work - it puts the critical path data on the
'backend', which is not where we need it to be. We need that
data in the openshift pods, there's another commit alongside
this one which tries to do that.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
This commit removes the old tasks to try and create a cert/intermediate
bundle file for stunnel in favor of just doing it when we renew/get the
cert. It also fixes stunnel to use the correct bundled cert.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
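Building the bundle at cert renewal time, as described above, could be a task as simple as the following sketch. The file paths are placeholders; the real ones come from the role's cert handling.

```yaml
# Sketch: concatenate the host cert and the intermediate into the
# bundle stunnel expects, run whenever the cert is renewed/obtained.
# Paths are illustrative only.
- name: build stunnel cert bundle after renewal
  shell: >
    cat /etc/pki/tls/certs/example.crt
        /etc/pki/tls/certs/example-intermediate.crt
    > /etc/pki/tls/certs/example.bundle.crt
```

Doing this in the renewal path (e.g. as a handler) keeps the bundle from going stale, which is what the old standalone tasks risked.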
This switches the Bodhi staging instance to use (and regularly
update) its own grouped critical path data, instead of consuming
the non-grouped data from PDC, which is irregularly updated by
releng. If this works out well, we'll also apply it to prod.
This requires Bodhi 7 or higher.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Remove redis from the playbook; it's no longer used. We are using memcached instead.
Start the services automatically after deployment.
Signed-off-by: Michal Konečný <mkonecny@redhat.com>
Get new certs per instructions
Put new certs in ansible_private from letsencrypt
Change the cert name in configs to 2023 to distinguish it from the 2017 one.
Signed-off-by: Stephen Smoogen <ssmoogen@redhat.com>
It may be that having this on some of the proxies is causing problems
because it's trying to ping the old openshift 3.11 cluster and filling
up apache slots with it. We do not need this stuff anymore, so remove
it.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>