Pushing this now since our backups are down and I want to get them going
again asap. It's only affecting backup01.
Add dhcp entries for backup01's mgmt and eth0 interfaces.
Use eth0 instead of eno1.
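For the record, the new entries are just standard ISC dhcpd host stanzas. A
rough sketch of the change as an ansible task (the group name, MACs and IPs
below are placeholders, not backup01's real values; the actual change edits
the dhcpd.conf file carried in this repo directly):

```
- hosts: dhcp_server            # placeholder group for the host running dhcpd
  become: true
  tasks:
    - name: add dhcp entries for backup01 mgmt and eth0
      ansible.builtin.blockinfile:
        path: /etc/dhcp/dhcpd.conf
        marker: "# {mark} backup01"
        block: |
          host backup01-mgmt {
            hardware ethernet 00:00:5e:00:53:01;   # placeholder MAC
            fixed-address 192.0.2.10;              # placeholder IP (mgmt net)
          }
          host backup01 {
            hardware ethernet 00:00:5e:00:53:02;   # placeholder MAC
            fixed-address 198.51.100.10;           # placeholder IP (eth0 net)
          }

    - name: restart dhcpd so the new entries take effect
      ansible.builtin.service:
        name: dhcpd
        state: restarted
```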
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
These staging virthosts have no VMs on them anymore and are going to be
replaced with new hardware. So, remove them from inventory and shut them
down in prep for their removal.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
openqa-x86-worker03 seems to be a bit poorly lately: it quite
often fails jobs in 'hardware blip'-looking ways, even after a
reboot. It's also the equal-worst hardware in the worker host
pool, along with 05. So let's swap 03 and 06 so prod has most of
the best hardware and lab has the poorly box. Also, while doing a
quick hardware survey I noticed 05 is just as underpowered as 03
(it has 2x E5-2680v3, 24 physical cores in total; all the other
hosts aside from those two have 2x 16-core CPUs), so this cuts
its worker count to the same as 03 (and makes the comment more
accurate for both). Added comments to the inventory with the
CPU info for each box for future reference.
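The per-host change itself is tiny: something like this in the worker's host
vars (the variable name and count are illustrative here, not the exact values
used in the repo):

```
# 2x Intel E5-2680v3, 24 cores / 48 threads total -- weakest hardware in
# the pool along with worker03, so run fewer worker instances here
openqa_workers: 10        # illustrative name/count, cut to match worker03
```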
Signed-off-by: Adam Williamson <awilliam@redhat.com>
It seems to be struggling with memory exhaustion at the moment, and I
think that's causing tests to run slower.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
We have been having issues with webkitgtk failing to build due to
memory constraints on the existing builders. Also, we are overcommitted
on memory on the kvm lpar. So, to hopefully fix this:
* Remove 3 existing builders.
* Leave the 3 cpus and 17GB of memory from one of them free for the host.
* Make 2 of the other builders double the size in memory, cpu and disk.
* Add these 2 to the heavybuilder channel, and hopefully webkitgtk
  will be happy again.
I'm a bit concerned that this might slow the mass rebuild down, but we
will see. :)
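For the two builders being doubled up, the host vars change is roughly of
this shape (the variable names and numbers are illustrative, not the real
values from the repo):

```
# doubled-up builder destined for the heavybuilder channel
mem_size: 65536       # MiB of guest memory, double the old size
num_cpus: 8           # vCPUs, double the old count
lvm_size: 300g        # guest disk, double the old size
```

Adding the two of them to the heavybuilder channel itself happens on the
koji side, not in these vars.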
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Upgraded proxies and builders to f37. We have a reduced timeframe to get
this done before the holidays, so this time we just upgraded them in
place. Usually we do a full reinstall. We will try to do that next
cycle.
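For reference, the in-place upgrade on each host amounts to roughly the
following (sketched as a play with placeholder group names; in practice it
was driven by hand, and package excludes and reboot ordering are glossed
over):

```
- hosts: "proxies:builders"     # placeholder groups
  become: true
  tasks:
    - name: distro-sync the host to Fedora 37
      ansible.builtin.command: dnf -y --releasever=37 --allowerasing distro-sync

    - name: reboot into the upgraded release
      ansible.builtin.reboot:
        reboot_timeout: 1800
```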
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Our openshift 3.11 cluster(s) served us long and well.
Now that everything has finally moved to the openshift 4 clusters (fas2
was the last holdout), we can retire them. :)
🎉🥂
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Some of the openqa workers are encrypted and some aren't (this is a bit of a
mess that's partly a result of all the redeployments we did around
https://bugzilla.redhat.com/show_bug.cgi?id=2009585 ). We should only run
the nbde_client role on workers which are encrypted. Hopefully this gets that
right.
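The gating is just a conditional on the role. A minimal sketch, assuming a
host var (name illustrative) that marks a worker as LUKS-encrypted:

```
- hosts: openqa_workers                 # placeholder group name
  roles:
    - role: nbde_client
      when: openqa_worker_encrypted | default(false)
```

Workers without the flag set just skip the role, so unencrypted boxes no
longer get clevis/tang bindings pushed at them.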
Signed-off-by: Adam Williamson <awilliam@redhat.com>
These are new mt snow boxes: 80 cpus, 384GB of memory, and 6 1TB nvme drives.
They will be replacing the emags. We cannot replace all the emags yet, since
we still need armv7 support and these don't have that, but after next
year we should be able to start dropping them.
One of these might move over to staging; still pondering.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
These were the first aarch64 machines we had. 1U, no working lights,
prone to needing rebuilds, but they have served long and well.
We salute you!
Signed-off-by: Kevin Fenzi <kevin@scrye.com>