openqa/server: drop createhdds stuff

This has been disabled due to a bug for some time now. Originally
I meant to turn it back on, but now I don't think I do: it makes
more sense to just keep letting the worker hosts handle disk
image building. It makes no sense to have the server build the
images for x86_64 while worker hosts build them for other arches;
if the server can't do it *all*, we may as well be consistent
across arches and always have the worker hosts do it.

This does mean that on initial deployment using these plays there
is a window where the server is up and running but any jobs that
need the base disk images will fail, because the worker play
won't have built them yet. But I think that's not a big problem,
and it was already the case for non-x86_64 arches anyhow.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
Adam Williamson 2020-05-06 14:27:37 -07:00
parent 2b511fa419
commit 32f9933aad

@@ -189,11 +189,6 @@
- name: Remove old openqa_fedora_tools checkout
file: path=/root/openqa_fedora_tools state=absent
- name: Check out createhdds
git:
repo: https://pagure.io/fedora-qa/createhdds.git # noqa 401
dest: /root/createhdds
- name: Create asset directories
file: path={{ item }} state=directory owner=geekotest group=root mode=0755
with_items:
@@ -204,38 +199,6 @@
- /var/lib/openqa/share/factory/repo
- /var/lib/openqa/share/factory/other
#- name: Set up createhdds cron job
# copy: src=createhdds dest=/etc/cron.daily/createhdds owner=root group=root mode=0755
# While #1539330 is a thing, we probably don't want the servers
# crashing every day...
- name: Remove createhdds cron job (#1539330)
file: path=/etc/cron.daily/createhdds state=absent
- name: Check if any hard disk images need (re)building
command: "/root/createhdds/createhdds.py check"
args:
chdir: /var/lib/openqa/share/factory/hdd/fixed
register: diskcheck
failed_when: "1 != 1"
changed_when: "1 != 1"
check_mode: no
- name: Ensure libvirt is running if needed to create images
service: name=libvirtd enabled=yes state=started
when: "diskcheck.rc > 1"
# > 1 is not a typo; check exits with 1 if all images are present but some
# are outdated, and 2 if any images are missing. We only want to handle
# outright *missing* images here in the playbook (to handle the case of
# first deployment). Outdated images are handled by the daily cron run.
# disabled due to #1539330
#- name: Create hard disk images (this may take a long time!)
# command: "/etc/cron.daily/createhdds"
# when: "diskcheck.rc > 1"
# ignore_errors: yes
- name: Copy in meta-data for cloud-init ISO creation
copy: src=meta-data dest=/var/tmp/meta-data owner=root group=root mode=0644
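The removed tasks hinged on the exit-code convention of `createhdds.py check` that the deleted comments describe: 0 when all images are present and current, 1 when all are present but some are outdated, and 2 when any are outright missing; only the missing case (rc > 1) triggered an immediate rebuild, with outdated images left to the daily cron run. A minimal sketch of that dispatch, assuming those exit codes (the constant names here are illustrative, not createhdds' actual code):

```python
# Exit codes of "createhdds.py check", per the playbook comments
# (names are illustrative):
ALL_CURRENT = 0    # every image present and up to date
SOME_OUTDATED = 1  # all present, but some need regenerating
SOME_MISSING = 2   # at least one image does not exist at all


def should_build_now(check_rc: int) -> bool:
    """Mirror the removed play's 'diskcheck.rc > 1' condition.

    Only outright missing images (rc > 1) warranted building
    during the play itself, to cover first deployment; merely
    outdated images were handled by the daily cron job.
    """
    return check_rc > 1
```

This is why the `when: "diskcheck.rc > 1"` in the removed tasks was explicitly "not a typo": a strict `> 1` skips the outdated-only case that `!= 0` would have caught.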