Merge branch 'master' of /git/ansible

commit f3408ad27e
22 changed files with 596 additions and 199 deletions

README.cloud | 246
@@ -1,154 +1,116 @@

== Cloud information ==

The dashboard for the production cloud instance is:
https://fed-cloud09.cloud.fedoraproject.org/dashboard/

Note that this is a self signed cert. You will need to:

wget http://infrastructure.fedoraproject.org/fed-cloud09.cloud.fedoraproject.org.pem
sudo cp fed-cloud09.cloud.fedoraproject.org.pem /etc/pki/ca-trust/source/anchors
sudo /usr/bin/update-ca-trust

You can download credentials via the dashboard (under security and access)

=== Transient instances ===

Transient instances are short term use instances for Fedora
contributors. They can be terminated at any time and shouldn't be
relied on for any production use. If you have an application
or longer term item that should always be around,
please create a persistent playbook instead (see below).

To start up a new transient cloud instance and configure it for basic
server use, run (as root):

sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=somename'

The -i is important - ansible's tools need access to root's ssh agent as well
as the cloud credentials to run the above playbook successfully.

This will set up a new instance, provision it, and email sysadmin-main that
the instance was created, along with its IP address.

You will then be able to login as root if you are in the sysadmin-main group.
(If you are making the instance for another user, see below.)

You can add various extra vars to the above command to change the instance
you've just spun up.
You MUST pass a name to it, ie: -e 'name=somethingdescriptive'
You can optionally override defaults by passing any of the following:

image=imagename (default is centos70_x86_64)
instance_type=some instance type (default is m1.small)
root_auth_users='user1 user2 user3 @group1' (default always includes the sysadmin-main group)

Note: if you run this playbook with the same name= multiple times,
openstack is smart enough to just return the current IP of that instance
and go on. This way you can re-run if you want to reconfigure it without
reprovisioning it.
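
For example, a full invocation overriding some of those defaults might look
like this (the name and user list here are only examples, not real values):

sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=mytestbox instance_type=m1.medium root_auth_users=someuser'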

=== Persistent cloud instances ===

Name        Memory_MB  Disk  VCPUs
m1.tiny           512     0      1
m1.small         2048    20      1
m1.medium        4096    40      2
m1.large         8192    80      4
m1.xlarge       16384   160      8
m1.builder       5120    50      3

Persistent cloud instances are ones that we want to always have up and
configured. These are things like dev instances for various applications,
proof of concept servers for evaluating something, etc. They will be
reprovisioned after a reboot/maint window for the cloud.

Setting up a new persistent cloud host:

1) Select an available floating IP:

source /srv/private/ansible/files/openstack/novarc
nova floating-ip-list

2) Add that IP addr to dns (typically as foo.cloud.fedoraproject.org)

3) Create a persistent storage disk for the instance (if necessary - you
might not need this):

nova volume-create --display-name SOME_NAME SIZE

4) Add the host to the ansible inventory in the persistent-cloud group.
You should use the FQDN for this and not the IP. Names are good.

The device names matter - they start at /dev/vdb and increment. However,
they are not reliable IN the instance. You should find the device, partition
it, format it and label the formatted device, then mount the device by label
or by UUID. Do not count on the device name being the same each time.
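
For example, a volume attached as /dev/vdb might be prepared roughly like
this (device, label and mount point are only examples - verify the device
with lsblk first):

parted -s /dev/vdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 -L app-data /dev/vdb1
mkdir -p /srv/data
echo 'LABEL=app-data /srv/data ext4 defaults 0 0' >> /etc/fstab
mount /srv/data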

5) Set up the host_vars file. It should look something like this::

instance_type: m1.medium
image: "{{ f20_qcow_id }}"
keypair: fedora-admin-20130801
security_group: webserver
zone: nova
tcp_ports: [22, 80, 443]

inventory_tenant: persistent
inventory_instance_name: taiga
hostbase: taiga
public_ip: 209.132.184.50
root_auth_users: ralph maxamillion
description: taiga frontend server

volumes:
  - volume_id: VOLUME_UUID_GOES_HERE
    device: /dev/vdc

cloud_networks:
  # persistent-net
  - net-id: "7c705493-f795-4c3a-91d3-c5825c50abfe"

6) Set up the host playbook: ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
Note: the name of this file doesn't really matter, but it should normally
be the hostname of the host you're setting up.

- name: check/create instance
  hosts: $YOUR_HOSTNAME/IP HERE
  user: root
  gather_facts: False

  vars_files:
   - /srv/web/infra/ansible/vars/global.yml
   - "{{ private }}/vars.yml"

  tasks:
   - include: "{{ tasks }}/persistent_cloud.yml"

- name: provision instance
  hosts: $YOUR_HOSTNAME/IP HERE
  user: root
  gather_facts: True

  vars_files:
   - /srv/web/infra/ansible/vars/global.yml
   - "{{ private }}/vars.yml"
   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml

  tasks:
   - include: "{{ tasks }}/cloud_setup_basic.yml"
   # fill in other actions/includes/etc here

  handlers:
   - include: "{{ handlers }}/restart_services.yml"

Add/commit the above to the git repo and push your changes.

7) Run the playbook:

sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml

You should then be able to log in. You can run that playbook over and over
again safely; it will only set up/create a new instance if the IP is not
up/responding.

=== SECURITY GROUPS ===

FIXME: needs work for new cloud.

- To edit security groups you must either have your own cloud account or
  be a member of sysadmin-main.

- This gives you the credentials to change things in the persistent tenant:
  source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh

- This lists all security groups in that tenant:
  euca-describe-groups | grep GROUP
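To open a port in one of those groups, euca2ools syntax along these lines
should work (the group name and port here are only examples):

euca-authorize -P tcp -p 443 -s 0.0.0.0/0 webserver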
@@ -190,70 +152,16 @@ impacting other instances using that security group.

- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
  and traceroute and friends).

=== TERMINATING INSTANCES ===

For transient instances:

1. source /srv/private/ansible/files/openstack/transient-admin/keystonerc.sh

- OR -

For persistent instances:

1. source /srv/private/ansible/files/openstack/persistent-admin/keystonerc.sh

2. nova list | grep <ip of your instance or name of your instance>

3. nova delete <name of instance or ID of instance>
@@ -10,3 +10,20 @@ nrpe_procs_warn: 250
nrpe_procs_crit: 300

freezes: false

# settings for the beaker db, server and lab controller
beaker_db_host: localhost
beaker_db_name: beaker
beaker_db_user: "{{ stg_beaker_db_user }}"
beaker_db_password: "{{ stg_beaker_db_password }}"
mariadb_root_password: "{{ stg_beaker_mariadb_root_password }}"

beaker_server_url: "https://beaker.stg.qa.fedoraproject.org"
beaker_server_cname: "beaker.stg.fedoraproject.org"
beaker_server_hostname: "beaker-stg01.qa.fedoraproject.org"
beaker_server_admin_user: "{{ stg_beaker_server_admin_user }}"
beaker_server_admin_pass: "{{ stg_beaker_server_admin_pass }}"
beaker_server_email: "sysadmin-qa-members@fedoraproject.org"

beaker_lab_controller_username: "host/beaker01.qa.fedoraproject.org"
beaker_lab_controller_password: "{{ stg_beaker_lab_controller_password }}"
@@ -6,7 +6,7 @@ copr_nova_tenant_id: "undefined_tenant_id"
copr_nova_tenant_name: "copr"
copr_nova_username: "copr"

copr_builder_image_name: "Fedora-Cloud-Base-20141203-21"
copr_builder_flavor_name: "m1.builder"
copr_builder_network_name: "copr-net"
copr_builder_key_name: "buildsys"
@@ -6,7 +6,7 @@ copr_nova_tenant_id: "566a072fb1694950998ad191fee3833b"
copr_nova_tenant_name: "coprdev"
copr_nova_username: "copr"

copr_builder_image_name: "Fedora-Cloud-Base-20141203-21"
copr_builder_flavor_name: "m1.builder"
copr_builder_network_name: "coprdev-net"
copr_builder_key_name: "buildsys"
@@ -47,3 +47,23 @@

  handlers:
   - include: "{{ handlers }}/restart_services.yml"

- name: configure beaker and required services
  hosts: beaker-stg
  user: root
  gather_facts: True

  vars_files:
   - /srv/web/infra/ansible/vars/global.yml
   - "/srv/private/ansible/vars.yml"
   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml

  roles:
   - { role: mariadb_server, tags: ['mariadb'] }
   - { role: beaker/base, tags: ['beakerbase'] }
   - { role: beaker/labcontroller, tags: ['beakerlabcontroller'] }
   - { role: beaker/server, tags: ['beakerserver'] }

  handlers:
   - include: "{{ handlers }}/restart_services.yml"
@@ -60,7 +60,7 @@

- name: install cloud-utils (dnf)
  command: dnf install -y cloud-utils
  when: ansible_distribution_major_version > '21' and ansible_cmdline.ostree is not defined

- include: "{{ tasks }}/cloud_setup_basic.yml"
roles/beaker/base/files/beaker-server-fedora.repo | 11 (new file)

@@ -0,0 +1,11 @@
[beaker-server]
name=Beaker Server - Fedora$releasever
baseurl=https://beaker-project.org/yum/server/Fedora$releasever/
enabled=1
gpgcheck=0

[beaker-server-testing]
name=Beaker Server - Fedora$releasever - Testing
baseurl=https://beaker-project.org/yum/server-testing/Fedora$releasever/
enabled=0
gpgcheck=0
roles/beaker/base/tasks/main.yml | 27 (new file)

@@ -0,0 +1,27 @@
#
# This is the base beaker role - mostly installing repos for beaker
#
---

- name: put beaker server repos on RHEL systems
  template:
    src: "{{ item }}"
    dest: "/etc/yum.repos.d/{{ item }}"
    owner: root
    group: root
    mode: 0644
  with_items:
  - beaker-server-rhel.repo
  when: ansible_distribution == 'RedHat'

- name: put beaker server repos on Fedora systems
  copy:
    src: "{{ item }}"
    dest: "/etc/yum.repos.d/{{ item }}"
    owner: root
    group: root
    mode: 0644
  with_items:
  - beaker-server-fedora.repo
  when: ansible_distribution == 'Fedora'
roles/beaker/base/templates/beaker-server-rhel.repo | 11 (new file)

@@ -0,0 +1,11 @@
[beaker-server]
name=Beaker Server - RedHatEnterpriseLinux{{ ansible_distribution_major_version }}
baseurl=https://beaker-project.org/yum/server/RedHatEnterpriseLinux{{ ansible_distribution_major_version }}/
enabled=1
gpgcheck=0

[beaker-server-testing]
name=Beaker Server - RedHatEnterpriseLinux{{ ansible_distribution_major_version }} - Testing
baseurl=https://beaker-project.org/yum/server-testing/RedHatEnterpriseLinux{{ ansible_distribution_major_version }}/
enabled=0
gpgcheck=0
roles/beaker/labcontroller/handlers/main.yml | 10 (new file)

@@ -0,0 +1,10 @@
#####################################################################
# Handlers for restarting services specific to beaker lab controllers
#

- name: restart beaker lab controller
  service: name={{ item }} state=restarted
  with_items:
  - beaker-proxy
  - beaker-provision
  - beaker-watchdog
roles/beaker/labcontroller/tasks/main.yml | 51 (new file)

@@ -0,0 +1,51 @@
#
# This is a beaker_labcontroller role.
#
---
- name: install packages needed for beaker lab-controller
  yum: pkg={{ item }} state=present
  with_items:
  - beaker-lab-controller
  - tftp-server

- name: check beaker-transfer state
  command: service beaker-transfer status
  failed_when: no
  changed_when: no
  register: transfer_state

- name: Replace default labcontroller.conf file
  template:
    src: etc/beaker/labcontroller.conf.j2
    dest: /etc/beaker/labcontroller.conf
    owner: apache
    group: root
    mode: 0660
    backup: yes
    force: yes
  register: configure_result
  notify:
  - restart httpd
  - restart beaker lab controller
  tags:
  - beaker_lab_controller

- name: restart beaker-transfer
  service: name=beaker-transfer state=restarted
  when: (transfer_state.rc == 0) and (configure_result.changed)

- name: enable tftp
  command: chkconfig tftp on
  tags:
  - beaker_lab_controller

- name: start required services
  service: name={{ item }} state=started enabled=yes
  with_items:
  - httpd
  - xinetd
  - beaker-proxy
  - beaker-provision
  - beaker-watchdog
  tags:
  - beaker_lab_controller

roles/beaker/labcontroller/templates/etc/beaker/labcontroller.conf.j2 (new file)

@@ -0,0 +1,48 @@
# Hub xml-rpc address.
#HUB_URL = "https://localhost:8080"
HUB_URL = "{{beaker_server_url}}"

# Hub authentication method. Example: krbv, password, worker_key
AUTH_METHOD = "password"
#AUTH_METHOD = "krbv"

# Username and password
USERNAME = "{{beaker_lab_controller_username}}"
PASSWORD = "{{beaker_lab_controller_password}}"

# Kerberos service prefix. Example: host, HTTP
KRB_SERVICE = "HTTP"

# Kerberos realm. If commented, last two parts of domain name are used. Example: MYDOMAIN.COM.
KRB_REALM = "DOMAIN.COM"

# Uncomment and change the following two lines if using krb with qpid
#QPID_KRB_PRINCIPAL='HTTP/localhost'
#QPID_KRB_KEYTAB='/etc/my/file.keytab'

# By default, job logs are stored locally on the lab controller.
# If you have set up an archive server to store job logs, uncomment and
# configure the following settings. You will also need to enable the
# beaker-transfer daemon to move logs to the archive server.
#ARCHIVE_SERVER = "http://archive-example.domain.com/beaker"
#ARCHIVE_BASEPATH = "/var/www/html/beaker"
#ARCHIVE_RSYNC = "rsync://USER@HOST/var/www/html/beaker"
#RSYNC_FLAGS = "-ar --password-file /root/rsync-secret.txt"

# How often to renew our session on the server
#RENEW_SESSION_INTERVAL = 300

# Root directory served by the TFTP server. Netboot images and configs will be
# placed here.
TFTP_ROOT = "/var/lib/tftpboot"

# URL scheme used to generate absolute URLs for this lab controller.
# It is used for job logs served by Apache. Set it to 'https' if you have
# configured Apache for SSL and you want logs to be served over SSL.
#URL_SCHEME = "http"

# Fully qualified domain name of *this* system (not the Beaker server).
# Defaults to socket.gethostname(). Ordinarily that is sufficient, unless you
# have registered this lab controller with Beaker under a CNAME.
URL_DOMAIN = "{{beaker_server_cname}}"
roles/beaker/server/files/beaker-server.conf | 84 (new file)

@@ -0,0 +1,84 @@
# Unencrypted access is bad
# Un-comment the following to force https connections
RewriteEngine on
#RewriteCond %{REQUEST_URI} !^/rpms/.* [NC]
#RewriteCond %{REQUEST_URI} !^/repos/.* [NC]
#RewriteCond %{REQUEST_URI} !^/harness/.* [NC]
#RewriteCond %{REQUEST_URI} !^/kickstart/.* [NC]
#RewriteCond %{REQUEST_URI} !/ipxe-script$ [NC]
#RewriteCond %{HTTPS} off
#RewriteRule ^/(.*) https://%{HTTP_HOST}%{REQUEST_URI}
#RewriteRule ^/bkr$ /bkr/ [R]

Alias /static /usr/share/bkr/server/static
Alias /assets/generated /var/cache/beaker/assets
Alias /assets /usr/share/bkr/server/assets
Redirect permanent /apidoc http://beaker-project.org/docs/server-api
Alias /logs /var/www/beaker/logs
Alias /rpms /var/www/beaker/rpms
Alias /repos /var/www/beaker/repos
Alias /harness /var/www/beaker/harness

<Directory "/var/www/beaker/logs">
    <Files "*.log">
        ForceType text/plain
    </Files>
</Directory>

# To work around a thread safety issue in TurboGears where HTTP requests will
# sometimes fail with NoApplicableMethods during application startup, it is
# recommended to set threads=1 here.
# See https://bugzilla.redhat.com/show_bug.cgi?id=796037 for details.
WSGIDaemonProcess beaker-server user=apache group=apache display-name=beaker-server maximum-requests=1000 processes=8 threads=1
WSGISocketPrefix /var/run/wsgi
WSGIRestrictStdout On
WSGIRestrictSignal Off
WSGIPythonOptimize 2
WSGIPassAuthorization On

WSGIScriptAlias / /usr/share/bkr/beaker-server.wsgi

<Directory /usr/share/bkr>
    WSGIApplicationGroup beaker-server
    WSGIProcessGroup beaker-server
    <IfModule mod_authz_core.c>
        # Apache 2.4
        Require all granted
    </IfModule>
    <IfModule !mod_authz_core.c>
        # Apache 2.2
        Order deny,allow
        Allow from all
    </IfModule>
</Directory>

<Directory /var/cache/beaker/assets>
    <IfModule mod_authz_core.c>
        # Apache 2.4
        Require all granted
    </IfModule>
    <IfModule !mod_authz_core.c>
        # Apache 2.2
        Order deny,allow
        Allow from all
    </IfModule>
    # Generated assets have a content hash in their filename so they can
    # safely be cached forever.
    ExpiresActive on
    ExpiresDefault "access plus 1 year"
</Directory>

# Authentication settings for kerberos logins..
# Uncomment and customize for your environment
#<Location /bkr/login>
#    AuthType Kerberos
#    AuthName "Inventory Web UI"
#    KrbMethodNegotiate on
#    KrbMethodK5Passwd on
#    KrbServiceName HTTP
#    KrbAuthRealm DOMAIN.COM
#    Krb5Keytab /etc/httpd/conf/httpd.keytab
#    KrbSaveCredentials on
#    Require valid-user
#</Location>
roles/beaker/server/handlers/main.yml | 6 (new file)

@@ -0,0 +1,6 @@
#############################################################
# Handlers for restarting services specific to beaker servers
#

- name: restart beaker server
  service: name=beakerd state=restarted
roles/beaker/server/tasks/main.yml | 67 (new file)

@@ -0,0 +1,67 @@
#
# This is a beaker_server role.
#
---

# it's unfortunate, but the beaker devs say that this is required until
# https://bugzilla.redhat.com/show_bug.cgi?id=1074384 is solved
- name: switch selinux off
  selinux: state=disabled
  tags:
  - selinux
  - beaker_server

# MySQL-python is needed by the mysql_db/mysql_user tasks below
- name: install beaker-server packages
  yum: name={{ item }} state=present
  with_items:
  - beaker-server
  - MySQL-python
  tags:
  - beaker_server

- name: Replace default apache beaker-server.conf
  copy:
    src: beaker-server.conf
    dest: /etc/httpd/conf.d/beaker-server.conf
    owner: root
    group: root
    mode: 0644
  notify:
  - restart httpd
  tags:
  - beaker-server

- name: Replace default beaker_server.cfg file
  template:
    src: etc/beaker/server.cfg.j2
    dest: /etc/beaker/server.cfg
    owner: apache
    group: root
    mode: 0660
    backup: yes
    force: yes
  register: setup_beaker_conf
  notify:
  - restart beaker server
  - restart httpd
  tags:
  - beaker-server

- name: create the beaker database
  mysql_db: name=beaker state=present

- name: create beaker user
  mysql_user: name={{beaker_server_admin_user}} password={{beaker_server_admin_pass}} priv=beaker.*:ALL,GRANT state=present

- name: initialize beaker database
  command: "beaker-init -u {{beaker_server_admin_user}} -p {{beaker_server_admin_pass}} -e {{beaker_server_email}}"
  when: setup_beaker_conf|success
  tags:
  - beaker-init
  - beaker-server

- name: ensure the Apache server and the Beaker daemon are running
  service: name={{ item }} state=started enabled=yes
  with_items:
  - httpd
  - beakerd
  tags:
  - beaker-server
roles/beaker/server/templates/etc/beaker/server.cfg.j2 | 148 (new file)

@@ -0,0 +1,148 @@
[global]
|
||||
# This defines the URL prefix under which the Beaker web application will be
|
||||
# served. This must match the prefix used in the Alias and WSGIScriptAlias
|
||||
# directives in /etc/httpd/conf.d/beaker-server.conf.
|
||||
# The default configuration places the application at: http://example.com/bkr/
|
||||
# server.webpath = "/"
|
||||
|
||||
# Database connection URI for Beaker's database, in the form:
|
||||
# <driver>://<user>:<password>@<hostname>:<port>/<database>?<options>
|
||||
# The charset=utf8 option is required for proper Unicode support.
|
||||
# The pool_recycle setting is required for MySQL, which will (by default)
|
||||
# terminate idle client connections after 10 hours.
|
||||
sqlalchemy.dburi="mysql://{{beaker_db_user}}:{{beaker_db_password}}@{{beaker_db_host}}/{{beaker_db_name}}?charset=utf8"
|
||||
sqlalchemy.pool_recycle = 3600
|
||||
|
||||
# If you want to send read-only report queries to a separate slave
|
||||
# database, configure it here. If not configured, report queries will
|
||||
# fall back to using the main Beaker database (above).
|
||||
#reports_engine.dburi = "mysql://beaker_ro:beaker_ro@dbslave/beaker?charset=utf8"
|
||||
#reports_engine.pool_recycle = 3600
|
||||
|
||||
# Set to True to enable sending emails.
|
||||
#mail.on = False
|
||||
|
||||
# TurboMail transport to use. The default 'smtp' sends mails over SMTP to the
|
||||
# server configured below. Other transports may be available as TurboMail
|
||||
# extension packages.
|
||||
#mail.transport = "smtp"
|
||||
# SMTP server where mails should be sent. By default we assume there is an
|
||||
# SMTP-capable MTA running on the local host.
|
||||
#mail.smtp.server = "127.0.0.1"
|
||||
|
||||
# The address which will appear as the From: address in emails sent by Beaker.
|
||||
#beaker_email = "root@localhost.localdomain"
|
||||
|
||||
# If this is set to a value greater than zero, Beaker will enforce a limit on
|
||||
# the number of concurrently running power/provision commands in each lab. Set
|
||||
# this option if you have a lab with many machines and are concerned about
|
||||
# a flood of commands overwhelming your lab controller.
|
||||
#beaker.max_running_commands = 10
|
||||
|
||||
# Timeout for authentication tokens. After this many minutes of inactivity
|
||||
# users will be required to re-authenticate.
|
||||
#visit.timeout = 360
|
||||
|
||||
# Secret key for encrypting authentication tokens. Set this to a very long
|
||||
# random string and DO NOT disclose it. Changing this value will invalidate all
|
||||
# existing tokens and force users to re-authenticate.
|
||||
# If not set, a secret key will be generated and stored in /var/lib/beaker,
|
||||
# however this configuration impacts performance therefore you should supply
|
||||
# a secret key here.
|
||||
#visit.token_secret_key = ""
|
||||
|
||||
# Enable LDAP for user account lookup and password authentication.
|
||||
#identity.ldap.enabled = False
|
||||
# URI of LDAP directory.
|
||||
#identity.soldapprovider.uri = "ldaps://ldap.domain.com"
|
||||
# Base DN for looking up user accounts.
|
||||
#identity.soldapprovider.basedn = "dc=domain,dc=com"
|
||||
# If set to True, Beaker user acounts will be automatically created on demand
|
||||
# if they exist in LDAP. Account attributes are populated from LDAP.
|
||||
#identity.soldapprovider.autocreate = False
|
||||
# Timeout (seconds) for LDAP lookups.
|
||||
#identity.soldapprovider.timeout = 20
|
||||
# Server principal and keytab for Kerberos authentication. If using Kerberos
|
||||
# authentication, this must match the mod_auth_kerb configuration in
|
||||
# /etc/httpd/conf.d/beaker-server.conf.
|
||||
#identity.krb_auth_principal = "HTTP/hostname@EXAMPLE.COM"
|
||||
#identity.krb_auth_keytab = "/etc/krb5.keytab"
|
||||
|
||||
# These are used when generating absolute URLs (e.g. in e-mails sent by Beaker)
|
||||
# You should only have to set this if socket.gethostname() returns the wrong
|
||||
# name, for example if you are using CNAMEs.
|
||||
tg.url_domain = '{{beaker_server_cname}}'
|
||||
tg.url_scheme = "https"
|
||||
# If your scheduler is multi-homed and has a different hostname for your test
|
||||
# machines you can use the tg.lab_domain variable here to specify it.
|
||||
# If tg.lab_domain is not set it will fall back to tg.url_domain, and if that's
|
||||
# not set it will fall back to socket.gethostname().
|
||||
tg.lab_domain = '{{beaker_server_hostname}}'

# Tag for distros which are considered "reliable".
# Broken system detection logic will be activated for distros with this tag
# (see the bkr.server.model:System.suspicious_abort method). Leave this unset
# to deactivate broken system detection.
#beaker.reliable_distro_tag = "RELEASED"

# The contents of this file will be displayed to users on every page in Beaker.
# If it exists, it must contain a valid HTML fragment (e.g. <span>...</span>).
#beaker.motd = "/etc/beaker/motd.xml"

# The URL of a page describing your organisation's policies for reserving
# Beaker machines. If configured, a message will appear on the reserve workflow
# page, warning users to adhere to the policy, with a hyperlink to this URL. By
# default no message is shown.
#beaker.reservation_policy_url = "http://example.com/reservation-policy"

# If both of these options are set, the Piwik tracking javascript snippet will
# be embedded in all pages, reporting statistics back to the given Piwik
# installation.
# Make sure that piwik.base_url is a protocol-relative URL starting with //
#piwik.base_url = "//analytics.example.invalid/piwik/"
#piwik.site_id = 123

# These install options are used as global defaults for every provision. They
# can be overridden by options on the distro tree, the system, or the recipe.
#beaker.ks_meta = ""
#beaker.kernel_options = "ksdevice=bootif"
#beaker.kernel_options_post = ""

# See BZ#1000861
#beaker.deprecated_job_group_permissions.on = True

# When generating MAC addresses for virtual systems, Beaker will always pick
# the lowest free address starting from this base address.
#beaker.base_mac_addr = "52:54:00:00:00:00"
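The "lowest free address starting from this base" policy can be illustrated with a short Python sketch. This is not Beaker's implementation, just a hypothetical model of the documented behaviour:

```python
def next_free_mac(base_mac, taken):
    """Pick the lowest MAC at or above base_mac that is not in `taken`.
    Illustrative sketch of the documented allocation policy."""
    def to_int(mac):
        return int(mac.replace(':', ''), 16)

    def to_mac(n):
        s = '%012x' % n
        return ':'.join(s[i:i + 2] for i in range(0, 12, 2))

    taken_ints = {to_int(m) for m in taken}
    candidate = to_int(base_mac)
    while candidate in taken_ints:
        candidate += 1
    return to_mac(candidate)

# With the first two addresses in use, the next free one is picked:
print(next_free_mac('52:54:00:00:00:00',
                    {'52:54:00:00:00:00', '52:54:00:00:00:01'}))
# -> 52:54:00:00:00:02
```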

# Beaker increases the priority of recipes when it detects that they match only
# one candidate system. You can disable this behaviour here.
#beaker.priority_bumping_enabled = True

# When generating RPM repos, we can configure which utility to use. So far,
# only 'createrepo' and 'createrepo_c' have been tested.
# See https://github.com/Tojaj/createrepo_c
#beaker.createrepo_command = "createrepo"

# If you have set up a log archive server (with beaker-transfer) and it
# requires HTTP digest authentication for deleting old logs, set the username
# and password here.
#beaker.log_delete_user = "log-delete"
#beaker.log_delete_password = "examplepassword"

# If carbon.address is set, Beaker will send various metrics to carbon
# (collection daemon for Graphite) at the given address. The address must be
# a tuple of (hostname, port).
# The value of carbon.prefix is prepended to all names used by Beaker.
#carbon.address = ('graphite.example.invalid', 2023)
#carbon.prefix = 'beaker.'
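Carbon's plaintext protocol, which a setup like the above relies on, is one `<path> <value> <timestamp>` line per metric over TCP. A minimal sketch (the function and its `sock_factory` hook are hypothetical, added here so the formatting can be shown without a live carbon daemon):

```python
import socket
import time

def send_metric(address, prefix, name, value,
                sock_factory=socket.create_connection):
    """Send one metric in carbon's plaintext format. `prefix` plays the
    role of the carbon.prefix option above; `address` is a (host, port)
    tuple like carbon.address."""
    line = '%s%s %s %d\n' % (prefix, name, value, int(time.time()))
    conn = sock_factory(address)
    try:
        conn.sendall(line.encode('ascii'))
    finally:
        conn.close()
```

With `prefix='beaker.'` and `name='systems.idle'`, the wire format is e.g. `beaker.systems.idle 5 1400000000`.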

# Use OpenStack for running recipes on dynamically created guests.
#openstack.identity_api_url = 'https://openstack.example.com:5000/v2.0'
#openstack.dashboard_url = 'https://openstack.example.com/dashboard/'

# Set this to limit the Beaker web application's address space to the given
# size (in bytes). This may be helpful to catch excessive memory consumption by
# Beaker. On large deployments 1500000000 is a reasonable value.
# By default no address space limit is enforced.
#rlimit_as=
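What enforcing such a limit amounts to can be shown with Python's `resource` module. This is a sketch of the mechanism, not Beaker's code; the function name is made up for illustration:

```python
import resource

def apply_address_space_limit(limit_bytes):
    """Cap the process address space with setrlimit(RLIMIT_AS, ...), so a
    runaway allocation fails with MemoryError instead of exhausting the
    host. None leaves the limit unenforced, matching the default above.
    Returns the (soft, hard) limits in effect afterwards."""
    if limit_bytes is not None:
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
    return resource.getrlimit(resource.RLIMIT_AS)
```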
@@ -6,7 +6,7 @@ config_opts['plugin_conf']['ccache_enable'] = False
 config_opts['plugin_conf']['yum_cache_enable'] = True
 config_opts['plugin_conf']['yum_cache_opts']['max_age_days'] = 150
 config_opts['plugin_conf']['yum_cache_opts']['max_metadata_age_days'] = 150
-config_opts['plugin_conf']['yum_cache_opts']['dir'] = %(cache_topdir)s/%(root)s/%(package_manager)s_cache/"
+config_opts['plugin_conf']['yum_cache_opts']['dir'] = "%(cache_topdir)s/%(root)s/%(package_manager)s_cache/"
 config_opts['plugin_conf']['yum_cache_opts']['target_dir'] = "/var/cache/%(package_manager)s/"

 config_opts['plugin_conf']['root_cache_enable'] = False
@@ -8,8 +8,8 @@
 yum: state=present pkg={{ item }}
 with_items:
 - dnf
-- mock
-- mock-lvm
+# - mock
+# - mock-lvm
 - createrepo
 - yum-utils
 - pyliblzma
@@ -19,6 +19,9 @@
 - libselinux-python
 - libsemanage-python

+- get_url: url=https://kojipkgs.fedoraproject.org//packages/mock/1.2.9/1.fc21/noarch/mock-1.2.9-1.fc21.noarch.rpm dest=/tmp/
+- yum: state=present name=/tmp/mock-1.2.9-1.fc21.noarch.rpm
+
 - name: make sure newest rpm
 # todo: replace with dnf after ansible 1.9 is available
 yum: name={{ item }} state=latest
@@ -26,7 +29,6 @@
 - rpm
 - glib2
 - ca-certificates
-- https://kojipkgs.fedoraproject.org//packages/mock/1.2.9/1.fc21/noarch/mock-1.2.9-1.fc21.noarch.rpm

 - name: put updated mock configs into /etc/mock
 copy: src=files/mock/{{ item }} dest=/etc/mock
@@ -35,26 +37,6 @@

# ansible doesn't support simultaneous use of the async and with_* options;
# it's not even planned for implementation, see https://github.com/ansible/ansible/issues/5841
#- name: prepare caches
# when: prepare_base_image is defined
# async: 3600
# command: mock -r {{ item }} --init
# with_items:
# - epel-5-i386
# - epel-5-x86_64
# - epel-6-i386
# - epel-6-x86_64
# - epel-7-x86_64
# - fedora-20-i386
# - fedora-20-x86_64
# - fedora-21-i386
# - fedora-21-x86_64
# - fedora-22-i386
# - fedora-22-x86_64
# - fedora-rawhide-i386
# - fedora-rawhide-x86_64

- name: prepare cache
  when: prepare_base_image is defined
  async: 3600
@@ -11,8 +11,7 @@
 wait_for: "{{ max_spawn_time }}"
 flavor_id: "{{ flavor_name|flavor_name_to_id(OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL) }}"
 security_groups: ssh-anywhere-coprdev,default,ssh-from-persistent-coprdev #,ssh-internal-persistent
-## again some bug in openstack, we can still live without it, since we have "provisioned" base image
-# key_name: "{{ key_name }}"
+key_name: "{{ key_name }}"
 nics:
 - net-id: "{{ network_name|network_name_to_id(OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL) }}"
 register: nova
|
|
@ -178,7 +178,7 @@
|
|||
|
||||
- name: update selinux context for results if root folder does not have proper type
|
||||
command: "restorecon -vvRF /var/lib/copr/public_html/"
|
||||
when: "'copr_data_t' not in public_html_ls.stdout "
|
||||
when: public_html_ls is defined and 'copr_data_t' not in public_html_ls.stdout
|
||||
|
||||
- name: install cert to access fed-cloud09
|
||||
# TODO: remove this when fed-cloud09 receives external cert
|
||||
|
|
|
@ -4,6 +4,7 @@
|
|||
notify:
|
||||
- run rkhunter
|
||||
tags:
|
||||
- rkhunter
|
||||
- packages
|
||||
|
||||
- name: rkhunter.conf
|
||||
|
@ -11,6 +12,7 @@
|
|||
notify:
|
||||
- run rkhunter
|
||||
tags:
|
||||
- rkhunter
|
||||
- config
|
||||
|
||||
- name: rkhunter sysconfig
|
||||
|
@ -18,4 +20,5 @@
|
|||
notify:
|
||||
- run rkhunter
|
||||
tags:
|
||||
- rkhunter
|
||||
- config
|
||||
|
|
|
@ -200,7 +200,12 @@ ALLOW_SSH_PROT_V1=0
|
|||
# tests, the test names, and how rkhunter behaves when these options are used.
|
||||
#
|
||||
ENABLE_TESTS="all"
|
||||
{% if ansible_hostname.startswith('fed-cloud') %}
|
||||
# Disable the promisc test here as openstack has it set on interfaces
|
||||
DISABLE_TESTS="suspscan hidden_procs deleted_files packet_cap_apps apps promisc"
|
||||
{% else %}
|
||||
DISABLE_TESTS="suspscan hidden_procs deleted_files packet_cap_apps apps"
|
||||
{% endif %}
|
||||
|
||||
#
|
||||
# The HASH_FUNC option can be used to specify the command to use
|
||||
|
|