diff --git a/README b/README
index bce9c8d951..20bb1793b7 100644
--- a/README
+++ b/README
@@ -15,6 +15,12 @@ library - library of custom local ansible modules
 
 playbooks - collections of plays we want to run on systems
 
+    groups: groups of hosts configured from one playbook.
+
+    hosts: playbooks for single hosts.
+
+    manual: playbooks that are only run manually by an admin as needed.
+
 tasks - snippets of tasks that should be included in plays
 
 roles - specific roles to be use in playbooks.
@@ -22,6 +28,13 @@ roles - specific roles to be use in playbooks.
 
 filter_plugins - Jinja filters
 
+master.yml - This is the master playbook, consisting of all
+             current group and host playbooks. Note that the
+             daily cron doesn't run this; it runs over all
+             playbooks, even those not yet included in master.
+             This playbook is useful for making changes over
+             multiple groups/hosts, usually with -t (tag).
+
 == Paths ==
 
 public path for everything is:
@@ -36,212 +49,23 @@ In general to run any ansible playbook you will want to run:
 
 sudo -i ansible-playbook /path/to/playbook.yml
 
-== Cloud information ==
+== Scheduled check-diff ==
 
-cloud instances:
-to startup a new cloud instance and configure for basic server use run (as
-root):
+Every night a cron job runs over all playbooks under playbooks/{groups,hosts}
+with the ansible --check --diff options. A report from this is sent to
+sysadmin-logs. In the ideal state this report would be empty.
 
-el6:
-sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/el6_temp_instance.yml
+== Idempotency ==
 
-f19:
-sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/f19_temp_instance.yml
+All playbooks should be idempotent. That is, if run once they should bring the
+machine(s) to the desired state, and if run again N times after that they should
+make 0 changes (because the machine(s) are in the desired state).
+Please make sure your playbooks are idempotent.
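The nightly sweep described in the Scheduled check-diff section above can be sketched as a small shell loop. This is an illustrative sketch, not the actual cron script: the playbook names are made up, and the commands are only echoed so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# Sketch of the nightly check-diff sweep: for every group and host
# playbook, print the dry-run command the cron job would execute.
# The real job runs these and mails the report to sysadmin-logs;
# an empty report means every host matches its checked-in state.
for pb in playbooks/groups/example-group.yml playbooks/hosts/example-host.yml; do
    echo "sudo -i ansible-playbook $pb --check --diff"
done
```

Because --check makes no changes, the same loop (without the echo) doubles as a safe way to preview what a playbook would do before running it for real.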
+== Can be run anytime ==
+
-The -i is important - ansible's tools need access to root's sshagent as well
-as the cloud credentials to run the above playbooks successfully.
-
-This will setup a new instance, provision it and email sysadmin-main that
-the instance was created, it's instance id (for terminating it, attaching
-volumes, etc) and it's ip address.
-
-You will then be able to login, as root.
-
-You can add various extra vars to the above commands to change the instance
-you've just spun up.
-
-variables to define:
-instance_type=c1.medium
-security_group=default
-root_auth_users='username1 username2 @groupname'
-hostbase=basename for hostname - will have instance id appended to it
-
-
-define these with:
-
---extra-vars="varname=value varname1=value varname2=value"
-
-Name       Memory_MB  Disk  VCPUs
-m1.tiny    512        0     1
-m1.small   2048       20    1
-m1.medium  4096       40    2
-m1.large   8192       80    4
-m1.xlarge  16384      160   8
-m1.builder 5120       50    3
-
-Setting up a new persistent cloud host:
-1. select an ip:
-   source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-   oeuca-describe-addresses
-   - pick an ip from the list that is not assigned anywhere
-   - add it into dns - normally in the cloud.fedoraproject.org but it doesn't
-     have to be
-
-2. If needed create a persistent storage disk for the instance:
-   source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-   euca-create-volume -z nova -s
-
-3. set up the host/ip in ansible host inventory
-   - add to ansible/inventory/inventory under [persistent-cloud]
-   - either the ip itself or the hostname you want to refer to it as
-
-4. setup the host_vars
-   - create file named by the hostname or ip you used in the inventory
-   - for adding persistent volumes add an entry like this into the host_vars file
-
-     volumes: ['-d /dev/vdb vol-BCA33FCD', '-d /dev/vdc vol-DC833F48']
-
-     for each volume you want to attach to the instance.
-
-     The device names matter - they start at /dev/vdb and increment.
-     However,
-     they are not reliable IN the instance. You should find the device, partition
-     it, format it and label the formatted device then mount the device by label
-     or by UUID. Do not count on the device name being the same each time.
-
-
-Contents should look like this (remove all the comments)
-
----
-# 2cpus, 3GB of ram 20GB of ephemeral space
-instance_type: m1.large
-# image id - see global vars. You can also use euca-describe-images to find other images as well
-image: "{{ el6_qcow_id }}"
-keypair: fedora-admin-20130801
-# what security group to add the host to
-security_group: webserver
-zone: fedoracloud
-# instance id will be appended
-hostbase: hostname_base-
-# ip should be in the 209.132.184.XXX range
-public_ip: $ip_you_selected
-# users/groups who should have root ssh access
-root_auth_users: skvidal bkabrda
-description: some description so someone else can know what this is
-
-The available images can be found by running::
-    source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-    euca-describe-images | grep ami
-
-4. setup a host playbook ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
-   Note: the name of this file doesn't really matter but it should normally
-   be the hostname of the host you're setting up.
-
-- name: check/create instance
-  hosts: $YOUR_HOSTNAME/IP HERE
-  user: root
-  gather_facts: False
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "{{ private }}/vars.yml"
-
-  tasks:
-  - include: "{{ tasks }}/persistent_cloud.yml"
-
-- name: provision instance
-  hosts: $YOUR_HOSTNAME/IP HERE
-  user: root
-  gather_facts: True
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "{{ private }}/vars.yml"
-  - /srv/web/infra/ansible/vars//{{ ansible_distribution }}.yml
-
-  tasks:
-  - include: "{{ tasks }}/cloud_setup_basic.yml
-  # fill in other actions/includes/etc here
-
-  handlers:
-  - include: "{{ handlers }}/restart_services.yml
-
-
-5. add/commit the above to the git repo and push your changes
-
-
-6.
-   set it up:
-   sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
-
-7. login, etc
-
-You should be able to run that playbook over and over again safely, it will
-only setup/create a new instance if the ip is not up/responding.
-
-SECURITY GROUPS
-- to edit security groups you must either have your own cloud account or
-  be a member of sysadmin-main
-
-This gives you the credential to change things in the persistent tenant
-- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-
-
-This lists all security groups in that tenant:
-- euca-describe-groups | grep GROUP
-
-the output will look like this:
-euca-describe-groups | grep GROU
-GROUP d4e664a10e2c4210839150be09c46e5e default default
-GROUP d4e664a10e2c4210839150be09c46e5e jenkins jenkins instance group
-GROUP d4e664a10e2c4210839150be09c46e5e logstash logstash security group
-GROUP d4e664a10e2c4210839150be09c46e5e smtpserver list server group. needs web and smtp
-GROUP d4e664a10e2c4210839150be09c46e5e webserver webserver security group
-GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
-
-
-This lets you list the rules in a specific group:
-- euca-describe-group groupname
-
-the output will look like this:
-
-euca-describe-group wideopen
-GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
-PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS tcp 1 65535 FROM CIDR 0.0.0.0/0
-PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS icmp -1 -1 FROM CIDR 0.0.0.0/0
-
-
-To create a new group:
-euca-create-group -d "group description here" groupname
-
-To add a rule to a group:
-euca-authorize -P tcp -p 22 groupname
-euca-authorize -P icmp -t -1:-1 groupname
-
-To delete a rule from a group:
-euca-revoke -P tcp -p 22 groupname
-
-Notes:
-- Be careful removing or adding rules to existing groups b/c you could be
-impacting other instances using that security group.
-
-- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
-and traceroute and friends).
-
-
-
-
-TERMINATING INSTANCES
-
-For transient:
-1. source /srv/private/ansible/files/openstack/transient-admin/ec2rc.sh
-
-   - OR -
-
-For persistent:
-1. source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-
-2. euca-describe-instances | grep
-
-3. euca-terminate-instances
+When a playbook or change is checked into ansible you should assume
+that it could be run at ANY TIME. Always make sure the checked in state
+is the desired state. Always test changes when they land so they don't
+surprise you later.
diff --git a/README.cloud b/README.cloud
new file mode 100644
index 0000000000..f46dd01902
--- /dev/null
+++ b/README.cloud
@@ -0,0 +1,181 @@
+== Cloud information ==
+
+The dashboard for the production cloud instance is:
+https://fedorainfracloud.org/dashboard/
+
+You can download credentials via the dashboard (under security and access)
+
+=== Transient instances ===
+
+Transient instances are short term use instances for Fedora
+contributors. They can be terminated at any time and shouldn't be
+relied on for any production use. If you have an application
+or longer term item that should always be around
+please create a persistent playbook instead. (see below)
+
+To start up a new transient cloud instance and configure it for basic
+server use, run (as root):
+
+sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=somename'
+
+The -i is important - ansible's tools need access to root's sshagent as well
+as the cloud credentials to run the above playbooks successfully.
+
+This will set up a new instance, provision it and email sysadmin-main that
+the instance was created, along with its IP address.
+
+You will then be able to log in as root if you are in the sysadmin-main group.
+(If you are making the instance for another user, see below)
+
+You MUST pass a name to it, i.e.: -e 'name=somethingdescriptive'
+You can optionally override defaults by passing any of the following:
+image=imagename (default is centos70_x86_64)
+instance_type=some instance type (default is m1.small)
+root_auth_users='user1 user2 user3 @group1' (default always includes sysadmin-main group)
+
+Note: if you run this playbook with the same name= multiple times,
+openstack is smart enough to just return the current ip of that instance
+and go on. This way you can re-run if you want to reconfigure it without
+reprovisioning it.
+
+
+Size options
+------------
+
+Name       Memory_MB  Disk  VCPUs
+m1.tiny    512        0     1
+m1.small   2048       20    1
+m1.medium  4096       40    2
+m1.large   8192       80    4
+m1.xlarge  16384      160   8
+m1.builder 5120       50    3
+
+
+=== Persistent cloud instances ===
+
+Persistent cloud instances are ones that we want to always have up and
+configured. These are things like dev instances for various applications,
+proof of concept servers for evaluating something, etc. They will be
+reprovisioned after a reboot/maint window for the cloud.
+
+Setting up a new persistent cloud host:
+
+1) Select an available floating IP
+
+    source /srv/private/ansible/files/openstack/novarc
+    nova floating-ip-list
+
+Note that an "available floating IP" is one that has only a "-" in the Fixed IP
+column of the above `nova` command. Ignore the fact that the "Server Id" column
+is completely blank for all instances. If there are no IPs with "-", use:
+
+    nova floating-ip-create
+
+and retry the list.
+
+2) Add that IP address to DNS (typically as foo.fedorainfracloud.org)
+
+3) Create a persistent storage disk for the instance (if necessary; you might not
+   need this).
+
+    nova volume-create --display-name SOME_NAME SIZE_IN_GB
+
+4) Add to ansible inventory in the persistent-cloud group.
+   You should use the FQDN for this and not the IP. Names are good.
+
+5) setup the host_vars file.
+   It should look something like this::
+
+    instance_type: m1.medium
+    image:
+    keypair: fedora-admin-20130801
+    security_group: default # NOTE: security_group MUST contain default.
+    zone: nova
+    tcp_ports: [22, 80, 443]
+
+    inventory_tenant: persistent
+    inventory_instance_name: taiga
+    hostbase: taiga
+    public_ip: 209.132.184.50
+    root_auth_users: ralph maxamillion
+    description: taiga frontend server
+
+    volumes:
+      - volume_id: VOLUME_UUID_GOES_HERE
+        device: /dev/vdc
+
+    cloud_networks:
+      # persistent-net
+      - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
+
+6) setup the host playbook
+
+7) run the playbook:
+    sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
+
+You should be able to run that playbook over and over again safely; it will
+only setup/create a new instance if the ip is not up/responding.
+
+=== SECURITY GROUPS ===
+
+FIXME: needs work for new cloud.
+
+- to edit security groups you must either have your own cloud account or
+  be a member of sysadmin-main
+
+This gives you the credential to change things in the persistent tenant
+- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
+
+This lists all security groups in that tenant:
+- euca-describe-groups | grep GROUP
+
+the output will look like this:
+euca-describe-groups | grep GROU
+GROUP d4e664a10e2c4210839150be09c46e5e default default
+GROUP d4e664a10e2c4210839150be09c46e5e jenkins jenkins instance group
+GROUP d4e664a10e2c4210839150be09c46e5e logstash logstash security group
+GROUP d4e664a10e2c4210839150be09c46e5e smtpserver list server group.
needs web and smtp
+GROUP d4e664a10e2c4210839150be09c46e5e webserver webserver security group
+GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
+
+
+This lets you list the rules in a specific group:
+- euca-describe-group groupname
+
+the output will look like this:
+
+euca-describe-group wideopen
+GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
+PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS tcp 1 65535 FROM CIDR 0.0.0.0/0
+PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS icmp -1 -1 FROM CIDR 0.0.0.0/0
+
+
+To create a new group:
+euca-create-group -d "group description here" groupname
+
+To add a rule to a group:
+euca-authorize -P tcp -p 22 groupname
+euca-authorize -P icmp -t -1:-1 groupname
+
+To delete a rule from a group:
+euca-revoke -P tcp -p 22 groupname
+
+Notes:
+- Be careful removing or adding rules to existing groups b/c you could be
+impacting other instances using that security group.
+
+- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
+and traceroute and friends).
+
+=== TERMINATING INSTANCES ===
+
+For transient:
+1. source /srv/private/ansible/files/openstack/transient-admin/keystonerc.sh
+
+   - OR -
+
+For persistent:
+1. source /srv/private/ansible/files/openstack/novarc
+
+2. nova list | grep
+
+3.
nova delete diff --git a/files/artboard/artboard.conf b/files/artboard/artboard.conf index a457b2a023..2728550aba 100644 --- a/files/artboard/artboard.conf +++ b/files/artboard/artboard.conf @@ -3,7 +3,14 @@ AllowOverride All - Order allow,deny - Allow from all + + # Apache 2.4 + Require all granted + + + # Apache 2.2 + Order deny,allow + Allow from all + diff --git a/files/download/sync-up-downloads.sh b/files/download/sync-up-downloads.sh index 46b736f248..b1517a13d2 100755 --- a/files/download/sync-up-downloads.sh +++ b/files/download/sync-up-downloads.sh @@ -8,7 +8,7 @@ RSYNC='/usr/bin/rsync' RS_OPT="-avSHP --numeric-ids" RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates" -ALT_EXCLUDES="--exclude deltaisos/archive --exclude 21_Alpha* --exclude 21-Alpha* --exclude 21_Beta* --exclude=F21a-TC1" +ALT_EXCLUDES="--exclude deltaisos/archive --exclude 22_Alpha* --exclude 22_Beta*" EPL_EXCLUDES="" FED_EXCLUDES="" diff --git a/files/fedora-cloud/fed09-ssh-key.pub b/files/fedora-cloud/fed09-ssh-key.pub index 3885ced01f..92ed6f374e 100644 --- a/files/fedora-cloud/fed09-ssh-key.pub +++ b/files/fedora-cloud/fed09-ssh-key.pub @@ -1 +1,2 @@ -ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZAnj+U9s4Xn36DpQYOfAhW2Q1ZQqkKASvG3rJNsOCpRlvWGcmxrvjUr5mQ3ZMapEu0IaaQUq40JvP8iqJ1HIq4C8UXLBq9SFEfeNYh5qRqEpEn5CRcjrJPwFf6jLpr3bN+F98Vo3E/FMgJ3MzBsynZoT+A6d02oitoxV6DomDB7gXU08Pfz7oQYXBzAVe3+BP4IaeUWbjHDv57LGBa/Xfw5SKrgk+/IKXIGk2Rkxn7sShtHzkpkI4waNl4gqUzwsJ/Y+FJxpI1DvWxHuzlx1uOLupxYA9p+ejJo5sXGZtO2Ynx2NFEjIzqmBljaiy+wmDYvZz2JdIFwSAjPbaFjtF root@fed-cloud09.cloud.fedoraproject.org +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCv8WqXOuL78Rd7ZvDqoi84M7uRV3uueXTXtvlPdyNQBzIBmxh+spw9IhtoR+FlzgQQ1MN4B7YVLTGki6QDxWDM5jgTVfzxTh/HTg7kJ31HbM1/jDuBK7HMfay2BGx/HCqS2oxIBgIBwIMQAU93jBZUxNyYWvO+5TiU35IHEkYOtHyGYtTtuGCopYRQoAAOIVIIzzDbPvopojCBF5cMYglR/G02YgWM7hMpQ9IqEttLctLmpg6ckcp/sDTHV/8CbXbrSN6pOYxn1YutOgC9MHNmxC1joMH18qkwvSnzXaeVNh4PBWnm1f3KVTSZXKuewPThc3fk2sozgM9BH6KmZoKl + diff --git 
a/files/fedora-cloud/haproxy.cfg b/files/fedora-cloud/haproxy.cfg index 5489f08186..8548645e9a 100644 --- a/files/fedora-cloud/haproxy.cfg +++ b/files/fedora-cloud/haproxy.cfg @@ -35,6 +35,10 @@ global # turn on stats unix socket stats socket /var/lib/haproxy/stats + tune.ssl.default-dh-param 1024 + ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK + + #--------------------------------------------------------------------- # common defaults that all the 'listen' and 'backend' sections will # use if not designated in their block @@ -62,32 +66,46 @@ defaults #frontend keystone_admin *:35357 # default_backend keystone_admin frontend neutron - bind 0.0.0.0:9696 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:9696 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend neutron + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 frontend cinder - bind 0.0.0.0:8776 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:8776 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend cinder + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 frontend swift - bind 0.0.0.0:8080 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:8080 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend swift + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 
frontend nova - bind 0.0.0.0:8774 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:8774 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend nova + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 frontend ceilometer - bind 0.0.0.0:8777 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:8777 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend ceilometer + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 frontend ec2 - bind 0.0.0.0:8773 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:8773 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend ec2 + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 frontend glance - bind 0.0.0.0:9292 ssl crt /etc/haproxy/fed-cloud09.combined + bind 0.0.0.0:9292 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined default_backend glance + # HSTS (15768000 seconds = 6 months) + rspadd Strict-Transport-Security:\ max-age=15768000 backend neutron server neutron 127.0.0.1:8696 check diff --git a/files/fedora-cloud/hosts b/files/fedora-cloud/hosts index f2736b22e9..ef76e1dad2 100644 --- a/files/fedora-cloud/hosts +++ b/files/fedora-cloud/hosts @@ -21,4 +21,4 @@ 209.132.181.6 infrastructure infrastructure.fedoraproject.org 209.132.181.32 fas-all.phx2.fedoraproject.org -{{ controller_private_ip }} fed-cloud09.cloud.fedoraproject.org +{{ controller_private_ip }} fed-cloud09.cloud.fedoraproject.org fedorainfracloud.org diff --git a/files/fedora-cloud/packstack-controller-answers.txt b/files/fedora-cloud/packstack-controller-answers.txt index 1dcebcab1d..08e406e3d7 100644 --- a/files/fedora-cloud/packstack-controller-answers.txt +++ b/files/fedora-cloud/packstack-controller-answers.txt @@ -96,11 +96,11 @@ CONFIG_AMQP_SSL_PORT=5671 # The filename of the certificate that the AMQP service 
is going to # use -CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/fed-cloud09.pem +CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/fedorainfracloud.org.pem # The filename of the private key that the AMQP service is going to # use -CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/fed-cloud09.key +CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/fedorainfracloud.org.key # Auto Generates self signed SSL certificate and key CONFIG_AMQP_SSL_SELF_SIGNED=n @@ -198,10 +198,10 @@ CONFIG_NOVA_COMPUTE_PRIVIF=lo CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager # Public interface on the Nova network server -CONFIG_NOVA_NETWORK_PUBIF={{ controller_public_ip }} +CONFIG_NOVA_NETWORK_PUBIF=eth0 # Private interface for network manager on the Nova network server -CONFIG_NOVA_NETWORK_PRIVIF=lo +CONFIG_NOVA_NETWORK_PRIVIF=eth1 # IP Range for network manager CONFIG_NOVA_NETWORK_FIXEDRANGE={{ internal_interface_cidr }} @@ -214,7 +214,7 @@ CONFIG_NOVA_NETWORK_FLOATRANGE={{ public_interface_cidr }} CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=external # Automatically assign a floating IP to new instances -CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=y +CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n # First VLAN for private networks CONFIG_NOVA_NETWORK_VLAN_START=100 @@ -258,6 +258,16 @@ CONFIG_NEUTRON_L2_PLUGIN=ml2 # metadata agent CONFIG_NEUTRON_METADATA_PW={{ NEUTRON_PASS }} +# Set to 'y' if you would like Packstack to install Neutron LBaaS +CONFIG_LBAAS_INSTALL=y + +# Set to 'y' if you would like Packstack to install Neutron L3 +# Metering agent +CONFIG_NEUTRON_METERING_AGENT_INSTALL=y + +# Whether to configure neutron Firewall as a Service +CONFIG_NEUTRON_FWAAS=y + # A comma separated list of network type driver entrypoints to be # loaded from the neutron.ml2.type_drivers namespace. 
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local,flat,gre @@ -350,14 +360,14 @@ CONFIG_HORIZON_SSL=y # PEM encoded certificate to be used for ssl on the https server, # leave blank if one should be generated, this certificate should not # require a passphrase -CONFIG_SSL_CERT=/etc/pki/tls/certs/fed-cloud09.pem +CONFIG_SSL_CERT=/etc/pki/tls/certs/fedorainfracloud.org.pem # PEM encoded CA certificates from which the certificate chain of the # # server certificate can be assembled. -CONFIG_SSL_CACHAIN=/etc/pki/tls/certs/fed-cloud09.pem +CONFIG_SSL_CACHAIN=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem # Keyfile corresponding to the certificate if one was entered -CONFIG_SSL_KEY=/etc/pki/tls/private/fed-cloud09.key +CONFIG_SSL_KEY=/etc/pki/tls/private/fedorainfracloud.key # The password to use for the Swift to authenticate with Keystone CONFIG_SWIFT_KS_PW={{ SWIFT_PASS }} @@ -443,7 +453,7 @@ CONFIG_CEILOMETER_SECRET={{ CEILOMETER_SECRET }} CONFIG_CEILOMETER_KS_PW={{ CEILOMETER_PASS }} # The IP address of the server on which to install mongodb -CONFIG_MONGODB_HOST={{ controller_public_ip }} +CONFIG_MONGODB_HOST=127.0.0.1 # The password of the nagiosadmin user on the Nagios server CONFIG_NAGIOS_PW= diff --git a/files/httpd/newvirtualhost.conf.j2 b/files/httpd/newvirtualhost.conf.j2 new file mode 100644 index 0000000000..18c7a2e8ad --- /dev/null +++ b/files/httpd/newvirtualhost.conf.j2 @@ -0,0 +1,75 @@ + + # Change this to the domain which points to your host. + ServerName {{ item.name }} + + # Use separate log files for the SSL virtual host; note that LogLevel + # is not inherited from httpd.conf. + ErrorLog logs/{{ item.name }}_error_log + TransferLog logs/{{ item.name }}_access_log + LogLevel warn + + # SSL Engine Switch: + # Enable/Disable SSL for this virtual host. + SSLEngine on + + # SSL Protocol support: + # List the enable protocol levels with which clients will be able to + # connect. 
Disable SSLv2 access by default: + SSLProtocol all -SSLv2 + + # SSL Cipher Suite: + # List the ciphers that the client is permitted to negotiate. + # See the mod_ssl documentation for a complete list. + #SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW + SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5 + + # Server Certificate: + # Point SSLCertificateFile at a PEM encoded certificate. If + # the certificate is encrypted, then you will be prompted for a + # pass phrase. Note that a kill -HUP will prompt again. A new + # certificate can be generated using the genkey(1) command. + SSLCertificateFile /etc/pki/tls/certs/{{ sslcertfile }} + + # Server Private Key: + # If the key is not combined with the certificate, use this + # directive to point at the key file. Keep in mind that if + # you've both a RSA and a DSA private key you can configure + # both in parallel (to also allow the use of DSA ciphers, etc.) + SSLCertificateKeyFile /etc/pki/tls/private/{{ sslkeyfile }} + + # Server Certificate Chain: + # Point SSLCertificateChainFile at a file containing the + # concatenation of PEM encoded CA certificates which form the + # certificate chain for the server certificate. Alternatively + # the referenced file can be the same as SSLCertificateFile + # when the CA certificates are directly appended to the server + # certificate for convinience. + #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt + {% if sslintermediatecertfile != '' %} + SSLCertificateChainFile /etc/pki/tls/certs/{{ sslintermediatecertfile }} + {% endif %} + + # Certificate Authority (CA): + # Set the CA certificate verification path where to find CA + # certificates for client authentication or alternatively one + # huge file containing all of them (file must be PEM encoded) + #SSLCACertificateFile /etc/pki/tls/certs/ca-bundle.crt + + DocumentRoot {{ item.document_root }} + + Options Indexes FollowSymLinks + + + + + + # Change this to the domain which points to your host. 
+ ServerName {{ item.name }} + {% if sslonly %} + RewriteEngine On + RewriteCond %{HTTPS} off + RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [NE] + {% else %} + Options Indexes FollowSymLinks + {% endif %} + diff --git a/files/jenkins/master/config.xml b/files/jenkins/master/config.xml index 19ec77249e..f1775cabed 100644 --- a/files/jenkins/master/config.xml +++ b/files/jenkins/master/config.xml @@ -46,6 +46,22 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/> + + Fedora22 + + /mnt/jenkins/ + 2 + NORMAL + + + jenkins-f22.fedorainfracloud.org + 22 + 950d5dd7-acb2-402a-8670-21f152d04928 + + + + Fedora20 @@ -63,7 +79,7 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/> - EL7-beta + EL7 /mnt/jenkins/ 2 @@ -71,7 +87,7 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/> - 172.16.5.14 + 172.16.5.27 22 950d5dd7-acb2-402a-8670-21f152d04928 diff --git a/files/koschei/config.cfg.j2 b/files/koschei/config.cfg.j2 deleted file mode 100644 index bd26a2a2c1..0000000000 --- a/files/koschei/config.cfg.j2 +++ /dev/null @@ -1,58 +0,0 @@ -# This is a config file for Koschei that can override values in default -# configuration in /usr/share/koschei/config.cfg. It is a python file expecting -# assignment to config dictionary which will be recursively merged with the -# default one. 
-config = { - "database_config": { - "username": "koschei", - "password": "{{ koschei_pgsql_password }}", - "database": "koschei" - }, - "koji_config": { - "cert": "/etc/koschei/koschei.pem", - "ca": "/etc/koschei/fedora-ca.cert", - "server_ca": "/etc/koschei/fedora-ca.cert", - }, - "flask": { - "SECRET_KEY": "{{ koschei_flask_secret_key }}", - }, - "logging": { - "loggers": { - "": { - "level": "DEBUG", - "handlers": ["stderr", "email"], - }, - }, - "handlers": { - "email": { - "class": "logging.handlers.SMTPHandler", - "level": "WARN", - "mailhost": "localhost", - "fromaddr": "koschei@fedoraproject.org", - "toaddrs": ['msimacek@redhat.com', 'mizdebsk@redhat.com'], - "subject": "Koschei warning", - }, - }, - }, - "fedmsg-publisher": { - "enabled": True, - "modname": "koschei", - }, -# "services": { -# "polling": { -# "interval": 60, -# }, -# }, - "dependency": { - "repo_chache_items": 5, - "keep_build_deps_for": 2 - }, - "koji_config": { - "max_builds": 30 - }, -} - -# Local Variables: -# mode: Python -# End: -# vi: ft=python diff --git a/files/koschei/koschei.repo b/files/koschei/koschei.repo deleted file mode 100644 index 265806e614..0000000000 --- a/files/koschei/koschei.repo +++ /dev/null @@ -1,13 +0,0 @@ -[koschei-mizdebsk] -name=Koschei repo -baseurl=https://mizdebsk.fedorapeople.org/koschei/repo/ -enabled=1 -gpgcheck=0 -metadata_expire=60 - -[koschei-msimacek] -name=Koschei repo -baseurl=https://msimacek.fedorapeople.org/koschei/repo/ -enabled=1 -gpgcheck=0 -metadata_expire=60 diff --git a/files/lists-dev/apache.conf.j2 b/files/lists-dev/apache.conf.j2 new file mode 100644 index 0000000000..c45d4208f6 --- /dev/null +++ b/files/lists-dev/apache.conf.j2 @@ -0,0 +1,17 @@ + + ServerAdmin admin@fedoraproject.org + ServerName {{ ansible_hostname }} + + + ServerAdmin admin@fedoraproject.org + ServerName {{ ansible_hostname }} + + SSLEngine on + SSLCertificateFile /etc/pki/tls/certs/localhost.crt + SSLCertificateKeyFile /etc/pki/tls/private/localhost.key + 
#SSLCertificateChainFile /etc/pki/tls/cert.pem + SSLHonorCipherOrder On + SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK + SSLProtocol -All +TLSv1 +TLSv1.1 +TLSv1.2 + + diff --git a/roles/mailman/templates/mailman.logrotate.j2 b/files/lists-dev/mailman.logrotate.j2 similarity index 74% rename from roles/mailman/templates/mailman.logrotate.j2 rename to files/lists-dev/mailman.logrotate.j2 index 5975814267..048e3a3051 100644 --- a/roles/mailman/templates/mailman.logrotate.j2 +++ b/files/lists-dev/mailman.logrotate.j2 @@ -3,7 +3,7 @@ sharedscripts su mailman mailman postrotate - /bin/kill -HUP `cat /run/mailman3/master.pid 2>/dev/null` 2>/dev/null || true + /bin/kill -HUP `cat {{ mailman_webui_basedir }}/var/master.pid 2>/dev/null` 2>/dev/null || true # Don't run "mailman3 reopen" with SELinux on here in the logrotate # context, it will be blocked #/usr/bin/mailman3 reopen >/dev/null 2>&1 || true diff --git a/roles/mailman/templates/mailman3.service.j2 b/files/lists-dev/mailman3.service.j2 similarity index 100% rename from roles/mailman/templates/mailman3.service.j2 rename to files/lists-dev/mailman3.service.j2 diff --git a/files/lists-dev/pgpass.j2 b/files/lists-dev/pgpass.j2 index b0b2297296..a7bd44af62 100644 --- a/files/lists-dev/pgpass.j2 +++ b/files/lists-dev/pgpass.j2 @@ -1,3 +1,2 @@ *:*:mailman:mailmanadmin:{{ lists_dev_mm_db_pass }} *:*:hyperkitty:hyperkittyadmin:{{ lists_dev_hk_db_pass }} -*:*:kittystore:kittystoreadmin:{{ lists_dev_ks_db_pass }} diff --git 
a/files/lists-dev/ssl.conf b/files/lists-dev/ssl.conf new file mode 100644 index 0000000000..adb7c7c9b9 --- /dev/null +++ b/files/lists-dev/ssl.conf @@ -0,0 +1,2 @@ +LoadModule ssl_module modules/mod_ssl.so +Listen 443 diff --git a/files/mailman/mailman2-import.sh b/files/mailman/mailman2-import.sh deleted file mode 100644 index 71f0821d14..0000000000 --- a/files/mailman/mailman2-import.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -HKCONFDIR="/etc/hyperkitty/sites/default" -MMDIR=$1 -DOMAIN=$2 - -if [ -z "$MMDIR" ]; then - echo "Usage: $0 " - exit 2 -fi - -[ -z "$DOMAIN" ] && DOMAIN=lists.fedoraproject.org - -existinglists=`mktemp` -trap "rm -f $existinglists" EXIT -sudo -u mailman mailman3 lists -q > $existinglists - -for listname in `ls $MMDIR/lists`; do - listaddr="$listname@$DOMAIN" - if ! grep -qs $listaddr $existinglists; then - echo "sudo -u mailman mailman3 create -d $listaddr" - echo "sudo -u mailman PYTHONPATH=/usr/lib/mailman mailman3 import21 $listaddr $MMDIR/lists/$listname/config.pck" - fi - echo "sudo kittystore-import -p $HKCONFDIR -s settings_admin -l $listaddr --continue $MMDIR/archives/private/${listname}.mbox/${listname}.mbox" -done diff --git a/files/mailman/pgpass.j2 b/files/mailman/pgpass.j2 index bfb3161ad0..dae76e65da 100644 --- a/files/mailman/pgpass.j2 +++ b/files/mailman/pgpass.j2 @@ -1,7 +1,5 @@ *:*:mailman:mailman:{{ mailman_mm_db_pass }} *:*:hyperkitty:hyperkittyapp:{{ mailman_hk_db_pass }} *:*:hyperkitty:hyperkittyadmin:{{ mailman_hk_admin_db_pass }} -*:*:kittystore:kittystoreapp:{{ mailman_ks_db_pass }} -*:*:kittystore:kittystoreadmin:{{ mailman_ks_admin_db_pass }} *:*:postorius:postoriusapp:{{ mailman_ps_db_pass }} *:*:postorius:postoriusadmin:{{ mailman_ps_admin_db_pass }} diff --git a/files/osbs/atomic-reactor.repo b/files/osbs/atomic-reactor.repo new file mode 100644 index 0000000000..b19cde06b3 --- /dev/null +++ b/files/osbs/atomic-reactor.repo @@ -0,0 +1,8 @@ +[atomic-reactor] +name=Copr repo for atomic-reactor owned by 
maxamillion +baseurl=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/epel-7-$basearch/ +skip_if_unavailable=True +gpgcheck=1 +gpgkey=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/pubkey.gpg +enabled=1 +enabled_metadata=1 diff --git a/files/osbs/osbs.conf b/files/osbs/osbs.conf new file mode 100644 index 0000000000..8284b7b5a1 --- /dev/null +++ b/files/osbs/osbs.conf @@ -0,0 +1,18 @@ +[general] +build_json_dir = /usr/share/osbs/ + +[default] +openshift_uri = https://losbs.example.com:8443/ +# if you want to get packages from koji (koji plugin in dock) +# you need to setup koji hub and root +# this sample is for fedora +koji_root = http://koji.fedoraproject.org/ +koji_hub = http://koji.fedoraproject.org/kojihub +# in case of using artifacts plugin, you should provide a command +# how to fetch artifacts +sources_command = fedpkg sources +# from where should be images pulled and where should be pushed? +# registry_uri = your.example.registry +registry_uri = localhost:5000 +verify_ssl = false +build_type = simple diff --git a/files/sign/bridge.conf.j2 b/files/sign/bridge.conf.j2 index c1df21c029..ef4c26df71 100644 --- a/files/sign/bridge.conf.j2 +++ b/files/sign/bridge.conf.j2 @@ -26,4 +26,5 @@ unix-group: sigul nss-dir: /var/lib/sigul # Password for accessing the NSS database. If not specified, the bridge will # ask on startup -; nss-password: +# Currently no password is used +nss-password: diff --git a/files/sign/bridge.conf.secondary.j2 b/files/sign/bridge.conf.secondary.j2 new file mode 100644 index 0000000000..6225473107 --- /dev/null +++ b/files/sign/bridge.conf.secondary.j2 @@ -0,0 +1,45 @@ +# This is a configuration for the sigul bridge. 
+# +[bridge] +# Nickname of the bridge's certificate in the NSS database specified below +bridge-cert-nickname: secondary-signer +# Port on which the bridge expects client connections +client-listen-port: 44334 +# Port on which the bridge expects server connections +server-listen-port: 44333 +# A Fedora account system group required for access to the signing server. If +# empty, no Fedora account check is done. +; required-fas-group: +# User name and password for an account on the Fedora account system that can +# be used to verify group memberships +; fas-user-name: +; fas-password: +# +[koji] +# Config file used to connect to the Koji hub +# ; koji-config: ~/.koji/config +# # Recognized alternative instances +koji-instances: ppc s390 arm sparc +# +# # Example configuration of alternative instances: +# # koji-instances: ppc64 s390 +# # Configuration paths for alternative instances: +koji-config-ppc: /etc/koji-ppc.conf +koji-config-s390: /etc/koji-s390.conf +koji-config-arm: /etc/koji-arm.conf +koji-config-sparc: /etc/koji-sparc.conf +# +# +[daemon] +# The user to run as +unix-user: sigul +# The group to run as +unix-group: sigul +# +[nss] +# Path to a directory containing a NSS database +nss-dir: /var/lib/sigul +# Password for accessing the NSS database. 
If not specified, the bridge will +# ask on startup +# Currently no password is used +nss-password: diff --git a/roles/koji_builder/files/arm-koji.conf b/files/sign/koji-arm.conf similarity index 72% rename from roles/koji_builder/files/arm-koji.conf rename to files/sign/koji-arm.conf index 83eaa2dbef..5fcc48f8c8 100644 --- a/roles/koji_builder/files/arm-koji.conf +++ b/files/sign/koji-arm.conf @@ -8,16 +8,20 @@ server = http://arm.koji.fedoraproject.org/kojihub ;url of web interface weburl = http://arm.koji.fedoraproject.org/koji +;url of package download site +topurl = http://armpkgs.fedoraproject.org/ + ;path to the koji top directory ;topdir = /mnt/koji ;configuration for SSL authentication ;client certificate -;cert = ~/.koji/client.crt +cert = ~/.fedora.cert ;certificate of the CA that issued the client certificate -;ca = ~/.koji/clientca.crt +ca = ~/.fedora-upload-ca.cert ;certificate of the CA that issued the HTTP server certificate -;serverca = ~/.koji/serverca.crt +serverca = ~/.fedora-server-ca.cert + diff --git a/files/sign/koji-ppc.conf b/files/sign/koji-ppc.conf new file mode 100644 index 0000000000..819cd5b1a8 --- /dev/null +++ b/files/sign/koji-ppc.conf @@ -0,0 +1,27 @@ +[koji] + +;configuration for koji cli tool + +;url of XMLRPC server +server = http://ppc.koji.fedoraproject.org/kojihub + +;url of web interface +weburl = http://ppc.koji.fedoraproject.org/koji + +;url of package download site +topurl = http://ppc.koji.fedoraproject.org/ + +;path to the koji top directory +;topdir = /mnt/koji + +;configuration for SSL authentication + +;client certificate +cert = ~/.fedora.cert + +;certificate of the CA that issued the client certificate +ca = ~/.fedora-upload-ca.cert + +;certificate of the CA that issued the HTTP server certificate +serverca = ~/.fedora-server-ca.cert + diff --git a/files/sign/koji-s390.conf b/files/sign/koji-s390.conf new file mode 100644 index 0000000000..09b09ccf86 --- /dev/null +++ b/files/sign/koji-s390.conf @@ -0,0 +1,27 @@
+[koji] + +;configuration for koji cli tool + +;url of XMLRPC server +server = http://s390.koji.fedoraproject.org/kojihub + +;url of web interface +weburl = http://s390.koji.fedoraproject.org/koji + +;url of package download site +topurl = http://s390pkgs.fedoraproject.org/ + +;path to the koji top directory +;topdir = /mnt/koji + +;configuration for SSL authentication + +;client certificate +cert = ~/.fedora.cert + +;certificate of the CA that issued the client certificate +ca = ~/.fedora-upload-ca.cert + +;certificate of the CA that issued the HTTP server certificate +serverca = ~/.fedora-server-ca.cert + diff --git a/files/sign/server.conf b/files/sign/server.conf.primary similarity index 100% rename from files/sign/server.conf rename to files/sign/server.conf.primary diff --git a/files/sign/server.conf.secondary b/files/sign/server.conf.secondary new file mode 100644 index 0000000000..38d6a0cbfc --- /dev/null +++ b/files/sign/server.conf.secondary @@ -0,0 +1,51 @@ +# This is a configuration for the sigul server.
+ +# FIXME: remove my data + +[server] +# Host name of the publicly accessible bridge to clients +bridge-hostname: secondary-signer +# Port on which the bridge expects server connections +; bridge-port: 44333 +# Maximum accepted size of payload stored on disk +max-file-payload-size: 2073741824 +# Maximum accepted size of payload stored in server's memory +max-memory-payload-size: 1048576 +# Nickname of the server's certificate in the NSS database specified below +server-cert-nickname: secondary-signer-server + +signing-timeout: 4000 + +[database] +# Path to a SQLite database +; database-path: /var/lib/sigul/server.conf + +[gnupg] +# Path to a directory containing GPG configuration and keyrings +gnupg-home: /var/lib/sigul/gnupg +# Default primary key type for newly created keys +gnupg-key-type: RSA +# Default primary key length for newly created keys +gnupg-key-length: 4096 +# Default subkey type for newly created keys, empty for no subkey +#gnupg-subkey-type: ELG-E +# Default subkey length for newly created keys if gnupg-subkey-type is not empty +# gnupg-subkey-length: 4096 +# Default key usage flags for newly created keys +gnupg-key-usage: encrypt, sign +# Length of key passphrases used for newly created keys +; passphrase-length: 64 + +[daemon] +# The user to run as +unix-user: sigul +# The group to run as +unix-group: sigul + +[nss] +# Path to a directory containing a NSS database +nss-dir: /var/lib/sigul +# Password for accessing the NSS database.
If not specified, the server will +# ask on startup +; nss-password is not specified by default + diff --git a/files/twisted/ssh-pub-key b/files/twisted/ssh-pub-key new file mode 100644 index 0000000000..01232559f6 --- /dev/null +++ b/files/twisted/ssh-pub-key @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDxFYkms3bEGIGpH0Dg0JgvsdHI/pWkS4ynCn/poyVcKc+SL1StKoOFPzzFh7dGVIeQ6q8MLbm246H8Swx57R13Q3bhtTs5Gpy5bNC7HejkWbrrMEJuKxVKhIintbC+tT04OFBFklVePuxacsc3EBdTHSnz9o41MfJnjv58VxJB5bwfgsV7FMDLHnBpujlPPH1hZG5A0fwD8VgCwaRVirIs9Kw35yKEUm8D76vOxjAqm7UTexEcPNFb4tYGzI00hbPS374FzoO4ZuXxv1ymakw9iyL54Hwbyj8JxBbgfZ6TvgLSSN9OU+KRqz1NqfepSj+y8up0Q+W8J5UObvf02VZrJKVgnIVe5gw4iDx/5E7F4qmf8qa5YUlJnP3LWRz6jhtQE+m6Ro7zItnoqPR3EtQZ9rMgaS1+/qPX7hcB35hlGZbhj0IDY+HE98ehUivUuxSoLOp8c+COaJ2b5+wSQigi9jRYx0qPeCOCCtA8vF8z4SOmD3I6IsPzlCiejeC5y3tWoQqJPR430TPBJ7CMNbbHPNF8GyzM7vFukqSpgacLq1f/YgBwqiRLVk+ktgUM/+fHuE6mUDMdE+Ag2lfwHnLI7DOwaJdr7JoAoSi6R+uTRhx1d4AET1sMv/HXKD+4Abu0WyaT3l/xO+hBABz+KO33gPUdCsKOw7lvJFZRC+OSyQ== diff --git a/filter_plugins/fedmsg.py b/filter_plugins/fedmsg.py new file mode 100644 index 0000000000..de31a2a174 --- /dev/null +++ b/filter_plugins/fedmsg.py @@ -0,0 +1,39 @@ +import operator + + +def invert_fedmsg_policy(groups, vars, env): + """ Given hostvars that map hosts -> topics, invert that + and return a dict that maps topics -> hosts. + + Really, returns a list of tuples -- not a dict. + """ + + if env == 'staging': + hosts = groups['staging'] + else: + hosts = [h for h in groups['all'] if h not in groups['staging']] + + inverted = {} + for host in hosts: + prefix = '.'.join([vars[host]['fedmsg_prefix'], + vars[host]['fedmsg_env']]) + fqdn = vars[host].get('fedmsg_fqdn', host) + + for cert in vars[host]['fedmsg_certs']: + for topic in cert.get('can_send', []): + key = prefix + '.' 
+ topic + inverted[key] = inverted.get(key, []) + inverted[key].append(cert['service'] + '-' + fqdn) + + result = inverted.items() + # Sort things so they come out in a reliable order (idempotence) + [inverted[key].sort() for key in inverted] + result.sort(key=operator.itemgetter(0)) + return result + + +class FilterModule(object): + def filters(self): + return { + "invert_fedmsg_policy": invert_fedmsg_policy, + } diff --git a/filter_plugins/oo_filters.py b/filter_plugins/oo_filters.py new file mode 100644 index 0000000000..47033a88e1 --- /dev/null +++ b/filter_plugins/oo_filters.py @@ -0,0 +1,315 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- +# vim: expandtab:tabstop=4:shiftwidth=4 +''' +Custom filters for use in openshift-ansible +''' + +from ansible import errors +from operator import itemgetter +import pdb +import re +import json + + +class FilterModule(object): + ''' Custom ansible filters ''' + + @staticmethod + def oo_pdb(arg): + ''' This pops you into a pdb instance where arg is the data passed in + from the filter. + Ex: "{{ hostvars | oo_pdb }}" + ''' + pdb.set_trace() + return arg + + @staticmethod + def get_attr(data, attribute=None): + ''' This looks up dictionary attributes of the form a.b.c and returns + the value. + Ex: data = {'a': {'b': {'c': 5}}} + attribute = "a.b.c" + returns 5 + ''' + if not attribute: + raise errors.AnsibleFilterError("|failed expects attribute to be set") + + ptr = data + for attr in attribute.split('.'): + ptr = ptr[attr] + + return ptr + + @staticmethod + def oo_flatten(data): + ''' This filter plugin will flatten a list of lists + ''' + if not issubclass(type(data), list): + raise errors.AnsibleFilterError("|failed expects to flatten a List") + + return [item for sublist in data for item in sublist] + + @staticmethod + def oo_collect(data, attribute=None, filters=None): + ''' This takes a list of dict and collects all attributes specified into a + list. 
If filter is specified then we will include all items that + match _ALL_ of filters. If a dict entry is missing the key in a + filter it will be excluded from the match. + Ex: data = [ {'a':1, 'b':5, 'z': 'z'}, # True, return + {'a':2, 'z': 'z'}, # True, return + {'a':3, 'z': 'z'}, # True, return + {'a':4, 'z': 'b'}, # FAILED, obj['z'] != filters['z'] + ] + attribute = 'a' + filters = {'z': 'z'} + returns [1, 2, 3] + ''' + if not issubclass(type(data), list): + raise errors.AnsibleFilterError("|failed expects to filter on a List") + + if not attribute: + raise errors.AnsibleFilterError("|failed expects attribute to be set") + + if filters is not None: + if not issubclass(type(filters), dict): + raise errors.AnsibleFilterError("|failed expects filter to be a" + " dict") + retval = [FilterModule.get_attr(d, attribute) for d in data if ( + all([d.get(key, None) == filters[key] for key in filters]))] + else: + retval = [FilterModule.get_attr(d, attribute) for d in data] + + return retval + + @staticmethod + def oo_select_keys(data, keys): + ''' This returns a list, which contains the value portions for the keys + Ex: data = { 'a':1, 'b':2, 'c':3 } + keys = ['a', 'c'] + returns [1, 3] + ''' + + if not issubclass(type(data), dict): + raise errors.AnsibleFilterError("|failed expects to filter on a dict") + + if not issubclass(type(keys), list): + raise errors.AnsibleFilterError("|failed expects first param is a list") + + # Gather up the values for the list of keys passed in + retval = [data[key] for key in keys] + + return retval + + @staticmethod + def oo_prepend_strings_in_list(data, prepend): + ''' This takes a list of strings and prepends a string to each item in the + list + Ex: data = ['cart', 'tree'] + prepend = 'apple-' + returns ['apple-cart', 'apple-tree'] + ''' + if not issubclass(type(data), list): + raise errors.AnsibleFilterError("|failed expects first param is a list") + if not all(isinstance(x, basestring) for x in data): + raise
errors.AnsibleFilterError("|failed expects first param is a list" + " of strings") + retval = [prepend + s for s in data] + return retval + + @staticmethod + def oo_combine_key_value(data, joiner='='): + '''Take a list of dict in the form of { 'key': 'value'} and + arrange them as a list of strings ['key=value'] + ''' + if not issubclass(type(data), list): + raise errors.AnsibleFilterError("|failed expects first param is a list") + + rval = [] + for item in data: + rval.append("%s%s%s" % (item['key'], joiner, item['value'])) + + return rval + + @staticmethod + def oo_ami_selector(data, image_name): + ''' This takes a list of amis and an image name and attempts to return + the latest ami. + ''' + if not issubclass(type(data), list): + raise errors.AnsibleFilterError("|failed expects first param is a list") + + if not data: + return None + else: + if image_name is None or not image_name.endswith('_*'): + ami = sorted(data, key=itemgetter('name'), reverse=True)[0] + return ami['ami_id'] + else: + ami_info = [(ami, ami['name'].split('_')[-1]) for ami in data] + ami = sorted(ami_info, key=itemgetter(1), reverse=True)[0][0] + return ami['ami_id'] + + @staticmethod + def oo_ec2_volume_definition(data, host_type, docker_ephemeral=False): + ''' This takes a dictionary of volume definitions and returns a valid ec2 + volume definition based on the host_type and the values in the + dictionary. 
+ The dictionary should look similar to this: + { 'master': + { 'root': + { 'volume_size': 10, 'device_type': 'gp2', + 'iops': 500 + } + }, + 'node': + { 'root': + { 'volume_size': 10, 'device_type': 'io1', + 'iops': 1000 + }, + 'docker': + { 'volume_size': 40, 'device_type': 'gp2', + 'iops': 500, 'ephemeral': 'true' + } + } + } + ''' + if not issubclass(type(data), dict): + raise errors.AnsibleFilterError("|failed expects first param is a dict") + if host_type not in ['master', 'node', 'etcd']: + raise errors.AnsibleFilterError("|failed expects etcd, master or node" + " as the host type") + + root_vol = data[host_type]['root'] + root_vol['device_name'] = '/dev/sda1' + root_vol['delete_on_termination'] = True + if root_vol['device_type'] != 'io1': + root_vol.pop('iops', None) + if host_type == 'node': + docker_vol = data[host_type]['docker'] + docker_vol['device_name'] = '/dev/xvdb' + docker_vol['delete_on_termination'] = True + if docker_vol['device_type'] != 'io1': + docker_vol.pop('iops', None) + if docker_ephemeral: + docker_vol.pop('device_type', None) + docker_vol.pop('delete_on_termination', None) + docker_vol['ephemeral'] = 'ephemeral0' + return [root_vol, docker_vol] + elif host_type == 'etcd': + etcd_vol = data[host_type]['etcd'] + etcd_vol['device_name'] = '/dev/xvdb' + etcd_vol['delete_on_termination'] = True + if etcd_vol['device_type'] != 'io1': + etcd_vol.pop('iops', None) + return [root_vol, etcd_vol] + return [root_vol] + + @staticmethod + def oo_split(string, separator=','): + ''' This splits the input string into a list + ''' + return string.split(separator) + + @staticmethod + def oo_filter_list(data, filter_attr=None): + ''' This returns a list, which contains all items where filter_attr + evaluates to true + Ex: data = [ { a: 1, b: True }, + { a: 3, b: False }, + { a: 5, b: True } ] + filter_attr = 'b' + returns [ { a: 1, b: True }, + { a: 5, b: True } ] + ''' + if not issubclass(type(data), list): + raise errors.AnsibleFilterError("|failed 
expects to filter on a list") + + if not issubclass(type(filter_attr), str): + raise errors.AnsibleFilterError("|failed expects filter_attr is a str") + + # Gather up the values for the list of keys passed in + return [x for x in data if x[filter_attr]] + + @staticmethod + def oo_parse_heat_stack_outputs(data): + ''' Formats the HEAT stack output into a usable form + + The goal is to transform something like this: + + +---------------+-------------------------------------------------+ + | Property | Value | + +---------------+-------------------------------------------------+ + | capabilities | [] | | + | creation_time | 2015-06-26T12:26:26Z | | + | description | OpenShift cluster | | + | … | … | + | outputs | [ | + | | { | + | | "output_value": "value_A" | + | | "description": "This is the value of Key_A" | + | | "output_key": "Key_A" | + | | }, | + | | { | + | | "output_value": [ | + | | "value_B1", | + | | "value_B2" | + | | ], | + | | "description": "This is the value of Key_B" | + | | "output_key": "Key_B" | + | | }, | + | | ] | + | parameters | { | + | … | … | + +---------------+-------------------------------------------------+ + + into something like this: + + { + "Key_A": "value_A", + "Key_B": [ + "value_B1", + "value_B2" + ] + } + ''' + + # Extract the “outputs” JSON snippet from the pretty-printed array + in_outputs = False + outputs = '' + + line_regex = re.compile(r'\|\s*(.*?)\s*\|\s*(.*?)\s*\|') + for line in data['stdout_lines']: + match = line_regex.match(line) + if match: + if match.group(1) == 'outputs': + in_outputs = True + elif match.group(1) != '': + in_outputs = False + if in_outputs: + outputs += match.group(2) + + outputs = json.loads(outputs) + + # Revamp the “outputs” to put it in the form of a “Key: value” map + revamped_outputs = {} + for output in outputs: + revamped_outputs[output['output_key']] = output['output_value'] + + return revamped_outputs + + def filters(self): + ''' returns a mapping of filters to methods ''' + return { + 
"oo_select_keys": self.oo_select_keys, + "oo_collect": self.oo_collect, + "oo_flatten": self.oo_flatten, + "oo_pdb": self.oo_pdb, + "oo_prepend_strings_in_list": self.oo_prepend_strings_in_list, + "oo_ami_selector": self.oo_ami_selector, + "oo_ec2_volume_definition": self.oo_ec2_volume_definition, + "oo_combine_key_value": self.oo_combine_key_value, + "oo_split": self.oo_split, + "oo_filter_list": self.oo_filter_list, + "oo_parse_heat_stack_outputs": self.oo_parse_heat_stack_outputs + } diff --git a/filter_plugins/oo_zabbix_filters.py b/filter_plugins/oo_zabbix_filters.py new file mode 100644 index 0000000000..a473993a2f --- /dev/null +++ b/filter_plugins/oo_zabbix_filters.py @@ -0,0 +1,79 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- +# vim: expandtab:tabstop=4:shiftwidth=4 +''' +Custom zabbix filters for use in openshift-ansible +''' + +import pdb + +class FilterModule(object): + ''' Custom zabbix ansible filters ''' + + @staticmethod + def create_data(data, results, key, new_key): + '''Take a dict, filter through results and add results['key'] to dict + ''' + new_list = [app[key] for app in results] + data[new_key] = new_list + return data + + @staticmethod + def oo_set_zbx_trigger_triggerid(item, trigger_results): + '''Set zabbix trigger id from trigger results + ''' + if isinstance(trigger_results, list): + item['triggerid'] = trigger_results[0]['triggerid'] + return item + + item['triggerid'] = trigger_results['triggerids'][0] + return item + + @staticmethod + def oo_set_zbx_item_hostid(item, template_results): + ''' Set zabbix host id from template results + ''' + if isinstance(template_results, list): + item['hostid'] = template_results[0]['templateid'] + return item + + item['hostid'] = template_results['templateids'][0] + return item + + @staticmethod + def oo_pdb(arg): + ''' This pops you into a pdb instance where arg is the data passed in + from the filter. 
+ Ex: "{{ hostvars | oo_pdb }}" + ''' + pdb.set_trace() + return arg + + @staticmethod + def select_by_name(ans_data, data): + ''' test + ''' + for zabbix_item in data: + if ans_data['name'] == zabbix_item: + data[zabbix_item]['params']['hostid'] = ans_data['templateid'] + return data[zabbix_item]['params'] + return None + + @staticmethod + def oo_build_zabbix_list_dict(values, string): + ''' Build a list of dicts with string as key for each value + ''' + rval = [] + for value in values: + rval.append({string: value}) + return rval + + def filters(self): + ''' returns a mapping of filters to methods ''' + return { + "select_by_name": self.select_by_name, + "oo_set_zbx_item_hostid": self.oo_set_zbx_item_hostid, + "oo_set_zbx_trigger_triggerid": self.oo_set_zbx_trigger_triggerid, + "oo_build_zabbix_list_dict": self.oo_build_zabbix_list_dict, + "create_data": self.create_data, + } diff --git a/handlers/restart_services.yml b/handlers/restart_services.yml index d94aeb9cfa..8d951c38fb 100644 --- a/handlers/restart_services.yml +++ b/handlers/restart_services.yml @@ -47,6 +47,18 @@ - name: restart kojid action: service name=kojid state=restarted +- name: restart koschei-polling + action: service name=koschei-polling state=restarted + +- name: restart koschei-resolver + action: service name=koschei-resolver state=restarted + +- name: restart koschei-scheduler + action: service name=koschei-scheduler state=restarted + +- name: restart koschei-watcher + action: service name=koschei-watcher state=restarted + - name: restart libvirtd action: service name=libvirtd state=restarted @@ -73,11 +85,11 @@ action: service name=openvpn@openvpn state=restarted - name: restart openvpn (RHEL6) - when: ansible_distribution == "RedHat" and ansible_distribution_major_version == "6" + when: ansible_distribution == "RedHat" and ansible_distribution_major_version|int == 6 action: service name=openvpn state=restarted - name: restart openvpn (RHEL7) - when: ansible_distribution == "RedHat" and 
ansible_distribution_major_version == "7" + when: ansible_distribution == "RedHat" and ansible_distribution_major_version|int == 7 action: service name=openvpn@openvpn state=restarted - name: restart postfix @@ -137,10 +149,10 @@ - name: restart bridge shell: /usr/lib/systemd/systemd-sysctl --prefix=/proc/sys/net/bridge -- name: hup libvirtd - command: pkill -HUP libvirtd +- name: reload libvirtd + service: name=libvirtd state=reloaded ignore_errors: true - when: inventory_hostname.startswith('buildhw') + when: ansible_virtualization_role == 'host' - name: restart fcomm-cache-worker service: name=fcomm-cache-worker state=restarted @@ -168,3 +180,10 @@ - name: restart stunnel service: name=stunnel state=restarted + +- name: restart cinder + service: name={{ item }} state=restarted + with_items: + - openstack-cinder-api + - openstack-cinder-scheduler + - openstack-cinder-volume diff --git a/inventory/backups b/inventory/backups index 32a97dfbb2..e1693d5357 100644 --- a/inventory/backups +++ b/inventory/backups @@ -2,22 +2,24 @@ # This is the list of clients we backup with rdiff-backup.
# [backup_clients] -collab04.fedoraproject.org +collab03.fedoraproject.org db01.phx2.fedoraproject.org -db05.phx2.fedoraproject.org +db03.phx2.fedoraproject.org db-datanommer02.phx2.fedoraproject.org db-fas01.phx2.fedoraproject.org -hosted04.fedoraproject.org +hosted03.fedoraproject.org hosted-lists01.fedoraproject.org lockbox01.phx2.fedoraproject.org -people03.fedoraproject.org +pagure01.fedoraproject.org +people01.fedoraproject.org pkgs02.phx2.fedoraproject.org log01.phx2.fedoraproject.org -qadevel.cloud.fedoraproject.org +qadevel.qa.fedoraproject.org:222 db-qa01.qa.fedoraproject.org db-koji01.phx2.fedoraproject.org copr-be.cloud.fedoraproject.org copr-fe.cloud.fedoraproject.org copr-keygen.cloud.fedoraproject.org value01.phx2.fedoraproject.org +taiga.cloud.fedoraproject.org taskotron01.qa.fedoraproject.org diff --git a/inventory/builders b/inventory/builders index cdb5960ff7..6187a8487b 100644 --- a/inventory/builders +++ b/inventory/builders @@ -55,9 +55,20 @@ buildhw-12.phx2.fedoraproject.org buildppc-01.phx2.fedoraproject.org buildppc-02.phx2.fedoraproject.org +[buildppc64] +ppc8-01.qa.fedoraproject.org + [buildaarch64] aarch64-03a.arm.fedoraproject.org aarch64-04a.arm.fedoraproject.org +aarch64-05a.arm.fedoraproject.org +aarch64-06a.arm.fedoraproject.org +aarch64-07a.arm.fedoraproject.org +aarch64-08a.arm.fedoraproject.org +aarch64-09a.arm.fedoraproject.org +aarch64-10a.arm.fedoraproject.org +aarch64-11a.arm.fedoraproject.org +aarch64-12a.arm.fedoraproject.org [bkernel] bkernel01.phx2.fedoraproject.org @@ -186,9 +197,20 @@ arm04-builder21.arm.fedoraproject.org arm04-builder22.arm.fedoraproject.org arm04-builder23.arm.fedoraproject.org +# These hosts get the runroot plugin installed. +# They should be added to their own 'compose' channel in the koji db +# .. and they should not appear in the default channel for builds. 
+[runroot] +buildvm-01.stg.phx2.fedoraproject.org +buildvm-01.phx2.fedoraproject.org +buildhw-01.phx2.fedoraproject.org +arm04-builder00.arm.fedoraproject.org +arm04-builder01.arm.fedoraproject.org + [builders:children] buildhw buildvm buildppc buildarm buildaarch64 +buildppc64 diff --git a/inventory/group_vars/OSv3 b/inventory/group_vars/OSv3 new file mode 100644 index 0000000000..9a8bacd348 --- /dev/null +++ b/inventory/group_vars/OSv3 @@ -0,0 +1,3 @@ +--- +ansible_ssh_user: root +deployment_type: origin diff --git a/inventory/group_vars/all b/inventory/group_vars/all index 39c3ecf078..5d0a1efcd6 100644 --- a/inventory/group_vars/all +++ b/inventory/group_vars/all @@ -55,6 +55,22 @@ fedmsg_certs: [] # By default, fedmsg should not log debug info. Groups can override this. fedmsg_loglevel: INFO +# By default, fedmsg hosts are in passive mode. External hosts are typically +# active. +fedmsg_active: False + +# Other defaults for fedmsg environments +fedmsg_prefix: org.fedoraproject +fedmsg_env: prod + +# These are used to: +# 1) configure mod_wsgi +# 2) open iptables rules for fedmsg (per wsgi thread) +# 3) declare enough fedmsg endpoints for the service +#wsgi_fedmsg_service: bodhi +#wsgi_procs: 4 +#wsgi_threads: 4 + # By default, nodes don't backup any dbs on them unless they declare it. dbs_to_backup: [] @@ -68,6 +84,7 @@ nrpe_check_postfix_queue_crit: 5 # env is staging or production, we default it to production here. 
env: production +env_suffix: # nfs mount options, override at the group/host level nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid" diff --git a/inventory/group_vars/anitya-backend b/inventory/group_vars/anitya-backend index f7d9cb5922..92d50347ff 100644 --- a/inventory/group_vars/anitya-backend +++ b/inventory/group_vars/anitya-backend @@ -28,8 +28,13 @@ fedmsg_certs: - service: anitya owner: root group: fedmsg + can_send: + - anitya.project.version.update +fedmsg_prefix: org.release-monitoring +fedmsg_env: prod + # For the MOTD csi_security_category: Low csi_primary_contact: Fedora admins - admin@fedoraproject.org diff --git a/inventory/group_vars/anitya-frontend b/inventory/group_vars/anitya-frontend index c64bda7744..63c115e0e9 100644 --- a/inventory/group_vars/anitya-frontend +++ b/inventory/group_vars/anitya-frontend @@ -30,7 +30,22 @@ fedmsg_certs: - service: anitya owner: root group: apache + can_send: + - anitya.distro.add + - anitya.distro.edit + - anitya.distro.remove + - anitya.project.add + - anitya.project.add.tried + - anitya.project.edit + - anitya.project.map.new + - anitya.project.map.remove + - anitya.project.map.update + - anitya.project.remove + - anitya.project.version.remove + - anitya.project.version.update +fedmsg_prefix: org.release-monitoring +fedmsg_env: prod # For the MOTD csi_security_category: Low diff --git a/inventory/group_vars/ask b/inventory/group_vars/ask index d24a2ed245..11d282d7e9 100644 --- a/inventory/group_vars/ask +++ b/inventory/group_vars/ask @@ -25,6 +25,12 @@ fedmsg_certs: - service: askbot owner: root group: apache + can_send: + - askbot.post.delete + - askbot.post.edit + - askbot.post.flag_offensive.add + - askbot.post.flag_offensive.delete + - askbot.tag.update # For the MOTD diff --git a/inventory/group_vars/ask-stg b/inventory/group_vars/ask-stg index d24a2ed245..95118ec775 100644 --- a/inventory/group_vars/ask-stg +++ b/inventory/group_vars/ask-stg @@ -25,7 +25,12 @@ fedmsg_certs: - service: askbot owner: 
root group: apache - + can_send: + - askbot.post.delete + - askbot.post.edit + - askbot.post.flag_offensive.add + - askbot.post.flag_offensive.delete + - askbot.tag.update # For the MOTD csi_security_category: Low diff --git a/inventory/group_vars/autosign b/inventory/group_vars/autosign index e4f8dbc01a..952ddee128 100644 --- a/inventory/group_vars/autosign +++ b/inventory/group_vars/autosign @@ -13,10 +13,10 @@ host_group: autosign # For the MOTD csi_security_category: High csi_primary_contact: Release Engineering - rel-eng@lists.fedoraproject.org -csi_purpose: Provides frontend (reverse) proxy for most web applications +csi_purpose: Automatically sign Rawhide and Branched packages csi_relationship: | - This host runs the autosigner.py script which should automatically sign new - rawhide and branched builds. It listens to koji over fedmsg for + This host will run the autosigner.py script which should automatically sign + new rawhide and branched builds. It listens to koji over fedmsg for notifications of new builds, and then asks sigul, the signing server, to sign the rpms and store the new rpm header back in Koji. 
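The `can_send` lists being added to these group_vars are what the new `invert_fedmsg_policy` filter in filter_plugins/fedmsg.py consumes. Below is a minimal standalone sketch of that inversion; the sample `groups`/`hostvars` data is made up for illustration, and `setdefault` replaces the original's get-then-append idiom without changing behavior:

```python
# Standalone sketch of the invert_fedmsg_policy filter added in
# filter_plugins/fedmsg.py: it inverts host -> can_send topic lists
# (like the fedmsg_certs entries above) into topic -> services.

def invert_fedmsg_policy(groups, hostvars, env):
    if env == 'staging':
        hosts = groups['staging']
    else:
        hosts = [h for h in groups['all'] if h not in groups['staging']]

    inverted = {}
    for host in hosts:
        prefix = '.'.join([hostvars[host]['fedmsg_prefix'],
                           hostvars[host]['fedmsg_env']])
        fqdn = hostvars[host].get('fedmsg_fqdn', host)
        for cert in hostvars[host]['fedmsg_certs']:
            for topic in cert.get('can_send', []):
                key = prefix + '.' + topic
                inverted.setdefault(key, []).append(cert['service'] + '-' + fqdn)

    # Sort keys and values so the result comes out in a reliable
    # (idempotent) order, as the real filter does.
    return sorted((k, sorted(v)) for k, v in inverted.items())

# Made-up sample data in the shape of the group_vars above.
groups = {'all': ['badges01'], 'staging': []}
hostvars = {'badges01': {
    'fedmsg_prefix': 'org.fedoraproject',
    'fedmsg_env': 'prod',
    'fedmsg_certs': [{'service': 'fedbadges',
                      'can_send': ['fedbadges.badge.award']}],
}}
print(invert_fedmsg_policy(groups, hostvars, 'production'))
# -> [('org.fedoraproject.prod.fedbadges.badge.award', ['fedbadges-badges01'])]
```

Returning a sorted list of tuples rather than a dict keeps nightly check/diff runs quiet: an unordered mapping would reorder between runs and show up as spurious changes.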
diff --git a/inventory/group_vars/badges-backend b/inventory/group_vars/badges-backend index f00415f65e..af1e8f8596 100644 --- a/inventory/group_vars/badges-backend +++ b/inventory/group_vars/badges-backend @@ -20,6 +20,9 @@ fedmsg_certs: - service: fedbadges owner: root group: fedmsg + can_send: + - fedbadges.badge.award + - fedbadges.person.rank.advance # For the MOTD diff --git a/inventory/group_vars/badges-backend-stg b/inventory/group_vars/badges-backend-stg index f100c1b380..d336373f17 100644 --- a/inventory/group_vars/badges-backend-stg +++ b/inventory/group_vars/badges-backend-stg @@ -20,6 +20,9 @@ fedmsg_certs: - service: fedbadges owner: root group: fedmsg + can_send: + - fedbadges.badge.award + - fedbadges.person.rank.advance # For the MOTD diff --git a/inventory/group_vars/badges-web b/inventory/group_vars/badges-web index 336d376f7a..43ddf53f01 100644 --- a/inventory/group_vars/badges-web +++ b/inventory/group_vars/badges-web @@ -4,13 +4,15 @@ mem_size: 4096 num_cpus: 2 freezes: false -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: tahrir +wsgi_procs: 2 +wsgi_threads: 2 -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +tcp_ports: [ 80 ] # Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,6 +27,10 @@ fedmsg_certs: - service: tahrir owner: root group: tahrir + can_send: + - fedbadges.badge.award + - fedbadges.person.rank.advance + - fedbadges.person.login.first # For the MOTD diff --git a/inventory/group_vars/badges-web-stg b/inventory/group_vars/badges-web-stg index 2bbe4a2e43..360e9e6842 100644 --- a/inventory/group_vars/badges-web-stg +++ b/inventory/group_vars/badges-web-stg @@ -4,13 +4,15 @@ lvm_size: 20000 mem_size: 1024 num_cpus: 2 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: tahrir +wsgi_procs: 2 +wsgi_threads: 2 -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +tcp_ports: [ 80 ] # Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,6 +27,10 @@ fedmsg_certs: - service: tahrir owner: root group: tahrir + can_send: + - fedbadges.badge.award + - fedbadges.person.rank.advance + - fedbadges.person.login.first # For the MOTD diff --git a/inventory/group_vars/bastion b/inventory/group_vars/bastion index 36eeaf8bda..071ac545eb 100644 --- a/inventory/group_vars/bastion +++ b/inventory/group_vars/bastion @@ -19,7 +19,7 @@ custom_rules: [ # # allow a bunch of sysadmin groups here so they can access internal stuff # -fas_client_groups: sysadmin-ask,sysadmin-web,sysadmin-main,sysadmin-cvs,sysadmin-build,sysadmin-noc,sysadmin-releng,sysadmin-dba,sysadmin-hosted,sysadmin-tools,sysadmin-spin,sysadmin-cloud,fi-apprentice,sysadmin-darkserver,sysadmin-badges,sysadmin-troubleshoot,sysadmin-qa,sysadmin-centos,sysadmin-ppc +fas_client_groups: sysadmin-ask,sysadmin-web,sysadmin-main,sysadmin-cvs,sysadmin-build,sysadmin-noc,sysadmin-releng,sysadmin-dba,sysadmin-hosted,sysadmin-tools,sysadmin-spin,sysadmin-cloud,fi-apprentice,sysadmin-darkserver,sysadmin-badges,sysadmin-troubleshoot,sysadmin-qa,sysadmin-centos,sysadmin-ppc,sysadmin-koschei # # This is a postfix gateway. 
This will pick up gateway postfix config in base diff --git a/inventory/group_vars/beaker-stg b/inventory/group_vars/beaker-stg new file mode 100644 index 0000000000..d467691029 --- /dev/null +++ b/inventory/group_vars/beaker-stg @@ -0,0 +1,29 @@ +--- +lvm_size: 50000 +mem_size: 4096 +num_cpus: 2 + +tcp_ports: [ 80, 443, 8000 ] +udp_ports: [ 69 ] +fas_client_groups: sysadmin-qa,sysadmin-main,fi-apprentice +nrpe_procs_warn: 250 +nrpe_procs_crit: 300 + +freezes: false + +# settings for the beaker db, server and lab controller +beaker_db_host: localhost +beaker_db_name: beaker +beaker_db_user: "{{ stg_beaker_db_user }}" +beaker_db_password: "{{ stg_beaker_db_password }}" +mariadb_root_password: "{{ stg_beaker_mariadb_root_password }}" + +beaker_server_url: "https://beaker.stg.qa.fedoraproject.org" +beaker_server_cname: "beaker.stg.fedoraproject.org" +beaker_server_hostname: "beaker-stg01.qa.fedoraproject.org" +beaker_server_admin_user: "{{ stg_beaker_server_admin_user }}" +beaker_server_admin_pass: "{{ stg_beaker_server_admin_pass }}" +beaker_server_email: "sysadmin-qa-members@fedoraproject.org" + +beaker_lab_controller_username: "host/beaker01.qa.fedoraproject.org" +beaker_lab_controller_password: "{{ stg_beaker_lab_controller_password }}" diff --git a/inventory/group_vars/beaker-virthosts b/inventory/group_vars/beaker-virthosts new file mode 100644 index 0000000000..783fa86669 --- /dev/null +++ b/inventory/group_vars/beaker-virthosts @@ -0,0 +1,10 @@ +--- +virthost: true +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 + +libvirt_remote_pubkey: 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsxg20+vmLTt/U23x6yBtxU6N2Ool8ddlC5TFwr3FktCM7hcxkQ/funJ3VD5v9iN7Qg09g2YsPaPTfvmOPOP4bzX+/Fk8vJJb5nVg++XbS80Uw62eofr8g68ZPf6IWLEBiZ8/hmumK3TxTmsj/jn17bZBFTcQL7sB7Q4y7TxODt+5W9/0mJTLXbKoCvV+BCpxEfokx+50vVcX5CxXLHdgrdhPzKHcBHKtX6d2W8xzFj2dCThgAXl5tULYI1xP0BYTOtG+RaTNQWme4JxNlQZB8xbCxN2U+e1NpZl1Hn7Y9MbRL+nLfMIuWNJjYzUTGP3o9m2Tl9RCc2nhuS652rjfcQ== tflink@imagebuilder.qa.fedoraproject.org' 
+libvirt_user: "{{ beaker_libvirt_user }}" + +# beaker is not a production service, so the virthosts aren't frozen +freezes: false diff --git a/inventory/group_vars/bkernel b/inventory/group_vars/bkernel index 3d86bd862e..a75ec90ca2 100644 --- a/inventory/group_vars/bkernel +++ b/inventory/group_vars/bkernel @@ -1,2 +1,6 @@ --- host_group: kojibuilder + +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http://koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/bodhi b/inventory/group_vars/bodhi index 9909650ff8..cdf09d1b60 100644 --- a/inventory/group_vars/bodhi +++ b/inventory/group_vars/bodhi @@ -7,8 +7,7 @@ lvm_size: 40000 mem_size: 4096 num_cpus: 2 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file tcp_ports: [ 80, 443, # These 16 ports are used by fedmsg. One for each wsgi thread. @@ -28,3 +27,30 @@ fedmsg_certs: - service: bodhi owner: root group: bodhi + can_send: + - bodhi.buildroot_override.tag + - bodhi.buildroot_override.untag + - bodhi.stack.delete + - bodhi.stack.save + - bodhi.update.comment + - bodhi.update.complete.testing + - bodhi.update.edit + - bodhi.update.karma.threshold + - bodhi.update.request.obsolete + - bodhi.update.request.revoke + - bodhi.update.request.stable + - bodhi.update.request.testing + - bodhi.update.request.unpush + + # Things that only the mash does - not the web UI + #- bodhi.mashtask.complete + #- bodhi.mashtask.mashing + #- bodhi.mashtask.start + #- bodhi.mashtask.sync.done + #- bodhi.mashtask.sync.wait + #- bodhi.errata.publish + #- bodhi.update.eject + + # Rsync messages that get run from somewhere else entirely. 
+ #- bodhi.updates.epel.sync + #- bodhi.updates.fedora.sync diff --git a/inventory/group_vars/bodhi-backend b/inventory/group_vars/bodhi-backend new file mode 100644 index 0000000000..fb647dddee --- /dev/null +++ b/inventory/group_vars/bodhi-backend @@ -0,0 +1,49 @@ +--- +# common items for the releng-* boxes +lvm_size: 100000 +mem_size: 16384 +num_cpus: 16 +nm: 255.255.255.0 +gw: 10.5.125.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }} nameserver={{ dns }} + ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none + ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none" + --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio + --autostart --noautoconsole + +# With 16 cpus, theres a bunch more kernel threads +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 + +host_group: releng + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: root +- service: bodhi + owner: root + group: masher + can_send: + - bodhi.mashtask.complete + - bodhi.mashtask.mashing + - bodhi.mashtask.start + - bodhi.mashtask.sync.done + - bodhi.mashtask.sync.wait + - bodhi.errata.publish + - bodhi.update.eject + # The ftp sync messages get run here too. 
+ - bodhi.updates.epel.sync + - bodhi.updates.fedora.sync + +nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3" diff --git a/inventory/group_vars/bodhi-backend-stg b/inventory/group_vars/bodhi-backend-stg new file mode 100644 index 0000000000..662032dee3 --- /dev/null +++ b/inventory/group_vars/bodhi-backend-stg @@ -0,0 +1,46 @@ +--- +# common items for the releng-* boxes +lvm_size: 100000 +mem_size: 4096 +num_cpus: 2 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }} nameserver={{ dns }} + ip={{ eth0_ip }} netmask={{ nm }} gateway={{ gw }} dns={{ dns }}" + --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio + --autostart --noautoconsole + +# With 16 cpus, theres a bunch more kernel threads +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 + +host_group: releng + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: root +#- service: bodhi +# owner: root +# group: masher +# can_send: +# - bodhi.mashtask.complete +# - bodhi.mashtask.mashing +# - bodhi.mashtask.start +# - bodhi.mashtask.sync.done +# - bodhi.mashtask.sync.wait +# - bodhi.errata.publish +# - bodhi.update.eject +# # The ftp sync messages get run here too. 
+# - bodhi.updates.epel.sync +# - bodhi.updates.fedora.sync diff --git a/inventory/group_vars/bodhi-stg b/inventory/group_vars/bodhi-stg index 9909650ff8..329ad6aa04 100644 --- a/inventory/group_vars/bodhi-stg +++ b/inventory/group_vars/bodhi-stg @@ -28,3 +28,30 @@ fedmsg_certs: - service: bodhi owner: root group: bodhi + can_send: + - bodhi.buildroot_override.tag + - bodhi.buildroot_override.untag + - bodhi.stack.delete + - bodhi.stack.save + - bodhi.update.comment + - bodhi.update.complete.testing + - bodhi.update.edit + - bodhi.update.karma.threshold + - bodhi.update.request.obsolete + - bodhi.update.request.revoke + - bodhi.update.request.stable + - bodhi.update.request.testing + - bodhi.update.request.unpush + + # Things that only the mash does - not the web UI + #- bodhi.mashtask.complete + #- bodhi.mashtask.mashing + #- bodhi.mashtask.start + #- bodhi.mashtask.sync.done + #- bodhi.mashtask.sync.wait + #- bodhi.errata.publish + #- bodhi.update.eject + + # Rsync messages that get run from somewhere else entirely. + #- bodhi.updates.epel.sync + #- bodhi.updates.fedora.sync diff --git a/inventory/group_vars/bodhi2-stg b/inventory/group_vars/bodhi2-stg new file mode 100644 index 0000000000..b1b3a99672 --- /dev/null +++ b/inventory/group_vars/bodhi2-stg @@ -0,0 +1,34 @@ +--- +# Define resources for this group of hosts here. +jobrunner: false +epelmasher: false + +lvm_size: 40000 +mem_size: 4096 +num_cpus: 2 + +# for systems that do not match the above - specify the same parameter in +# the host_vars/$hostname file + +tcp_ports: [ 80, 443, + # These 16 ports are used by fedmsg. One for each wsgi thread. + 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, + 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] + +# Neeed for rsync from log01 for logs. 
+custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] + +fas_client_groups: sysadmin-noc + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: bodhi + owner: root + group: bodhi + +# Mount /mnt/fedora_koji as read-only in staging +nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid" +datacenter: staging diff --git a/inventory/group_vars/bugzilla2fedmsg b/inventory/group_vars/bugzilla2fedmsg index c3af917c33..9622f8e93a 100644 --- a/inventory/group_vars/bugzilla2fedmsg +++ b/inventory/group_vars/bugzilla2fedmsg @@ -19,6 +19,9 @@ fedmsg_certs: - service: bugzilla2fedmsg owner: root group: fedmsg + can_send: + - bugzilla.bug.new + - bugzilla.bug.update # For the MOTD csi_security_category: Low diff --git a/inventory/group_vars/bugzilla2fedmsg-stg b/inventory/group_vars/bugzilla2fedmsg-stg index 6035ef5955..901380bf32 100644 --- a/inventory/group_vars/bugzilla2fedmsg-stg +++ b/inventory/group_vars/bugzilla2fedmsg-stg @@ -19,6 +19,9 @@ fedmsg_certs: - service: bugzilla2fedmsg owner: root group: fedmsg + can_send: + - bugzilla.bug.new + - bugzilla.bug.update # For the MOTD csi_security_category: Low diff --git a/inventory/group_vars/buildaarch64 b/inventory/group_vars/buildaarch64 index d44142a7ac..c87ba0b0b1 100644 --- a/inventory/group_vars/buildaarch64 +++ b/inventory/group_vars/buildaarch64 @@ -1,4 +1,8 @@ --- host_group: kojibuilder -fas_client_groups: sysadmin-releng +fas_client_groups: sysadmin-releng,sysadmin-secondary sudoers: "{{ private }}/files/sudo/buildaarch64-sudoers" + +koji_server_url: "http://arm.koji.fedoraproject.org/kojihub" +koji_weburl: "http://arm.koji.fedoraproject.org/koji" +koji_topurl: "http://armpkgs.fedoraproject.org/" diff --git a/inventory/group_vars/buildarm b/inventory/group_vars/buildarm index c420539fb4..3090d56eb2 100644 --- a/inventory/group_vars/buildarm 
+++ b/inventory/group_vars/buildarm @@ -1,3 +1,7 @@ host_group: kojibuilder fas_client_groups: sysadmin-releng sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" + +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http:/koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/buildhw b/inventory/group_vars/buildhw index 1beb81c50f..ae54fda25d 100644 --- a/inventory/group_vars/buildhw +++ b/inventory/group_vars/buildhw @@ -3,3 +3,7 @@ host_group: kojibuilder fas_client_groups: sysadmin-releng sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" freezes: true + +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http://koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/buildppc b/inventory/group_vars/buildppc new file mode 100644 index 0000000000..a369965d0f --- /dev/null +++ b/inventory/group_vars/buildppc @@ -0,0 +1,7 @@ +host_group: kojibuilder +fas_client_groups: sysadmin-releng +#sudoers: "{{ private }}/files/sudo/ppc-releng-sudoers" + +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http://koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/buildppc64 b/inventory/group_vars/buildppc64 new file mode 100644 index 0000000000..5861afe8c5 --- /dev/null +++ b/inventory/group_vars/buildppc64 @@ -0,0 +1,8 @@ +--- +host_group: kojibuilder +fas_client_groups: sysadmin-releng,sysadmin-secondary +#sudoers: "{{ private }}/files/sudo/buildppc64-sudoers" + +koji_server_url: "http://ppc.koji.fedoraproject.org/kojihub" +koji_weburl: "http://ppc.koji.fedoraproject.org/koji" +koji_topurl: "http://ppcpkgs.fedoraproject.org/" diff --git a/inventory/group_vars/buildvm b/inventory/group_vars/buildvm index dbef487046..5f58b3e764 100644 --- a/inventory/group_vars/buildvm +++ b/inventory/group_vars/buildvm @@ -25,3 +25,7 @@ 
virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} host_group: kojibuilder fas_client_groups: sysadmin-releng sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" + +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http://koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/buildvm-stg b/inventory/group_vars/buildvm-stg index 9bc9a95522..73dac8572b 100644 --- a/inventory/group_vars/buildvm-stg +++ b/inventory/group_vars/buildvm-stg @@ -25,3 +25,7 @@ fas_client_groups: sysadmin-releng sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" datacenter: staging nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid" + +koji_server_url: "http://koji.stg.fedoraproject.org/kojihub" +koji_weburl: "http://koji.stg.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.stg.fedoraproject.org/" diff --git a/inventory/group_vars/composers b/inventory/group_vars/composers new file mode 100644 index 0000000000..d27d2dfa38 --- /dev/null +++ b/inventory/group_vars/composers @@ -0,0 +1,58 @@ +--- +# common items for the releng-* boxes +lvm_size: 100000 +mem_size: 16384 +num_cpus: 16 +nm: 255.255.255.0 +gw: 10.5.125.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }} nameserver={{ dns }} + ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none + ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none" + --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio + --autostart --noautoconsole + +# With 16 cpus, theres a bunch more kernel threads +nrpe_procs_warn: 900 
+nrpe_procs_crit: 1000 + +host_group: releng + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: root +- service: bodhi + owner: root + group: masher + can_send: + - compose.branched.complete + - compose.branched.mash.complete + - compose.branched.mash.start + - compose.branched.pungify.complete + - compose.branched.pungify.start + - compose.branched.rsync.complete + - compose.branched.rsync.start + - compose.branched.start + - compose.epelbeta.complete + - compose.rawhide.complete + - compose.rawhide.mash.complete + - compose.rawhide.mash.start + - compose.rawhide.rsync.complete + - compose.rawhide.rsync.start + - compose.rawhide.start + +nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3" + +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http://koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/composers-stg b/inventory/group_vars/composers-stg new file mode 100644 index 0000000000..beae20fab5 --- /dev/null +++ b/inventory/group_vars/composers-stg @@ -0,0 +1,4 @@ +--- +koji_server_url: "http://koji.stg.fedoraproject.org/kojihub" +koji_weburl: "http://koji.stg.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/copr b/inventory/group_vars/copr index d679f280f9..d691316077 100644 --- a/inventory/group_vars/copr +++ b/inventory/group_vars/copr @@ -3,9 +3,14 @@ devel: false _forward_src: "forward" # don't forget to update ip in ./copr-keygen, due to custom firewall rules -copr_backend_ips: "172.16.5.5 209.132.184.142" -keygen_host: "172.16.5.25" + +copr_backend_ips: ["172.25.32.4", "209.132.184.48"] +keygen_host: "172.25.32.5" + resolvconf: "resolv.conf/cloud" backend_base_url: "https://copr-be.cloud.fedoraproject.org" postfix_maincf: "postfix/main.cf/main.cf.copr" + +frontend_base_url: "https://copr.fedoraproject.org" +dist_git_base_url: 
"copr-dist-git.fedorainfracloud.org" diff --git a/inventory/group_vars/copr-back b/inventory/group_vars/copr-back index 1c6b0c420b..77123560aa 100644 --- a/inventory/group_vars/copr-back +++ b/inventory/group_vars/copr-back @@ -1,15 +1,17 @@ --- _lighttpd_conf_src: "lighttpd/lighttpd.conf" -copr_nova_auth_url: "https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0" +copr_nova_auth_url: "https://fedorainfracloud.org:5000/v2.0" copr_nova_tenant_id: "undefined_tenant_id" copr_nova_tenant_name: "copr" copr_nova_username: "copr" -copr_builder_image_name: "builder_base_image_2015_04_01" -copr_builder_flavor_name: "m1.builder" +# copr_builder_image_name: "Fedora-Cloud-Base-20141203-21" +copr_builder_image_name: "builder-2015-05-27" +copr_builder_flavor_name: "ms2.builder" copr_builder_network_name: "copr-net" copr_builder_key_name: "buildsys" +copr_builder_security_groups: "ssh-anywhere-copr,default,ssh-from-persistent-copr" fedmsg_enabled: "true" diff --git a/inventory/group_vars/copr-back-stg b/inventory/group_vars/copr-back-stg index 7084af84cb..e61d7ac135 100644 --- a/inventory/group_vars/copr-back-stg +++ b/inventory/group_vars/copr-back-stg @@ -1,19 +1,20 @@ --- _lighttpd_conf_src: "lighttpd/lighttpd_dev.conf" -copr_nova_auth_url: "https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0" +copr_nova_auth_url: "https://fedorainfracloud.org:5000/v2.0" copr_nova_tenant_id: "566a072fb1694950998ad191fee3833b" copr_nova_tenant_name: "coprdev" copr_nova_username: "copr" -copr_builder_image_name: "builder_base_image_2015_04_01" -copr_builder_flavor_name: "m1.builder" +copr_builder_image_name: "builder-2015-05-27" +copr_builder_flavor_name: "ms2.builder" copr_builder_network_name: "coprdev-net" copr_builder_key_name: "buildsys" +copr_builder_security_groups: "ssh-anywhere-coprdev,default,ssh-from-persistent-coprdev" fedmsg_enabled: "false" -do_sign: "false" +do_sign: "true" spawn_in_advance: "true" frontend_base_url: "http://copr-fe-dev.cloud.fedoraproject.org" diff --git 
a/inventory/group_vars/copr-dist-git b/inventory/group_vars/copr-dist-git new file mode 100644 index 0000000000..4c68998422 --- /dev/null +++ b/inventory/group_vars/copr-dist-git @@ -0,0 +1,5 @@ +--- +tcp_ports: [22, 80] +datacenter: cloud +freezes: false + diff --git a/inventory/group_vars/copr-dist-git-stg b/inventory/group_vars/copr-dist-git-stg new file mode 100644 index 0000000000..90a2dc104b --- /dev/null +++ b/inventory/group_vars/copr-dist-git-stg @@ -0,0 +1,4 @@ +--- +tcp_ports: [22, 80] +datacenter: cloud +freezes: false diff --git a/inventory/group_vars/copr-keygen b/inventory/group_vars/copr-keygen index 1bf9586765..822e397e4b 100644 --- a/inventory/group_vars/copr-keygen +++ b/inventory/group_vars/copr-keygen @@ -2,10 +2,10 @@ tcp_ports: [22] # http + signd dest ports -custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.16.5.5 --dport 80 -j ACCEPT', - '-A INPUT -p tcp -m tcp -s 209.132.184.142 --dport 80 -j ACCEPT', - '-A INPUT -p tcp -m tcp -s 172.16.5.5 --dport 5167 -j ACCEPT', - '-A INPUT -p tcp -m tcp -s 209.132.184.142 --dport 5167 -j ACCEPT'] +custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.25.32.4 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 209.132.184.48 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 172.25.32.4 --dport 5167 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 209.132.184.48 --dport 5167 -j ACCEPT'] datacenter: cloud diff --git a/inventory/group_vars/copr-keygen-stg b/inventory/group_vars/copr-keygen-stg index c145a19463..c54e78e23e 100644 --- a/inventory/group_vars/copr-keygen-stg +++ b/inventory/group_vars/copr-keygen-stg @@ -3,10 +3,10 @@ copr_hostbase: copr-keygen-dev tcp_ports: [] # http + signd dest ports -custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.16.5.24 --dport 80 -j ACCEPT', - '-A INPUT -p tcp -m tcp -s 209.132.184.179 --dport 80 -j ACCEPT', - '-A INPUT -p tcp -m tcp -s 172.16.5.24 --dport 5167 -j ACCEPT', - '-A INPUT -p tcp -m tcp -s 209.132.184.179 --dport 5167 -j ACCEPT'] +custom_rules: [ '-A INPUT -p tcp -m tcp -s 
172.25.32.13 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 209.132.184.53 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 172.25.32.13 --dport 5167 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 209.132.184.53 --dport 5167 -j ACCEPT'] datacenter: cloud diff --git a/inventory/group_vars/copr-stg b/inventory/group_vars/copr-stg index 9addbe656f..d1d01953df 100644 --- a/inventory/group_vars/copr-stg +++ b/inventory/group_vars/copr-stg @@ -4,9 +4,14 @@ devel: true _forward_src: "forward_dev" # don't forget to update ip in ./copr-keygen-stg, due to custom firewall rules -copr_backend_ips: "172.16.5.24 209.132.184.179" -keygen_host: "172.16.1.6" + +copr_backend_ips: ["172.25.32.13", "209.132.184.53"] +keygen_host: "172.25.32.11" + resolvconf: "resolv.conf/cloud" backend_base_url: "http://copr-be-dev.cloud.fedoraproject.org" postfix_maincf: "postfix/main.cf/main.cf.copr" + +frontend_base_url: "http://copr-fe-dev.cloud.fedoraproject.org" +dist_git_base_url: "copr-dist-git-dev.fedorainfracloud.org" diff --git a/inventory/group_vars/dns b/inventory/group_vars/dns index f32edc5a82..17da9d09bf 100644 --- a/inventory/group_vars/dns +++ b/inventory/group_vars/dns @@ -14,3 +14,5 @@ fas_client_groups: sysadmin-main,sysadmin-dns nrpe_procs_warn: 300 nrpe_procs_crit: 500 + +sudoers: "{{ private }}/files/sudo/sysadmin-dns" diff --git a/inventory/group_vars/download-phx2 b/inventory/group_vars/download-phx2 index 86384c4681..111eeca3d1 100644 --- a/inventory/group_vars/download-phx2 +++ b/inventory/group_vars/download-phx2 @@ -6,4 +6,4 @@ nrpe_procs_warn: 900 nrpe_procs_crit: 1000 # nfs mount options, overrides the all/default -nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600" +nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600,nfsvers=3" diff --git a/inventory/group_vars/download-rdu2 b/inventory/group_vars/download-rdu2 index a9c5350867..7a7f06d5a6 100644 --- a/inventory/group_vars/download-rdu2 +++ b/inventory/group_vars/download-rdu2 @@ -6,4 
+6,4 @@ nrpe_procs_warn: 900 nrpe_procs_crit: 1000 # nfs mount options, overrides the all/default -nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600" +nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600,nfsvers=3" diff --git a/inventory/group_vars/elections b/inventory/group_vars/elections index b7963f88c0..11d50ea49c 100644 --- a/inventory/group_vars/elections +++ b/inventory/group_vars/elections @@ -4,11 +4,11 @@ lvm_size: 20000 mem_size: 2048 num_cpus: 2 -tcp_ports: [ 80, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +wsgi_fedmsg_service: fedora_elections +wsgi_procs: 2 +wsgi_threads: 2 +tcp_ports: [ 80 ] # Neeed for rsync from log01 for logs. custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,4 +25,9 @@ fedmsg_certs: - service: fedora_elections owner: root group: apache - + can_send: + - fedora_elections.candidate.delete + - fedora_elections.candidate.edit + - fedora_elections.candidate.new + - fedora_elections.election.edit + - fedora_elections.election.new diff --git a/inventory/group_vars/elections-stg b/inventory/group_vars/elections-stg index f2664bcd6a..97e0345ed6 100644 --- a/inventory/group_vars/elections-stg +++ b/inventory/group_vars/elections-stg @@ -4,10 +4,11 @@ lvm_size: 20000 mem_size: 1024 num_cpus: 2 -tcp_ports: [ 80, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +wsgi_fedmsg_service: fedora_elections +wsgi_procs: 2 +wsgi_threads: 2 + +tcp_ports: [ 80 ] # Neeed for rsync from log01 for logs. 
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -24,4 +25,9 @@ fedmsg_certs: - service: fedora_elections owner: root group: apache - + can_send: + - fedora_elections.candidate.delete + - fedora_elections.candidate.edit + - fedora_elections.candidate.new + - fedora_elections.election.edit + - fedora_elections.election.new diff --git a/inventory/group_vars/fas b/inventory/group_vars/fas index efe273e1e7..61c9b6a8fe 100644 --- a/inventory/group_vars/fas +++ b/inventory/group_vars/fas @@ -7,15 +7,11 @@ num_cpus: 4 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 873, 8443, 8444, - # fas has 40 wsgi processes, each of which need their own port - # open for outbound fedmsg messages. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015, - 3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023, - 3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031, - 3032, 3033, 3034, 3035, 3036, 3037, 3038, 3039, - ] +wsgi_fedmsg_service: fas +wsgi_procs: 40 +wsgi_threads: 1 + +tcp_ports: [ 80, 873, 8443, 8444 ] fas_client_groups: sysadmin-main,sysadmin-accounts @@ -36,3 +32,12 @@ fedmsg_certs: - service: fas owner: root group: fas + can_send: + - fas.group.create + - fas.group.member.apply + - fas.group.member.remove + - fas.group.member.sponsor + - fas.group.update + - fas.role.update + - fas.user.create + - fas.user.update diff --git a/inventory/group_vars/fas-stg b/inventory/group_vars/fas-stg index 3906a8f0bf..8c45063615 100644 --- a/inventory/group_vars/fas-stg +++ b/inventory/group_vars/fas-stg @@ -7,15 +7,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 873, 8443, 8444, - # fas has 40 wsgi processes, each of which need their own port - # open for outbound fedmsg messages. 
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015, - 3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023, - 3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031, - 3032, 3033, 3034, 3035, 3036, 3037, 3038, 3039, - ] +wsgi_fedmsg_service: fas +wsgi_procs: 40 +wsgi_threads: 1 + +tcp_ports: [ 80, 873, 8443, 8444 ] fas_client_groups: sysadmin-main,sysadmin-accounts @@ -36,3 +32,12 @@ fedmsg_certs: - service: fas owner: root group: fas + can_send: + - fas.group.create + - fas.group.member.apply + - fas.group.member.remove + - fas.group.member.sponsor + - fas.group.update + - fas.role.update + - fas.user.create + - fas.user.update diff --git a/inventory/group_vars/fedimg b/inventory/group_vars/fedimg index 748f24feb2..0e124c5584 100644 --- a/inventory/group_vars/fedimg +++ b/inventory/group_vars/fedimg @@ -6,7 +6,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 3000 ] +tcp_ports: [ + # These are all for outgoing fedmsg. + 3000, 3001, 3002, 3003, 3004, 3005, 3006, + 3007, 3008, 3009, 3010, 3011, 3012, 3013, +] # TODO, restrict this down to just sysadmin-releng fas_client_groups: sysadmin-datanommer,sysadmin-releng,sysadmin-fedimg @@ -19,3 +23,6 @@ fedmsg_certs: - service: fedimg owner: root group: fedmsg + can_send: + - fedimg.image.test + - fedimg.image.upload diff --git a/inventory/group_vars/fedimg-stg b/inventory/group_vars/fedimg-stg index 748f24feb2..0e124c5584 100644 --- a/inventory/group_vars/fedimg-stg +++ b/inventory/group_vars/fedimg-stg @@ -6,7 +6,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 3000 ] +tcp_ports: [ + # These are all for outgoing fedmsg. 
+ 3000, 3001, 3002, 3003, 3004, 3005, 3006, + 3007, 3008, 3009, 3010, 3011, 3012, 3013, +] # TODO, restrict this down to just sysadmin-releng fas_client_groups: sysadmin-datanommer,sysadmin-releng,sysadmin-fedimg @@ -19,3 +23,6 @@ fedmsg_certs: - service: fedimg owner: root group: fedmsg + can_send: + - fedimg.image.test + - fedimg.image.upload diff --git a/inventory/group_vars/fedoauth-stg b/inventory/group_vars/fedoauth-stg deleted file mode 100644 index 828c0859ff..0000000000 --- a/inventory/group_vars/fedoauth-stg +++ /dev/null @@ -1,15 +0,0 @@ ---- -# Define resources for this group of hosts here. -lvm_size: 20000 -mem_size: 1024 -num_cpus: 2 - -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file - -tcp_ports: [ 80, 443 ] - -# Neeed for rsync from log01 for logs. -custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] - -fas_client_groups: sysadmin-main,sysadmin-accounts diff --git a/inventory/group_vars/fedocal b/inventory/group_vars/fedocal index cb8eba3da2..9aea15e7a3 100644 --- a/inventory/group_vars/fedocal +++ b/inventory/group_vars/fedocal @@ -27,3 +27,13 @@ fedmsg_certs: - service: fedocal owner: root group: apache + can_send: + - fedocal.calendar.clear + - fedocal.calendar.delete + - fedocal.calendar.new + - fedocal.calendar.update + - fedocal.calendar.upload + - fedocal.meeting.delete + - fedocal.meeting.new + - fedocal.meeting.reminder + - fedocal.meeting.update diff --git a/inventory/group_vars/fedocal-stg b/inventory/group_vars/fedocal-stg index eab10b82bd..3f59abea65 100644 --- a/inventory/group_vars/fedocal-stg +++ b/inventory/group_vars/fedocal-stg @@ -27,3 +27,13 @@ fedmsg_certs: - service: fedocal owner: root group: apache + can_send: + - fedocal.calendar.clear + - fedocal.calendar.delete + - fedocal.calendar.new + - fedocal.calendar.update + - fedocal.calendar.upload + - fedocal.meeting.delete + - 
fedocal.meeting.new + - fedocal.meeting.reminder + - fedocal.meeting.update diff --git a/inventory/group_vars/github2fedmsg b/inventory/group_vars/github2fedmsg index 133ea41526..90d2c36e6f 100644 --- a/inventory/group_vars/github2fedmsg +++ b/inventory/group_vars/github2fedmsg @@ -4,13 +4,15 @@ lvm_size: 20000 mem_size: 2048 num_cpus: 2 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Definining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: github2fedmsg +wsgi_procs: 2 +wsgi_threads: 2 -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +tcp_ports: [ 80 ] # Neeed for rsync from log01 for logs. 
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,3 +27,21 @@ fedmsg_certs: - service: github2fedmsg owner: root group: apache + can_send: + - github.commit_comment + - github.create + - github.delete + - github.fork + - github.issue.comment + - github.issue.reopened + - github.member + - github.page_build + - github.pull_request.closed + - github.pull_request_review_comment + - github.push + - github.release + - github.star + - github.status + - github.team_add + - github.webhook + - github.gollum diff --git a/inventory/group_vars/github2fedmsg-stg b/inventory/group_vars/github2fedmsg-stg index 3c0756a371..0eb48f2dfb 100644 --- a/inventory/group_vars/github2fedmsg-stg +++ b/inventory/group_vars/github2fedmsg-stg @@ -4,13 +4,15 @@ lvm_size: 20000 mem_size: 1024 num_cpus: 1 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: github2fedmsg +wsgi_procs: 2 +wsgi_threads: 2 -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +tcp_ports: [ 80 ] # Need for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,3 +27,21 @@ fedmsg_certs: - service: github2fedmsg owner: root group: apache + can_send: + - github.commit_comment + - github.create + - github.delete + - github.fork + - github.issue.comment + - github.issue.reopened + - github.member + - github.page_build + - github.pull_request.closed + - github.pull_request_review_comment + - github.push + - github.release + - github.star + - github.status + - github.team_add + - github.webhook + - github.gollum diff --git a/inventory/group_vars/hosted b/inventory/group_vars/hosted new file mode 100644 index 0000000000..5f63f720ae --- /dev/null +++ b/inventory/group_vars/hosted @@ -0,0 +1,27 @@ + + +# Even though the hosted nodes are still deployed with puppet, we have this +# definition here so that the fedmsg authz policy can be generated correctly. +# ... when we eventually fully ansibilize these hosts, just fill out the rest of +# this file with the other vars we need. 
--threebean +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: trac + owner: root + group: apache + can_send: + - trac.ticket.delete + - trac.ticket.new + - trac.ticket.update + - trac.wiki.page.delete + - trac.wiki.page.new + - trac.wiki.page.rename + - trac.wiki.page.update + - trac.wiki.page.version.delete +- service: git + owner: root + group: cla_done + can_send: + - trac.git.receive diff --git a/inventory/group_vars/hotness b/inventory/group_vars/hotness index 17153b9eea..6fbf6abad0 100644 --- a/inventory/group_vars/hotness +++ b/inventory/group_vars/hotness @@ -19,3 +19,8 @@ fedmsg_certs: - service: hotness owner: root group: fedmsg + can_send: + - hotness.project.map + - hotness.update.bug.file + - hotness.update.bug.followup + - hotness.update.drop diff --git a/inventory/group_vars/hotness-stg b/inventory/group_vars/hotness-stg index 40f5e1a2b7..785ec261f9 100644 --- a/inventory/group_vars/hotness-stg +++ b/inventory/group_vars/hotness-stg @@ -19,3 +19,8 @@ fedmsg_certs: - service: hotness owner: root group: fedmsg + can_send: + - hotness.project.map + - hotness.update.bug.file + - hotness.update.bug.followup + - hotness.update.drop diff --git a/inventory/group_vars/fedoauth b/inventory/group_vars/ipsilon similarity index 100% rename from inventory/group_vars/fedoauth rename to inventory/group_vars/ipsilon diff --git a/inventory/group_vars/jenkins-cloud b/inventory/group_vars/jenkins-cloud index 964d8868db..2c41c4ee1e 100644 --- a/inventory/group_vars/jenkins-cloud +++ b/inventory/group_vars/jenkins-cloud @@ -1,4 +1,5 @@ postfix_group: jenkins-cloud +freezes: false tcp_ports: [22, 80, 443] @@ -10,3 +11,10 @@ fedmsg_certs: - service: jenkins owner: root group: jenkins + can_send: + - jenkins.build.aborted + - jenkins.build.failed + - jenkins.build.notbuilt + - jenkins.build.passed + - jenkins.build.start + - jenkins.build.unstable diff --git a/inventory/group_vars/jenkins-dev b/inventory/group_vars/jenkins-dev new file mode 100644 
index 0000000000..122c4c5ed8 --- /dev/null +++ b/inventory/group_vars/jenkins-dev @@ -0,0 +1,184 @@ +--- +datacenter: fedorainfracloud +freezes: false + +slaves: +- name: EL6 + host: jenkins-slave-el6.fedorainfracloud.org + description: CentOS 6.6 + labels: el EL el6 EL6 centos CentOS centos6 CentOS6 +- name: EL7 + host: jenkins-slave-el7.fedorainfracloud.org + description: Red Hat Enterprise Linux Server 7.1 + labels: el EL el7 EL7 rhel RHEL rhel7 RHEL7 +- name: F22 + host: jenkins-slave-f22.fedorainfracloud.org + description: Fedora 22 + labels: fedora Fedora fedora22 Fedora22 + +# Packages installed on all Jenkins slaves (Fedora, CentOS) +slave_packages_common: +- java-1.8.0-openjdk-devel +- vim +- subversion +- bzr +- git +- rpmlint +- rpmdevtools +- mercurial +- mock +- gcc +- gcc-c++ +- libjpeg-turbo-devel +- python-bugzilla +- python-pip +- python-virtualenv +- python-coverage +- pylint +- python-argparse +- python-nose +- python-BeautifulSoup +- python-fedora +- python-unittest2 +- python-pep8 +- python-psycopg2 +- postgresql-devel # Required to install python-psycopg2 w/in a venv +- docbook-style-xsl # Required by gimp-help-2 +- make # Required by gimp-help-2 +- automake # Required by gimp-help-2 +- libcurl-devel # Required by blockerbugs +- python-formencode # Required by javapackages-tools +- asciidoc # Required by javapackages-tools +- xmlto # Required by javapackages-tools +- pycairo-devel # Required by dogtail +- packagedb-cli # Required by FedoraReview +- xorg-x11-server-Xvfb # Required by fedora-rube +- libffi-devel # Required by bodhi/cffi/cryptography +- openssl-devel # Required by bodhi/cffi/cryptography +- redis # Required by copr +- createrepo_c # Required by bodhi2 +- python-createrepo_c # Required by bodhi2 +- python-straight-plugin +- pyflakes # Requested by user rholy (ticket #4175) +- koji # Required by koschei (ticket #4852) +- python-hawkey # Required by koschei (ticket #4852) +- python-librepo # Required by koschei (ticket #4852) +- 
rpm-python # Required by koschei (ticket #4852) + +# Packages installed only on Fedora Jenkins slaves +slave_packages_fedora: +- python3 +- python-nose-cover3 +- python3-nose-cover3 +- glibc.i686 +- glibc-devel.i686 +- libstdc++.i686 +- zlib-devel.i686 +- ncurses-devel.i686 +- libX11-devel.i686 +- libXrender.i686 +- libXrandr.i686 +- nspr-devel ## Requested by 389-ds-base +- nss-devel +- svrcore-devel +- openldap-devel +- libdb-devel +- cyrus-sasl-devel +- icu +- libicu-devel +- gcc-c++ +- net-snmp-devel +- lm_sensors-devel +- bzip2-devel +- zlib-devel +- openssl-devel +- tcp_wrappers +- pam-devel +- systemd-units +- policycoreutils-python +- openldap-clients +- perl-Mozilla-LDAP +- nss-tools +- cyrus-sasl-gssapi +- cyrus-sasl-md5 +- libdb-utils +- systemd-units +- perl-Socket +- perl-NetAddr-IP +- pcre-devel ## End of request list for 389-ds-base +- maven # Required by xmvn https://fedorahosted.org/fedora-infrastructure/ticket/4054 +- gtk3-devel # Required by dogtail +- glib2-devel # Required by Cockpit +- libgudev1-devel +- json-glib-devel +- gobject-introspection-devel +- libudisks2-devel +- NetworkManager-glib-devel +- systemd-devel +- accountsservice-devel +- pam-devel +- autoconf +- libtool +- intltool +- jsl +- python-scss +- gtk-doc +- krb5-devel +- sshpass +- perl-Locale-PO +- perl-JSON +- glib-networking +- realmd +- udisks2 +- mdadm +- lvm2 +- sshpass # End requires for Cockpit +- tito # Requested by msrb for javapackages-tools and xmvn (ticket#4113) +- pyflakes # Requested by user rholy (ticket #4175) +- devscripts-minimal # Required by FedoraReview +- firefox # Required for rube +- python-devel # Required for mpi4py +- python3-devel # Required for mpi4py +- pwgen # Required for mpi4py +- openmpi-devel # Required for mpi4py +- mpich2-devel # Required for mpi4py +- pylint # Required by Ipsilon +- python-pep8 +- nodejs-less +- python-openid +- python-openid-teams +- python-openid-cla +- python-cherrypy +- m2crypto +- lasso-python +- python-sqlalchemy +- 
python-ldap +- python-pam +- python-fedora +- freeipa-python +- httpd +- mod_auth_mellon +- postgresql-server +- openssl +- mod_wsgi +- python-jinja2 +- python-psycopg2 +- sssd +- libsss_simpleifp +- openldap-servers +- mod_auth_gssapi +- krb5-server +- socket_wrapper +- nss_wrapper +- python-requests-kerberos +- python-lesscpy # End requires for Ipsilon +- libxml2-python # Required by gimp-docs +- createrepo # Required by dnf +- dia # Required by javapackages-tools ticket #4279 + +# Packages installed only on CentOS Jenkins slaves +slave_packages_centos: +# "setup" is just a placeholder value +- setup +# el7-only +# - python-webob1.4 # Required by bodhi2 diff --git a/inventory/group_vars/kerneltest b/inventory/group_vars/kerneltest index 064983b7ae..ef3c52d0b0 100644 --- a/inventory/group_vars/kerneltest +++ b/inventory/group_vars/kerneltest @@ -4,13 +4,15 @@ lvm_size: 20000 mem_size: 1024 num_cpus: 1 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: kerneltest +wsgi_procs: 2 +wsgi_threads: 1 -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +tcp_ports: [ 80 ] # Need for rsync from log01 for logs.
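The `custom_rules` entries recurring through these files are raw iptables rule fragments that the firewall role splices into its generated config verbatim. As an illustration only (the `*filter`/`COMMIT` framing is an assumed iptables-restore style rendering, not the role's actual template):

```python
# Illustration: how custom_rules-style fragments could be rendered into an
# iptables-restore block. The surrounding template logic is hypothetical;
# only the rule strings themselves come from the inventory files.

custom_rules = [
    "-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT",
    "-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT",
]

def render_filter_table(rules):
    """Join rule fragments into a minimal iptables-restore filter table."""
    return "\n".join(["*filter", *rules, "COMMIT"])

print(render_filter_table(custom_rules))
```

Because the fragments are passed through untouched, any iptables syntax error in an inventory entry surfaces only when the firewall config is reloaded on the host.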
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,3 +27,7 @@ fedmsg_certs: - service: kerneltest owner: root group: apache + can_send: + - kerneltest.release.edit + - kerneltest.release.new + - kerneltest.upload.new diff --git a/inventory/group_vars/kerneltest-stg b/inventory/group_vars/kerneltest-stg index 064983b7ae..ef3c52d0b0 100644 --- a/inventory/group_vars/kerneltest-stg +++ b/inventory/group_vars/kerneltest-stg @@ -4,13 +4,15 @@ lvm_size: 20000 mem_size: 1024 num_cpus: 1 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: kerneltest +wsgi_procs: 2 +wsgi_threads: 1 -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +tcp_ports: [ 80 ] # Need for rsync from log01 for logs.
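The `can_send` lists being attached to each `fedmsg_certs` entry throughout this diff act as a per-certificate allow-list of publishable message topics. A hypothetical sketch of such a check (the `org.fedoraproject.prod.` topic prefix and this function are illustrative assumptions; real enforcement lives in the fedmsg policy machinery):

```python
# Hypothetical topic allow-list check modeled on the can_send entries above.
# Only the topic suffixes come from the inventory; everything else here is
# an illustrative assumption.

def may_publish(topic, can_send, prefix="org.fedoraproject.prod."):
    """Allow a fully-qualified topic only if its suffix is in can_send."""
    return topic.startswith(prefix) and topic[len(prefix):] in set(can_send)

kerneltest_can_send = [
    "kerneltest.release.edit",
    "kerneltest.release.new",
    "kerneltest.upload.new",
]

print(may_publish("org.fedoraproject.prod.kerneltest.upload.new", kerneltest_can_send))  # True
print(may_publish("org.fedoraproject.prod.bodhi.update.request", kerneltest_can_send))   # False
```

This is why the nightly check-diff report cares about these hunks: a service missing a topic in its `can_send` list would have its messages rejected by the policy.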
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] @@ -25,3 +27,7 @@ fedmsg_certs: - service: kerneltest owner: root group: apache + can_send: + - kerneltest.release.edit + - kerneltest.release.new + - kerneltest.upload.new diff --git a/inventory/group_vars/koji b/inventory/group_vars/koji index 39c51af417..d24f2f3840 100644 --- a/inventory/group_vars/koji +++ b/inventory/group_vars/koji @@ -26,8 +26,17 @@ fedmsg_certs: - service: koji owner: root group: apache + can_send: + - buildsys.build.state.change + - buildsys.package.list.change + - buildsys.repo.done + - buildsys.repo.init + - buildsys.rpm.sign + - buildsys.tag + - buildsys.task.state.change + - buildsys.untag -nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid" +nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3" virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} @@ -38,3 +47,5 @@ virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none" --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio --autostart --noautoconsole + +sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" diff --git a/inventory/group_vars/koji-not-yet-ansibilized b/inventory/group_vars/koji-not-yet-ansibilized new file mode 100644 index 0000000000..aa5bf62eda --- /dev/null +++ b/inventory/group_vars/koji-not-yet-ansibilized @@ -0,0 +1,17 @@ +# See the comment with the explanation of this group in ``inventory/inventory`` +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: koji + owner: root + group: apache + can_send: + - buildsys.build.state.change + - buildsys.package.list.change + - buildsys.repo.done + - buildsys.repo.init + - buildsys.rpm.sign + - buildsys.tag + - buildsys.task.state.change + - 
buildsys.untag diff --git a/inventory/group_vars/koji-stg b/inventory/group_vars/koji-stg index 8009bc0adf..f6203b89fb 100644 --- a/inventory/group_vars/koji-stg +++ b/inventory/group_vars/koji-stg @@ -1,7 +1,7 @@ --- # Define resources for this group of hosts here. lvm_size: 30000 -mem_size: 2048 +mem_size: 4096 num_cpus: 2 # for systems that do not match the above - specify the same parameter in @@ -22,5 +22,20 @@ fedmsg_certs: - service: koji owner: root group: apache + can_send: + - buildsys.build.state.change + - buildsys.package.list.change + - buildsys.repo.done + - buildsys.repo.init + - buildsys.rpm.sign + - buildsys.tag + - buildsys.task.state.change + - buildsys.untag -nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid" +# NOTE -- staging mounts read-only +nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid" +sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" + +koji_server_url: "http://koji.stg.fedoraproject.org/kojihub" +koji_weburl: "http://koji.stg.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/kojipkgs b/inventory/group_vars/kojipkgs index 09490d3c4a..e78b43745d 100644 --- a/inventory/group_vars/kojipkgs +++ b/inventory/group_vars/kojipkgs @@ -29,6 +29,7 @@ csi_relationship: | - Things that rely on this host: - all koji builders/buildsystem + - koschei - external users downloading packages from koji. # Need a eth0/eth1 install here. diff --git a/inventory/group_vars/koschei b/inventory/group_vars/koschei new file mode 100644 index 0000000000..ad14e01f52 --- /dev/null +++ b/inventory/group_vars/koschei @@ -0,0 +1,56 @@ +--- +# Define resources for this group of hosts here. 
+lvm_size: 20000 +mem_size: 4096 +num_cpus: 4 + +# for systems that do not match the above - specify the same parameter in +# the host_vars/$hostname file + +koschei_topurl: https://apps.fedoraproject.org/koschei +koschei_pgsql_hostname: db01.phx2.fedoraproject.org +koschei_koji_hub: koji02.phx2.fedoraproject.org +koschei_kojipkgs: kojipkgs.fedoraproject.org +koschei_koji_web: koji.fedoraproject.org +koschei_koji_tag: f24 +koschei_openid_provider: id.fedoraproject.org +koschei_bugzilla: bugzilla.redhat.com + + +tcp_ports: [ 80, 443, + # These 4 are for fedmsg. See also /etc/fedmsg.d/endpoints.py + 3000, 3001, 3002, 3003, +] + +custom_rules: [ + # Need for rsync from log01 for logs. + '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT', + ] + +fas_client_groups: sysadmin-koschei,fi-apprentice + +freezes: false + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: koschei + owner: root + group: koschei + can_send: + - koschei.package.state.change + +# For the MOTD +csi_security_category: Low +csi_primary_contact: Fedora admins - admin@fedoraproject.org +csi_purpose: Koschei continuous integration system +csi_relationship: | + This machine depends on: + - PostgreSQL DB server + - Koji hub and kojipkgs + - fedmsg hub + - pkgdb2 + - bastion (for mail relay) diff --git a/inventory/group_vars/koschei-stg b/inventory/group_vars/koschei-stg new file mode 100644 index 0000000000..b297f0f081 --- /dev/null +++ b/inventory/group_vars/koschei-stg @@ -0,0 +1,56 @@ +--- +# Define resources for this group of hosts here. 
+lvm_size: 20000 +mem_size: 2048 +num_cpus: 2 + +# for systems that do not match the above - specify the same parameter in +# the host_vars/$hostname file + +koschei_topurl: https://apps.stg.fedoraproject.org/koschei +koschei_pgsql_hostname: db01.stg.phx2.fedoraproject.org +koschei_koji_hub: koji01.stg.phx2.fedoraproject.org +koschei_kojipkgs: koji01.stg.phx2.fedoraproject.org +koschei_koji_web: koji.stg.fedoraproject.org +koschei_koji_tag: f23 +koschei_openid_provider: id.stg.fedoraproject.org +koschei_bugzilla: partner-bugzilla.redhat.com + + +tcp_ports: [ 80, 443, + # These 4 are for fedmsg. See also /etc/fedmsg.d/endpoints.py + 3000, 3001, 3002, 3003 +] + +custom_rules: [ + # Need for rsync from log01 for logs. + '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT', + ] + +fas_client_groups: sysadmin-koschei,fi-apprentice + +freezes: false + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: koschei + owner: root + group: koschei + can_send: + - koschei.package.state.change + +# For the MOTD +csi_security_category: Low +csi_primary_contact: Fedora admins - admin@fedoraproject.org +csi_purpose: Koschei continuous integration system +csi_relationship: | + This machine depends on: + - PostgreSQL DB server + - Koji hub and kojipkgs + - fedmsg hub + - pkgdb2 + - bastion (for mail relay) diff --git a/inventory/group_vars/lockbox b/inventory/group_vars/lockbox index 7c82a435ae..7d05524f89 100644 --- a/inventory/group_vars/lockbox +++ b/inventory/group_vars/lockbox @@ -7,3 +7,21 @@ num_cpus: 2 tcp_ports: [ 443 ] fas_client_groups: sysadmin-noc,sysadmin-qa,fi-apprentice + +# These are consumed by a task in roles/fedmsg/base/main.yml +# We don't really use the announce cert.. 
but it was supposed to be a way for +# the FPL and other powers that be to broadcast announcements, like the FCC's +# emergency broadcast system. The cert and group are here.. but no tools on the +# client side are configured to do anything with this yet. +fedmsg_certs: +- service: shell + owner: root + group: sysadmin + can_send: + - ansible.playbook.complete + - ansible.playbook.start +- service: announce + owner: root + group: fedmsg-announce + can_send: + - announce.announcement diff --git a/inventory/group_vars/mailman b/inventory/group_vars/mailman index d9b70f97d2..1fd616fceb 100644 --- a/inventory/group_vars/mailman +++ b/inventory/group_vars/mailman @@ -18,6 +18,8 @@ fedmsg_certs: - service: mailman owner: mailman group: mailman + can_send: + - mailman.receive # Postfix main.cf postfix_group: mailman diff --git a/inventory/group_vars/mailman-stg b/inventory/group_vars/mailman-stg index 503e85666f..72f2a220b6 100644 --- a/inventory/group_vars/mailman-stg +++ b/inventory/group_vars/mailman-stg @@ -17,6 +17,8 @@ fedmsg_certs: - service: mailman owner: mailman group: mailman + can_send: + - mailman.receive # default virt install command is for a single nic-device # define in another group file for more nics (see buildvm) diff --git a/inventory/group_vars/memcached-stg b/inventory/group_vars/memcached-stg new file mode 100644 index 0000000000..581a0248b4 --- /dev/null +++ b/inventory/group_vars/memcached-stg @@ -0,0 +1,12 @@ +--- +# Define resources for this group of hosts here.
+lvm_size: 10000 +mem_size: 1536 +num_cpus: 1 + +# for systems that do not match the above - specify the same parameter in +# the host_vars/$hostname file + +tcp_ports: [ 11211 ] + +fas_client_groups: sysadmin-noc,fi-apprentice,sysadmin-web diff --git a/inventory/group_vars/mirrorlist2 b/inventory/group_vars/mirrorlist2 index 0884e9ed94..8b5854536d 100644 --- a/inventory/group_vars/mirrorlist2 +++ b/inventory/group_vars/mirrorlist2 @@ -5,7 +5,19 @@ num_cpus: 4 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.0.0/16 --dport 80 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 80 -j ACCEPT' ] +custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.0.0/16 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 192.168.0.0/16 --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 67.219.144.68/32 --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 5.175.150.50/32 --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 152.19.134.142/32 --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 140.211.169.196/32 --dport 443 -j ACCEPT', ] + +custom6_rules: [ '-A INPUT -p tcp -m tcp -s 2610:28:3090:3001:dead:beef:cafe:fed3 --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 2604:1580:fe00:0:5054:ff:feae:702c --dport 443 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 2a00:d1a0:1::131 --dport 443 -j ACCEPT', ] + collectd_apache: true fas_client_groups: sysadmin-noc,fi-apprentice nrpe_procs_warn: 500 diff --git a/inventory/group_vars/mirrorlist2-stg b/inventory/group_vars/mirrorlist2-stg index c955c26746..8e42b4f5e8 100644 --- a/inventory/group_vars/mirrorlist2-stg +++ b/inventory/group_vars/mirrorlist2-stg @@ -5,7 +5,10 @@ num_cpus: 4 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file 
-custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.0.0/16 --dport 80 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 80 -j ACCEPT' ] +custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.0.0/16 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 80 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 443 -j ACCEPT' ] + collectd_apache: true fas_client_groups: sysadmin-noc,fi-apprentice,sysadmin-web nrpe_procs_warn: 500 diff --git a/inventory/group_vars/mm b/inventory/group_vars/mm new file mode 100644 index 0000000000..ffdcc7353f --- /dev/null +++ b/inventory/group_vars/mm @@ -0,0 +1,4 @@ +--- +# Define resources for this group of hosts here. +fas_client_groups: sysadmin-noc,sysadmin-web +sudoers: "{{ private }}/files/sudo/mm2-sudoers" diff --git a/inventory/group_vars/mm-backend b/inventory/group_vars/mm-backend new file mode 100644 index 0000000000..919c59b61d --- /dev/null +++ b/inventory/group_vars/mm-backend @@ -0,0 +1,21 @@ +--- +mem_size: 6144 + +fedmsg_certs: +- service: shell + alias: mirrormanager + owner: mirrormanager + group: sysadmin + can_send: + - mirrormanager.netblocks.get + +# For the MOTD +csi_security_category: Medium +csi_primary_contact: Fedora admin - admin@fedoraproject.org +csi_purpose: Run mirrormanager backend cron tasks +csi_relationship: | + TODO - we should document: + + * what kinds of processes run here + * what other services they depend on + * what other services depend on it diff --git a/inventory/group_vars/mm-backend-stg b/inventory/group_vars/mm-backend-stg new file mode 100644 index 0000000000..f553876be0 --- /dev/null +++ b/inventory/group_vars/mm-backend-stg @@ -0,0 +1,19 @@ +--- + +fedmsg_certs: +- service: shell + owner: mirrormanager + group: sysadmin + can_send: + - mirrormanager.netblocks.get + +# For the MOTD +csi_security_category: Medium +csi_primary_contact: Fedora admin - admin@fedoraproject.org +csi_purpose: Run mirrormanager backend cron tasks +csi_relationship: | + 
TODO - we should document: + + * what kinds of processes run here + * what other services they depend on + * what other services depend on it diff --git a/inventory/group_vars/mm-crawler b/inventory/group_vars/mm-crawler new file mode 100644 index 0000000000..9cef392746 --- /dev/null +++ b/inventory/group_vars/mm-crawler @@ -0,0 +1,23 @@ +--- + +fedmsg_certs: +- service: shell + owner: mirrormanager + group: sysadmin + can_send: + - mirrormanager.crawler.complete + - mirrormanager.crawler.start + +# For the MOTD +csi_security_category: Medium +csi_primary_contact: Fedora admin - admin@fedoraproject.org +csi_purpose: Run mirrormanager crawlers +csi_relationship: | + TODO - we should document: + + * what kinds of processes run here + * what other services they depend on + * what other services depend on it + +rsyncd_conf: "rsyncd.conf.crawler" +tcp_ports: [ 873 ] diff --git a/inventory/group_vars/mm-crawler-stg b/inventory/group_vars/mm-crawler-stg new file mode 100644 index 0000000000..1aeb70cb4c --- /dev/null +++ b/inventory/group_vars/mm-crawler-stg @@ -0,0 +1,20 @@ +--- + +fedmsg_certs: +- service: shell + owner: mirrormanager + group: sysadmin + can_send: + - mirrormanager.crawler.complete + - mirrormanager.crawler.start + +# For the MOTD +csi_security_category: Medium +csi_primary_contact: Fedora admin - admin@fedoraproject.org +csi_purpose: Run mirrormanager crawlers +csi_relationship: | + TODO - we should document: + + * what kinds of processes run here + * what other services they depend on + * what other services depend on it diff --git a/inventory/group_vars/mm-frontend b/inventory/group_vars/mm-frontend new file mode 100644 index 0000000000..92bfbc2d24 --- /dev/null +++ b/inventory/group_vars/mm-frontend @@ -0,0 +1,28 @@ +--- +mem_size: 4096 + +tcp_ports: [ 80, + # These 2 ports are used by fedmsg. + # One for each wsgi thread. 
+ 3000, 3001, + ] + +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: mirrormanager2 + owner: root + group: apache + + +# For the MOTD +csi_security_category: Medium +csi_primary_contact: Fedora admin - admin@fedoraproject.org +csi_purpose: Run mirrormanager frontend WSGI app +csi_relationship: | + TODO - we should document: + + * what kinds of processes run here + * what other services they depend on + * what other services depend on it diff --git a/inventory/group_vars/mm-frontend-stg b/inventory/group_vars/mm-frontend-stg new file mode 100644 index 0000000000..e6acaea883 --- /dev/null +++ b/inventory/group_vars/mm-frontend-stg @@ -0,0 +1,27 @@ +--- + +tcp_ports: [ 80, + # These 2 ports are used by fedmsg. + # One for each wsgi thread. + 3000, 3001, + ] + +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: mirrormanager2 + owner: root + group: apache + + +# For the MOTD +csi_security_category: Medium +csi_primary_contact: Fedora admin - admin@fedoraproject.org +csi_purpose: Run mirrormanager frontend WSGI app +csi_relationship: | + TODO - we should document: + + * what kinds of processes run here + * what other services they depend on + * what other services depend on it diff --git a/inventory/group_vars/notifs-backend b/inventory/group_vars/notifs-backend index 6d325e28cb..23dcaf5620 100644 --- a/inventory/group_vars/notifs-backend +++ b/inventory/group_vars/notifs-backend @@ -23,3 +23,8 @@ fedmsg_certs: - service: fmn owner: root group: fedmsg + can_send: + - fmn.filter.update + - fmn.preference.update + - fmn.rule.update + - fmn.confirmation.update diff --git a/inventory/group_vars/notifs-backend-stg b/inventory/group_vars/notifs-backend-stg index 320260a3bc..0ff8e0a515 100644 --- a/inventory/group_vars/notifs-backend-stg +++ b/inventory/group_vars/notifs-backend-stg @@ -19,3 +19,8 @@ fedmsg_certs: - service: fmn owner: root group: fedmsg + can_send: + - fmn.filter.update + - fmn.preference.update + - 
fmn.rule.update + - fmn.confirmation.update diff --git a/inventory/group_vars/notifs-web b/inventory/group_vars/notifs-web index 56a3e692fb..5c14dbb78f 100644 --- a/inventory/group_vars/notifs-web +++ b/inventory/group_vars/notifs-web @@ -7,10 +7,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +wsgi_fedmsg_service: fmn +wsgi_procs: 2 +wsgi_threads: 2 + +tcp_ports: [ 80 ] fas_client_groups: sysadmin-noc,sysadmin-datanommer @@ -22,3 +23,8 @@ fedmsg_certs: - service: fmn owner: root group: apache + can_send: + - fmn.filter.update + - fmn.preference.update + - fmn.rule.update + - fmn.confirmation.update diff --git a/inventory/group_vars/notifs-web-stg b/inventory/group_vars/notifs-web-stg index 56a3e692fb..5c14dbb78f 100644 --- a/inventory/group_vars/notifs-web-stg +++ b/inventory/group_vars/notifs-web-stg @@ -7,10 +7,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. 
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +wsgi_fedmsg_service: fmn +wsgi_procs: 2 +wsgi_threads: 2 + +tcp_ports: [ 80 ] fas_client_groups: sysadmin-noc,sysadmin-datanommer @@ -22,3 +23,8 @@ fedmsg_certs: - service: fmn owner: root group: apache + can_send: + - fmn.filter.update + - fmn.preference.update + - fmn.rule.update + - fmn.confirmation.update diff --git a/inventory/group_vars/nuancier b/inventory/group_vars/nuancier index 0153fd0522..c774420efc 100644 --- a/inventory/group_vars/nuancier +++ b/inventory/group_vars/nuancier @@ -4,15 +4,18 @@ lvm_size: 20000 mem_size: 2048 num_cpus: 2 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: nuancier +wsgi_procs: 2 +wsgi_threads: 2 -tcp_ports: [ 80, 443, +tcp_ports: [ 80, # This port is required by gluster 6996, - # These 16 ports are used by fedmsg. One for each wsgi thread.
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] + ] fas_client_groups: sysadmin-noc,sysadmin-web @@ -26,3 +29,9 @@ fedmsg_certs: - service: nuancier owner: root group: apache + can_send: + - nuancier.candidate.approved + - nuancier.candidate.denied + - nuancier.candidate.new + - nuancier.election.new + - nuancier.election.update diff --git a/inventory/group_vars/nuancier-stg b/inventory/group_vars/nuancier-stg index b045e3dd96..6d2faaf9cd 100644 --- a/inventory/group_vars/nuancier-stg +++ b/inventory/group_vars/nuancier-stg @@ -4,15 +4,18 @@ lvm_size: 20000 mem_size: 1024 num_cpus: 2 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file +# Defining these vars has a number of effects +# 1) mod_wsgi is configured to use the vars for its own setup +# 2) iptables opens enough ports for all threads for fedmsg +# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads +wsgi_fedmsg_service: nuancier +wsgi_procs: 2 +wsgi_threads: 2 -tcp_ports: [ 80, 443, +tcp_ports: [ 80, # This port is required by gluster 6996, - # These 16 ports are used by fedmsg. One for each wsgi thread.
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] + ] fas_client_groups: sysadmin-noc,sysadmin-web @@ -26,3 +29,9 @@ fedmsg_certs: - service: nuancier owner: root group: apache + can_send: + - nuancier.candidate.approved + - nuancier.candidate.denied + - nuancier.candidate.new + - nuancier.election.new + - nuancier.election.update diff --git a/inventory/group_vars/openstack-compute b/inventory/group_vars/openstack-compute index ca2d561131..dde4d96fa3 100644 --- a/inventory/group_vars/openstack-compute +++ b/inventory/group_vars/openstack-compute @@ -1,2 +1,4 @@ --- host_group: openstack-compute +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 diff --git a/inventory/group_vars/osbs-stg b/inventory/group_vars/osbs-stg new file mode 100644 index 0000000000..ec7a549201 --- /dev/null +++ b/inventory/group_vars/osbs-stg @@ -0,0 +1,10 @@ +--- +# Define resources for this group of hosts here. +lvm_size: 60000 +mem_size: 8192 +num_cpus: 2 + +tcp_ports: [ 80, 443 ] + +fas_client_groups: sysadmin-releng,fi-apprentice +sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" diff --git a/inventory/group_vars/pagure b/inventory/group_vars/pagure index cb2d281323..9f30a7785e 100644 --- a/inventory/group_vars/pagure +++ b/inventory/group_vars/pagure @@ -1,28 +1,71 @@ --- # Define resources for this group of hosts here. lvm_size: 20000 -mem_size: 2048 -num_cpus: 2 +mem_size: 8192 +num_cpus: 6 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 22, 80, 443, 9418, - # These 16 ports are used by fedmsg. One for each wsgi thread. 
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015 ] +tcp_ports: [ 22, 25, 80, 443, 9418, + # Used for the eventsource + 8088, + # This is for the pagure public fedmsg relay + 9940] + +stunnel_service: "eventsource" +stunnel_source_port: 8088 +stunnel_destination_port: 8080 + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: pagure + owner: git + group: apache + can_send: + - pagure.issue.assigned.added + - pagure.issue.assigned.reset + - pagure.issue.comment.added + - pagure.issue.dependency.added + - pagure.issue.dependency.removed + - pagure.issue.edit + - pagure.issue.new + - pagure.issue.tag.added + - pagure.issue.tag.removed + - pagure.project.edit + - pagure.project.forked + - pagure.project.new + - pagure.project.tag.edited + - pagure.project.tag.removed + - pagure.project.user.added + - pagure.pull-request.closed + - pagure.pull-request.comment.added + - pagure.pull-request.flag.added + - pagure.pull-request.flag.updated + - pagure.pull-request.new + + +fedmsg_prefix: io.pagure +fedmsg_env: prod fas_client_groups: sysadmin-noc,sysadmin-web -freezes: false +freezes: true postfix_group: vpn.pagure +host_backup_targets: ['/srv/git', '/var/www/releases'] +dbs_to_backup: ['pagure'] + # Configuration for the git-daemon/server git_group: git git_port: 9418 git_server: /usr/libexec/git-core/git-daemon git_server_args: --export-all --syslog --inetd --verbose git_basepath: /srv/git/repositories +git_daemon_user: git # For the MOTD csi_security_category: Low diff --git a/inventory/group_vars/pagure-stg b/inventory/group_vars/pagure-stg index 3e4b4efd67..466e0f9d55 100644 --- a/inventory/group_vars/pagure-stg +++ b/inventory/group_vars/pagure-stg @@ -7,16 +7,54 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 22, 80, 443, 9418, - # These 16 ports are 
used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015 ] +tcp_ports: [ 22, 25, 80, 443, 9418, + # Used for the eventsource server + 8088, + # This is for the pagure public fedmsg relay + 9940] + +stunnel_service: "eventsource" +stunnel_source_port: 8088 +stunnel_destination_port: 8080 + +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: pagure + owner: git + group: apache + can_send: + - pagure.issue.assigned.added + - pagure.issue.assigned.reset + - pagure.issue.comment.added + - pagure.issue.dependency.added + - pagure.issue.dependency.removed + - pagure.issue.edit + - pagure.issue.new + - pagure.issue.tag.added + - pagure.issue.tag.removed + - pagure.project.edit + - pagure.project.forked + - pagure.project.new + - pagure.project.tag.edited + - pagure.project.tag.removed + - pagure.project.user.added + - pagure.pull-request.closed + - pagure.pull-request.comment.added + - pagure.pull-request.flag.added + - pagure.pull-request.flag.updated + - pagure.pull-request.new + +fedmsg_prefix: io.pagure +fedmsg_env: stg fas_client_groups: sysadmin-noc,sysadmin-web freezes: false env: pagure-staging -postfix_group: vpn.pagure +postfix_group: vpn.pagure-stg # Configuration for the git-daemon/server git_group: git @@ -24,6 +62,7 @@ git_port: 9418 git_server: /usr/libexec/git-core/git-daemon git_server_args: --export-all --syslog --inetd --verbose git_basepath: /srv/git/repositories +git_daemon_user: git # For the MOTD csi_security_category: Low diff --git a/inventory/group_vars/people b/inventory/group_vars/people new file mode 100644 index 0000000000..9a6054415b --- /dev/null +++ b/inventory/group_vars/people @@ -0,0 +1,40 @@ +--- +clamscan_mailto: admin@fedoraproject.org +clamscan_paths: +- /srv/ + +# Needed for rsync from log01 for logs.
+custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ] + +git_port: 9418 +git_server: /usr/libexec/git-core/git-daemon +git_server_args: --export-all --syslog --inetd --verbose +git_basepath: / +git_daemon_user: nobody + +fas_client_groups: "@all" + +fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: planet + owner: root + group: planet-user + can_send: + - planet.post.new + +# For the MOTD +csi_security_category: Low +csi_primary_contact: Fedora admins - admin@fedoraproject.org +csi_purpose: Provide hosting space for Fedora contributors and Fedora Planet + +csi_relationship: | + - shell accounts and web space for fedora contributors + - web space for personal yum repos + - shared space for small group/personal git repos + + Please be aware that this is a shared server, and you should not upload + Private/Secret SSH or GPG keys onto this system. Any such keys found + will be deleted. + diff --git a/inventory/group_vars/pkgdb b/inventory/group_vars/pkgdb index cee66e63fa..d6e9196ad7 100644 --- a/inventory/group_vars/pkgdb +++ b/inventory/group_vars/pkgdb @@ -7,10 +7,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread.
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +wsgi_fedmsg_service: pkgdb2 +wsgi_procs: 3 +wsgi_threads: 2 + +tcp_ports: [ 80 ] fas_client_groups: sysadmin-noc,sysadmin-web,sysadmin-releng,sysadmin-cvs @@ -22,3 +23,24 @@ fedmsg_certs: - service: pkgdb owner: root group: apache + alias: pkgdb2 + can_send: + - pkgdb.acl.delete + - pkgdb.acl.update + - pkgdb.admin.action.status.update + - pkgdb.branch.complete + - pkgdb.branch.start + - pkgdb.collection.new + - pkgdb.collection.update + - pkgdb.owner.update + - pkgdb.package.branch.delete + - pkgdb.package.branch.new + - pkgdb.package.branch.request + - pkgdb.package.critpath.update + - pkgdb.package.delete + - pkgdb.package.monitor.update + - pkgdb.package.new + - pkgdb.package.new.request + - pkgdb.package.unretire.request + - pkgdb.package.update + - pkgdb.package.update.status diff --git a/inventory/group_vars/pkgdb-stg b/inventory/group_vars/pkgdb-stg index cee66e63fa..8cef22cecb 100644 --- a/inventory/group_vars/pkgdb-stg +++ b/inventory/group_vars/pkgdb-stg @@ -7,10 +7,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 443, - # These 16 ports are used by fedmsg. One for each wsgi thread. 
- 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015] +wsgi_fedmsg_service: pkgdb2 +wsgi_procs: 2 +wsgi_threads: 2 + +tcp_ports: [ 80 ] fas_client_groups: sysadmin-noc,sysadmin-web,sysadmin-releng,sysadmin-cvs @@ -22,3 +23,24 @@ fedmsg_certs: - service: pkgdb owner: root group: apache + alias: pkgdb2 + can_send: + - pkgdb.acl.delete + - pkgdb.acl.update + - pkgdb.admin.action.status.update + - pkgdb.branch.complete + - pkgdb.branch.start + - pkgdb.collection.new + - pkgdb.collection.update + - pkgdb.owner.update + - pkgdb.package.branch.delete + - pkgdb.package.branch.new + - pkgdb.package.branch.request + - pkgdb.package.critpath.update + - pkgdb.package.delete + - pkgdb.package.monitor.update + - pkgdb.package.new + - pkgdb.package.new.request + - pkgdb.package.unretire.request + - pkgdb.package.update + - pkgdb.package.update.status diff --git a/inventory/group_vars/pkgs b/inventory/group_vars/pkgs index a899b94283..8f995e9094 100644 --- a/inventory/group_vars/pkgs +++ b/inventory/group_vars/pkgs @@ -19,6 +19,7 @@ git_port: 9418 git_server: /usr/libexec/git-core/git-daemon git_server_args: --export-all --syslog --inetd --verbose git_basepath: /srv/git/rpms +git_daemon_user: nobody clamscan_mailto: admin@fedoraproject.org clamscan_paths: @@ -41,9 +42,19 @@ fedmsg_certs: - service: shell owner: root group: sysadmin + can_send: + - git.branch + - git.mass_branch.complete + - git.mass_branch.start + - git.pkgdb2branch.complete + - git.pkgdb2branch.start - service: scm owner: root group: packager + can_send: + - git.receive - service: lookaside owner: root group: apache + can_send: + - git.lookaside.new diff --git a/inventory/group_vars/pkgs-stg b/inventory/group_vars/pkgs-stg index a899b94283..60f797c139 100644 --- a/inventory/group_vars/pkgs-stg +++ b/inventory/group_vars/pkgs-stg @@ -19,6 +19,7 @@ git_port: 9418 git_server: /usr/libexec/git-core/git-daemon git_server_args: --export-all --syslog --inetd --verbose 
git_basepath: /srv/git/rpms +git_daemon_user: nobody clamscan_mailto: admin@fedoraproject.org clamscan_paths: @@ -44,6 +45,15 @@ fedmsg_certs: - service: scm owner: root group: packager + can_send: + - git.branch + - git.mass_branch.complete + - git.mass_branch.start + - git.pkgdb2branch.complete + - git.pkgdb2branch.start + - git.receive - service: lookaside owner: root group: apache + can_send: + - git.lookaside.new diff --git a/inventory/group_vars/proxies b/inventory/group_vars/proxies index c86440a74d..448eaf305e 100644 --- a/inventory/group_vars/proxies +++ b/inventory/group_vars/proxies @@ -42,20 +42,24 @@ custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 6081 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 6081 -j ACCEPT', - # Allow koschei.cloud to talk to the inbound fedmsg relay. - '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.184.151 -j ACCEPT', # Allow jenkins.cloud to talk to the inbound fedmsg relay. '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.184.153 -j ACCEPT', # Allow copr-be.cloud to talk to the inbound fedmsg relay. - '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.184.131 -j ACCEPT', + '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.184.48 -j ACCEPT', # Also, ppc-composer.qa.fedoraproject.org (secondary arch) '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.33 -j ACCEPT', # Also, ppc-hub.qa.fedoraproject.org (secondary arch koji) '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.21 -j ACCEPT', - # Also, s390-hub01.qa.fedoraproject.org (secondary arch) - '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.18 -j ACCEPT', # Also, arm-hub01.qa.fedoraproject.org (secondary arch) '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.31 -j ACCEPT', + + # Allow retrace/faf to talk to the inbound fedmsg relay.
+ # retrace01.qa.fedoraproject.org + '-A INPUT -p tcp -m tcp --dport 9941 -s 10.5.124.171 -j ACCEPT', + # retrace02.qa.fedoraproject.org + '-A INPUT -p tcp -m tcp --dport 9941 -s 10.5.124.172 -j ACCEPT', + # Also, s390-hub01.qa.fedoraproject.org (secondary arch) + '-A INPUT -p tcp -m tcp --dport 9941 -s 10.5.124.191 -j ACCEPT', ] fas_client_groups: sysadmin-noc,fi-apprentice diff --git a/inventory/group_vars/proxies-stg b/inventory/group_vars/proxies-stg index 88b255ff06..b3ee79a735 100644 --- a/inventory/group_vars/proxies-stg +++ b/inventory/group_vars/proxies-stg @@ -41,8 +41,6 @@ custom_rules: [ '-A INPUT -p tcp -m tcp -s 192.168.1.0/24 --dport 6081 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 10.5.126.0/24 --dport 6081 -j ACCEPT', - # Allow koschei.cloud to talk to the inbound fedmsg relay. - '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.184.151 -j ACCEPT', # Allow jenkins.cloud to talk to the inbound fedmsg relay. '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.184.153 -j ACCEPT', # Allow copr-be.cloud to talk to the inbound fedmsg relay. @@ -57,7 +55,16 @@ custom_rules: [ '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.31 -j ACCEPT', # Allow stg.fedoramagazine.org running at vultr.com to talk inbound fedmsg + # Contact cydrobolt about the status of this. It hasn't hit prod status + # yet as of 2015-04-27 (threebean). '-A INPUT -p tcp -m tcp --dport 9941 -s 104.207.133.220 -j ACCEPT', + + # Allow retrace/faf to talk to the inbound fedmsg relay. 
+ # retrace01.qa.fedoraproject.org + '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.28 -j ACCEPT', + # retrace02.qa.fedoraproject.org + '-A INPUT -p tcp -m tcp --dport 9941 -s 209.132.181.34 -j ACCEPT', + ] fas_client_groups: sysadmin-noc,fi-apprentice diff --git a/inventory/group_vars/qadevel-stg b/inventory/group_vars/qa-stg similarity index 61% rename from inventory/group_vars/qadevel-stg rename to inventory/group_vars/qa-stg index 2cc208190a..dfbcd25160 100644 --- a/inventory/group_vars/qadevel-stg +++ b/inventory/group_vars/qa-stg @@ -7,7 +7,7 @@ num_cpus: 1 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -fas_client_groups: sysadmin-qa +fas_client_groups: sysadmin-qa,sysadmin-main,fi-apprentice # default virt install command is for a single nic-device # define in another group file for more nics (see buildvm) @@ -19,30 +19,44 @@ virt_install_command: /usr/bin/virt-install -n {{ inventory_hostname }} -r {{ me ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none" --network=bridge=br0,model=virtio --autostart --noautoconsole -sshd_config: ssh/sshd_config.qadevel -external_hostname: qadevel-stg.qa.fedoraproject.org +sshd_config: ssh/sshd_config.qa-stg +sshd_port: 222 +external_hostname: qadevel-stg.cloud.fedoraproject.org -mariadb_host: db-qa01.qa.fedoraproject.org -mariadb_user: '{{ qadevel_stg_mariadb_user }}' -mariadb_password: '{{ qadevel_stg_mariadb_password }}' -phabricator_db_prefix: 'phabricatorstg' -enable_phabricator_git: False +sslcertfile: qa-stg.qa.fedoraproject.org.cert +sslkeyfile: qa-stg.qa.fedoraproject.org.key +sslintermediatecertfile: '' + +mariadb_host: localhost +mariadb_config: my.cnf.phabricator +mariadb_user: '{{ qa_stg_mariadb_user }}' +mariadb_password: '{{ qa_stg_mariadb_password }}' + +# phabricator config +phabricator_db_prefix: 'phabricator' +enable_phabricator_git: True phabricator_vcs_user: git +phabricator_vcs_user_password: '{{ 
qa_stg_vcs_user_password }}' phabricator_daemon_user: phabdaemon phabroot: /usr/share/ phabricator_filedir: /var/lib/phabricator/files phabricator_repodir: /var/lib/phabricator/repos -phabricator_config_filename: qadevelconfig +phabricator_config_filename: qaconfig phabricator_header_color: 'fluttershy' phabricator_mail_enabled: False +phabricator_mail_domain: stg.fedoraproject.org ircnick: fedoraqabot +phabricator_mysqldump_filename: 'qadevel-stg_phabricator.sql' + +# backup details (for parity with prod, not actually used) backup_dir: /srv/backup backup_username: root backup_ssh_pubkey: ssh-dss AAAAB3NzaC1kc3MAAACBAJr3xqn/hHIXeth+NuXPu9P91FG9jozF3Q1JaGmg6szo770rrmhiSsxso/Ibm2mObqQLCyfm/qSOQRynv6tL3tQVHA6EEx0PNacnBcOV7UowR5kd4AYv82K1vQhof3YTxOMmNIOrdy6deDqIf4sLz1TDHvEDwjrxtFf8ugyZWNbTAAAAFQCS5puRZF4gpNbaWxe6gLzm3rBeewAAAIBcEd6pRatE2Qc/dW0YwwudTEaOCUnHmtYs2PHKbOPds0+Woe1aWH38NiE+CmklcUpyRsGEf3O0l5vm3VrVlnfuHpgt/a/pbzxm0U6DGm2AebtqEmaCX3CIuYzKhG5wmXqJ/z+Hc5MDj2mn2TchHqsk1O8VZM+1Ml6zX3Hl4vvBsQAAAIALDt5NFv6GLuid8eik/nn8NORd9FJPDBJxgVqHNIm08RMC6aI++fqwkBhVPFKBra5utrMKQmnKs/sOWycLYTqqcSMPdWSkdWYjBCSJ/QNpyN4laCmPWLgb3I+2zORgR0EjeV2e/46geS0MWLmeEsFwztpSj4Tv4e18L8Dsp2uB2Q== root@backup03-rdiff-backup +# buildmaster details buildmaster_db_host: localhost buildmaster_template: ci.master.cfg.j2 -buildmaster_endpoint: taskmaster +buildmaster_endpoint: builds buildslave_ssh_pubkey: '' buildslave_port: 9989 buildmaster_dir: /home/buildmaster/master @@ -50,7 +64,24 @@ buildslave_dir: /home/buildslave/slave buildslave_poll_interval: 1800 master_dir: /home/buildmaster/master master_user: buildmaster -deployment_type: qadevel-stg -tcp_ports: [ 80, 222, 443, "{{ buildslave_port }}", 222 ] + +# build details +repo_base: 'https://git.qadevel-stg.cloud.fedoraproject.org/diffusion' +docs_build_dir: /var/www/docs/ + +# for now, we're just doing a local slave so we need the slave vars in here +slave_home: /home/buildslave/ +slave_dir: /home/buildslave/slave +slave_user: buildslave 
+buildslave_name: 'qa-stg01' + +deployment_type: qa-stg +tcp_ports: [ 80, 222, 443, "{{ buildslave_port }}", 3306 ] + +# static sites +static_sites: + - name: docs.{{ external_hostname }} + document_root: /var/www/docs +sslonly: false freezes: false diff --git a/inventory/group_vars/qadevel b/inventory/group_vars/qadevel index 010befc04c..f0327b7243 100644 --- a/inventory/group_vars/qadevel +++ b/inventory/group_vars/qadevel @@ -7,20 +7,24 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file +deployment_type: qadevel-prod fas_client_groups: sysadmin-qa - host_group: qadevel +freezes: false -# default virt install command is for a single nic-device -# define in another group file for more nics (see buildvm) -virt_install_command: /usr/sbin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} - --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} - --vcpus={{ num_cpus }} -l {{ ks_repo }} -x - "ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }} - gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0 - hostname={{ inventory_hostname }}" - --network=bridge=br0 --autostart --noautoconsole +tcp_ports: [ 80, 222, 443, "{{ buildslave_port }}" ] +sshd_port: 222 +sshd_config: ssh/sshd_config.qadevel + +sslcertfile: wildcard.qadevel.cloud.fedoraproject.org.crt +sslkeyfile: wildcard.qadevel.cloud.fedoraproject.org.key +sslintermediatecertfile: wildcard.qadevel.cloud.fedoraproject.org.intermediate.crt + +################################################################################ +# Buildbot Settings +################################################################################ +# buildmaster_db_host: localhost buildmaster_template: ci.master.cfg.j2 buildmaster_endpoint: buildmaster @@ -31,13 +35,59 @@ buildslave_dir: /home/buildslave/slave buildslave_poll_interval: 1800 master_dir: /home/buildmaster/master master_user: buildmaster -external_hostname: qadevel.qa.fedoraproject.org
-deployment_type: qadevel-prod -tcp_ports: [ 80, 222, 443, "{{ buildslave_port }}" ] +external_hostname: qadevel.cloud.fedoraproject.org # for now, we're just doing a local slave so we need the slave vars in here slave_home: /home/buildslave/ slave_dir: /home/buildslave/slave slave_user: buildslave -freezes: false + +################################################################################ +# MariaDB Settings +################################################################################ + +mariadb_host: localhost +mariadb_config: my.cnf.phabricator +mariadb_user: '{{ qadevel_mariadb_user }}' +mariadb_password: '{{ qadevel_mariadb_password }}' + + +################################################################################ +# Phabricator Settings +################################################################################ +phabricator_db_prefix: 'phabricator' +enable_phabricator_git: True +phabricator_vcs_user: git +phabricator_vcs_user_password: '{{ qadevel_vcs_user_password }}' +phabricator_daemon_user: phabdaemon +phabroot: /usr/share/ +phabricator_filedir: /var/lib/phabricator/files +phabricator_repodir: /var/lib/phabricator/repos +phabricator_config_filename: qaconfig +phabricator_header_color: 'blue' +phabricator_mail_enabled: True +phabricator_mail_domain: fedoraproject.org +phabricator_mysqldump_filename: 'qadevel_phabricator.sql' +ircnick: fedoraqabot + + +################################################################################ +# Backup Settings +################################################################################ + +backup_dir: /srv/backup +backup_username: root +backup_ssh_pubkey: ssh-dss 
AAAAB3NzaC1kc3MAAACBAJr3xqn/hHIXeth+NuXPu9P91FG9jozF3Q1JaGmg6szo770rrmhiSsxso/Ibm2mObqQLCyfm/qSOQRynv6tL3tQVHA6EEx0PNacnBcOV7UowR5kd4AYv82K1vQhof3YTxOMmNIOrdy6deDqIf4sLz1TDHvEDwjrxtFf8ugyZWNbTAAAAFQCS5puRZF4gpNbaWxe6gLzm3rBeewAAAIBcEd6pRatE2Qc/dW0YwwudTEaOCUnHmtYs2PHKbOPds0+Woe1aWH38NiE+CmklcUpyRsGEf3O0l5vm3VrVlnfuHpgt/a/pbzxm0U6DGm2AebtqEmaCX3CIuYzKhG5wmXqJ/z+Hc5MDj2mn2TchHqsk1O8VZM+1Ml6zX3Hl4vvBsQAAAIALDt5NFv6GLuid8eik/nn8NORd9FJPDBJxgVqHNIm08RMC6aI++fqwkBhVPFKBra5utrMKQmnKs/sOWycLYTqqcSMPdWSkdWYjBCSJ/QNpyN4laCmPWLgb3I+2zORgR0EjeV2e/46geS0MWLmeEsFwztpSj4Tv4e18L8Dsp2uB2Q== root@backup03-rdiff-backup +host_backup_targets: ['/var/lib/phabricator/files', '/var/lib/phabricator/repos', '/srv/backup'] + + +################################################################################ +# Static Site Settings +################################################################################ + +static_sites: + - name: docs.{{ external_hostname }} + document_root: /var/www/docs +sslonly: false + diff --git a/inventory/group_vars/releng b/inventory/group_vars/releng index 82cd018cad..3b691fe842 100644 --- a/inventory/group_vars/releng +++ b/inventory/group_vars/releng @@ -34,5 +34,19 @@ fedmsg_certs: - service: bodhi owner: root group: masher + can_send: + - bodhi.mashtask.complete + - bodhi.mashtask.mashing + - bodhi.mashtask.start + - bodhi.mashtask.sync.done + - bodhi.mashtask.sync.wait + - bodhi.errata.publish + - bodhi.update.eject +- service: ftpsync + owner: root + group: ftpsync + can_send: + - bodhi.updates.epel.sync + - bodhi.updates.fedora.sync nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3" diff --git a/inventory/group_vars/releng-compose b/inventory/group_vars/releng-compose index 451d5aad90..b676bd3c9f 100644 --- a/inventory/group_vars/releng-compose +++ b/inventory/group_vars/releng-compose @@ -7,3 +7,9 @@ sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" # For the mock config kojipkgs_url: kojipkgs.fedoraproject.org kojihub_url: 
koji.fedoraproject.org/kojihub +kojihub_scheme: https + +# for kojid config +koji_server_url: "http://koji.fedoraproject.org/kojihub" +koji_weburl: "http://koji.fedoraproject.org/koji" +koji_topurl: "http://kojipkgs.fedoraproject.org/" diff --git a/inventory/group_vars/resultsdb-dev b/inventory/group_vars/resultsdb-dev index 82e6d9ba89..139aed8c72 100644 --- a/inventory/group_vars/resultsdb-dev +++ b/inventory/group_vars/resultsdb-dev @@ -7,7 +7,7 @@ num_cpus: 4 # the host_vars/$hostname file tcp_ports: [ 80, 443, "{{ resultsdb_db_port }}", "{{ execdb_db_port }}" ] -fas_client_groups: sysadmin-qa,sysadmin-main +fas_client_groups: sysadmin-qa,sysadmin-main,fi-apprentice nrpe_procs_warn: 250 nrpe_procs_crit: 300 diff --git a/inventory/group_vars/resultsdb-prod b/inventory/group_vars/resultsdb-prod index b0ed877c5f..3b147a8a8a 100644 --- a/inventory/group_vars/resultsdb-prod +++ b/inventory/group_vars/resultsdb-prod @@ -11,7 +11,7 @@ fas_client_groups: sysadmin-qa nrpe_procs_warn: 250 nrpe_procs_crit: 300 -virt_install_command: /usr/bin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} +virt_install_command: /usr/sbin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} --vcpus={{ num_cpus }} -l {{ ks_repo }} -x "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 @@ -20,10 +20,19 @@ virt_install_command: /usr/bin/virt-install -n {{ inventory_hostname }} -r {{ me --network=bridge=br0,model=virtio --autostart --noautoconsole deployment_type: prod + resultsdb_db_host: db-qa01.qa.fedoraproject.org resultsdb_db_port: 5432 resultsdb_endpoint: 'resultsdb_api' resultsdb_fe_endpoint: 'resultsdb' resultsdb_db_name: resultsdb + +execdb_db_host: db-qa01.qa.fedoraproject.org +execdb_db_port: 5432 +execdb_endpoint: 'execdb' +execdb_db_name: execdb + +external_hostname: taskotron.fedoraproject.org + allowed_hosts: - 10.5.124 diff --git a/inventory/group_vars/retrace b/inventory/group_vars/retrace 
index 145ec48aab..ea0f29f91c 100644 --- a/inventory/group_vars/retrace +++ b/inventory/group_vars/retrace @@ -11,3 +11,33 @@ custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.78.11 --dport 2049 -j ACCEPT', nrpe_procs_warn: 900 nrpe_procs_crit: 1000 + +# Since retrace is on the qa network, it needs to actively connect to our +# inbound relay. +fedmsg_active: True +fedmsg_cert_prefix: faf + +# Declare fedmsg certs that should be put in /etc/pki/fedmsg/ +# These are consumed by a task in roles/fedmsg/base/main.yml +fedmsg_certs: +- service: shell + owner: root + group: retrace +- service: faf + owner: root + group: faf + can_send: + - faf.report.threshold1 + - faf.report.threshold10 + - faf.report.threshold100 + - faf.report.threshold1000 + - faf.report.threshold10000 + - faf.report.threshold100000 + - faf.report.threshold1000000 + - faf.problem.threshold1 + - faf.problem.threshold10 + - faf.problem.threshold100 + - faf.problem.threshold1000 + - faf.problem.threshold10000 + - faf.problem.threshold100000 + - faf.problem.threshold1000000 diff --git a/inventory/group_vars/secondary b/inventory/group_vars/secondary index 3369ae61f4..2969328863 100644 --- a/inventory/group_vars/secondary +++ b/inventory/group_vars/secondary @@ -1,15 +1,13 @@ --- -# Define resources for this group of hosts here.
-lvm_size: 30000 -mem_size: 8192 -num_cpus: 4 +datacenter: phx2 +tcp_ports: [80, 443, 873] +rsyncd_conf: "rsyncd.conf.download-{{ datacenter }}" +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 -# for systems that do not match the above - specify the same parameter in -# the host_vars/$hostname file -tcp_ports: [ 80, 443, 111, 2049 ] - -udp_ports: [ 111, 2049 ] +# nfs mount options, overrides the all/default +nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600,nfsvers=3" fas_client_groups: sysadmin-noc,alt-sugar,alt-k12linux,altvideos,hosted-content,mips-content,s390_content,fi-apprentice,qa-deltaisos -nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid" +host_group: secondary diff --git a/inventory/group_vars/sign-bridge b/inventory/group_vars/sign-bridge index e2c8617a34..4ceeb28b09 100644 --- a/inventory/group_vars/sign-bridge +++ b/inventory/group_vars/sign-bridge @@ -1,14 +1,13 @@ --- freezes: true postfix_group: sign -host_group: sign # Define resources for this group of hosts here. -lvm_size: 10000 +lvm_size: 50000 mem_size: 4096 num_cpus: 4 -tcp_ports: [ 22, 44333, 44334 ] +tcp_ports: [ 44333, 44334 ] fas_client_groups: sysadmin-releng sudoers: "{{ private }}/files/sudo/arm-releng-sudoers" diff --git a/inventory/group_vars/staging b/inventory/group_vars/staging index 0172e4f8c5..cd9c3a2cb2 100644 --- a/inventory/group_vars/staging +++ b/inventory/group_vars/staging @@ -1,7 +1,14 @@ --- freezes: false env: staging +env_suffix: .stg host_group: staging # This is the wildcard certname for our stg proxies. 
wildcard_cert_name: wildcard-2014.stg.fedoraproject.org + +# This only does anything if the host is not RHEL6 +collectd_graphite: True + +fedmsg_prefix: org.fedoraproject +fedmsg_env: stg diff --git a/inventory/group_vars/statscache-stg b/inventory/group_vars/statscache-stg new file mode 100644 index 0000000000..2461d1e1dc --- /dev/null +++ b/inventory/group_vars/statscache-stg @@ -0,0 +1,12 @@ +--- +# Define resources for this group of hosts here. +lvm_size: 20000 +mem_size: 1024 +num_cpus: 2 + +# for systems that do not match the above - specify the same parameter in +# the host_vars/$hostname file + +tcp_ports: [ 80 ] + +fas_client_groups: sysadmin-noc,sysadmin-datanommer diff --git a/inventory/group_vars/summershum b/inventory/group_vars/summershum index d5b76e67a0..d6ad956851 100644 --- a/inventory/group_vars/summershum +++ b/inventory/group_vars/summershum @@ -9,7 +9,7 @@ num_cpus: 2 tcp_ports: [ 3000 ] -fas_client_groups: sysadmin-noc,sysadmin-badges +fas_client_groups: sysadmin-noc,sysadmin-datanommer # These are consumed by a task in roles/fedmsg/base/main.yml fedmsg_certs: @@ -19,3 +19,7 @@ fedmsg_certs: - service: summershum owner: root group: fedmsg + can_send: + - summershum.ingest.complete + - summershum.ingest.fail + - summershum.ingest.start diff --git a/inventory/group_vars/summershum-stg b/inventory/group_vars/summershum-stg index d5b76e67a0..d6ad956851 100644 --- a/inventory/group_vars/summershum-stg +++ b/inventory/group_vars/summershum-stg @@ -9,7 +9,7 @@ num_cpus: 2 tcp_ports: [ 3000 ] -fas_client_groups: sysadmin-noc,sysadmin-badges +fas_client_groups: sysadmin-noc,sysadmin-datanommer # These are consumed by a task in roles/fedmsg/base/main.yml fedmsg_certs: @@ -19,3 +19,7 @@ fedmsg_certs: - service: summershum owner: root group: fedmsg + can_send: + - summershum.ingest.complete + - summershum.ingest.fail + - summershum.ingest.start diff --git a/inventory/group_vars/sundries b/inventory/group_vars/sundries index 6a3697bfbc..6f294078ef 100644 
--- a/inventory/group_vars/sundries +++ b/inventory/group_vars/sundries @@ -8,7 +8,7 @@ num_cpus: 2 # the host_vars/$hostname file tcp_ports: [ 80, 873 ] -fas_client_groups: sysadmin-noc,fi-apprentice +fas_client_groups: sysadmin-noc,fi-apprentice,sysadmin-web # This gets overridden by whichever node we want to run special cronjobs. master_sundries_node: False @@ -18,3 +18,5 @@ rsync_group: sundries nrpe_procs_warn: 300 nrpe_procs_crit: 500 + +sudoers: "{{ private }}/files/sudo/sundries-sudoers" diff --git a/inventory/group_vars/sundries-stg b/inventory/group_vars/sundries-stg index 6a3697bfbc..6f294078ef 100644 --- a/inventory/group_vars/sundries-stg +++ b/inventory/group_vars/sundries-stg @@ -8,7 +8,7 @@ num_cpus: 2 # the host_vars/$hostname file tcp_ports: [ 80, 873 ] -fas_client_groups: sysadmin-noc,fi-apprentice +fas_client_groups: sysadmin-noc,fi-apprentice,sysadmin-web # This gets overridden by whichever node we want to run special cronjobs. master_sundries_node: False @@ -18,3 +18,5 @@ rsync_group: sundries nrpe_procs_warn: 300 nrpe_procs_crit: 500 + +sudoers: "{{ private }}/files/sudo/sundries-sudoers" diff --git a/inventory/group_vars/tagger b/inventory/group_vars/tagger index 181a715aef..a23d66ad34 100644 --- a/inventory/group_vars/tagger +++ b/inventory/group_vars/tagger @@ -7,12 +7,11 @@ num_cpus: 2 # for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file -tcp_ports: [ 80, 443, - # These 32 ports are used by fedmsg. One for each wsgi thread. - 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, - 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015, - 3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023, - 3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031] +wsgi_fedmsg_service: fedoratagger +wsgi_procs: 2 +wsgi_threads: 2 + +tcp_ports: [ 80 ] # Neeed for rsync from log01 for logs. 
 custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
@@ -27,3 +26,9 @@ fedmsg_certs:
 - service: fedoratagger
   owner: root
   group: fedoratagger
+  can_send:
+  - fedoratagger.rating.update
+  - fedoratagger.tag.create
+  - fedoratagger.tag.update
+  - fedoratagger.usage.toggle
+  - fedoratagger.user.rank.update
diff --git a/inventory/group_vars/tagger-stg b/inventory/group_vars/tagger-stg
index 7864af635f..4a1b0fac1b 100644
--- a/inventory/group_vars/tagger-stg
+++ b/inventory/group_vars/tagger-stg
@@ -7,12 +7,11 @@ num_cpus: 2
 # for systems that do not match the above - specify the same parameter in
 # the host_vars/$hostname file
-tcp_ports: [ 80, 443,
-    # These 32 ports are used by fedmsg.  One for each wsgi thread.
-    3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
-    3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015,
-    3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023,
-    3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031]
+wsgi_fedmsg_service: fedoratagger
+wsgi_procs: 2
+wsgi_threads: 2
+
+tcp_ports: [ 80 ]
 # Neeed for rsync from log01 for logs.
 custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
@@ -27,3 +26,9 @@ fedmsg_certs:
 - service: fedoratagger
   owner: root
   group: fedoratagger
+  can_send:
+  - fedoratagger.rating.update
+  - fedoratagger.tag.create
+  - fedoratagger.tag.update
+  - fedoratagger.usage.toggle
+  - fedoratagger.user.rank.update
diff --git a/inventory/group_vars/taskotron-dev b/inventory/group_vars/taskotron-dev
index 200c7511a0..45a4cc031b 100644
--- a/inventory/group_vars/taskotron-dev
+++ b/inventory/group_vars/taskotron-dev
@@ -1,4 +1,6 @@
 ---
+fas_client_groups: sysadmin-qa,sysadmin-main,fi-apprentice
+
 grokmirror_basedir: /var/lib/git/mirror
 grokmirror_user: grokmirror
 grokmirror_repos:
@@ -25,6 +27,7 @@ execdb_server: http://resultsdb-dev01.qa.fedoraproject.org/execdb
 execdb_statuspush: http://resultsdb-dev01.qa.fedoraproject.org/execdb/buildbottest
 execdb_endpoint: execdb
 resultsdb_url: http://resultsdb-dev01.qa.fedoraproject.org/resultsdb_api/api/v1.0
+resultsdb_host: http://resultsdb-dev01.qa.fedoraproject.org/resultsdb_api/
 resultsdb_frontend_url: http://resultsdb-dev01.qa.fedoraproject.org/resultsdb/
 resultsdb_external_url: https://taskotron-dev.fedoraproject.org/resultsdb/
 resultsdb_fe_endpoint: resultsdb
@@ -39,4 +42,4 @@ fakefedorainfra_url: https://taskotron-dev.fedoraproject.org/fakefedorainfra
 taskotron_docs_url: https://docs.qadevel.cloud.fedoraproject.org/libtaskotron/latest/
 freezes: false
 public_artifacts_dir: /srv/taskotron/artifacts
-execdb_server: http://resultsdb-dev01.qa.fedoraproject.org/execdb
+robots_path: /var/www/html
diff --git a/inventory/group_vars/taskotron-dev-clients b/inventory/group_vars/taskotron-dev-clients
index f40a460cc4..b3f2885869 100644
--- a/inventory/group_vars/taskotron-dev-clients
+++ b/inventory/group_vars/taskotron-dev-clients
@@ -5,11 +5,13 @@ num_cpus: 2
 slave_user: buildslave
 taskotron_fas_user: taskotron
-execdb_server: http://resultsdb-dev01.qa.fedoraproject.org/execdb
-resultsdb_server: http://resultsdb-dev01.qa.fedoraproject.org/resultsdb_api/api/v1.0/
-bodhi_server: http://10.5.124.181/fakefedorainfra/bodhi/
+execdb_external_url: http://taskotron-dev.fedoraproject.org/execdb
+resultsdb_server: http://resultsdb-dev01.qa.fedoraproject.org/resultsdb_api/api/v1.0
+bodhi_server: http://10.5.124.181/fakefedorainfra/bodhi
 kojihub_url: http://koji.fedoraproject.org/kojihub
-taskotron_master: http://taskotron-dev.fedoraproject.org/taskmaster/
+taskotron_master: http://taskotron-dev.fedoraproject.org/taskmaster
+resultsdb_external_url: https://taskotron-dev.fedoraproject.org/resultsdb
+artifacts_base_url: http://taskotron-dev.fedoraproject.org/artifacts
 deployment_type: dev
 slave_home: /home/buildslave/
 slave_dir: /home/buildslave/slave
diff --git a/inventory/group_vars/taskotron-prod b/inventory/group_vars/taskotron-prod
index a4b3c223a2..f63af06b30 100644
--- a/inventory/group_vars/taskotron-prod
+++ b/inventory/group_vars/taskotron-prod
@@ -27,3 +27,6 @@ deployment_type: prod
 tcp_ports: [ 80, 443, "{{ buildslave_port }}" ]
 taskotron_docs_url: https://docs.qadevel.cloud.fedoraproject.org/libtaskotron/latest/
 public_artifacts_dir: /srv/taskotron/artifacts
+execdb_server: http://resultsdb01.qa.fedoraproject.org/execdb
+execdb_statuspush: http://resultsdb01.qa.fedoraproject.org/execdb/buildbottest
+robots_path: /var/www/html
diff --git a/inventory/group_vars/taskotron-prod-clients b/inventory/group_vars/taskotron-prod-clients
index a8c77f0ed0..201d467e7e 100644
--- a/inventory/group_vars/taskotron-prod-clients
+++ b/inventory/group_vars/taskotron-prod-clients
@@ -5,6 +5,7 @@ num_cpus: 2
 slave_user: buildslave
 taskotron_fas_user: taskotron
+execdb_external_url: https://taskotron.fedoraproject.org/execdb/
 resultsdb_server: http://resultsdb01.qa.fedoraproject.org/resultsdb_api/api/v1.0/
 # this is proxy01.phx2
 bodhi_server: https://admin.fedoraproject.org/updates
@@ -21,4 +22,4 @@ buildslave_private_sshkey_file: prod-buildslave-sshkey/prod_buildslave
 buildslave_public_sshkey_file: prod-buildslave-sshkey/prod_buildslave.pub
 taskotron_admin_email: taskotron-admin-members@fedoraproject.org
 sudoers: "{{ private }}/files/sudo/qavirt-sudoers"
-buildmaster_pubkey: 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlB0+PK20wI+MN1eYTDCjpnRZCo3eEdAwR2yuOFhm5BdMvdAokpS3CjA6KSKPQjgTc9UHz4WjwGVysV0sns9h0='
+buildmaster_pubkey: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtxF0c0WKY/awWpDyTsN9WLXKthHpKiT1gWfbJ6yNCXXwgC0r7B77kxvMUnr8SMnlrTk0+GHTx9XACMDdOcctBnnH0dGqq1k9ux6egRBSY4X9PG1udiNHzKfVU7CIghHJf9JaXr7k1DoyVj4MAXgGVQocBZqXkULieGYAOcHfBqnrf477wrljQUx2Unso1xqGhigJoIIO848fEAK0b87OXlxDdiRYhqLcHXelAE0RhUhr4Vkuot8bhVqtANJBzc4/whRKSyrTMPbRwQdIK8Sclp3sEZE1NgtYKr8GhTXd9IEsudz1i4vYB94fw7gSYoVAbpRw5JRVw0iJOOFDZLojr'
diff --git a/inventory/group_vars/taskotron-stg b/inventory/group_vars/taskotron-stg
index b74f429cc3..8c3f4ae279 100644
--- a/inventory/group_vars/taskotron-stg
+++ b/inventory/group_vars/taskotron-stg
@@ -33,3 +33,5 @@ taskotron_docs_url: https://docs.qadevel.cloud.fedoraproject.org/libtaskotron/la
 freezes: false
 public_artifacts_dir: /srv/taskotron/artifacts
 execdb_server: http://resultsdb-stg01.qa.fedoraproject.org/execdb
+execdb_statuspush: http://resultsdb-stg01.qa.fedoraproject.org/execdb/buildbottest
+robots_path: /var/www/html
diff --git a/inventory/group_vars/taskotron-stg-clients b/inventory/group_vars/taskotron-stg-clients
index 116e6b77fb..c9073cb88a 100644
--- a/inventory/group_vars/taskotron-stg-clients
+++ b/inventory/group_vars/taskotron-stg-clients
@@ -5,11 +5,13 @@ num_cpus: 2
 slave_user: buildslave
 taskotron_fas_user: taskotron
-execdb_server: http://resultsdb-stg01.qa.fedoraproject.org/execdb
+execdb_external_url: https://taskotron.stg.fedoraproject.org/execdb/
 resultsdb_server: http://resultsdb-stg01.qa.fedoraproject.org/resultsdb_api/api/v1.0/
 bodhi_server: http://10.5.124.232/fakefedorainfra/bodhi/
 kojihub_url: http://koji.fedoraproject.org/kojihub
 taskotron_master: https://taskotron.stg.fedoraproject.org/taskmaster/
+resultsdb_external_url: https://taskotron.stg.fedoraproject.org/resultsdb
+artifacts_base_url: http://taskotron.stg.fedoraproject.org/artifacts
 deployment_type: stg
 slave_home: /home/buildslave/
 slave_dir: /home/buildslave/slave
diff --git a/inventory/group_vars/torrent b/inventory/group_vars/torrent
new file mode 100644
index 0000000000..896df92415
--- /dev/null
+++ b/inventory/group_vars/torrent
@@ -0,0 +1,13 @@
+---
+# Define resources for this group of hosts here.
+lvm_size: 750000
+mem_size: 4096
+num_cpus: 2
+
+tcp_ports: [ 53, 80, 443, 873, "6881:6999" ]
+udp_ports: [ 53 ]
+
+fas_client_groups: sysadmin-web,torrentadmin,sysadmin-noc,torrent-cc,fi-apprentice
+
+nrpe_procs_warn: 300
+nrpe_procs_crit: 500
diff --git a/inventory/group_vars/twisted-buildbots b/inventory/group_vars/twisted-buildbots
new file mode 100644
index 0000000000..3d8f2c30da
--- /dev/null
+++ b/inventory/group_vars/twisted-buildbots
@@ -0,0 +1,2 @@
+---
+freezes: false
diff --git a/inventory/group_vars/value b/inventory/group_vars/value
index 72d949a66b..b73d95e748 100644
--- a/inventory/group_vars/value
+++ b/inventory/group_vars/value
@@ -12,10 +12,10 @@ tcp_ports: [ 80, 443, 5050,
     3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
     3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
-# Neeed for rsync from log01 for logs.
+# Needed for rsync from log01 for logs.
 custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
-fas_client_groups: sysadmin-noc,fi-apprentice
+fas_client_groups: sysadmin-noc,fi-apprentice,sysadmin-web,sysadmin-mote
 # These are consumed by a task in roles/fedmsg/base/main.yml
 fedmsg_certs:
@@ -25,3 +25,19 @@ fedmsg_certs:
 - service: supybot
   owner: root
   group: daemon
+  can_send:
+  # cookies!
+  - irc.karma
+  # standard meetbot stuff
+  - meetbot.meeting.complete
+  - meetbot.meeting.start
+  - meetbot.meeting.topic.update
+  # meetbot line items
+  - meeting.item.agreed
+  - meeting.item.accepted
+  - meeting.item.rejected
+  - meeting.item.action
+  - meeting.item.info
+  - meeting.item.idea
+  - meeting.item.help
+  - meeting.item.link
diff --git a/inventory/group_vars/value-stg b/inventory/group_vars/value-stg
index 72d949a66b..5ecf089f0c 100644
--- a/inventory/group_vars/value-stg
+++ b/inventory/group_vars/value-stg
@@ -15,7 +15,7 @@ tcp_ports: [ 80, 443, 5050,
 # Neeed for rsync from log01 for logs.
 custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
-fas_client_groups: sysadmin-noc,fi-apprentice
+fas_client_groups: sysadmin-noc,fi-apprentice,sysadmin-web,sysadmin-mote
 # These are consumed by a task in roles/fedmsg/base/main.yml
 fedmsg_certs:
@@ -25,3 +25,19 @@ fedmsg_certs:
 - service: supybot
   owner: root
   group: daemon
+  can_send:
+  # cookies!
+  - irc.karma
+  # standard meetbot stuff
+  - meetbot.meeting.complete
+  - meetbot.meeting.start
+  - meetbot.meeting.topic.update
+  # meetbot line items
+  - meeting.item.agreed
+  - meeting.item.accepted
+  - meeting.item.rejected
+  - meeting.item.action
+  - meeting.item.info
+  - meeting.item.idea
+  - meeting.item.help
+  - meeting.item.link
diff --git a/inventory/group_vars/wiki b/inventory/group_vars/wiki
index f16944c851..6e388bcdab 100644
--- a/inventory/group_vars/wiki
+++ b/inventory/group_vars/wiki
@@ -23,5 +23,8 @@ fedmsg_certs:
 - service: mediawiki
   owner: root
   group: apache
+  can_send:
+  - wiki.article.edit
+  - wiki.upload.complete
 
-nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid"
+nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
diff --git a/inventory/group_vars/wiki-stg b/inventory/group_vars/wiki-stg
index 5950013e90..c1c265d5c3 100644
--- a/inventory/group_vars/wiki-stg
+++ b/inventory/group_vars/wiki-stg
@@ -23,5 +23,8 @@ fedmsg_certs:
 - service: mediawiki
   owner: root
   group: apache
+  can_send:
+  - wiki.article.edit
+  - wiki.upload.complete
 
-nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid"
+nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
diff --git a/inventory/hardware b/inventory/hardware
index ba441cd254..04abfffdd7 100644
--- a/inventory/hardware
+++ b/inventory/hardware
@@ -12,8 +12,13 @@ virthost15.phx2.fedoraproject.org
 virthost16.phx2.fedoraproject.org
 virthost17.phx2.fedoraproject.org
 virthost18.phx2.fedoraproject.org
+virthost19.phx2.fedoraproject.org
+virthost20.phx2.fedoraproject.org
+virthost21.phx2.fedoraproject.org
+virthost22.phx2.fedoraproject.org
 bvirthost07.phx2.fedoraproject.org
 ibiblio04.fedoraproject.org
+ibiblio05.fedoraproject.org
 virthost-comm03.qa.fedoraproject.org
 virthost-comm04.qa.fedoraproject.org
 fed-cloud09.cloud.fedoraproject.org
@@ -48,7 +53,6 @@ virthost08.phx2.fedoraproject.org
 virthost09.phx2.fedoraproject.org
 virthost10.phx2.fedoraproject.org
 virthost12.phx2.fedoraproject.org
-virthost-comm01.qa.fedoraproject.org
 ibiblio01.fedoraproject.org
 ibiblio02.fedoraproject.org
 ibiblio03.fedoraproject.org
@@ -64,7 +68,7 @@ fed-cloud07.cloud.fedoraproject.org
 fed-cloud08.cloud.fedoraproject.org
 [powerpc]
-ppc8-01-fsp.phx2.fedoraproject.org
-ppc8-02-fsp.phx2.fedoraproject.org
-ppc8-03-fsp.phx2.fedoraproject.org
-ppc8-04-fsp.phx2.fedoraproject.org
+ppc8-01-fsp.qa.fedoraproject.org
+ppc8-02-fsp.qa.fedoraproject.org
+ppc8-03-fsp.qa.fedoraproject.org
+ppc8-04-fsp.qa.fedoraproject.org
diff --git a/inventory/host_vars/209.132.184.143 b/inventory/host_vars/209.132.184.143
deleted file mode 100644
index aa0ef6e282..0000000000
--- a/inventory/host_vars/209.132.184.143
+++ /dev/null
@@ -1,12 +0,0 @@
----
-instance_type: m1.small
-image: "{{ el6_qcow_id }}"
-keypair: fedora-admin-20130801
-security_group: webserver
-zone: nova
-hostbase: artboard-
-public_ip: 209.132.184.143
-root_auth_users: duffy kevin
-description: artboard cloud instance for the fedora art group
-volumes: ['-d /dev/vdb vol-00000009']
-freezes: false
diff --git a/inventory/host_vars/209.132.184.146 b/inventory/host_vars/209.132.184.146
deleted file mode 100644
index 07aaceba68..0000000000
--- a/inventory/host_vars/209.132.184.146
+++ /dev/null
@@ -1,12 +0,0 @@
----
-instance_type: m1.large
-image: "{{ f20_qcow_id }}"
-keypair: fedora-admin-20130801
-security_group: logstash
-zone: nova
-hostbase: logstash-
-public_ip: 209.132.184.146
-root_auth_users: lmacken
-description: cloud instance for developing/testing logstash
-volumes: ['-d /dev/vdb vol-0000000d']
-freezes: false
diff --git a/inventory/host_vars/209.132.184.147 b/inventory/host_vars/209.132.184.147
deleted file mode 100644
index 80b5f027e2..0000000000
--- a/inventory/host_vars/209.132.184.147
+++ /dev/null
@@ -1,12 +0,0 @@
----
-instance_type: m1.small
-image: "{{ el6_qcow_id }}"
-keypair: fedora-admin-20130801
-security_group: webserver
-zone: nova
-hostbase: fedocal-dev-
-public_ip: 209.132.184.147
-root_auth_users: pingou
-description: fedocal dev server
-volumes: ['-d /dev/vdb vol-00000010']
-freezes: false
diff --git a/inventory/host_vars/209.132.184.148 b/inventory/host_vars/209.132.184.148
deleted file mode 100644
index 65c26f1421..0000000000
--- a/inventory/host_vars/209.132.184.148
+++ /dev/null
@@ -1,16 +0,0 @@
-# 2cpus, 3GB of ram 20GB of ephemeral space
-instance_type: m1.large
-# image id
-image: "{{ el6_qcow_id }}"
-keypair: fedora-admin-20130801
-# what security group to add the host to
-security_group: webserver
-zone: fedoracloud
-# instance id will be appended
-hostbase: darkserver-dev-
-# ip should be in the 209.132.184.XXX range
-public_ip: 209.132.184.148
-# users/groups who should have root ssh access
-root_auth_users: kushal @sysadmin-main sayanchowdhury
-description: darkserver dev server
-freezes: false
diff --git a/inventory/host_vars/209.132.184.150 b/inventory/host_vars/209.132.184.150
deleted file mode 120000
index c3328866fa..0000000000
--- a/inventory/host_vars/209.132.184.150
+++ /dev/null
@@ -1 +0,0 @@
-209.132.184.144
\ No newline at end of file
diff --git a/inventory/host_vars/209.132.184.153 b/inventory/host_vars/209.132.184.153
index 1d4dff3e99..6ba6b33efd 100644
--- a/inventory/host_vars/209.132.184.153
+++ b/inventory/host_vars/209.132.184.153
@@ -6,7 +6,9 @@ security_group: jenkins
 zone: nova
 hostbase: jenkins-master-
 public_ip: 209.132.184.153
-root_auth_users: pingou puiterwijk
+root_auth_users: pingou puiterwijk mizdebsk
 description: jenkins cloud master
 volumes: ['-d /dev/vdb vol-00000011']
 freezes: false
+
+fedmsg_fqdn: jenkins.cloud.fedoraproject.org
diff --git a/inventory/host_vars/209.132.184.162 b/inventory/host_vars/209.132.184.162
deleted file mode 100644
index 75cc49137d..0000000000
--- a/inventory/host_vars/209.132.184.162
+++ /dev/null
@@ -1,12 +0,0 @@
----
-instance_type: m1.small
-image: "{{ el6_qcow_id }}"
-keypair: fedora-admin-20130801
-security_group: webserver
-zone: nova
-hostbase: elections-dev-
-public_ip: 209.132.184.162
-root_auth_users: toshio fchiulli
-description: cloud instance for developing the next version of the elections app
-volumes: ['-d /dev/vdb vol-0000000e']
-freezes: false
diff --git a/inventory/host_vars/209.132.184.166 b/inventory/host_vars/209.132.184.166
deleted file mode 100644
index 7f6cc25ed9..0000000000
--- a/inventory/host_vars/209.132.184.166
+++ /dev/null
@@ -1,18 +0,0 @@
-# 2cpus, 3GB of ram 20GB of ephemeral space
-instance_type: m1.large
-# image id
-image: "{{ el7_qcow_id }}"
-keypair: fedora-admin-20130801
-# what security group to add the host to
-security_group: webserver
-zone: fedoracloud
-# instance id will be appended
-hostbase: devpi-
-# ip should be in the 209.132.184.XXX range
-public_ip: 209.132.184.166
-# users/groups who should have root ssh access
-root_auth_users: bkabrda ncoghlan
-description: devpi test server
-freezes: false
-# 5gb persistent storage
-volumes: ['-d /dev/vdb vol-0000002d']
diff --git a/inventory/host_vars/209.132.184.209 b/inventory/host_vars/209.132.184.209
deleted file mode 100644
index 7a8dd6ccc7..0000000000
--- a/inventory/host_vars/209.132.184.209
+++ /dev/null
@@ -1,11 +0,0 @@
----
-instance_type: m1.xlarge
-image: "{{ f20_qcow_id }}"
-keypair: fedora-admin-20130801
-security_group: jenkins
-zone: nova
-hostbase: jenkins-f20
-public_ip: 209.132.184.209
-root_auth_users: pingou
-description: jenkins f20 worker/slave
-freezes: false
diff --git a/inventory/host_vars/209.132.184.49 b/inventory/host_vars/209.132.184.49
index 7ee4cd37fd..d7df3957f8 100644
--- a/inventory/host_vars/209.132.184.49
+++ b/inventory/host_vars/209.132.184.49
@@ -1,23 +1,28 @@
 ---
-# TODO: remove me!
-instance_type: m1.xlarge
-flavor_id: 2
-# image: "{{ f20_qcow_id }}"
-image: "86422ca2-6eeb-435c-87e8-402b3c7c3b7b"
-keypair: "fedora-admin-20130801"
-security_group: ssh-anywhere-coprdev,default
-OS_TENANT_ID: "566a072fb1694950998ad191fee3833b"
-inventory_tenant: "coprdev"
+# remove me after transition of copr-keygen to the new cloud is done
+instance_type: ms1.small
+image: "{{ fedora21_x86_64 }}"
+keypair: fedora-admin-20130801
 zone: nova
-hostbase: copr-be-dev2-
+hostbase: copr-keygen-
+# public_ip: 209.132.184.159
 public_ip: 209.132.184.49
-root_auth_users: bkabrda msuchy tradej pingou vgologuz
-description: copr dispatcher and repo server - dev instance
-tcp_ports: ['22', '80', '443']
+root_auth_users: msuchy vgologuz
+description: copr key gen instance
+# volumes: ['-d /dev/vdc vol-0000002e']
+volumes: []
+# security_group: default
+security_group: web-80-anywhere-persistent,web-443-anywhere-persistent,ssh-anywhere-persistent,default,allow-nagios-persistent
+
+inventory_tenant: persistent
+# name of machine in OpenStack
+inventory_instance_name: copr-keygen
 cloud_networks:
-  - net-id: "53fb02fd-6e97-4cfc-b298-a2ff867daa52"
-  - net-id: "6797b4e3-28a0-4a14-9689-e506d0cca90d"
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
+
+host_backup_targets: ['/backup/']
+datacenter: cloud
 # Copr vars
-copr_hostbase: copr-be-dev
-_copr_be_conf: copr-be.conf-dev
+copr_hostbase: copr-keygen
diff --git a/inventory/host_vars/anitya-backend01.fedoraproject.org b/inventory/host_vars/anitya-backend01.fedoraproject.org
index 499ddd72ed..6ab9f27883 100644
--- a/inventory/host_vars/anitya-backend01.fedoraproject.org
+++ b/inventory/host_vars/anitya-backend01.fedoraproject.org
@@ -7,6 +7,7 @@ volgroup: /dev/vg_guests
 eth0_ip: 140.211.169.230
 ansible_ssh_host: anitya-backend01.fedoraproject.org
+fedmsg_fqdn: anitya-backend01.vpn.fedoraproject.org
 postfix_group: vpn
diff --git a/inventory/host_vars/anitya-frontend01.fedoraproject.org b/inventory/host_vars/anitya-frontend01.fedoraproject.org
index 887474d29c..be9cc6ca3e 100644
--- a/inventory/host_vars/anitya-frontend01.fedoraproject.org
+++ b/inventory/host_vars/anitya-frontend01.fedoraproject.org
@@ -9,6 +9,7 @@ volgroup: /dev/vg_guests
 eth0_ip: 140.211.169.229
 ansible_ssh_host: anitya-frontend01.fedoraproject.org
+fedmsg_fqdn: anitya-frontend01.vpn.fedoraproject.org
 postfix_group: vpn
diff --git a/inventory/host_vars/arm04-builder00.arm.fedoraproject.org b/inventory/host_vars/arm04-builder00.arm.fedoraproject.org
new file mode 100644
index 0000000000..0ddade218a
--- /dev/null
+++ b/inventory/host_vars/arm04-builder00.arm.fedoraproject.org
@@ -0,0 +1,7 @@
+---
+#
+# We need to mount koji storage rw here so run_root can work.
+# The rest of the group can be ro, it's only builders in the
+# compose channel that need a rw mount
+
+nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
diff --git a/inventory/host_vars/arm04-builder01.arm.fedoraproject.org b/inventory/host_vars/arm04-builder01.arm.fedoraproject.org
new file mode 100644
index 0000000000..0ddade218a
--- /dev/null
+++ b/inventory/host_vars/arm04-builder01.arm.fedoraproject.org
@@ -0,0 +1,7 @@
+---
+#
+# We need to mount koji storage rw here so run_root can work.
+# The rest of the group can be ro, it's only builders in the
+# compose channel that need a rw mount
+
+nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
diff --git a/inventory/host_vars/artboard.fedorainfracloud.org b/inventory/host_vars/artboard.fedorainfracloud.org
new file mode 100644
index 0000000000..d2949c8069
--- /dev/null
+++ b/inventory/host_vars/artboard.fedorainfracloud.org
@@ -0,0 +1,22 @@
+---
+image: rhel7-20141015
+instance_type: m1.small
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,web-443-anywhere-persistent
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: persistent
+inventory_instance_name: artboard
+hostbase: artboard
+public_ip: 209.132.184.61
+root_auth_users: duffy
+description: artboard server
+
+volumes:
+  - volume_id: 44956766-0ecb-496d-8d3c-f43e89b7f268
+    device: /dev/vdc
+
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
diff --git a/inventory/host_vars/backup01.phx2.fedoraproject.org b/inventory/host_vars/backup01.phx2.fedoraproject.org
new file mode 100644
index 0000000000..1b8db23784
--- /dev/null
+++ b/inventory/host_vars/backup01.phx2.fedoraproject.org
@@ -0,0 +1,5 @@
+---
+nrpe_procs_warn: 1200
+nrpe_procs_crit: 1400
+
+datacenter: phx2
diff --git a/inventory/host_vars/bastion-comm01.qa.fedoraproject.org b/inventory/host_vars/bastion-comm01.qa.fedoraproject.org
index 3d8f2c30da..843564153a 100644
--- a/inventory/host_vars/bastion-comm01.qa.fedoraproject.org
+++ b/inventory/host_vars/bastion-comm01.qa.fedoraproject.org
@@ -1,2 +1,14 @@
 ---
 freezes: false
+nm: 255.255.255.0
+gw: 10.5.124.254
+dns: 10.5.126.21
+
+volgroup: /dev/VirtGuests
+
+eth0_ip: 10.5.124.132
+
+vmhost: virthost-comm03.qa.fedoraproject.org
+datacenter: phx2
+
+fas_client_groups: sysadmin-main,sysadmin-noc,sysadmin-qa,fi-apprentice,sysadmin-releng,sysadmin-kernel,arm-qa,sysadmin-centos,qa-automation-shell,sysadmin-troubleshoot,sysadmin-atomic,sysadmin-ppc
diff --git a/inventory/host_vars/beaker-stg01.qa.fedoraproject.org b/inventory/host_vars/beaker-stg01.qa.fedoraproject.org
new file mode 100644
index 0000000000..d6aee3e464
--- /dev/null
+++ b/inventory/host_vars/beaker-stg01.qa.fedoraproject.org
@@ -0,0 +1,12 @@
+---
+nm: 255.255.255.0
+gw: 10.5.124.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+volgroup: /dev/vg_guests
+eth0_ip: 10.5.124.141
+vmhost: virthost-comm04.qa.fedoraproject.org
+datacenter: phx2
+fas_client_groups: sysadmin-qa,sysadmin-main
+
diff --git a/inventory/host_vars/beaker01.qa.fedoraproject.org b/inventory/host_vars/beaker01.qa.fedoraproject.org
index 6b455fbf83..9e14770848 100644
--- a/inventory/host_vars/beaker01.qa.fedoraproject.org
+++ b/inventory/host_vars/beaker01.qa.fedoraproject.org
@@ -4,7 +4,7 @@ gw: 10.5.124.254
 dns: 10.5.126.21
 ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6
 ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/
-volgroup: /dev/Guests00
+volgroup: /dev/vg_guests
 eth0_ip: 10.5.124.228
-vmhost: virthost-comm01.qa.fedoraproject.org
+vmhost: virthost-comm04.qa.fedoraproject.org
 datacenter: phx2
diff --git a/inventory/host_vars/blockerbugs-dev.cloud.fedoraproject.org b/inventory/host_vars/blockerbugs-dev.cloud.fedoraproject.org
index 929e97e161..cbfb1aa9db 100644
--- a/inventory/host_vars/blockerbugs-dev.cloud.fedoraproject.org
+++ b/inventory/host_vars/blockerbugs-dev.cloud.fedoraproject.org
@@ -10,4 +10,4 @@ root_auth_users: tflink mkrizek islamgulov
 description: blockerbugs-dev
 tcp_ports: ['22', '80', '443']
 volumes: ['-d /dev/vdb vol-00000021']
-
+datacenter: cloud
diff --git a/inventory/host_vars/bodhi-backend01.phx2.fedoraproject.org b/inventory/host_vars/bodhi-backend01.phx2.fedoraproject.org
new file mode 100644
index 0000000000..2b8564166b
--- /dev/null
+++ b/inventory/host_vars/bodhi-backend01.phx2.fedoraproject.org
@@ -0,0 +1,10 @@
+---
+nm: 255.255.255.0
+gw: 10.5.125.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+volgroup: /dev/vg_Server
+eth0_ip: 10.5.125.135
+eth1_ip: 10.5.127.61
+vmhost: bvirthost10.phx2.fedoraproject.org
diff --git a/inventory/host_vars/bodhi-backend01.stg.phx2.fedoraproject.org b/inventory/host_vars/bodhi-backend01.stg.phx2.fedoraproject.org
new file mode 100644
index 0000000000..db2da49902
--- /dev/null
+++ b/inventory/host_vars/bodhi-backend01.stg.phx2.fedoraproject.org
@@ -0,0 +1,10 @@
+---
+nm: 255.255.255.0
+gw: 10.5.126.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+volgroup: /dev/vg_guests
+eth0_ip: 10.5.126.90
+eth1_ip: 10.5.127.65
+vmhost: virthost12.phx2.fedoraproject.org
diff --git a/inventory/host_vars/bodhi-backend02.phx2.fedoraproject.org b/inventory/host_vars/bodhi-backend02.phx2.fedoraproject.org
new file mode 100644
index 0000000000..2727e60d0e
--- /dev/null
+++ b/inventory/host_vars/bodhi-backend02.phx2.fedoraproject.org
@@ -0,0 +1,10 @@
+---
+nm: 255.255.255.0
+gw: 10.5.125.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+volgroup: /dev/vg_bvirthost06
+eth0_ip: 10.5.125.136
+eth1_ip: 10.5.127.62
+vmhost: bvirthost06.phx2.fedoraproject.org
diff --git a/inventory/host_vars/bodhi.dev.fedoraproject.org b/inventory/host_vars/bodhi.dev.fedoraproject.org
deleted file mode 100644
index afa45a0fc2..0000000000
--- a/inventory/host_vars/bodhi.dev.fedoraproject.org
+++ /dev/null
@@ -1,11 +0,0 @@
----
-instance_type: m1.medium
-image: "{{ el6_qcow_id }}"
-keypair: fedora-admin-20130801
-security_group: webserver
-zone: nova
-hostbase: bodhi.dev
-public_ip: 209.132.184.215
-root_auth_users: lmacken
-description: bodhi2 dev instance
-tcp_ports: ['22', '443']
diff --git a/inventory/host_vars/bodhi02.stg.phx2.fedoraproject.org b/inventory/host_vars/bodhi02.stg.phx2.fedoraproject.org
index 10a3f9ea68..6ed880e506 100644
--- a/inventory/host_vars/bodhi02.stg.phx2.fedoraproject.org
+++ b/inventory/host_vars/bodhi02.stg.phx2.fedoraproject.org
@@ -7,4 +7,25 @@ ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
 volgroup: /dev/vg_guests
 eth0_ip: 10.5.126.169
 vmhost: virthost10.phx2.fedoraproject.org
-datacenter: phx2
+
+# These are consumed by a task in roles/fedmsg/base/main.yml
+fedmsg_certs:
+- service: shell
+  owner: root
+  group: root
+- service: bodhi
+  owner: root
+  group: bodhi
+  can_send:
+  - bodhi.mashtask.mashing
+  - bodhi.mashtask.complete
+  - bodhi.mashtask.sync.wait
+  - bodhi.mashtask.sync.done
+  - bodhi.update.eject
+  - bodhi.update.complete.testing
+  - bodhi.update.complete.stable
+  - bodhi.update.request.testing
+  - bodhi.update.request.stable
+  - bodhi.update.comment
+  - bodhi.stack.save
+  - bodhi.stack.delete
diff --git a/inventory/host_vars/bodhi03.phx2.fedoraproject.org b/inventory/host_vars/bodhi03.phx2.fedoraproject.org
new file mode 100644
index 0000000000..ab390b8043
--- /dev/null
+++ b/inventory/host_vars/bodhi03.phx2.fedoraproject.org
@@ -0,0 +1,32 @@
+---
+nm: 255.255.255.0
+gw: 10.5.126.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+volgroup: /dev/vg_virthost01
+eth0_ip: 10.5.126.115
+vmhost: virthost01.phx2.fedoraproject.org
+datacenter: phx2
+
+# These are consumed by a task in roles/fedmsg/base/main.yml
+fedmsg_certs:
+- service: shell
+  owner: root
+  group: root
+- service: bodhi
+  owner: root
+  group: bodhi
+  can_send:
+  - bodhi.mashtask.mashing
+  - bodhi.mashtask.complete
+  - bodhi.mashtask.sync.wait
+  - bodhi.mashtask.sync.done
+  - bodhi.update.eject
+  - bodhi.update.complete.testing
+  - bodhi.update.complete.stable
+  - bodhi.update.request.testing
+  - bodhi.update.request.stable
+  - bodhi.update.comment
+  - bodhi.stack.save
+  - bodhi.stack.delete
diff --git a/inventory/host_vars/bodhi04.phx2.fedoraproject.org b/inventory/host_vars/bodhi04.phx2.fedoraproject.org
new file mode 100644
index 0000000000..586c3afa22
--- /dev/null
+++ b/inventory/host_vars/bodhi04.phx2.fedoraproject.org
@@ -0,0 +1,32 @@
+---
+nm: 255.255.255.0
+gw: 10.5.126.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+volgroup: /dev/vg_guests
+eth0_ip: 10.5.126.116
+vmhost: virthost02.phx2.fedoraproject.org
+datacenter: phx2
+
+# These are consumed by a task in roles/fedmsg/base/main.yml
+fedmsg_certs:
+- service: shell
+  owner: root
+  group: root
+- service: bodhi
+  owner: root
+  group: bodhi
+  can_send:
+  - bodhi.mashtask.mashing
+  - bodhi.mashtask.complete
+  - bodhi.mashtask.sync.wait
+  - bodhi.mashtask.sync.done
+  - bodhi.update.eject
+  - bodhi.update.complete.testing
+  - bodhi.update.complete.stable
+  - bodhi.update.request.testing
+  - bodhi.update.request.stable
+  - bodhi.update.comment
+  - bodhi.stack.save
+  - bodhi.stack.delete
diff --git a/inventory/host_vars/branched-composer.phx2.fedoraproject.org b/inventory/host_vars/branched-composer.phx2.fedoraproject.org
index 61443cd6fb..a5d351468d 100644
--- a/inventory/host_vars/branched-composer.phx2.fedoraproject.org
+++ b/inventory/host_vars/branched-composer.phx2.fedoraproject.org
@@ -6,12 +6,4 @@ volgroup: /dev/vg_bvirthost08
 kojipkgs_url: kojipkgs.fedoraproject.org
 kojihub_url: koji.fedoraproject.org/kojihub
-
-# These are consumed by a task in roles/fedmsg/base/main.yml
-fedmsg_certs:
-- service: shell
-  owner: root
-  group: root
-- service: bodhi
-  owner: root
-  group: masher
+kojihub_scheme: https
diff --git a/inventory/host_vars/buildhw-01.phx2.fedoraproject.org b/inventory/host_vars/buildhw-01.phx2.fedoraproject.org
new file mode 100644
index 0000000000..0ddade218a
--- /dev/null
+++ b/inventory/host_vars/buildhw-01.phx2.fedoraproject.org
@@ -0,0 +1,7 @@
+---
+#
+# We need to mount koji storage rw here so run_root can work.
+# The rest of the group can be ro, it's only builders in the
+# compose channel that need a rw mount
+
+nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
diff --git a/inventory/host_vars/buildvm-01.phx2.fedoraproject.org b/inventory/host_vars/buildvm-01.phx2.fedoraproject.org
index e205b72230..de153204e4 100644
--- a/inventory/host_vars/buildvm-01.phx2.fedoraproject.org
+++ b/inventory/host_vars/buildvm-01.phx2.fedoraproject.org
@@ -2,3 +2,10 @@
 vmhost: buildvmhost-10.phx2.fedoraproject.org
 eth0_ip: 10.5.125.98
 eth1_ip: 10.5.127.158
+
+#
+# We need to mount koji storage rw here so run_root can work.
+# The rest of the group can be ro, it's only builders in the
+# compose channel that need a rw mount
+
+nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
diff --git a/inventory/host_vars/collab04.fedoraproject.org b/inventory/host_vars/collab03.fedoraproject.org
similarity index 100%
rename from inventory/host_vars/collab04.fedoraproject.org
rename to inventory/host_vars/collab03.fedoraproject.org
diff --git a/inventory/host_vars/communityblog.fedorainfracloud.org b/inventory/host_vars/communityblog.fedorainfracloud.org
new file mode 100644
index 0000000000..702353c731
--- /dev/null
+++ b/inventory/host_vars/communityblog.fedorainfracloud.org
@@ -0,0 +1,18 @@
+---
+image: rhel7-20141015
+instance_type: m1.small
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: persistent
+inventory_instance_name: communityblog
+hostbase: faitout
+public_ip: 209.132.184.207
+root_auth_users: nb chrisroberts
+description: fedora community blog
+
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
diff --git a/inventory/host_vars/compose-x86-01.phx2.fedoraproject.org b/inventory/host_vars/compose-x86-01.phx2.fedoraproject.org
index 87d1160b71..529bd02403 100644
--- a/inventory/host_vars/compose-x86-01.phx2.fedoraproject.org
+++ b/inventory/host_vars/compose-x86-01.phx2.fedoraproject.org
@@ -34,5 +34,6 @@ fas_client_groups: sysadmin-noc,sysadmin-releng
 kojipkgs_url: kojipkgs.fedoraproject.org
 kojihub_url: koji.fedoraproject.org/kojihub
+kojihub_scheme: https
 nfs_mount_opts: rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3
diff --git a/inventory/host_vars/composer.stg.phx2.fedoraproject.org b/inventory/host_vars/composer.stg.phx2.fedoraproject.org
index 5a0ddda747..fb1d7985e0 100644
--- a/inventory/host_vars/composer.stg.phx2.fedoraproject.org
+++ b/inventory/host_vars/composer.stg.phx2.fedoraproject.org
@@ -12,8 +12,9 @@ datacenter: staging
 fas_client_groups: sysadmin-noc,sysadmin-releng,sysadmin-fedimg
-kojipkgs_url: koji.stg.fedoraproject.org
+kojipkgs_url: kojipkgs.fedoraproject.org
 kojihub_url: koji.stg.fedoraproject.org/kojihub
+kojihub_scheme: http
 nfs_mount_opts: rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=4
diff --git a/inventory/host_vars/copr-be-dev.cloud.fedoraproject.org b/inventory/host_vars/copr-be-dev.cloud.fedoraproject.org
index bf241c86e2..280840e167 100644
--- a/inventory/host_vars/copr-be-dev.cloud.fedoraproject.org
+++ b/inventory/host_vars/copr-be-dev.cloud.fedoraproject.org
@@ -1,25 +1,25 @@
 ---
 instance_type: m1.xlarge
-#image: "{{ f20_qcow_id }}"
-image: "Fedora-Cloud-Base-20141203-21"
+image: "{{ fedora21_x86_64 }}"
 keypair: fedora-admin-20130801
 security_group: web-80-anywhere-persistent,web-443-anywhere-persistent,ssh-anywhere-persistent,default,allow-nagios-persistent
 zone: nova
 hostbase: copr-be-dev-
 public_ip: 209.132.184.53
-root_auth_users: bkabrda msuchy tradej pingou vgologuz
+root_auth_users: bkabrda msuchy tradej pingou vgologuz frostyx asamalik
 description: copr dispatcher and repo server - dev instance
 tcp_ports: ['22', '80', '443']
 # volumes: copr-be-dev-data
-volumes: [ {volume_id: '0541a477-6ecf-4b27-ad76-5eff313abf9b', device: '/dev/vdc'} ]
+volumes: [ {volume_id: '98372b76-b82c-4a03-9708-17af7d01e1e2', device: '/dev/vdc'} ]
+
 inventory_tenant: persistent
 # name of machine in OpenStack
 inventory_instance_name: copr-be-dev
 cloud_networks:
   # persistent-net
-  - net-id: "a4c8289e-eeaa-4070-9ea7-789ed5da6632"
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
   # coprdev-net
-  - net-id: "53fb02fd-6e97-4cfc-b298-a2ff867daa52"
+  - net-id: "a440568f-b90a-46af-8ca6-d8fa743a7e7a"
 # Copr vars
 copr_hostbase: copr-be-dev
diff --git a/inventory/host_vars/copr-be.cloud.fedoraproject.org b/inventory/host_vars/copr-be.cloud.fedoraproject.org
index aa94798cd9..b421302e40 100644
--- a/inventory/host_vars/copr-be.cloud.fedoraproject.org
+++ b/inventory/host_vars/copr-be.cloud.fedoraproject.org
@@ -1,14 +1,22 @@
 ---
-instance_type: m1.xlarge
-image: "{{ f20_qcow_id }}"
+
+instance_type: ms1.xlarge
+image: "{{ fedora21_x86_64 }}"
 keypair: fedora-admin-20130801
-security_group: webserver
+security_group: web-80-anywhere-persistent,web-443-anywhere-persistent,ssh-anywhere-persistent,default,allow-nagios-persistent
 zone: nova
 hostbase: copr-be-
-public_ip: 209.132.184.142
-root_auth_users: bkabrda msuchy pingou msuchy sgallagh nb asamalik vgologuz
+public_ip: 209.132.184.48
+root_auth_users: msuchy pingou msuchy sgallagh nb asamalik vgologuz
 description: copr dispatcher and repo server
-volumes: ['-d /dev/vdc vol-00000028']
+volumes: [ {volume_id: '63c3a40c-e228-417a-97a2-e2c34730bf3b', device: '/dev/vdc'} ]
+inventory_tenant: persistent
+inventory_instance_name: copr-be
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
+  # copr-net
+  - net-id: "24699649-0e05-4fd3-98a3-86a75ec49f6e"
 tcp_ports: [ 22, 80, 443,
     # These 8 ports are used by fedmsg.  One for each wsgi thread.
@@ -22,6 +30,11 @@ fedmsg_certs:
 - service: copr
   owner: root
   group: copr
+  can_send:
+  - copr.build.start
+  - copr.build.end
+  - copr.chroot.start
+  - copr.worker.create
 
 # Copr vars
diff --git a/inventory/host_vars/copr-dist-git-dev.fedorainfracloud.org b/inventory/host_vars/copr-dist-git-dev.fedorainfracloud.org
new file mode 100644
index 0000000000..fb97722458
--- /dev/null
+++ b/inventory/host_vars/copr-dist-git-dev.fedorainfracloud.org
@@ -0,0 +1,23 @@
+---
+instance_type: ms1.small
+#image: "{{ centos70_x86_64 }}"
+image: rhel7-20141015
+keypair: fedora-admin-20130801
+security_group: web-80-anywhere-persistent,ssh-anywhere-persistent,default
+zone: nova
+hostbase: copr-dist-git-dev-
+public_ip: 209.132.184.179
+root_auth_users: bkabrda ryanlerch pingou msuchy tradej asamalik vgologuz frostyx
+description: dist-git for copr service - dev instance
+tcp_ports: [22, 80]
+# volumes: copr-dist-git-dev
+volumes: [ {volume_id: '64f21445-d758-4b19-8401-e497cd0ae012', device: '/dev/vdc'} ]
+inventory_tenant: persistent
+# name of machine in OpenStack
+inventory_instance_name: copr-dist-git-dev
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
+
+# Copr vars
+copr_hostbase: copr-dist-git-dev
diff --git a/inventory/host_vars/copr-dist-git.fedorainfracloud.org b/inventory/host_vars/copr-dist-git.fedorainfracloud.org
new file mode 100644
index 0000000000..c62468f6f5
--- /dev/null
+++ b/inventory/host_vars/copr-dist-git.fedorainfracloud.org
@@ -0,0 +1,23 @@
+---
+instance_type: ms1.medium
+#image: "{{ centos70_x86_64 }}"
+image: rhel7-20141015
+keypair: fedora-admin-20130801
+security_group: web-80-anywhere-persistent,ssh-anywhere-persistent,default
+zone: nova
+hostbase: copr-dist-git-
+public_ip: 209.132.184.163
+root_auth_users: bkabrda ryanlerch pingou msuchy tradej asamalik vgologuz frostyx
+description: dist-git for copr service - prod instance
+tcp_ports: [22, 80]
+# volumes: copr-dist-git
+volumes: [ {volume_id: '6f812deb-ba43-4783-9e8e-2832eeae6e0b', device: '/dev/vdc'} ]
+inventory_tenant: persistent
+# name of machine in OpenStack
+inventory_instance_name: copr-dist-git
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
+
+# Copr vars
+copr_hostbase: copr-dist-git
diff --git a/inventory/host_vars/copr-fe-dev.cloud.fedoraproject.org b/inventory/host_vars/copr-fe-dev.cloud.fedoraproject.org
index 471bd80bb9..bab72034d2 100644
--- a/inventory/host_vars/copr-fe-dev.cloud.fedoraproject.org
+++ b/inventory/host_vars/copr-fe-dev.cloud.fedoraproject.org
@@ -1,22 +1,22 @@
 ---
 instance_type: m1.medium
-image: "Fedora-Cloud-Base-20141203-21"
+image: "{{ fedora21_x86_64 }}"
 keypair: fedora-admin-20130801
 security_group: web-80-anywhere-persistent,web-443-anywhere-persistent,ssh-anywhere-persistent,default
 zone: nova
 hostbase: copr-fe-dev-
 public_ip: 209.132.184.55
-root_auth_users: bkabrda ryanlerch pingou msuchy tradej asamalik vgologuz
+root_auth_users: bkabrda ryanlerch pingou msuchy tradej asamalik vgologuz frostyx
 description: copr frontend server - dev instance
 tcp_ports: [22, 80, 443]
 # volumes: copr-fe-dev-db
-volumes: [ {volume_id: '74fde12a-7f37-4003-9218-f82fadaf4ea7', device: '/dev/vdc'} ]
+volumes: [ {volume_id: 'c1f1db5f-1b71-4ee8-82f6-0665ff142933', device: '/dev/vdc'} ]
 inventory_tenant: persistent
 # name of machine in OpenStack
 inventory_instance_name: copr-fe-dev
 cloud_networks:
   # persistent-net
-  - net-id: "a4c8289e-eeaa-4070-9ea7-789ed5da6632"
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
 # Copr vars
 copr_hostbase: copr-fe-dev
diff --git a/inventory/host_vars/copr-fe.cloud.fedoraproject.org b/inventory/host_vars/copr-fe.cloud.fedoraproject.org
index 4508374eb9..fc32e34789 100644
--- a/inventory/host_vars/copr-fe.cloud.fedoraproject.org
+++ b/inventory/host_vars/copr-fe.cloud.fedoraproject.org
@@ -1,15 +1,24 @@
 ---
-instance_type: m1.medium
-image: "{{ f20_qcow_id }}"
+# this overrides vars/Fedora.yml
+base_pkgs_erase:
['PackageKit*', 'sendmail', 'at'] + +instance_type: ms1.medium +image: "{{ fedora21_x86_64 }}" keypair: fedora-admin-20130801 -security_group: webserver +security_group: web-80-anywhere-persistent,web-443-anywhere-persistent,ssh-anywhere-persistent,default zone: nova hostbase: copr-fe- -public_ip: 209.132.184.144 -root_auth_users: bkabrda ryanlerch pingou msuchy sgallagh nb asamalik -description: copr frontend server +public_ip: 209.132.184.54 +root_auth_users: ryanlerch pingou msuchy sgallagh nb asamalik vgologuz +description: copr frontend server - prod instance volumes: ['-d /dev/vdb vol-0000000f'] tcp_ports: [22, 80, 443] +volumes: [ {volume_id: '8f790db7-8294-4d2b-8bae-7af5961ce0f8', device: '/dev/vdc'} ] +inventory_tenant: persistent +inventory_instance_name: copr-fe +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" # Copr vars @@ -21,3 +30,4 @@ dbs_to_backup: # Backup db dumps in /backups host_backup_targets: ['/backups'] + diff --git a/inventory/host_vars/copr-keygen-dev.cloud.fedoraproject.org b/inventory/host_vars/copr-keygen-dev.cloud.fedoraproject.org new file mode 100644 index 0000000000..eec4f7b3a2 --- /dev/null +++ b/inventory/host_vars/copr-keygen-dev.cloud.fedoraproject.org @@ -0,0 +1,22 @@ +--- +instance_type: ms1.small +image: "{{ fedora21_x86_64 }}" +keypair: fedora-admin-20130801 +# todo: remove some security groups ? 
+security_group: web-80-anywhere-persistent,web-443-anywhere-persistent,ssh-anywhere-persistent,default,allow-nagios-persistent +zone: nova +hostbase: copr-keygen-dev- +public_ip: 209.132.184.46 +root_auth_users: msuchy vgologuz +description: copr key gen and sign host - dev instance +volumes: [] + +inventory_tenant: persistent +# name of machine in OpenStack +inventory_instance_name: copr-keygen-dev +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +# Copr vars +copr_hostbase: copr-keygen-dev diff --git a/inventory/host_vars/copr-keygen.cloud.fedoraproject.org b/inventory/host_vars/copr-keygen.cloud.fedoraproject.org index 4e626f8b82..10b240a94e 100644 --- a/inventory/host_vars/copr-keygen.cloud.fedoraproject.org +++ b/inventory/host_vars/copr-keygen.cloud.fedoraproject.org @@ -1,16 +1,28 @@ --- -instance_type: m1.small -image: "{{ f20_qcow_id }}" +instance_type: ms1.small +image: "{{ fedora21_x86_64 }}" keypair: fedora-admin-20130801 zone: nova hostbase: copr-keygen- -public_ip: 209.132.184.159 +# public_ip: 209.132.184.159 +public_ip: 209.132.184.49 root_auth_users: msuchy vgologuz description: copr key gen instance -volumes: ['-d /dev/vdc vol-0000002e'] -security_group: default +# volumes: ['-d /dev/vdc vol-0000002e'] +volumes: [] +# security_group: default +security_group: web-80-anywhere-persistent,ssh-anywhere-persistent,default,allow-nagios-persistent,keygen-persistent + +inventory_tenant: persistent +# name of machine in OpenStack +inventory_instance_name: copr-keygen +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + host_backup_targets: ['/backup/'] - -copr_hostbase: copr-keygen datacenter: cloud + +# Copr vars +copr_hostbase: copr-keygen diff --git a/inventory/host_vars/darkserver-dev.fedorainfracloud.org b/inventory/host_vars/darkserver-dev.fedorainfracloud.org new file mode 100644 index 0000000000..cad5fcbe6b --- /dev/null +++ 
b/inventory/host_vars/darkserver-dev.fedorainfracloud.org @@ -0,0 +1,18 @@ +--- +image: rhel7-20141015 +instance_type: m1.large +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: darkserver-dev +hostbase: darkserver-dev +public_ip: 209.132.184.171 +root_auth_users: kushal +description: darkserver development instance + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git a/inventory/host_vars/data-analysis01.phx2.fedoraproject.org b/inventory/host_vars/data-analysis01.phx2.fedoraproject.org new file mode 100644 index 0000000000..0c08458ced --- /dev/null +++ b/inventory/host_vars/data-analysis01.phx2.fedoraproject.org @@ -0,0 +1,16 @@ +--- +# this box is not mission critical +freezes: false + +# this box mounts a large share from the netapp to store combined http +# logs from the proxies. + +nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3" + +# general configs +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 +datacenter: phx2 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 diff --git a/inventory/host_vars/db-fas01.phx2.fedoraproject.org b/inventory/host_vars/db-fas01.phx2.fedoraproject.org index 2f8556b37d..5ef280fcee 100644 --- a/inventory/host_vars/db-fas01.phx2.fedoraproject.org +++ b/inventory/host_vars/db-fas01.phx2.fedoraproject.org @@ -37,6 +37,10 @@ custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.240 --dport 5432 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 10.5.126.241 --dport 5432 -j ACCEPT' , + # ipsilon01 and ipsilon02 + '-A INPUT -p tcp -m tcp -s 10.5.126.46 --dport 5432 -j ACCEPT', + '-A INPUT -p tcp -m tcp -s 10.5.126.47 --dport 5432 -j ACCEPT' , + # sundries02... 
'-A INPUT -p tcp -m tcp -s 10.5.126.41 --dport 5432 -j ACCEPT', diff --git a/inventory/host_vars/db-s390-koji01.qa.fedoraproject.org b/inventory/host_vars/db-s390-koji01.qa.fedoraproject.org new file mode 100644 index 0000000000..e4b50868af --- /dev/null +++ b/inventory/host_vars/db-s390-koji01.qa.fedoraproject.org @@ -0,0 +1,43 @@ +--- +nm: 255.255.255.0 +gw: 10.5.131.254 +dns: 10.5.126.21 +volgroup: /dev/vg_guests +eth0_ip: 10.5.131.16 +vmhost: virthost-s390.qa.fedoraproject.org +datacenter: phx2 + +ks_url: http://infrastructure.phx2.fedoraproject.org/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://infrastructure.phx2.fedoraproject.org/repo/rhel/RHEL7-x86_64/ + +# This is a generic list, monitored by collectd +databases: +- koji + +# This is a more strict list, to be made publicly available +dbs_to_backup: +- koji + +# These are normally group variables, but in this case db servers are often different +lvm_size: 500000 +mem_size: 25165 +num_cpus: 12 +fas_client_groups: sysadmin-dba,sysadmin-noc,sysadmin-secondary +sudoers: "{{ private }}/files/sudo/sysadmin-secondary-sudoers" + +# kernel SHMMAX value +kernel_shmmax: 68719476736 + +# +# Only allow postgresql access from the frontend node. +# +custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.124.191 --dport 5432 -j ACCEPT' ] + +# +# Large updates pushes cause lots of db threads doing the tag moves, so up this from default. 
+# +nrpe_procs_warn: 600 +nrpe_procs_crit: 700 + +host_backup_targets: ['/backups'] +shared_buffers: "4GB" diff --git a/inventory/host_vars/db01.phx2.fedoraproject.org b/inventory/host_vars/db01.phx2.fedoraproject.org index 990a0d24e4..feafbf44e2 100644 --- a/inventory/host_vars/db01.phx2.fedoraproject.org +++ b/inventory/host_vars/db01.phx2.fedoraproject.org @@ -20,6 +20,7 @@ databases: - fedoratagger - kerneltest - kittystore +- koschei - mailman - mirrormanager - notifications @@ -37,6 +38,7 @@ dbs_to_backup: - fedocal - fedoratagger - kerneltest +- koschei - kittystore - mailman - mirrormanager diff --git a/inventory/host_vars/db03.phx2.fedoraproject.org b/inventory/host_vars/db03.phx2.fedoraproject.org new file mode 100644 index 0000000000..40f9c9c9b6 --- /dev/null +++ b/inventory/host_vars/db03.phx2.fedoraproject.org @@ -0,0 +1,38 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +eth0_ip: 10.5.126.112 +vmhost: virthost02.phx2.fedoraproject.org +datacenter: phx2 + +# This is a generic list, monitored by collectd +databases: +- mysql +- darkserver +- fpo-mediawiki +- pastebin + +# This is a more strict list of db to backup to /backups +dbs_to_backup: +- darkserver +- fpo-mediawiki +- pastebin + +mariadb_root_password: "{{ db03_mysql_root_password }}" + +# These are normally group variables, but in this case db servers are often different +lvm_size: 300000 +mem_size: 8192 +num_cpus: 2 +tcp_ports: [ 5432, 443, 3306 ] +fas_client_groups: sysadmin-dba,sysadmin-noc + +# kernel SHMMAX value +kernel_shmmax: 68719476736 + +host_backup_targets: ['/backups'] +shared_buffers: "4GB" diff --git a/inventory/host_vars/db03.stg.phx2.fedoraproject.org b/inventory/host_vars/db03.stg.phx2.fedoraproject.org new file mode 100644 index 0000000000..fe8f99a3b6 --- /dev/null +++ b/inventory/host_vars/db03.stg.phx2.fedoraproject.org @@ -0,0 
+1,32 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +eth0_ip: 10.5.126.113 +vmhost: virthost11.phx2.fedoraproject.org +datacenter: phx2 + +# This is a generic list, monitored by collectd +databases: +- postgres + +# This is a more strict list, to be made publicly available +#dbs_to_backup: + +mariadb_root_password: "{{ db03_stg_mysql_root_password }}" + +# These are normally group variables, but in this case db servers are often different +lvm_size: 300000 +mem_size: 8192 +num_cpus: 2 +tcp_ports: [ 5432, 443, 3306 ] +fas_client_groups: sysadmin-dba,sysadmin-noc + +# kernel SHMMAX value +kernel_shmmax: 68719476736 + +host_backup_targets: ['/backups'] +shared_buffers: "4GB" diff --git a/inventory/host_vars/db05.phx2.fedoraproject.org b/inventory/host_vars/db05.phx2.fedoraproject.org deleted file mode 100644 index 149ad66fbd..0000000000 --- a/inventory/host_vars/db05.phx2.fedoraproject.org +++ /dev/null @@ -1,2 +0,0 @@ ---- -host_backup_targets: ['/backups'] diff --git a/inventory/host_vars/docs-dev-builder01.fedorainfracloud.org b/inventory/host_vars/docs-dev-builder01.fedorainfracloud.org new file mode 100644 index 0000000000..d7030053fa --- /dev/null +++ b/inventory/host_vars/docs-dev-builder01.fedorainfracloud.org @@ -0,0 +1,20 @@ +--- +image: rhel7-20141015 +instance_type: m1.medium +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,all-icmp-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: docs-dev-builder01 +hostbase: docs-dev-builder01 +public_ip: 209.132.184.56 +root_auth_users: immanetize +description: docs-dev buildbot builder + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +freezes: false diff --git 
a/inventory/host_vars/docs-dev-frontend.fedorainfracloud.org b/inventory/host_vars/docs-dev-frontend.fedorainfracloud.org new file mode 100644 index 0000000000..294873b9ca --- /dev/null +++ b/inventory/host_vars/docs-dev-frontend.fedorainfracloud.org @@ -0,0 +1,20 @@ +--- +image: rhel7-20141015 +instance_type: m1.medium +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,all-icmp-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: docs-dev-frontend +hostbase: docs-dev-frontend +public_ip: 209.132.184.52 +root_auth_users: immanetize +description: docs-dev frontend server + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +freezes: false diff --git a/inventory/host_vars/docs-dev-master.fedorainfracloud.org b/inventory/host_vars/docs-dev-master.fedorainfracloud.org new file mode 100644 index 0000000000..c367653c1d --- /dev/null +++ b/inventory/host_vars/docs-dev-master.fedorainfracloud.org @@ -0,0 +1,24 @@ +--- +image: rhel7-20141015 +instance_type: m1.medium +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,all-icmp-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: docs-dev-master +hostbase: docs-dev-master +public_ip: 209.132.184.51 +root_auth_users: immanetize +description: taiga frontend server + +volumes: + - volume_id: c37e1833-5ac4-4eac-97c1-24b6d8671dce + device: /dev/vdc + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +freezes: false diff --git a/inventory/host_vars/dopr-dev.cloud.fedoraproject.org b/inventory/host_vars/dopr-dev.cloud.fedoraproject.org new file mode 100644 index 0000000000..145b8b9e65 --- /dev/null +++ b/inventory/host_vars/dopr-dev.cloud.fedoraproject.org @@ -0,0 +1,4 @@ +--- +resolvconf: "resolv.conf/cloud" +tcp_ports: [80, 443] 
+freezes: false diff --git a/inventory/host_vars/elections01.stg.phx2.fedoraproject.org b/inventory/host_vars/elections01.stg.phx2.fedoraproject.org index 47ac90f8d4..d768358473 100644 --- a/inventory/host_vars/elections01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/elections01.stg.phx2.fedoraproject.org @@ -8,5 +8,5 @@ ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ volgroup: /dev/vg_guests eth0_ip: 10.5.126.105 -vmhost: virthost12.phx2.fedoraproject.org +vmhost: virthost11.phx2.fedoraproject.org datacenter: phx2 diff --git a/inventory/host_vars/faitout.fedorainfracloud.org b/inventory/host_vars/faitout.fedorainfracloud.org new file mode 100644 index 0000000000..51e6966c59 --- /dev/null +++ b/inventory/host_vars/faitout.fedorainfracloud.org @@ -0,0 +1,18 @@ +--- +image: rhel7-20141015 +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,pg-5432-anywhere,default +zone: nova +tcp_ports: [22, 80, 443, 5432] + +inventory_tenant: persistent +inventory_instance_name: faitout +hostbase: faitout +public_ip: 209.132.184.65 +root_auth_users: pingou +description: faitout development instance + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git a/inventory/host_vars/fas01.stg.phx2.fedoraproject.org b/inventory/host_vars/fas01.stg.phx2.fedoraproject.org index 5e74c805cb..91102924ca 100644 --- a/inventory/host_vars/fas01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/fas01.stg.phx2.fedoraproject.org @@ -10,6 +10,5 @@ vmhost: virthost10.phx2.fedoraproject.org datacenter: phx2 # There's only this server in stg, so it does certs. 
- master_fas_node: True gen_cert: True diff --git a/inventory/host_vars/fas2-dev.fedorainfracloud.org b/inventory/host_vars/fas2-dev.fedorainfracloud.org new file mode 100644 index 0000000000..6fb39f88bb --- /dev/null +++ b/inventory/host_vars/fas2-dev.fedorainfracloud.org @@ -0,0 +1,18 @@ +--- +image: "{{ centos66_x86_64 }}" +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: fas2-dev +hostbase: fas2-dev +public_ip: 209.132.184.63 +root_auth_users: laxathom +description: fas2 development instance + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git a/inventory/host_vars/fas3-dev.fedorainfracloud.org b/inventory/host_vars/fas3-dev.fedorainfracloud.org new file mode 100644 index 0000000000..d19aa4989a --- /dev/null +++ b/inventory/host_vars/fas3-dev.fedorainfracloud.org @@ -0,0 +1,18 @@ +--- +image: rhel7-20141015 +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: fas3-dev +hostbase: fas3-dev +public_ip: 209.132.184.64 +root_auth_users: laxathom +description: fas3 development instance + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git a/inventory/host_vars/fed-cloud09.cloud.fedoraproject.org b/inventory/host_vars/fed-cloud09.cloud.fedoraproject.org index 2559de113b..49f6920dda 100644 --- a/inventory/host_vars/fed-cloud09.cloud.fedoraproject.org +++ b/inventory/host_vars/fed-cloud09.cloud.fedoraproject.org @@ -1,2 +1,5 @@ --- root_auth_users: msuchy +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 +host_group: openstack-compute diff --git a/inventory/host_vars/fedimg01.stg.phx2.fedoraproject.org 
b/inventory/host_vars/fedimg01.stg.phx2.fedoraproject.org index 59b1d9d2ec..1604633258 100644 --- a/inventory/host_vars/fedimg01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/fedimg01.stg.phx2.fedoraproject.org @@ -7,6 +7,6 @@ ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ datacenter: phx2 volgroup: /dev/vg_guests -vmhost: virthost12.phx2.fedoraproject.org +vmhost: virthost11.phx2.fedoraproject.org eth0_ip: 10.5.126.9 diff --git a/inventory/host_vars/fedoauth01.phx2.fedoraproject.org b/inventory/host_vars/fedoauth01.phx2.fedoraproject.org deleted file mode 100644 index 591d045bf3..0000000000 --- a/inventory/host_vars/fedoauth01.phx2.fedoraproject.org +++ /dev/null @@ -1,10 +0,0 @@ ---- -nm: 255.255.255.0 -gw: 10.5.126.254 -dns: 10.5.126.21 -ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 -ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/ -volgroup: /dev/vg_guests00 -eth0_ip: 10.5.126.240 -vmhost: virthost07.phx2.fedoraproject.org -datacenter: phx2 diff --git a/inventory/host_vars/fedoauth01.stg.phx2.fedoraproject.org b/inventory/host_vars/fedoauth01.stg.phx2.fedoraproject.org deleted file mode 100644 index 6a91e2973c..0000000000 --- a/inventory/host_vars/fedoauth01.stg.phx2.fedoraproject.org +++ /dev/null @@ -1,10 +0,0 @@ ---- -nm: 255.255.255.0 -gw: 10.5.126.254 -dns: 10.5.126.21 -ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 -ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/ -volgroup: /dev/vg_guests -eth0_ip: 10.5.126.28 -vmhost: virthost12.phx2.fedoraproject.org -datacenter: phx2 diff --git a/inventory/host_vars/fedoauth02.phx2.fedoraproject.org b/inventory/host_vars/fedoauth02.phx2.fedoraproject.org deleted file mode 100644 index 3a8cc1d508..0000000000 --- a/inventory/host_vars/fedoauth02.phx2.fedoraproject.org +++ /dev/null @@ -1,10 +0,0 @@ ---- -nm: 255.255.255.0 -gw: 10.5.126.254 -dns: 10.5.126.21 -ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 -ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/ -volgroup: /dev/vg_guests 
-eth0_ip: 10.5.126.241 -vmhost: virthost09.phx2.fedoraproject.org -datacenter: phx2 diff --git a/inventory/host_vars/fedocal01.stg.phx2.fedoraproject.org b/inventory/host_vars/fedocal01.stg.phx2.fedoraproject.org index 3dea6fce33..601b0afd8d 100644 --- a/inventory/host_vars/fedocal01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/fedocal01.stg.phx2.fedoraproject.org @@ -8,5 +8,5 @@ ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ volgroup: /dev/vg_guests eth0_ip: 10.5.126.66 -vmhost: virthost12.phx2.fedoraproject.org +vmhost: virthost11.phx2.fedoraproject.org datacenter: phx2 diff --git a/inventory/host_vars/gallery01.stg.phx2.fedoraproject.org b/inventory/host_vars/gallery01.stg.phx2.fedoraproject.org index 5fdc82c64f..6b7ac9063e 100644 --- a/inventory/host_vars/gallery01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/gallery01.stg.phx2.fedoraproject.org @@ -6,5 +6,5 @@ ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/ volgroup: /dev/vg_guests eth0_ip: 10.5.126.70 -vmhost: virthost12.phx2.fedoraproject.org +vmhost: virthost11.phx2.fedoraproject.org datacenter: phx2 diff --git a/inventory/host_vars/glittergallery-dev.fedorainfracloud.org b/inventory/host_vars/glittergallery-dev.fedorainfracloud.org new file mode 100644 index 0000000000..5d982be7d4 --- /dev/null +++ b/inventory/host_vars/glittergallery-dev.fedorainfracloud.org @@ -0,0 +1,18 @@ +--- +image: "{{ fedora21_x86_64 }}" +instance_type: m1.medium +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,web-443-anywhere-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: glittergallery-dev +hostbase: glittergallery-dev +public_ip: 209.132.184.60 +root_auth_users: sonalkr132 sarupbanskota +description: glittergallery GSoC work + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git 
a/inventory/host_vars/grafana.cloud.fedoraproject.org b/inventory/host_vars/grafana.cloud.fedoraproject.org new file mode 100644 index 0000000000..a4edabdb8b --- /dev/null +++ b/inventory/host_vars/grafana.cloud.fedoraproject.org @@ -0,0 +1,21 @@ +instance_type: m1.medium +image: "{{ fedora21_x86_64 }}" +keypair: fedora-admin-20130801 +security_group: default,wide-open-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: grafana +hostbase: grafana +public_ip: 209.132.184.44 +root_auth_users: codeblock ralph +description: graphite/statsd/grafana/etc experimentation + +volumes: + - volume_id: 818172fb-c278-4569-978f-f2822ab2d021 + device: /dev/vdc + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git a/inventory/host_vars/hosted03.fedoraproject.org b/inventory/host_vars/hosted03.fedoraproject.org new file mode 100644 index 0000000000..ee50d4e919 --- /dev/null +++ b/inventory/host_vars/hosted03.fedoraproject.org @@ -0,0 +1,4 @@ +--- +host_backup_targets: ['/srv'] + +fedmsg_fqdn: hosted03.vpn.fedoraproject.org diff --git a/inventory/host_vars/hosted04.fedoraproject.org b/inventory/host_vars/hosted04.fedoraproject.org deleted file mode 100644 index fbc0826155..0000000000 --- a/inventory/host_vars/hosted04.fedoraproject.org +++ /dev/null @@ -1,2 +0,0 @@ ---- -host_backup_targets: ['/srv'] diff --git a/inventory/host_vars/hotness01.phx2.fedoraproject.org b/inventory/host_vars/hotness01.phx2.fedoraproject.org index 220abf738d..05e768ee8a 100644 --- a/inventory/host_vars/hotness01.phx2.fedoraproject.org +++ b/inventory/host_vars/hotness01.phx2.fedoraproject.org @@ -11,3 +11,4 @@ eth0_ip: 10.5.126.5 volgroup: /dev/vg_guests00 vmhost: virthost07.phx2.fedoraproject.org datacenter: phx2 +freezes: false diff --git a/inventory/host_vars/ibiblio03.fedoraproject.org b/inventory/host_vars/ibiblio03.fedoraproject.org new file mode 100644 index 0000000000..c81416a178 --- /dev/null +++ 
b/inventory/host_vars/ibiblio03.fedoraproject.org @@ -0,0 +1,7 @@ +--- +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 +datacenter: ibiblio +nm: 255.255.255.128 +gw: 152.19.134.129 +dns: 152.2.21.1 diff --git a/inventory/host_vars/ibiblio05.fedoraproject.org b/inventory/host_vars/ibiblio05.fedoraproject.org new file mode 100644 index 0000000000..c81416a178 --- /dev/null +++ b/inventory/host_vars/ibiblio05.fedoraproject.org @@ -0,0 +1,7 @@ +--- +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 +datacenter: ibiblio +nm: 255.255.255.128 +gw: 152.19.134.129 +dns: 152.2.21.1 diff --git a/inventory/host_vars/ipsilon01.phx2.fedoraproject.org b/inventory/host_vars/ipsilon01.phx2.fedoraproject.org new file mode 100644 index 0000000000..f7efabeab0 --- /dev/null +++ b/inventory/host_vars/ipsilon01.phx2.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +volgroup: /dev/vg_guests00 +eth0_ip: 10.5.126.46 +vmhost: virthost15.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/ipsilon02.phx2.fedoraproject.org b/inventory/host_vars/ipsilon02.phx2.fedoraproject.org new file mode 100644 index 0000000000..be12d1a2ff --- /dev/null +++ b/inventory/host_vars/ipsilon02.phx2.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +volgroup: /dev/vg_virthost +eth0_ip: 10.5.126.47 +vmhost: virthost17.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/java-deptools.fedorainfracloud.org b/inventory/host_vars/java-deptools.fedorainfracloud.org new file mode 100644 index 0000000000..543455f3c2 --- /dev/null +++ b/inventory/host_vars/java-deptools.fedorainfracloud.org @@ -0,0 +1,22 @@ +--- +image: "{{ fedora22_x86_64 }}" +instance_type: m1.medium +keypair: 
fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,web-443-anywhere-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: java-deptools +hostbase: java-deptools +public_ip: 209.132.184.191 +root_auth_users: msimacek mizdebsk msrb +description: java-deptools application + +volumes: + - volume_id: dbe99b89-b93b-4c55-97ee-2c5e4ad3a714 + device: /dev/vdc + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" diff --git a/inventory/host_vars/jenkins-f22.fedorainfracloud.org b/inventory/host_vars/jenkins-f22.fedorainfracloud.org new file mode 100644 index 0000000000..5ffeb0c76d --- /dev/null +++ b/inventory/host_vars/jenkins-f22.fedorainfracloud.org @@ -0,0 +1,20 @@ +--- +image: "{{ fedora22_x86_64 }}" +instance_type: m1.xlarge +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,all-icmp-persistent,default +zone: nova +tcp_ports: [22, 80] + +inventory_tenant: persistent +inventory_instance_name: jenkins-f22 +hostbase: jenkins-f22 +public_ip: 209.132.184.59 +root_auth_users: pingou +description: jenkins f22 builder in new cloud + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +freezes: false diff --git a/inventory/host_vars/jenkins-slave-el6.fedorainfracloud.org b/inventory/host_vars/jenkins-slave-el6.fedorainfracloud.org new file mode 100644 index 0000000000..c65bdaf39a --- /dev/null +++ b/inventory/host_vars/jenkins-slave-el6.fedorainfracloud.org @@ -0,0 +1,19 @@ +--- +image: "{{ centos66_x86_64 }}" +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,all-icmp-persistent,default +zone: nova +tcp_ports: [22] + +inventory_tenant: persistent +inventory_instance_name: jenkins-el6 +hostbase: jenkins-el6 +public_ip: 209.132.184.58 +root_auth_users: mizdebsk msrb +description: jenkins el6 builder in new cloud + +cloud_networks: + # 
persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + diff --git a/inventory/host_vars/jenkins-slave-el7.fedorainfracloud.org b/inventory/host_vars/jenkins-slave-el7.fedorainfracloud.org new file mode 100644 index 0000000000..542ae44f65 --- /dev/null +++ b/inventory/host_vars/jenkins-slave-el7.fedorainfracloud.org @@ -0,0 +1,19 @@ +--- +image: rhel7-20141015 +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,all-icmp-persistent,default +zone: nova +tcp_ports: [22] + +inventory_tenant: persistent +inventory_instance_name: jenkins-el7 +hostbase: jenkins-el7 +public_ip: 209.132.184.189 +root_auth_users: mizdebsk msrb +description: jenkins el7 builder in new cloud + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + diff --git a/inventory/host_vars/jenkins-slave-f22.fedorainfracloud.org b/inventory/host_vars/jenkins-slave-f22.fedorainfracloud.org new file mode 100644 index 0000000000..07513a7b48 --- /dev/null +++ b/inventory/host_vars/jenkins-slave-f22.fedorainfracloud.org @@ -0,0 +1,19 @@ +--- +image: "{{ fedora22_x86_64 }}" +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,all-icmp-persistent,default +zone: nova +tcp_ports: [22] + +inventory_tenant: persistent +inventory_instance_name: jenkins-slave-f22 +hostbase: jenkins-el6 +public_ip: 209.132.184.190 +root_auth_users: mizdebsk msrb +description: jenkins f22 builder in new cloud + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + diff --git a/inventory/host_vars/jenkins.cloud.fedoraproject.org b/inventory/host_vars/jenkins.cloud.fedoraproject.org index df29f7da60..2e13b5d9f9 100644 --- a/inventory/host_vars/jenkins.cloud.fedoraproject.org +++ b/inventory/host_vars/jenkins.cloud.fedoraproject.org @@ -6,6 +6,6 @@ security_group: jenkins zone: nova hostbase: jenkins-master- public_ip: 209.132.184.153 -root_auth_users: pingou 
+root_auth_users: pingou mizdebsk description: jenkins cloud master volumes: ['-d /dev/vdb vol-00000011', '-d /dev/vdc vol-0000002b'] diff --git a/inventory/host_vars/jenkins.fedorainfracloud.org b/inventory/host_vars/jenkins.fedorainfracloud.org new file mode 100644 index 0000000000..568a7dfb5e --- /dev/null +++ b/inventory/host_vars/jenkins.fedorainfracloud.org @@ -0,0 +1,27 @@ +--- +image: "{{ fedora22_x86_64 }}" +instance_type: m1.small +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,all-icmp-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: jenkins +hostbase: jenkins +public_ip: 209.132.184.57 +root_auth_users: mizdebsk msrb +description: jenkins master in new cloud + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +jenkins_master: True + +tcp_ports: [ 8080 ] + +custom_nat_rules: [ + # Redirect port 80 to 8080, which is used by jenkins + '-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080', + ] diff --git a/inventory/host_vars/koji01.phx2.fedoraproject.org b/inventory/host_vars/koji01.phx2.fedoraproject.org index 12a167870d..a5c37932d1 100644 --- a/inventory/host_vars/koji01.phx2.fedoraproject.org +++ b/inventory/host_vars/koji01.phx2.fedoraproject.org @@ -17,3 +17,5 @@ keepalived_ipaddress: 10.5.125.63/24 nrpe_procs_warn: 900 nrpe_procs_crit: 1000 + +fedmsg_koji_instance: primary diff --git a/inventory/host_vars/koji01.stg.phx2.fedoraproject.org b/inventory/host_vars/koji01.stg.phx2.fedoraproject.org index 5d2da78a14..1b7c0573f5 100644 --- a/inventory/host_vars/koji01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/koji01.stg.phx2.fedoraproject.org @@ -8,3 +8,5 @@ volgroup: /dev/vg_guests eth0_ip: 10.5.126.87 vmhost: virthost10.phx2.fedoraproject.org datacenter: phx2 + +fedmsg_koji_instance: primary diff --git a/inventory/host_vars/koji02.phx2.fedoraproject.org 
b/inventory/host_vars/koji02.phx2.fedoraproject.org index bbfc927a67..fa275cf0f3 100644 --- a/inventory/host_vars/koji02.phx2.fedoraproject.org +++ b/inventory/host_vars/koji02.phx2.fedoraproject.org @@ -17,3 +17,5 @@ keepalived_ipaddress: 10.5.125.63/24 nrpe_procs_warn: 900 nrpe_procs_crit: 1000 + +fedmsg_koji_instance: primary diff --git a/inventory/host_vars/koschei.cloud.fedoraproject.org b/inventory/host_vars/koschei.cloud.fedoraproject.org deleted file mode 100644 index 6f6b06baea..0000000000 --- a/inventory/host_vars/koschei.cloud.fedoraproject.org +++ /dev/null @@ -1,20 +0,0 @@ -instance_type: m1.small -image: ami-00000042 -keypair: fedora-admin-20130801 -security_group: webserver -zone: fedoracloud -hostbase: koschei -public_ip: 209.132.184.151 -# users/groups who should have root ssh access -root_auth_users: mizdebsk msimacek -description: Koschei - ticket 4449 -volumes: ['-d /dev/vdb vol-0000002c'] - -# These are consumed by a task in roles/fedmsg/base/main.yml -fedmsg_certs: -- service: shell - owner: root - group: root -- service: koschei - owner: root - group: koschei diff --git a/inventory/host_vars/koschei01.phx2.fedoraproject.org b/inventory/host_vars/koschei01.phx2.fedoraproject.org new file mode 100644 index 0000000000..b4ffec5ea5 --- /dev/null +++ b/inventory/host_vars/koschei01.phx2.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.125.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +volgroup: /dev/xenGuests +eth0_ip: 10.5.125.222 +vmhost: bvirthost09.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/koschei01.stg.phx2.fedoraproject.org b/inventory/host_vars/koschei01.stg.phx2.fedoraproject.org new file mode 100644 index 0000000000..d9cd3d03c5 --- /dev/null +++ b/inventory/host_vars/koschei01.stg.phx2.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 + +ks_url: 
http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ + +volgroup: /dev/vg_guests +eth0_ip: 10.5.126.221 +vmhost: virthost11.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/lists-dev.cloud.fedoraproject.org b/inventory/host_vars/lists-dev.cloud.fedoraproject.org deleted file mode 100644 index bc1e5f17be..0000000000 --- a/inventory/host_vars/lists-dev.cloud.fedoraproject.org +++ /dev/null @@ -1,12 +0,0 @@ ---- -instance_type: m1.large -image: "{{ f20_qcow_id }}" -keypair: fedora-admin-20130801 -security_group: smtpserver -zone: nova -hostbase: lists-dev- -public_ip: 209.132.184.145 -root_auth_users: abompard -description: lists-dev instance to further test hyperkitty and mailman3 -volumes: ['-d /dev/vdb vol-0000000c'] -freezes: false diff --git a/inventory/host_vars/lists-dev.fedorainfracloud.org b/inventory/host_vars/lists-dev.fedorainfracloud.org new file mode 100644 index 0000000000..dab9d42e8b --- /dev/null +++ b/inventory/host_vars/lists-dev.fedorainfracloud.org @@ -0,0 +1,22 @@ +--- +image: rhel7-20141015 +instance_type: m1.large +keypair: fedora-admin-20130801 +security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default,web-443-anywhere-persistent +zone: nova +tcp_ports: [22, 80, 443] + +inventory_tenant: persistent +inventory_instance_name: lists-dev +hostbase: lists-dev +public_ip: 209.132.184.180 +root_auth_users: abompard +description: lists development work + +cloud_networks: + # persistent-net + - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f" + +# Used by the mailman role +mailman_url: lists-dev.cloud.fedoraproject.org +mailman_db_server: localhost diff --git a/inventory/host_vars/lockbox-comm01.qa.fedoraproject.org b/inventory/host_vars/lockbox-comm01.qa.fedoraproject.org deleted file mode 100644 index 218eeb0d5d..0000000000 --- a/inventory/host_vars/lockbox-comm01.qa.fedoraproject.org +++ /dev/null @@ -1,15 +0,0 @@ ---- -nm: 255.255.255.0 -gw: 10.5.124.254 -dns: 
10.5.124.21 -ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 -ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/ -volgroup: /dev/Guests00 -eth0_ip: 10.5.124.210 -vmhost: virthost-comm01.qa.fedoraproject.org -datacenter: phx2 -gitrepos: - - {name: private, path: 'local'} -ansible_base: /srv/ansible/ - -tcp_ports: [ 80 ] diff --git a/inventory/host_vars/memcached01.stg.phx2.fedoraproject.org b/inventory/host_vars/memcached01.stg.phx2.fedoraproject.org new file mode 100644 index 0000000000..350a56efd9 --- /dev/null +++ b/inventory/host_vars/memcached01.stg.phx2.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +vmhost: virthost11.phx2.fedoraproject.org +datacenter: phx2 + +eth0_ip: 10.5.126.210 diff --git a/inventory/host_vars/mm-backend01.phx2.fedoraproject.org b/inventory/host_vars/mm-backend01.phx2.fedoraproject.org new file mode 100644 index 0000000000..31b9204ab3 --- /dev/null +++ b/inventory/host_vars/mm-backend01.phx2.fedoraproject.org @@ -0,0 +1,27 @@ +--- +lvm_size: 20000 +num_cpus: 2 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_virthost03 +eth0_ip: 10.5.126.183 +eth1_ip: 10.5.127.23 +vmhost: virthost03.phx2.fedoraproject.org +datacenter: phx2 + +# nfs mount options, overrides the all/default +nfs_mount_opts: "ro,hard,bg,intr,nodev,nosuid,nfsvers=3" + +# We define this here to override the global one because we need eth1 +virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }} nameserver={{ dns }} + ip={{ eth0_ip }}::{{ gw 
}}:{{ nm }}:{{ inventory_hostname }}:eth0:none + ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none" + --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio + --autostart --noautoconsole diff --git a/inventory/host_vars/mm-backend01.stg.phx2.fedoraproject.org b/inventory/host_vars/mm-backend01.stg.phx2.fedoraproject.org index 86505ee814..43e99d6c2a 100644 --- a/inventory/host_vars/mm-backend01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/mm-backend01.stg.phx2.fedoraproject.org @@ -1,7 +1,7 @@ --- lvm_size: 20000 mem_size: 4096 -num_cpus: 2 +num_cpus: 4 nm: 255.255.255.0 gw: 10.5.126.254 dns: 10.5.126.21 @@ -14,7 +14,7 @@ vmhost: virthost16.phx2.fedoraproject.org datacenter: phx2 # nfs mount options, overrides the all/default -nfs_mount_opts: "ro,hard,bg,intr,nodev,nosuid" +nfs_mount_opts: "ro,hard,bg,intr,nodev,nosuid,nfsvers=3" # We define this here to override the global one because we need eth1 virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} diff --git a/inventory/host_vars/mm-crawler01.phx2.fedoraproject.org b/inventory/host_vars/mm-crawler01.phx2.fedoraproject.org new file mode 100644 index 0000000000..0f364b8ee6 --- /dev/null +++ b/inventory/host_vars/mm-crawler01.phx2.fedoraproject.org @@ -0,0 +1,13 @@ +--- +lvm_size: 20000 +mem_size: 32768 +num_cpus: 4 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +eth0_ip: 10.5.126.184 +vmhost: virthost02.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/mm-crawler02.phx2.fedoraproject.org b/inventory/host_vars/mm-crawler02.phx2.fedoraproject.org new file mode 100644 index 0000000000..7a27f7f13a --- /dev/null +++ b/inventory/host_vars/mm-crawler02.phx2.fedoraproject.org @@ -0,0 +1,13 @@ +--- +lvm_size: 20000 +mem_size: 32768 +num_cpus: 4 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_virthost03 +eth0_ip: 10.5.126.185 +vmhost: virthost03.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/mm-frontend01.phx2.fedoraproject.org b/inventory/host_vars/mm-frontend01.phx2.fedoraproject.org new file mode 100644 index 0000000000..d0d5cdbd43 --- /dev/null +++ b/inventory/host_vars/mm-frontend01.phx2.fedoraproject.org @@ -0,0 +1,16 @@ +--- +lvm_size: 20000 +mem_size: 8192 +num_cpus: 2 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +eth0_ip: 10.5.126.182 +vmhost: virthost02.phx2.fedoraproject.org +datacenter: phx2 + +tcp_ports: [ 80, 443 ] + diff --git a/inventory/host_vars/mm-frontend02.phx2.fedoraproject.org b/inventory/host_vars/mm-frontend02.phx2.fedoraproject.org new file mode 100644 index 0000000000..d8caa83cf5 --- /dev/null +++ b/inventory/host_vars/mm-frontend02.phx2.fedoraproject.org @@ -0,0 +1,16 @@ +--- +lvm_size: 20000 +mem_size: 8192 +num_cpus: 2 +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +eth0_ip: 10.5.126.186 +vmhost: virthost19.phx2.fedoraproject.org +datacenter: phx2 + +tcp_ports: [ 80, 443 ] + diff --git a/inventory/host_vars/ns-sb01.fedoraproject.org b/inventory/host_vars/ns-sb01.fedoraproject.org index 00ff5e9fac..82f976194a 100644 --- a/inventory/host_vars/ns-sb01.fedoraproject.org +++ b/inventory/host_vars/ns-sb01.fedoraproject.org @@ -3,7 +3,7 @@ nm: 255.255.255.0 gw: 192.168.122.1 dns: 8.8.8.8 -volgroup: /dev/vg_host +volgroup: /dev/vg_Server eth0_ip: 192.168.122.3 vmhost: serverbeach09.fedoraproject.org datacenter: serverbeach @@ -17,4 +17,21 @@ virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ 
mem_size }} "ksdevice=eth0 ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }} gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0 hostname={{ inventory_hostname }}" + --network bridge=virbr0,model=virtio --autostart --noautoconsole + +ks_url: http://209.132.181.6/repo/rhel/ks/kvm-rhel-7-ext +ks_repo: http://209.132.181.6/repo/rhel/RHEL7-x86_64/ + +# This is consumed by the roles/fedora-web/main role +sponsor: serverbeach +postfix_group: vpn + +mem_size: 6144 +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 + +# This is used in the httpd.conf to determine the value for serverlimit and +# maxrequestworkers. On 8gb proxies, 900 seems fine. But on 4gb proxies, this +# should be lowered in the host vars for that proxy. +maxrequestworkers: 600 diff --git a/inventory/host_vars/ns02.fedoraproject.org b/inventory/host_vars/ns02.fedoraproject.org index b587b4affc..ec29b7ec6f 100644 --- a/inventory/host_vars/ns02.fedoraproject.org +++ b/inventory/host_vars/ns02.fedoraproject.org @@ -3,12 +3,15 @@ nm: 255.255.255.128 gw: 152.19.134.129 dns: 8.8.8.8 -volgroup: /dev/VirtGuests00 +volgroup: /dev/vg_guests eth0_ip: 152.19.134.139 ansible_ssh_host: ns02.fedoraproject.org postfix_group: vpn -vmhost: ibiblio02.fedoraproject.org +vmhost: ibiblio03.fedoraproject.org datacenter: ibiblio + +ks_url: http://209.132.181.6/repo/rhel/ks/kvm-rhel-7-ext +ks_repo: http://209.132.181.6/repo/rhel/RHEL7-x86_64/ diff --git a/inventory/host_vars/nuancier01.stg.phx2.fedoraproject.org b/inventory/host_vars/nuancier01.stg.phx2.fedoraproject.org index 6096d091d4..431a35a610 100644 --- a/inventory/host_vars/nuancier01.stg.phx2.fedoraproject.org +++ b/inventory/host_vars/nuancier01.stg.phx2.fedoraproject.org @@ -8,5 +8,5 @@ ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ volgroup: /dev/vg_guests eth0_ip: 10.5.126.202 -vmhost: virthost12.phx2.fedoraproject.org +vmhost: virthost11.phx2.fedoraproject.org datacenter: phx2 diff --git a/inventory/host_vars/osbs01.stg.phx2.fedoraproject.org 
b/inventory/host_vars/osbs01.stg.phx2.fedoraproject.org new file mode 100644 index 0000000000..08153be3a7 --- /dev/null +++ b/inventory/host_vars/osbs01.stg.phx2.fedoraproject.org @@ -0,0 +1,10 @@ +--- +nm: 255.255.255.0 +gw: 10.5.126.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_virthost16 +eth0_ip: 10.5.126.114 +vmhost: virthost16.phx2.fedoraproject.org +datacenter: phx2 diff --git a/inventory/host_vars/pagure-stg01.fedoraproject.org b/inventory/host_vars/pagure-stg01.fedoraproject.org index e42a39b201..065f1323ec 100644 --- a/inventory/host_vars/pagure-stg01.fedoraproject.org +++ b/inventory/host_vars/pagure-stg01.fedoraproject.org @@ -3,15 +3,14 @@ nm: 255.255.255.128 gw: 140.211.169.193 dns: 8.8.8.8 -ks_url: http://infrastructure.fedoraproject.org/repo/rhel/ks/kvm-fedora-21-ext -ks_repo: http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/os/ +ks_url: http://infrastructure.fedoraproject.org/repo/rhel/ks/kvm-rhel-7-ext +ks_repo: http://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/ + volgroup: /dev/vg_server eth0_ip: 140.211.169.203 ansible_ssh_host: pagure-stg01.fedoraproject.org -postfix_group: vpn - vmhost: osuosl02.fedoraproject.org datacenter: osuosl diff --git a/inventory/host_vars/pagure01.fedoraproject.org b/inventory/host_vars/pagure01.fedoraproject.org new file mode 100644 index 0000000000..411a85ef30 --- /dev/null +++ b/inventory/host_vars/pagure01.fedoraproject.org @@ -0,0 +1,21 @@ +--- +nm: 255.255.255.128 +gw: 140.211.169.193 +dns: 8.8.8.8 + +ks_url: http://infrastructure.fedoraproject.org/repo/rhel/ks/kvm-rhel-7-ext +ks_repo: http://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/ + +volgroup: /dev/vg_server + +eth0_ip: 140.211.169.204 +ansible_ssh_host: pagure.fedoraproject.org + +vmhost: osuosl02.fedoraproject.org +datacenter: osuosl + +# +# PostgreSQL configuration +# + +shared_buffers: "32MB" 
diff --git a/inventory/host_vars/people01.fedoraproject.org b/inventory/host_vars/people01.fedoraproject.org new file mode 100644 index 0000000000..4b0c9ca461 --- /dev/null +++ b/inventory/host_vars/people01.fedoraproject.org @@ -0,0 +1,26 @@ +--- +freezes: false +datacenter: ibiblio +#host_backup_targets: ['/srv/web'] + +nm: 255.255.255.128 +gw: 152.19.134.129 +dns: 8.8.8.8 +volgroup: /dev/vg_guests +eth0_ip: 152.19.134.196 +ks_url: http://209.132.181.6/repo/rhel/ks/kvm-rhel-7-people +ks_repo: http://209.132.181.6/repo/rhel/RHEL7-x86_64/ +postfix_group: vpn +vmhost: ibiblio03.fedoraproject.org +datacenter: ibiblio + +fedmsg_fqdn: people01.vpn.fedoraproject.org + +tcp_ports: [80, 443, 9418] + +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 + +lvm_size: 1t +mem_size: 8192 +num_cpus: 4 diff --git a/inventory/host_vars/people02.fedoraproject.org b/inventory/host_vars/people02.fedoraproject.org deleted file mode 100644 index eee061a41a..0000000000 --- a/inventory/host_vars/people02.fedoraproject.org +++ /dev/null @@ -1,33 +0,0 @@ ---- -freezes: false -datacenter: internetx -host_backup_targets: ['/srv/web'] - -nm: 255.255.255.240 -gw: 85.236.55.1 -dns: 8.8.8.8 - -ks_url: http://infrastructure.fedoraproject.org/repo/rhel/ks/kvm-rhel-7 -ks_repo: http://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/ - -vmhost: internetx01.fedoraproject.org -volgroup: /dev/VolGuests00 -eth0_ip: 85.236.55.7 -postfix_group: vpn - -tcp_ports: [80, 443, 9418] - -nrpe_procs_warn: 900 -nrpe_procs_crit: 1000 - -lvm_size: 20000 -mem_size: 8192 -num_cpus: 4 - -virt_install_command: /usr/sbin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} - --disk {{ volgroup }}/{{ inventory_hostname }} - --vcpus={{ num_cpus }} -l {{ ks_repo }} -x - "ksdevice=eth0 ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }} - gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0 - hostname={{ inventory_hostname }}" - --network=bridge=br0 --autostart --noautoconsole diff --git 
a/inventory/host_vars/people03.fedoraproject.org b/inventory/host_vars/people03.fedoraproject.org deleted file mode 100644 index 2aed120946..0000000000 --- a/inventory/host_vars/people03.fedoraproject.org +++ /dev/null @@ -1,4 +0,0 @@ ---- -freezes: false -datacenter: ibiblio -host_backup_targets: ['/srv/web'] diff --git a/inventory/host_vars/pkgs02.phx2.fedoraproject.org b/inventory/host_vars/pkgs02.phx2.fedoraproject.org index 43a01e6a5e..4ddd88c861 100644 --- a/inventory/host_vars/pkgs02.phx2.fedoraproject.org +++ b/inventory/host_vars/pkgs02.phx2.fedoraproject.org @@ -25,3 +25,4 @@ virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} --autostart --noautoconsole host_backup_targets: ['/srv'] +nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3" diff --git a/inventory/host_vars/proxy11.fedoraproject.org b/inventory/host_vars/proxy11.fedoraproject.org index 4c412de68c..d07e698d7a 100644 --- a/inventory/host_vars/proxy11.fedoraproject.org +++ b/inventory/host_vars/proxy11.fedoraproject.org @@ -2,7 +2,7 @@ nm: 255.255.255.0 gw: 67.219.144.1 dns: 8.8.8.8 -num_cpus: 4 +num_cpus: 6 ks_url: http://209.132.181.6/repo/rhel/ks/kvm-rhel-7-ext ks_repo: http://209.132.181.6/repo/rhel/RHEL7-x86_64/ diff --git a/inventory/host_vars/qadevel-stg.qa.fedoraproject.org b/inventory/host_vars/qa-stg01.qa.fedoraproject.org similarity index 59% rename from inventory/host_vars/qadevel-stg.qa.fedoraproject.org rename to inventory/host_vars/qa-stg01.qa.fedoraproject.org index b88e6e1905..51205e7d22 100644 --- a/inventory/host_vars/qadevel-stg.qa.fedoraproject.org +++ b/inventory/host_vars/qa-stg01.qa.fedoraproject.org @@ -1,18 +1,20 @@ --- nm: 255.255.255.0 gw: 10.5.124.254 -dns: 10.5.124.21 +dns: 10.5.126.21 ks_url: http://10.5.126.23/repo/rhel/ks/buildvm-fedora-21 ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/ -volgroup: /dev/VirtGuests +volgroup: /dev/vg_guests eth0_ip: 10.5.124.230 -vmhost: 
virthost-comm03.qa.fedoraproject.org +vmhost: virthost-comm04.qa.fedoraproject.org datacenter: phx2 fas_client_groups: sysadmin-qa,sysadmin-main -public_hostname: qadevel-stg.qa.fedoraproject.org +mariadb_root_password: "{{ qa_stg_mariadb_root_password }}" + +public_hostname: qa.stg.fedoraproject.org buildmaster: 10.5.124.230 buildslaves: - - qadevel-stg + - qa-stg01 diff --git a/inventory/host_vars/qa01.qa.fedoraproject.org b/inventory/host_vars/qa01.qa.fedoraproject.org new file mode 100644 index 0000000000..a9fee97a67 --- /dev/null +++ b/inventory/host_vars/qa01.qa.fedoraproject.org @@ -0,0 +1,5 @@ +--- +freezes: false +fas_client_groups: sysadmin-qa,sysadmin-main +sudoers: "{{ private }}/files/sudo/qavirt-sudoers" + diff --git a/inventory/host_vars/qa02.qa.fedoraproject.org b/inventory/host_vars/qa02.qa.fedoraproject.org new file mode 100644 index 0000000000..8eda2760a6 --- /dev/null +++ b/inventory/host_vars/qa02.qa.fedoraproject.org @@ -0,0 +1,35 @@ +--- +freezes: false +fas_client_groups: sysadmin-qa,sysadmin-main +sudoers: "{{ private }}/files/sudo/qavirt-sudoers" +datacenter: phx2 + +# hardware and setup information +eth0_ip: 10.5.124.152 +eth0_mac: 00:21:5e:c6:cc:9c +eth_interface: eth0 +volgroup: vmstore + +# beaker clients hosted on this machine +clients: + - hostname: virt01.qa.fedoraproject.org + macaddress: "52:54:00:a2:de:30" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + - hostname: virt02.qa.fedoraproject.org + macaddress: "52:54:00:fe:22:ff" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + - hostname: virt03.qa.fedoraproject.org + macaddress: "52:54:00:c5:04:14" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + - hostname: virt04.qa.fedoraproject.org + macaddress: "52:54:00:b5:97:30" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + diff --git a/inventory/host_vars/qa03.qa.fedoraproject.org b/inventory/host_vars/qa03.qa.fedoraproject.org new file mode 100644 index 0000000000..a9fee97a67 --- /dev/null +++ 
b/inventory/host_vars/qa03.qa.fedoraproject.org @@ -0,0 +1,5 @@ +--- +freezes: false +fas_client_groups: sysadmin-qa,sysadmin-main +sudoers: "{{ private }}/files/sudo/qavirt-sudoers" + diff --git a/inventory/host_vars/qa08.qa.fedoraproject.org b/inventory/host_vars/qa08.qa.fedoraproject.org new file mode 100644 index 0000000000..1b66e934a3 --- /dev/null +++ b/inventory/host_vars/qa08.qa.fedoraproject.org @@ -0,0 +1,35 @@ +--- +freezes: false +fas_client_groups: sysadmin-qa,sysadmin-main +sudoers: "{{ private }}/files/sudo/qavirt-sudoers" +datacenter: phx2 + +# hardware and setup information +eth0_ip: 10.5.124.158 +eth0_mac: e4:1f:13:e5:46:80 +eth_interface: eth0 +volgroup: vmstore + +# beaker clients hosted on this machine +clients: + - hostname: virt15.qa.fedoraproject.org + macaddress: "52:54:00:1d:15:85" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + - hostname: virt16.qa.fedoraproject.org + macaddress: "52:54:00:f2:cc:2a" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + - hostname: virt17.qa.fedoraproject.org + macaddress: "52:54:00:58:9b:0e" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + - hostname: virt18.qa.fedoraproject.org + macaddress: "52:54:00:22:3b:07" + memsize: 4096 + num_cpus: 2 + lvm_size: 20G + diff --git a/inventory/host_vars/qadevel.qa.fedoraproject.org b/inventory/host_vars/qadevel.qa.fedoraproject.org index 718de4d03c..e373372e40 100644 --- a/inventory/host_vars/qadevel.qa.fedoraproject.org +++ b/inventory/host_vars/qadevel.qa.fedoraproject.org @@ -2,18 +2,18 @@ nm: 255.255.255.0 gw: 10.5.124.254 dns: 10.5.126.21 -ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20 -ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/ -volgroup: /dev/Guests00 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21 +ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/ +volgroup: /dev/VirtGuests eth0_ip: 10.5.124.180 -vmhost: virthost-comm01.qa.fedoraproject.org +vmhost: virthost-comm03.qa.fedoraproject.org 
datacenter: phx2 fas_client_groups: sysadmin-qa,sysadmin-main # default virt install command is for a single nic-device # define in another group file for more nics (see buildvm) -virt_install_command: /usr/sbin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} +virt_install_command: /usr/bin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} --vcpus={{ num_cpus }} -l {{ ks_repo }} -x "ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }} @@ -21,6 +21,8 @@ virt_install_command: /usr/sbin/virt-install -n {{ inventory_hostname }} -r {{ m hostname={{ inventory_hostname }}" --network=bridge=br0 --autostart --noautoconsole +mariadb_root_password: "{{ qadevel_mariadb_root_password }}" + public_hostname: qadevel.qa.fedoraproject.org buildmaster: 10.5.124.180 diff --git a/inventory/host_vars/rawhide-composer.phx2.fedoraproject.org b/inventory/host_vars/rawhide-composer.phx2.fedoraproject.org index 56db2a45bc..9cb3409b4d 100644 --- a/inventory/host_vars/rawhide-composer.phx2.fedoraproject.org +++ b/inventory/host_vars/rawhide-composer.phx2.fedoraproject.org @@ -5,12 +5,4 @@ volgroup: /dev/vg_bvirthost06 kojipkgs_url: kojipkgs.fedoraproject.org kojihub_url: koji.fedoraproject.org/kojihub - -# These are consumed by a task in roles/fedmsg/base/main.yml -fedmsg_certs: -- service: shell - owner: root - group: root -- service: bodhi - owner: root - group: masher +kojihub_scheme: https diff --git a/inventory/host_vars/resultsdb01.qa.fedoraproject.org b/inventory/host_vars/resultsdb01.qa.fedoraproject.org index c659b96bcc..a3854701ab 100644 --- a/inventory/host_vars/resultsdb01.qa.fedoraproject.org +++ b/inventory/host_vars/resultsdb01.qa.fedoraproject.org @@ -2,10 +2,21 @@ nm: 255.255.255.0 gw: 10.5.124.254 dns: 10.5.126.21 -ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20 -ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/ +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21 
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/ volgroup: /dev/VirtGuests eth0_ip: 10.5.124.207 vmhost: virthost-comm03.qa.fedoraproject.org datacenter: phx2 sudoers: "{{ private }}/files/sudo/qavirt-sudoers" + +# default virt install command is for a single nic-device +# define in another group file for more nics (see buildvm) +virt_install_command: /usr/bin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }} + gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }}" + --network=bridge=br0 --autostart --noautoconsole + diff --git a/inventory/host_vars/s390-koji01.qa.fedoraproject.org b/inventory/host_vars/s390-koji01.qa.fedoraproject.org new file mode 100644 index 0000000000..358d51ba37 --- /dev/null +++ b/inventory/host_vars/s390-koji01.qa.fedoraproject.org @@ -0,0 +1,79 @@ +--- +nm: 255.255.255.0 +gw: 10.5.124.254 +dns: 10.5.126.21 +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +eth0_ip: 10.5.124.191 +vmhost: virthost-s390.qa.fedoraproject.org +datacenter: phx2 +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 + +fas_client_groups: sysadmin-noc,sysadmin-secondary + +fedmsg_fqdn: s390-koji01.qa.fedoraproject.org + +custom_rules: [ + # Need for rsync from secondary01 for content. 
+ '-A INPUT -p tcp -m tcp -s 209.132.181.8 --dport 873 -j ACCEPT', +] + +sudoers: "{{ private }}/files/sudo/sysadmin-secondary-sudoers" + +# +# define this here because s390 koji only needs eth0, not eth1 also +# +virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }} nameserver={{ dns }} + ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none" + --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio + --autostart --noautoconsole + +koji_topurl: "http://s390pkgs.fedoraproject.org/" +koji_server_url: "http://s390.koji.fedoraproject.org/kojihub" +koji_weburl: "http://s390.koji.fedoraproject.org/koji" + +fedmsg_koji_instance: s390 + +# Overload the fedmsg_certs definition from the ansible koji group, since the +# s390 hub *also* does compose stuff, not just koji stuff. 
+fedmsg_certs: +- service: shell + owner: root + group: sysadmin +- service: koji + owner: root + group: apache + can_send: + - buildsys.build.state.change + - buildsys.package.list.change + - buildsys.repo.done + - buildsys.repo.init + - buildsys.rpm.sign + - buildsys.tag + - buildsys.task.state.change + - buildsys.untag +- service: bodhi + owner: root + group: localreleng + can_send: + - compose.branched.complete + - compose.branched.mash.complete + - compose.branched.mash.start + - compose.branched.pungify.complete + - compose.branched.pungify.start + - compose.branched.rsync.complete + - compose.branched.rsync.start + - compose.branched.start + - compose.epelbeta.complete + - compose.rawhide.complete + - compose.rawhide.mash.complete + - compose.rawhide.mash.start + - compose.rawhide.rsync.complete + - compose.rawhide.rsync.start + - compose.rawhide.start diff --git a/inventory/host_vars/secondary-bridge01.qa.fedoraproject.org b/inventory/host_vars/secondary-bridge01.qa.fedoraproject.org new file mode 100644 index 0000000000..a685feef71 --- /dev/null +++ b/inventory/host_vars/secondary-bridge01.qa.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.124.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/VirtGuests +vmhost: virthost-comm03.qa.fedoraproject.org +datacenter: phx2 + +eth0_ip: 10.5.124.145 diff --git a/inventory/host_vars/secondary-vault01.qa.fedoraproject.org b/inventory/host_vars/secondary-vault01.qa.fedoraproject.org new file mode 100644 index 0000000000..9234f73680 --- /dev/null +++ b/inventory/host_vars/secondary-vault01.qa.fedoraproject.org @@ -0,0 +1,12 @@ +--- +nm: 255.255.255.0 +gw: 10.5.124.254 +dns: 10.5.126.21 + +ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 +ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ +volgroup: /dev/vg_guests +vmhost: virthost-comm04.qa.fedoraproject.org +datacenter: phx2 + +eth0_ip: 10.5.124.146 
diff --git a/inventory/host_vars/secondary01.phx2.fedoraproject.org b/inventory/host_vars/secondary01.phx2.fedoraproject.org index 3dba41c3f8..46988fa51b 100644 --- a/inventory/host_vars/secondary01.phx2.fedoraproject.org +++ b/inventory/host_vars/secondary01.phx2.fedoraproject.org @@ -1,10 +1,29 @@ --- +lvm_size: 20000 +mem_size: 10240 +num_cpus: 4 + nm: 255.255.255.0 gw: 10.5.126.254 dns: 10.5.126.21 + ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7 ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/ -volgroup: /dev/vg_guests00 + +volgroup: /dev/vg_guests eth0_ip: 10.5.126.27 -vmhost: virthost15.phx2.fedoraproject.org +eth1_ip: 10.5.127.66 + +vmhost: virthost02.phx2.fedoraproject.org datacenter: phx2 + +# We define this here to override the global one because we need eth1 +virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }} + --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }} + --vcpus={{ num_cpus }} -l {{ ks_repo }} -x + "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0 + hostname={{ inventory_hostname }} nameserver={{ dns }} + ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none + ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none" + --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio + --autostart --noautoconsole diff --git a/inventory/host_vars/serverbeach09.fedoraproject.org b/inventory/host_vars/serverbeach09.fedoraproject.org new file mode 100644 index 0000000000..331d8c93a2 --- /dev/null +++ b/inventory/host_vars/serverbeach09.fedoraproject.org @@ -0,0 +1,5 @@ +--- +datacenter: serverbeach +nrpe_procs_warn: 900 +nrpe_procs_crit: 1000 +postfix_group: vpn diff --git a/inventory/host_vars/shumgrepper-dev.fedorainfracloud.org b/inventory/host_vars/shumgrepper-dev.fedorainfracloud.org new file mode 100644 index 0000000000..6cc2116e37 --- /dev/null +++ b/inventory/host_vars/shumgrepper-dev.fedorainfracloud.org @@ -0,0 +1,18 @@ +--- +image: rhel7-20141015 
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: persistent
+inventory_instance_name: shumgrepper-dev
+hostbase: shumgrepper-dev
+public_ip: 209.132.184.66
+root_auth_users: pingou
+description: shumgrepper development instance
+
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
diff --git a/inventory/host_vars/smtp-mm-ib01.fedoraproject.org b/inventory/host_vars/smtp-mm-ib01.fedoraproject.org
index 00fcdf7c91..94b0df7fc5 100644
--- a/inventory/host_vars/smtp-mm-ib01.fedoraproject.org
+++ b/inventory/host_vars/smtp-mm-ib01.fedoraproject.org
@@ -2,9 +2,9 @@
 nm: 255.255.255.128
 gw: 152.19.134.129
 dns: 152.2.21.1 152.2.253.100
-ks_url: http://infrastructure.fedoraproject.org/repo/rhel/ks/kvm-rhel-6
-ks_repo: http://infrastructure.fedoraproject.org/repo/rhel/RHEL6-x86_64/
-volgroup: /dev/VirtGuests00
+volgroup: /dev/vg_guests
 eth0_ip: 152.19.134.143
-vmhost: ibiblio02.fedoraproject.org
+vmhost: ibiblio03.fedoraproject.org
 datacenter: ibiblio
+ks_url: http://209.132.181.6/repo/rhel/ks/kvm-rhel-7-ext
+ks_repo: http://209.132.181.6/repo/rhel/RHEL7-x86_64/
diff --git a/inventory/host_vars/statscache01.stg.phx2.fedoraproject.org b/inventory/host_vars/statscache01.stg.phx2.fedoraproject.org
new file mode 100644
index 0000000000..92ccd4f7d6
--- /dev/null
+++ b/inventory/host_vars/statscache01.stg.phx2.fedoraproject.org
@@ -0,0 +1,14 @@
+---
+nm: 255.255.255.0
+gw: 10.5.126.254
+dns: 10.5.126.21
+
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
+
+eth0_ip: 10.5.126.2
+
+vmhost: virthost11.phx2.fedoraproject.org
+volgroup: /dev/vg_guests
+
+datacenter: phx2
diff --git a/inventory/host_vars/sundries01.phx2.fedoraproject.org b/inventory/host_vars/sundries01.phx2.fedoraproject.org
index 8bca9054bf..73726007ae 100644
--- a/inventory/host_vars/sundries01.phx2.fedoraproject.org
+++ b/inventory/host_vars/sundries01.phx2.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.126.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6
-ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
 volgroup: /dev/vg_virthost03
 eth0_ip: 10.5.126.38
 vmhost: virthost03.phx2.fedoraproject.org
diff --git a/inventory/host_vars/sundries01.stg.phx2.fedoraproject.org b/inventory/host_vars/sundries01.stg.phx2.fedoraproject.org
index 952fe3de94..3e1f2c187b 100644
--- a/inventory/host_vars/sundries01.stg.phx2.fedoraproject.org
+++ b/inventory/host_vars/sundries01.stg.phx2.fedoraproject.org
@@ -2,11 +2,11 @@
 nm: 255.255.255.0
 gw: 10.5.126.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6
-ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
 volgroup: /dev/vg_guests
 eth0_ip: 10.5.126.24
-vmhost: virthost12.phx2.fedoraproject.org
+vmhost: virthost11.phx2.fedoraproject.org
 datacenter: phx2
 # This overrides a group var and lets the playbook know that we should
 # install special cron jobs here.
diff --git a/inventory/host_vars/sundries02.phx2.fedoraproject.org b/inventory/host_vars/sundries02.phx2.fedoraproject.org
index 42c34a5846..cffba3fcc9 100644
--- a/inventory/host_vars/sundries02.phx2.fedoraproject.org
+++ b/inventory/host_vars/sundries02.phx2.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.126.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6
-ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
 volgroup: /dev/vg_virthost01
 eth0_ip: 10.5.126.40
 vmhost: virthost01.phx2.fedoraproject.org
diff --git a/inventory/host_vars/taiga.cloud.fedoraproject.org b/inventory/host_vars/taiga.cloud.fedoraproject.org
new file mode 100644
index 0000000000..998dc859bd
--- /dev/null
+++ b/inventory/host_vars/taiga.cloud.fedoraproject.org
@@ -0,0 +1,26 @@
+---
+image: "{{ fedora21_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-persistent,web-80-anywhere-persistent,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: persistent
+inventory_instance_name: taiga
+hostbase: taiga
+public_ip: 209.132.184.50
+root_auth_users: ralph maxamillion
+description: taiga frontend server
+
+host_backup_targets: ['/backups']
+dbs_to_backup: ['taiga']
+
+volumes:
+  - volume_id: 4a99a0b3-6812-4c09-af1e-6313a467e3ec
+    device: /dev/vdc
+
+cloud_networks:
+  # persistent-net
+  - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
+
diff --git a/inventory/host_vars/taskotron-client07.qa.fedoraproject.org b/inventory/host_vars/taskotron-client07.qa.fedoraproject.org
new file mode 100644
index 0000000000..3756f8a343
--- /dev/null
+++ b/inventory/host_vars/taskotron-client07.qa.fedoraproject.org
@@ -0,0 +1,14 @@
+---
+nm: 255.255.255.0
+gw: 10.5.124.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
+volgroup: /dev/VirtGuests
+eth0_ip: 10.5.124.165
+vmhost: qa09.qa.fedoraproject.org
+datacenter: phx2
+
+short_hostname: taskotron-client07.qa
+buildslave_name: taskotron-client07
+fas_client_groups: sysadmin-qa,sysadmin-main
diff --git a/inventory/host_vars/taskotron-client08.qa.fedoraproject.org b/inventory/host_vars/taskotron-client08.qa.fedoraproject.org
new file mode 100644
index 0000000000..b300c0da16
--- /dev/null
+++ b/inventory/host_vars/taskotron-client08.qa.fedoraproject.org
@@ -0,0 +1,14 @@
+---
+nm: 255.255.255.0
+gw: 10.5.124.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
+volgroup: /dev/VirtGuests
+eth0_ip: 10.5.124.166
+vmhost: qa09.qa.fedoraproject.org
+datacenter: phx2
+
+short_hostname: taskotron-client08.qa
+buildslave_name: taskotron-client08
+fas_client_groups: sysadmin-qa,sysadmin-main
diff --git a/inventory/host_vars/taskotron-client09.qa.fedoraproject.org b/inventory/host_vars/taskotron-client09.qa.fedoraproject.org
new file mode 100644
index 0000000000..641c6efca6
--- /dev/null
+++ b/inventory/host_vars/taskotron-client09.qa.fedoraproject.org
@@ -0,0 +1,14 @@
+---
+nm: 255.255.255.0
+gw: 10.5.124.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
+volgroup: /dev/VirtGuests
+eth0_ip: 10.5.124.167
+vmhost: qa09.qa.fedoraproject.org
+datacenter: phx2
+
+short_hostname: taskotron-client09.qa
+buildslave_name: taskotron-client09
+fas_client_groups: sysadmin-qa,sysadmin-main
diff --git a/inventory/host_vars/taskotron-client10.qa.fedoraproject.org b/inventory/host_vars/taskotron-client10.qa.fedoraproject.org
new file mode 100644
index 0000000000..695c78b4b8
--- /dev/null
+++ b/inventory/host_vars/taskotron-client10.qa.fedoraproject.org
@@ -0,0 +1,14 @@
+---
+nm: 255.255.255.0
+gw: 10.5.124.254
+dns: 10.5.126.21
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
+volgroup: /dev/VirtGuests
+eth0_ip: 10.5.124.168
+vmhost: qa09.qa.fedoraproject.org
+datacenter: phx2
+
+short_hostname: taskotron-client10.qa
+buildslave_name: taskotron-client10
+fas_client_groups: sysadmin-qa,sysadmin-main
diff --git a/inventory/host_vars/taskotron-client22.qa.fedoraproject.org b/inventory/host_vars/taskotron-client22.qa.fedoraproject.org
index 5948337ede..cbf52c70b9 100644
--- a/inventory/host_vars/taskotron-client22.qa.fedoraproject.org
+++ b/inventory/host_vars/taskotron-client22.qa.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.124.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20
-ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
 volgroup: /dev/VirtGuests
 eth0_ip: 10.5.124.183
 vmhost: qa09.qa.fedoraproject.org
diff --git a/inventory/host_vars/taskotron-client23.qa.fedoraproject.org b/inventory/host_vars/taskotron-client23.qa.fedoraproject.org
index 09741b7b88..c5f61d22fa 100644
--- a/inventory/host_vars/taskotron-client23.qa.fedoraproject.org
+++ b/inventory/host_vars/taskotron-client23.qa.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.124.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20
-ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/i386/os/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/i386/os/
 volgroup: /dev/VirtGuests
 eth0_ip: 10.5.124.184
 vmhost: qa09.qa.fedoraproject.org
diff --git a/inventory/host_vars/taskotron-client24.qa.fedoraproject.org b/inventory/host_vars/taskotron-client24.qa.fedoraproject.org
index 2ed3a20dfc..d1342af3fb 100644
--- a/inventory/host_vars/taskotron-client24.qa.fedoraproject.org
+++ b/inventory/host_vars/taskotron-client24.qa.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.124.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20
-ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
 volgroup: /dev/VirtGuests
 eth0_ip: 10.5.124.185
 vmhost: qa09.qa.fedoraproject.org
diff --git a/inventory/host_vars/taskotron-client25.qa.fedoraproject.org b/inventory/host_vars/taskotron-client25.qa.fedoraproject.org
index 54bb934735..b431224fd1 100644
--- a/inventory/host_vars/taskotron-client25.qa.fedoraproject.org
+++ b/inventory/host_vars/taskotron-client25.qa.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.124.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20
-ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
 volgroup: /dev/VirtGuests
 eth0_ip: 10.5.124.186
 vmhost: qa09.qa.fedoraproject.org
diff --git a/inventory/host_vars/taskotron-dev01.qa.fedoraproject.org b/inventory/host_vars/taskotron-dev01.qa.fedoraproject.org
index 43b5941c77..b6c4e08db5 100644
--- a/inventory/host_vars/taskotron-dev01.qa.fedoraproject.org
+++ b/inventory/host_vars/taskotron-dev01.qa.fedoraproject.org
@@ -8,7 +8,7 @@ volgroup: /dev/vg_guests
 eth0_ip: 10.5.124.181
 vmhost: virthost-comm04.qa.fedoraproject.org
 datacenter: phx2
-fas_client_groups: sysadmin-qa,sysadmin-main
+fas_client_groups: sysadmin-qa,sysadmin-main,fi-apprentice
 lvm_size: 45000
 
 # default virt install command is for a single nic-device
diff --git a/inventory/host_vars/taskotron01.qa.fedoraproject.org b/inventory/host_vars/taskotron01.qa.fedoraproject.org
index b1b31c3566..8165126423 100644
--- a/inventory/host_vars/taskotron01.qa.fedoraproject.org
+++ b/inventory/host_vars/taskotron01.qa.fedoraproject.org
@@ -2,14 +2,15 @@
 nm: 255.255.255.0
 gw: 10.5.124.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-20
-ks_repo: http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-fedora-21-taskotron-master
+ks_repo: http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
 volgroup: /dev/VirtGuests
 eth0_ip: 10.5.124.206
 vmhost: virthost-comm03.qa.fedoraproject.org
 datacenter: phx2
 fas_client_groups: sysadmin-qa,sysadmin-main
 sudoers: "{{ private }}/files/sudo/qavirt-sudoers"
+lvm_size: 45000
 
 # default virt install command is for a single nic-device
 # define in another group file for more nics (see buildvm)
@@ -25,6 +26,10 @@ public_hostname: taskotron.fedoraproject.org
 
 buildmaster: 10.5.124.206
 buildslaves:
+  - taskotron-client07
+  - taskotron-client08
+  - taskotron-client09
+  - taskotron-client10
   - taskotron-client22
   - taskotron-client23
   - taskotron-client24
@@ -32,6 +37,10 @@ buildslaves:
 i386_buildslaves:
   - taskotron-client23
 x86_64_buildslaves:
+  - taskotron-client07
+  - taskotron-client08
+  - taskotron-client09
+  - taskotron-client10
   - taskotron-client22
   - taskotron-client24
   - taskotron-client25
diff --git a/inventory/host_vars/torrent01.fedoraproject.org b/inventory/host_vars/torrent01.fedoraproject.org
new file mode 100644
index 0000000000..e1a12ab7c4
--- /dev/null
+++ b/inventory/host_vars/torrent01.fedoraproject.org
@@ -0,0 +1,17 @@
+---
+nm: 255.255.255.128
+gw: 152.19.134.129
+dns: 8.8.8.8
+
+volgroup: /dev/vg_guests
+
+eth0_ip: 152.19.134.141
+ansible_ssh_host: torrent01.fedoraproject.org
+
+ks_url: http://209.132.181.6/repo/rhel/ks/kvm-rhel-7-ext
+ks_repo: http://209.132.181.6/repo/rhel/RHEL7-x86_64/
+
+postfix_group: vpn
+
+vmhost: ibiblio03.fedoraproject.org
+datacenter: ibiblio
diff --git a/inventory/host_vars/twisted-fedora21-1.fedorainfracloud.org b/inventory/host_vars/twisted-fedora21-1.fedorainfracloud.org
new file mode 100644
index 0000000000..fe00ac27ab
--- /dev/null
+++ b/inventory/host_vars/twisted-fedora21-1.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: "{{ fedora21_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-fedora21-1
+hostbase: twisted-fedora21-1
+public_ip: 209.132.184.135
+description: twisted buildbot for fedora 21
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-fedora21-2.fedorainfracloud.org b/inventory/host_vars/twisted-fedora21-2.fedorainfracloud.org
new file mode 100644
index 0000000000..c7915cacba
--- /dev/null
+++ b/inventory/host_vars/twisted-fedora21-2.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: "{{ fedora21_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-fedora21-2
+hostbase: twisted-fedora21-2
+public_ip: 209.132.184.136
+description: twisted buildbot for fedora 21
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-fedora22-1.fedorainfracloud.org b/inventory/host_vars/twisted-fedora22-1.fedorainfracloud.org
new file mode 100644
index 0000000000..74c875bea7
--- /dev/null
+++ b/inventory/host_vars/twisted-fedora22-1.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: "{{ fedora22_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-fedora22-1
+hostbase: twisted-fedora22-1
+public_ip: 209.132.184.183
+description: twisted buildbot for fedora 22
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-fedora22-2.fedorainfracloud.org b/inventory/host_vars/twisted-fedora22-2.fedorainfracloud.org
new file mode 100644
index 0000000000..ec1a026702
--- /dev/null
+++ b/inventory/host_vars/twisted-fedora22-2.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: "{{ fedora22_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-fedora22-2
+hostbase: twisted-fedora22-2
+public_ip: 209.132.184.184
+description: twisted buildbot for fedora 22
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-rhel6-1.fedorainfracloud.org b/inventory/host_vars/twisted-rhel6-1.fedorainfracloud.org
new file mode 100644
index 0000000000..889ab67445
--- /dev/null
+++ b/inventory/host_vars/twisted-rhel6-1.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: "{{ centos66_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-rhel6-1
+hostbase: twisted-rhel6-1
+public_ip: 209.132.184.185
+description: twisted buildbot for rhel 6
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-rhel6-2.fedorainfracloud.org b/inventory/host_vars/twisted-rhel6-2.fedorainfracloud.org
new file mode 100644
index 0000000000..9626f598e9
--- /dev/null
+++ b/inventory/host_vars/twisted-rhel6-2.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: "{{ centos66_x86_64 }}"
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-rhel6-2
+hostbase: twisted-rhel6-2
+public_ip: 209.132.184.186
+description: twisted buildbot for rhel6 2
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-rhel7-1.fedorainfracloud.org b/inventory/host_vars/twisted-rhel7-1.fedorainfracloud.org
new file mode 100644
index 0000000000..bfac09bb35
--- /dev/null
+++ b/inventory/host_vars/twisted-rhel7-1.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: rhel7-20141015
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-rhel7-1
+hostbase: twisted-rhel7-1
+public_ip: 209.132.184.187
+description: twisted buildbot for rhel7 1
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/twisted-rhel7-2.fedorainfracloud.org b/inventory/host_vars/twisted-rhel7-2.fedorainfracloud.org
new file mode 100644
index 0000000000..a3bc4a42c4
--- /dev/null
+++ b/inventory/host_vars/twisted-rhel7-2.fedorainfracloud.org
@@ -0,0 +1,17 @@
+---
+image: rhel7-20141015
+instance_type: m1.medium
+keypair: fedora-admin-20130801
+security_group: ssh-anywhere-pythonbots,all-icmp-pythonbots,default
+zone: nova
+tcp_ports: [22, 80, 443]
+
+inventory_tenant: pythonbots
+inventory_instance_name: twisted-rhel7-2
+hostbase: twisted-rhel7-2
+public_ip: 209.132.184.188
+description: twisted buildbot for rhel7 2
+
+cloud_networks:
+  # pythonbots-net
+  - net-id: "36ca66de-001d-4807-a688-58c363d84d68"
diff --git a/inventory/host_vars/value01.phx2.fedoraproject.org b/inventory/host_vars/value01.phx2.fedoraproject.org
index e909e2cccf..cbead328ab 100644
--- a/inventory/host_vars/value01.phx2.fedoraproject.org
+++ b/inventory/host_vars/value01.phx2.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.126.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6
-ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
 volgroup: /dev/vg_virthost03
 eth0_ip: 10.5.126.49
 vmhost: virthost03.phx2.fedoraproject.org
diff --git a/inventory/host_vars/value01.stg.phx2.fedoraproject.org b/inventory/host_vars/value01.stg.phx2.fedoraproject.org
index 81e4235e8f..8140dd8fdf 100644
--- a/inventory/host_vars/value01.stg.phx2.fedoraproject.org
+++ b/inventory/host_vars/value01.stg.phx2.fedoraproject.org
@@ -2,8 +2,8 @@
 nm: 255.255.255.0
 gw: 10.5.126.254
 dns: 10.5.126.21
-ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-6
-ks_repo: http://10.5.126.23/repo/rhel/RHEL6-x86_64/
+ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
+ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/
 volgroup: /dev/vg_guests
 eth0_ip: 10.5.126.91
 vmhost: virthost12.phx2.fedoraproject.org
diff --git a/inventory/host_vars/virthost-comm01.qa.fedoraproject.org b/inventory/host_vars/virthost-comm01.qa.fedoraproject.org
deleted file mode 100644
index 9342178f79..0000000000
--- a/inventory/host_vars/virthost-comm01.qa.fedoraproject.org
+++ /dev/null
@@ -1,3 +0,0 @@
----
-# This virthost only has non release critical instances, so it doesn't freeze
-freezes: false
diff --git a/inventory/inventory b/inventory/inventory
index 569d10ce01..7445f01d74 100644
--- a/inventory/inventory
+++ b/inventory/inventory
@@ -7,11 +7,18 @@
 [beaker]
 beaker01.qa.fedoraproject.org
 
+[beaker-stg]
+beaker-stg01.qa.fedoraproject.org
+
+[beaker-virthosts]
+qa02.qa.fedoraproject.org
+qa08.qa.fedoraproject.org
+
 [qadevel]
 qadevel.qa.fedoraproject.org:222
 
-[qadevel-stg]
-qadevel-stg.qa.fedoraproject.org:222
+[qa-stg]
+qa-stg01.qa.fedoraproject.org:222
 
 [arm-packager]
 arm03-packager00.cloud.fedoraproject.org
@@ -35,9 +42,6 @@ arm01-retrace01.arm.fedoraproject.org
 retrace01.qa.fedoraproject.org
 retrace02.qa.fedoraproject.org
 
-[app-stg]
-app01.stg.phx2.fedoraproject.org
-
 [ask]
 ask01.phx2.fedoraproject.org
 ask02.phx2.fedoraproject.org
@@ -49,8 +53,7 @@ ask01.stg.phx2.fedoraproject.org
 atomic01.qa.fedoraproject.org
 
 [backup]
-backup02.fedoraproject.org
-backup03.phx2.fedoraproject.org
+backup01.phx2.fedoraproject.org
 
 [badges-backend]
 badges-backend01.phx2.fedoraproject.org
@@ -65,9 +68,6 @@ badges-web02.phx2.fedoraproject.org
 
 [badges-web-stg]
 badges-web01.stg.phx2.fedoraproject.org
-[bapp]
-bapp02.phx2.fedoraproject.org
-
 [bastion]
 bastion01.phx2.fedoraproject.org
 bastion02.phx2.fedoraproject.org
@@ -79,14 +79,19 @@ blockerbugs02.phx2.fedoraproject.org
 
 [blockerbugs-stg]
 blockerbugs01.stg.phx2.fedoraproject.org
-blockerbugs-dev.cloud.fedoraproject.org
 
 [bodhi]
 bodhi01.phx2.fedoraproject.org
 bodhi02.phx2.fedoraproject.org
 
+[bodhi2]
+bodhi03.phx2.fedoraproject.org
+bodhi04.phx2.fedoraproject.org
+
 [bodhi-stg]
 bodhi01.stg.phx2.fedoraproject.org
+
+[bodhi2-stg]
 bodhi02.stg.phx2.fedoraproject.org
 
 [bugzilla2fedmsg]
@@ -112,6 +117,7 @@ ibiblio01.fedoraproject.org
 ibiblio02.fedoraproject.org
 ibiblio03.fedoraproject.org
 ibiblio04.fedoraproject.org
+ibiblio05.fedoraproject.org
 internetx01.fedoraproject.org
 osuosl01.fedoraproject.org
 osuosl02.fedoraproject.org
@@ -134,6 +140,11 @@ datagrepper01.stg.phx2.fedoraproject.org
 
 [docs-backend]
 docs-backend01.phx2.fedoraproject.org
 
+[docs-dev]
+docs-dev-master.fedorainfracloud.org
+docs-dev-frontend.fedorainfracloud.org
+docs-dev-builder01.fedorainfracloud.org
+
 [fedimg]
 fedimg01.phx2.fedoraproject.org
@@ -171,17 +182,17 @@ mailman01.stg.phx2.fedoraproject.org
 
 [collab]
 collab03.fedoraproject.org
-collab04.fedoraproject.org
 
 [releng]
-branched-composer.phx2.fedoraproject.org
-rawhide-composer.phx2.fedoraproject.org
 releng04.phx2.fedoraproject.org
 relepel01.phx2.fedoraproject.org
 
-[releng-stg]
-composer.stg.phx2.fedoraproject.org
-releng01.stg.phx2.fedoraproject.org
+[bodhi-backend]
+bodhi-backend01.phx2.fedoraproject.org
+bodhi-backend02.phx2.fedoraproject.org
+
+[bodhi-backend-stg]
+bodhi-backend01.stg.phx2.fedoraproject.org
 
 [composers]
 branched-composer.phx2.fedoraproject.org
@@ -192,12 +203,20 @@ composer.stg.phx2.fedoraproject.org
 
 [sign-bridge]
 sign-bridge01.phx2.fedoraproject.org
+secondary-bridge01.qa.fedoraproject.org
 #
 # sign vault servers don't listen to ssh by default.
 #
-#[sign-vault]
+[sign-vault]
 #sign-vault03.phx2.fedoraproject.org
 #sign-vault04.phx2.fedoraproject.org
+#secondary-vault01.qa.fedoraproject.org
+
+#[statscache]
+#statscache01.phx2.fedoraproject.org
+
+[statscache-stg]
+statscache01.stg.phx2.fedoraproject.org
 
 [autosign]
 autosign01.phx2.fedoraproject.org
@@ -207,16 +226,17 @@ darkserver01.phx2.fedoraproject.org
 
 [dbserver]
 db01.phx2.fedoraproject.org
-db05.phx2.fedoraproject.org
+db03.phx2.fedoraproject.org
 db-fas01.phx2.fedoraproject.org
 db-datanommer02.phx2.fedoraproject.org
 db-koji01.phx2.fedoraproject.org
+db-s390-koji01.qa.fedoraproject.org
 db-qa01.qa.fedoraproject.org
 
 [dbserver-stg]
 db-fas01.stg.phx2.fedoraproject.org
 db01.stg.phx2.fedoraproject.org
-db02.stg.phx2.fedoraproject.org
+db03.stg.phx2.fedoraproject.org
 
 [download-phx2]
 download01.phx2.fedoraproject.org
@@ -254,12 +274,8 @@ fas03.phx2.fedoraproject.org
 
 [fas-stg]
 fas01.stg.phx2.fedoraproject.org
-[hyperkitty-stg]
-lists-dev.cloud.fedoraproject.org
-
 [hosted]
 hosted03.fedoraproject.org
-hosted04.fedoraproject.org
 hosted-lists01.fedoraproject.org
 
 [hotness]
@@ -276,7 +292,7 @@ kerneltest01.phx2.fedoraproject.org
 
 [kerneltest-stg]
 kerneltest01.stg.phx2.fedoraproject.org
 
 [kernel-qa]
-kernel01.qa.fedoraproject.org
+#kernel01.qa.fedoraproject.org
 #kernel02.qa.fedoraproject.org
 
 [keys]
 keys02.fedoraproject.org
@@ -285,18 +301,54 @@
 [koji]
 koji01.phx2.fedoraproject.org
 koji02.phx2.fedoraproject.org
+s390-koji01.qa.fedoraproject.org
+
+# We need an inventory definition of these hosts for fedmsg certs even though
+# they are not yet ansibilized. When they're finally assimilated, move them to
+# the main group
+[koji-not-yet-ansibilized]
+arm-hub01.qa.fedoraproject.org
+ppc-hub.qa.fedoraproject.org
+
 
 [koji-stg]
 koji01.stg.phx2.fedoraproject.org
 
+# Create an OSEv3 group that contains the masters and nodes groups
+[OSv3:children]
+openshift_masters
+openshift_nodes
+
+# host group for OpenShift v3 masters
+[openshift_masters]
+osbs01.stg.phx2.fedoraproject.org
+
+# host group for OpenShift v3 nodes
+[openshift_nodes]
+osbs01.stg.phx2.fedoraproject.org
+
+[osbs-stg]
+osbs01.stg.phx2.fedoraproject.org
+
 [kojipkgs]
 kojipkgs01.phx2.fedoraproject.org
 
+[koschei]
+koschei01.phx2.fedoraproject.org
+
+[koschei-stg]
+koschei01.stg.phx2.fedoraproject.org
+
 [infracore]
 lockbox01.phx2.fedoraproject.org
 log01.phx2.fedoraproject.org
 noc01.phx2.fedoraproject.org
 noc02.fedoraproject.org
+data-analysis01.phx2.fedoraproject.org
+
+[ipsilon]
+ipsilon01.phx2.fedoraproject.org
+ipsilon02.phx2.fedoraproject.org
 
 [ipsilon-stg]
 ipsilon01.stg.phx2.fedoraproject.org
@@ -305,7 +357,6 @@ ipsilon01.stg.phx2.fedoraproject.org
 [dhcp]
 dhcp01.phx2.fedoraproject.org
 
 [lockbox]
-lockbox-comm01.qa.fedoraproject.org
 
 [nagios]
 noc01.phx2.fedoraproject.org
@@ -333,17 +384,13 @@ nuancier02.phx2.fedoraproject.org
 nuancier01.stg.phx2.fedoraproject.org
 nuancier02.stg.phx2.fedoraproject.org
 
-[fedoauth]
-fedoauth01.phx2.fedoraproject.org
-fedoauth02.phx2.fedoraproject.org
-
-[fedoauth-stg]
-fedoauth01.stg.phx2.fedoraproject.org
-
 [memcached]
 memcached01.phx2.fedoraproject.org
 memcached02.phx2.fedoraproject.org
 
+[memcached-stg]
+memcached01.stg.phx2.fedoraproject.org
+
 [mirrorlist2]
 mirrorlist-dedicatedsolutions.fedoraproject.org
 mirrorlist-host1plus.fedoraproject.org
@@ -354,14 +401,41 @@ mirrorlist-phx2.phx2.fedoraproject.org
 
 [mirrorlist2-stg]
 mirrorlist-phx2.stg.phx2.fedoraproject.org
 
-[mm-stg]
+[mm-frontend]
+mm-frontend01.phx2.fedoraproject.org
+mm-frontend02.phx2.fedoraproject.org
+
+[mm-backend]
+mm-backend01.phx2.fedoraproject.org
+
+[mm-crawler]
+mm-crawler01.phx2.fedoraproject.org
+mm-crawler02.phx2.fedoraproject.org
+
+[mm-frontend-stg]
 mm-frontend01.stg.phx2.fedoraproject.org
+
+[mm-backend-stg]
 mm-backend01.stg.phx2.fedoraproject.org
+
+[mm-crawler-stg]
 mm-crawler01.stg.phx2.fedoraproject.org
 
-[other]
-people03.fedoraproject.org
-torrent02.fedoraproject.org
+[mm:children]
+mm-frontend
+mm-backend
+mm-crawler
+
+[mm-stg:children]
+mm-frontend-stg
+mm-backend-stg
+mm-crawler-stg
+
+[people]
+people01.fedoraproject.org
+
+[torrent]
+torrent01.fedoraproject.org
 
 [secondary]
 secondary01.phx2.fedoraproject.org
@@ -446,50 +520,51 @@ smtp-mm-coloamer01.fedoraproject.org
 smtp-mm-tummy01.fedoraproject.org
 
 [spare]
-junk01.phx2.fedoraproject.org
 
 #
 # All staging hosts should be in this group too.
 #
 [staging]
-app01.stg.phx2.fedoraproject.org
 ask01.stg.phx2.fedoraproject.org
 badges-backend01.stg.phx2.fedoraproject.org
 badges-web01.stg.phx2.fedoraproject.org
 blockerbugs01.stg.phx2.fedoraproject.org
 bodhi01.stg.phx2.fedoraproject.org
 bodhi02.stg.phx2.fedoraproject.org
+bodhi-backend01.stg.phx2.fedoraproject.org
 bugzilla2fedmsg01.stg.phx2.fedoraproject.org
 buildvm-01.stg.phx2.fedoraproject.org
 busgateway01.stg.phx2.fedoraproject.org
 composer.stg.phx2.fedoraproject.org
 datagrepper01.stg.phx2.fedoraproject.org
 db01.stg.phx2.fedoraproject.org
-db02.stg.phx2.fedoraproject.org
+db03.stg.phx2.fedoraproject.org
 db-fas01.stg.phx2.fedoraproject.org
 elections01.stg.phx2.fedoraproject.org
 fas01.stg.phx2.fedoraproject.org
 fedimg01.stg.phx2.fedoraproject.org
-fedoauth01.stg.phx2.fedoraproject.org
 fedocal01.stg.phx2.fedoraproject.org
 github2fedmsg01.stg.phx2.fedoraproject.org
 gallery01.stg.phx2.fedoraproject.org
 hotness01.stg.phx2.fedoraproject.org
 kerneltest01.stg.phx2.fedoraproject.org
 koji01.stg.phx2.fedoraproject.org
+koschei01.stg.phx2.fedoraproject.org
 mailman01.stg.phx2.fedoraproject.org
 ipsilon01.stg.phx2.fedoraproject.org
+memcached01.stg.phx2.fedoraproject.org
 notifs-backend01.stg.phx2.fedoraproject.org
 notifs-web01.stg.phx2.fedoraproject.org
 notifs-web02.stg.phx2.fedoraproject.org
 nuancier01.stg.phx2.fedoraproject.org
 nuancier02.stg.phx2.fedoraproject.org
+osbs01.stg.phx2.fedoraproject.org
 packages03.stg.phx2.fedoraproject.org
 paste01.stg.phx2.fedoraproject.org
 pkgdb01.stg.phx2.fedoraproject.org
 pkgs01.stg.phx2.fedoraproject.org
 proxy01.stg.phx2.fedoraproject.org
-releng01.stg.phx2.fedoraproject.org
 resultsdb-stg01.qa.fedoraproject.org
+statscache01.stg.phx2.fedoraproject.org
 summershum01.stg.phx2.fedoraproject.org
 sundries01.stg.phx2.fedoraproject.org
 tagger01.stg.phx2.fedoraproject.org
@@ -500,6 +575,7 @@ mirrorlist-phx2.stg.phx2.fedoraproject.org
 mm-frontend01.stg.phx2.fedoraproject.org
 mm-backend01.stg.phx2.fedoraproject.org
 mm-crawler01.stg.phx2.fedoraproject.org
+beaker-stg01.qa.fedoraproject.org
 
 # This is a list of hosts that are a little "friendly" with staging.
 # They are exempted from the iptables wall between staging and prod.
@@ -566,6 +642,10 @@ taskotron-client21.qa.fedoraproject.org
 taskotron01.qa.fedoraproject.org
 
 [taskotron-prod-clients]
+taskotron-client07.qa.fedoraproject.org
+taskotron-client08.qa.fedoraproject.org
+taskotron-client09.qa.fedoraproject.org
+taskotron-client10.qa.fedoraproject.org
 taskotron-client22.qa.fedoraproject.org
 taskotron-client23.qa.fedoraproject.org
 taskotron-client24.qa.fedoraproject.org
@@ -599,6 +679,11 @@ virthost15.phx2.fedoraproject.org
 virthost16.phx2.fedoraproject.org
 virthost17.phx2.fedoraproject.org
 virthost18.phx2.fedoraproject.org
+virthost19.phx2.fedoraproject.org
+virthost20.phx2.fedoraproject.org
+virthost21.phx2.fedoraproject.org
+virthost22.phx2.fedoraproject.org
+qa03.qa.fedoraproject.org
 qa04.qa.fedoraproject.org
 qa05.qa.fedoraproject.org
 qa06.qa.fedoraproject.org
@@ -611,7 +696,6 @@ qa13.qa.fedoraproject.org
 qa14.qa.fedoraproject.org
 
 [virthost-comm]
-virthost-comm01.qa.fedoraproject.org
 virthost-comm02.qa.fedoraproject.org
 virthost-comm03.qa.fedoraproject.org
 virthost-comm04.qa.fedoraproject.org
@@ -624,59 +708,63 @@ wiki01.stg.phx2.fedoraproject.org
 wiki01.phx2.fedoraproject.org
 wiki02.phx2.fedoraproject.org
 
+
+# This is a convenience group listing the hosts that live on the QA network that
+# are allowed to send inbound fedmsg messages to our production fedmsg bus.
+# See also:
+#  - inventory/group_vars/proxies for the iptables custom_rules list
+#  - roles/fedmsg/base/templates/relay.py.j2
+[fedmsg-qa-network]
+retrace01.qa.fedoraproject.org
+retrace02.qa.fedoraproject.org
+s390-koji01.qa.fedoraproject.org
+
+
 # assorted categories of fedmsg services, for convenience
-[fedmsg-hubs]
-badges-backend01.phx2.fedoraproject.org
-busgateway01.phx2.fedoraproject.org
-fedimg01.phx2.fedoraproject.org
-hotness01.phx2.fedoraproject.org
-notifs-backend01.phx2.fedoraproject.org
-pkgs02.phx2.fedoraproject.org
-summershum01.phx2.fedoraproject.org
+[fedmsg-hubs:children]
+badges-backend
+busgateway
+fedimg
+hotness
+notifs-backend
+pkgs
+summershum
 
-[fedmsg-hubs-stg]
-badges-backend01.stg.phx2.fedoraproject.org
-busgateway01.stg.phx2.fedoraproject.org
-fedimg01.stg.phx2.fedoraproject.org
-hotness01.stg.phx2.fedoraproject.org
-notifs-backend01.stg.phx2.fedoraproject.org
-pkgs01.stg.phx2.fedoraproject.org
-summershum01.stg.phx2.fedoraproject.org
+[fedmsg-hubs-stg:children]
+badges-backend-stg
+busgateway-stg
+fedimg-stg
+hotness-stg
+notifs-backend-stg
+pkgs-stg
+summershum-stg
 
-[fedmsg-ircs]
-value01.phx2.fedoraproject.org
+[fedmsg-ircs:children]
+value
 
-[fedmsg-ircs-stg]
-value01.stg.phx2.fedoraproject.org
+[fedmsg-ircs-stg:children]
+value-stg
 
-[fedmsg-relays]
-busgateway01.phx2.fedoraproject.org
-anitya-frontend01.fedoraproject.org
+[fedmsg-relays:children]
+busgateway
+anitya-frontend
 
-[fedmsg-relays-stg]
-busgateway01.stg.phx2.fedoraproject.org
+[fedmsg-relays-stg:children]
+busgateway-stg
 
-[fedmsg-gateways]
-busgateway01.phx2.fedoraproject.org
-proxy01.phx2.fedoraproject.org
-proxy02.fedoraproject.org
-proxy03.fedoraproject.org
-proxy04.fedoraproject.org
-proxy06.fedoraproject.org
-proxy07.fedoraproject.org
-proxy08.fedoraproject.org
-proxy09.fedoraproject.org
-proxy10.phx2.fedoraproject.org
+[fedmsg-gateways:children]
+busgateway
+proxies
 
-[fedmsg-gateways-stg]
-busgateway01.stg.phx2.fedoraproject.org
-proxy01.stg.phx2.fedoraproject.org
+[fedmsg-gateways-stg:children] +busgateway-stg +proxies-stg -[moksha-hubs] -bugzilla2fedmsg01.phx2.fedoraproject.org +[moksha-hubs:children] +bugzilla2fedmsg -[moksha-hubs-stg] -bugzilla2fedmsg01.stg.phx2.fedoraproject.org +[moksha-hubs-stg:children] +bugzilla2fedmsg-stg [fedmsg-services:children] fedmsg-hubs @@ -692,6 +780,11 @@ fedmsg-relays-stg fedmsg-gateways-stg moksha-hubs-stg +# These are groups that are using the python34 fedmsg stack. +[python34-fedmsg:children] +mailman +mailman-stg + ## END fedmsg services [cloud-hardware] @@ -713,6 +806,15 @@ fed-cloud15.cloud.fedoraproject.org #fed-cloud16.cloud.fedoraproject.org cloud-noc01.cloud.fedoraproject.org +[new-cloud-hardware] +fed-cloud09.cloud.fedoraproject.org +fed-cloud10.cloud.fedoraproject.org +fed-cloud11.cloud.fedoraproject.org +fed-cloud12.cloud.fedoraproject.org +fed-cloud13.cloud.fedoraproject.org +fed-cloud14.cloud.fedoraproject.org +fed-cloud15.cloud.fedoraproject.org + [openstack-compute] fed-cloud10.cloud.fedoraproject.org fed-cloud11.cloud.fedoraproject.org @@ -722,43 +824,51 @@ fed-cloud14.cloud.fedoraproject.org fed-cloud15.cloud.fedoraproject.org [persistent-cloud] -#fedocal.dev.fedoraproject.org -209.132.184.147 -#copr-fe.cloud.fedoraproject.org -copr-fe.cloud.fedoraproject.org -209.132.184.150 -#artboard.cloud.fedoraproject.org -209.132.184.143 -#logstash-dev.cloud.fedoraproject.org -209.132.184.146 -# copr-be.cloud.fedoraproject.org on openstack +#shogun-ca.cloud.fedoraproject.org (oldcloud) +209.132.184.157 +# +# Instances below are all moved to the new cloud +# +# artboard instance +artboard.fedorainfracloud.org +# copr production instances copr-be.cloud.fedoraproject.org -#elections-dev -209.132.184.162 +copr-fe.cloud.fedoraproject.org +copr-keygen.cloud.fedoraproject.org # copr dev instances copr-be-dev.cloud.fedoraproject.org copr-fe-dev.cloud.fedoraproject.org -#shogun-ca.cloud.fedoraproject.org -209.132.184.157 -# bodhi.dev.fedoraproject.org -bodhi.dev.fedoraproject.org -# 
Koschei instance - ticket 4449 -koschei.cloud.fedoraproject.org -# darkserver-dev -209.132.184.148 -# DevPi test instance - ticket 4524 -209.132.184.166 -# copr keygen instance -copr-keygen.cloud.fedoraproject.org +# taiga for kanban-style project planning +taiga.cloud.fedoraproject.org +# graphite/statsd/grafana exploration +grafana.cloud.fedoraproject.org +# glittergallery GSoC dev work +glittergallery-dev.fedorainfracloud.org +# shumgrepper-dev +shumgrepper-dev.fedorainfracloud.org +# fas2-dev +fas2-dev.fedorainfracloud.org +# fas3-dev +fas3-dev.fedorainfracloud.org +# faitout +faitout.fedorainfracloud.org +# darkserver development instance +darkserver-dev.fedorainfracloud.org +# lists development instance +lists-dev.fedorainfracloud.org +# java-deptools ticket 4846 +java-deptools.fedorainfracloud.org [jenkins-slaves] # EL-6 builder 209.132.184.165 -# F20 builder -209.132.184.209 # RHEL7 builder 209.132.184.137 +[jenkins-slaves-newcloud] +# fedora 22 builder in new cloud +jenkins-f22.fedorainfracloud.org + [jenkins-cloud] 209.132.184.153 #jenkins.cloud.fedoraproject.org @@ -767,6 +877,15 @@ copr-keygen.cloud.fedoraproject.org [jenkins-cloud:children] jenkins-slaves +# +# These are in the new cloud +# +[jenkins-dev] +jenkins.fedorainfracloud.org +jenkins-slave-el6.fedorainfracloud.org +jenkins-slave-el7.fedorainfracloud.org +jenkins-slave-f22.fedorainfracloud.org + [osuosl] proxy06.fedoraproject.org @@ -779,10 +898,8 @@ packages koji releng dbserver -bapp [groupa] -torrent02.fedoraproject.org secondary01.phx2.fedoraproject.org @@ -809,7 +926,7 @@ bkernel buildvmhost [groupc] -people03.fedoraproject.org +people01.fedoraproject.org [virtservers:children] colo-virt @@ -823,13 +940,10 @@ copr-fe-dev.cloud.fedoraproject.org [copr-back-stg] copr-be-dev.cloud.fedoraproject.org -#copr-be-dev2.cloud.fedoraproject.org -#209.132.184.49 [copr-keygen-stg] -209.132.184.124 +copr-keygen-dev.cloud.fedoraproject.org -# temporary [copr-keygen] copr-keygen.cloud.fedoraproject.org 
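The inventory hunks above replace literal host lists with `[group:children]` definitions (e.g. `[fedmsg-relays:children]` pointing at `busgateway` and `anitya-frontend`), so each host is listed once and the service groups are composed from it. As a rough illustration of how such child groups expand to hosts — a toy model, not Ansible's actual inventory code:

```python
# Toy model of INI inventory expansion with [group:children].
# A group's hosts are its own entries plus those of all child groups,
# collected recursively. Not Ansible's real parser.
def expand_group(name, hosts, children):
    """Return every host reachable from `name`, following :children links."""
    result = list(hosts.get(name, []))
    for child in children.get(name, []):
        result.extend(expand_group(child, hosts, children))
    return result

hosts = {
    "busgateway": ["busgateway01.phx2.fedoraproject.org"],
    "anitya-frontend": ["anitya-frontend01.fedoraproject.org"],
}
children = {"fedmsg-relays": ["busgateway", "anitya-frontend"]}

# fedmsg-relays has no direct hosts; it inherits from its children
print(expand_group("fedmsg-relays", hosts, children))
```

The payoff is visible in the diff itself: renaming or adding a host now touches one base group instead of every `fedmsg-*` list that mentions it.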
@@ -839,15 +953,39 @@ copr-fe.cloud.fedoraproject.org [copr-back] copr-be.cloud.fedoraproject.org +[copr-dist-git] +copr-dist-git.fedorainfracloud.org + +[copr-dist-git-stg] +copr-dist-git-dev.fedorainfracloud.org + [copr:children] copr-front copr-back copr-keygen +copr-dist-git [copr-stg:children] copr-front-stg copr-back-stg copr-keygen-stg +copr-dist-git-stg + +[dopr-stg] +dopr-dev.cloud.fedoraproject.org + +[pagure] +pagure01.fedoraproject.org [pagure-stg] pagure-stg01.fedoraproject.org + +[twisted-buildbots] +twisted-fedora21-1.fedorainfracloud.org +twisted-fedora21-2.fedorainfracloud.org +twisted-fedora22-1.fedorainfracloud.org +twisted-fedora22-2.fedorainfracloud.org +twisted-rhel6-1.fedorainfracloud.org +twisted-rhel6-2.fedorainfracloud.org +twisted-rhel7-1.fedorainfracloud.org +twisted-rhel7-2.fedorainfracloud.org diff --git a/library/nova_compute.py b/library/nova_compute.py index 4c1e977259..c3cbfda07f 100644 --- a/library/nova_compute.py +++ b/library/nova_compute.py @@ -184,7 +184,8 @@ EXAMPLES = ''' wait_for: 200 flavor_id: 4 nics: - - net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723 + # persistent-net + - net-id: 67b77354-39a4-43de-b007-bb813ac5c35f meta: hostname: test1 group: uge_master diff --git a/lookup_plugins/oo_option.py b/lookup_plugins/oo_option.py new file mode 100644 index 0000000000..35dce48f91 --- /dev/null +++ b/lookup_plugins/oo_option.py @@ -0,0 +1,73 @@ +#!/usr/bin/env python2 +# -*- coding: utf-8 -*- +# vim: expandtab:tabstop=4:shiftwidth=4 + +''' +oo_option lookup plugin for openshift-ansible + +Usage: + + - debug: + msg: "{{ lookup('oo_option', '<key>') | default('', True) }}" + +This returns, by order of priority: + +* if it exists, the `cli_<key>` ansible variable.
This variable is set by `bin/cluster --option = …` +* if it exists, the environment variable named `<key>` +* if none of the above conditions are met, empty string is returned +''' + +from ansible.utils import template +import os + +# Reason: disable too-few-public-methods because the `run` method is the only +# one required by the Ansible API +# Status: permanently disabled +# pylint: disable=too-few-public-methods +class LookupModule(object): + ''' oo_option lookup plugin main class ''' + + # Reason: disable unused-argument because Ansible is calling us with many + # parameters we are not interested in. + # The lookup plugins of Ansible have this kwargs “catch-all” parameter + # which is not used + # Status: permanently disabled unless Ansible API evolves + # pylint: disable=unused-argument + def __init__(self, basedir=None, **kwargs): + ''' Constructor ''' + self.basedir = basedir + + # Reason: disable unused-argument because Ansible is calling us with many + # parameters we are not interested in.
+ # The lookup plugins of Ansible have this kwargs “catch-all” parameter + # which is not used + # Status: permanently disabled unless Ansible API evolves + # pylint: disable=unused-argument + def run(self, terms, inject=None, **kwargs): + ''' Main execution path ''' + + try: + terms = template.template(self.basedir, terms, inject) + # Reason: disable broad-except to really ignore any potential exception + # This is inspired by the upstream "env" lookup plugin: + # https://github.com/ansible/ansible/blob/devel/v1/ansible/runner/lookup_plugins/env.py#L29 + # pylint: disable=broad-except + except Exception: + pass + + if isinstance(terms, basestring): + terms = [terms] + + ret = [] + + for term in terms: + option_name = term.split()[0] + cli_key = 'cli_' + option_name + if inject and cli_key in inject: + ret.append(inject[cli_key]) + elif option_name in os.environ: + ret.append(os.environ[option_name]) + else: + ret.append('') + + return ret diff --git a/lookup_plugins/sequence.py b/lookup_plugins/sequence.py new file mode 100644 index 0000000000..8ca9e7b39e --- /dev/null +++ b/lookup_plugins/sequence.py @@ -0,0 +1,215 @@ +# (c) 2013, Jayson Vantuyl +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
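The resolution order implemented in oo_option's `run()` above — a `cli_<key>` inject variable wins, then an environment variable of the same name, then the empty string — can be modelled in isolation. This is a simplified Python 3 sketch of that priority logic, not the plugin itself (`deployment_type` is just an example key):

```python
import os

def resolve_option(name, inject=None):
    """Return inject['cli_<name>'] if present, else the <name> env var, else ''."""
    cli_key = "cli_" + name
    if inject and cli_key in inject:
        return inject[cli_key]
    return os.environ.get(name, "")

# The cli_ variable takes priority over the environment.
os.environ["deployment_type"] = "from-env"
print(resolve_option("deployment_type", {"cli_deployment_type": "from-cli"}))  # from-cli
print(resolve_option("deployment_type"))   # from-env
print(resolve_option("unset_option"))      # empty string
```

This ordering lets a `bin/cluster` command-line flag override whatever happens to be set in the caller's environment, with a safe empty-string fallback.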
+ +from ansible.errors import AnsibleError +import ansible.utils as utils +from re import compile as re_compile, IGNORECASE + +# shortcut format +NUM = "(0?x?[0-9a-f]+)" +SHORTCUT = re_compile( + "^(" + # Group 0 + NUM + # Group 1: Start + "-)?" + + NUM + # Group 2: End + "(/" + # Group 3 + NUM + # Group 4: Stride + ")?" + + "(:(.+))?$", # Group 5, Group 6: Format String + IGNORECASE +) + + +class LookupModule(object): + """ + sequence lookup module + + Used to generate some sequence of items. Takes arguments in two forms. + + The simple / shortcut form is: + + [start-]end[/stride][:format] + + As indicated by the brackets: start, stride, and format string are all + optional. The format string is in the style of printf. This can be used + to pad with zeros, format in hexadecimal, etc. All of the numerical values + can be specified in octal (e.g. 0664) or hexadecimal (e.g. 0x3f8). + Negative numbers are not supported. + + Some examples: + + 5 -> ["1","2","3","4","5"] + 5-8 -> ["5", "6", "7", "8"] + 2-10/2 -> ["2", "4", "6", "8", "10"] + 4:host%02d -> ["host01","host02","host03","host04"] + + The standard Ansible key-value form is accepted as well. For example: + + start=5 end=11 stride=2 format=0x%02x -> ["0x05","0x07","0x09","0x0b"] + + This format takes an alternate form of "end" called "count", which counts + some number from the starting value. For example: + + count=5 -> ["1", "2", "3", "4", "5"] + start=0x0f00 count=4 format=%04x -> ["0f00", "0f01", "0f02", "0f03"] + start=0 count=5 stride=2 -> ["0", "2", "4", "6", "8"] + start=1 count=5 stride=2 -> ["1", "3", "5", "7", "9"] + + The count option is mostly useful for avoiding off-by-one errors and errors + calculating the number of entries in a sequence when a stride is specified.
+ """ + + def __init__(self, basedir, **kwargs): + """absorb any keyword args""" + self.basedir = basedir + + def reset(self): + """set sensible defaults""" + self.start = 1 + self.count = None + self.end = None + self.stride = 1 + self.format = "%d" + + def parse_kv_args(self, args): + """parse key-value style arguments""" + for arg in ["start", "end", "count", "stride"]: + try: + arg_raw = args.pop(arg, None) + if arg_raw is None: + continue + arg_cooked = int(arg_raw, 0) + setattr(self, arg, arg_cooked) + except ValueError: + raise AnsibleError( + "can't parse arg %s=%r as integer" + % (arg, arg_raw) + ) + if 'format' in args: + self.format = args.pop("format") + if args: + raise AnsibleError( + "unrecognized arguments to with_sequence: %r" + % args.keys() + ) + + def parse_simple_args(self, term): + """parse the shortcut forms, return True/False""" + match = SHORTCUT.match(term) + if not match: + return False + + _, start, end, _, stride, _, format = match.groups() + + if start is not None: + try: + start = int(start, 0) + except ValueError: + raise AnsibleError("can't parse start=%s as integer" % start) + if end is not None: + try: + end = int(end, 0) + except ValueError: + raise AnsibleError("can't parse end=%s as integer" % end) + if stride is not None: + try: + stride = int(stride, 0) + except ValueError: + raise AnsibleError("can't parse stride=%s as integer" % stride) + + if start is not None: + self.start = start + if end is not None: + self.end = end + if stride is not None: + self.stride = stride + if format is not None: + self.format = format + + def sanity_check(self): + if self.count is None and self.end is None: + raise AnsibleError( + "must specify count or end in with_sequence" + ) + elif self.count is not None and self.end is not None: + raise AnsibleError( + "can't specify both count and end in with_sequence" + ) + elif self.count is not None: + # convert count to end + if self.count != 0: + self.end = self.start + self.count * self.stride - 1 
+ else: + self.start = 0 + self.end = 0 + self.stride = 0 + del self.count + if self.stride > 0 and self.end < self.start: + raise AnsibleError("to count backwards make stride negative") + if self.stride < 0 and self.end > self.start: + raise AnsibleError("to count forward don't make stride negative") + if self.format.count('%') != 1: + raise AnsibleError("bad formatting string: %s" % self.format) + + def generate_sequence(self): + if self.stride > 0: + adjust = 1 + else: + adjust = -1 + numbers = xrange(self.start, self.end + adjust, self.stride) + + for i in numbers: + try: + formatted = self.format % i + yield formatted + except (ValueError, TypeError): + raise AnsibleError( + "problem formatting %r with %r" % (i, self.format) + ) + + def run(self, terms, inject=None, **kwargs): + results = [] + + terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject) + + if isinstance(terms, basestring): + terms = [ terms ] + + for term in terms: + try: + self.reset() # clear out things for this iteration + + try: + if not self.parse_simple_args(term): + self.parse_kv_args(utils.parse_kv(term)) + except Exception: + raise AnsibleError( + "unknown error parsing with_sequence arguments: %r" + % term + ) + + self.sanity_check() + if self.stride != 0: + results.extend(self.generate_sequence()) + except AnsibleError: + raise + except Exception, e: + raise AnsibleError( + "unknown error generating sequence: %s" % str(e) + ) + + return results diff --git a/master.yml b/master.yml index 0b57bd877f..c8ca7f20ad 100644 --- a/master.yml +++ b/master.yml @@ -23,8 +23,12 @@ - include: /srv/web/infra/ansible/playbooks/groups/badges-web.yml - include: /srv/web/infra/ansible/playbooks/groups/bastion.yml - include: /srv/web/infra/ansible/playbooks/groups/beaker.yml +#- include: /srv/web/infra/ansible/playbooks/groups/beaker-stg.yml +- include: /srv/web/infra/ansible/playbooks/groups/beaker-virthosts.yml - include: /srv/web/infra/ansible/playbooks/groups/blockerbugs.yml - include:
/srv/web/infra/ansible/playbooks/groups/bodhi.yml +- include: /srv/web/infra/ansible/playbooks/groups/bodhi2.yml +- include: /srv/web/infra/ansible/playbooks/groups/bodhi-backend.yml - include: /srv/web/infra/ansible/playbooks/groups/bugzilla2fedmsg.yml - include: /srv/web/infra/ansible/playbooks/groups/buildhw.yml - include: /srv/web/infra/ansible/playbooks/groups/buildvm.yml @@ -37,22 +41,27 @@ - include: /srv/web/infra/ansible/playbooks/groups/dhcp.yml - include: /srv/web/infra/ansible/playbooks/groups/dns.yml - include: /srv/web/infra/ansible/playbooks/groups/docs-backend.yml +- include: /srv/web/infra/ansible/playbooks/groups/docs-dev.yml - include: /srv/web/infra/ansible/playbooks/groups/download.yml - include: /srv/web/infra/ansible/playbooks/groups/elections.yml - include: /srv/web/infra/ansible/playbooks/groups/fas.yml - include: /srv/web/infra/ansible/playbooks/groups/fedimg.yml -- include: /srv/web/infra/ansible/playbooks/groups/fedoauth.yml - include: /srv/web/infra/ansible/playbooks/groups/fedocal.yml - include: /srv/web/infra/ansible/playbooks/groups/gallery.yml - include: /srv/web/infra/ansible/playbooks/groups/github2fedmsg.yml - include: /srv/web/infra/ansible/playbooks/groups/hotness.yml +- include: /srv/web/infra/ansible/playbooks/groups/ipsilon.yml - include: /srv/web/infra/ansible/playbooks/groups/jenkins-cloud.yml - include: /srv/web/infra/ansible/playbooks/groups/kerneltest.yml - include: /srv/web/infra/ansible/playbooks/groups/keyserver.yml - include: /srv/web/infra/ansible/playbooks/groups/koji-hub.yml - include: /srv/web/infra/ansible/playbooks/groups/kojipkgs.yml +- include: /srv/web/infra/ansible/playbooks/groups/koschei.yml - include: /srv/web/infra/ansible/playbooks/groups/lockbox.yml -- include: /srv/web/infra/ansible/playbooks/groups/mailman.yml +- include: /srv/web/infra/ansible/playbooks/groups/logserver.yml +# Waiting for rhel7 python3 and reinstall +#- include: /srv/web/infra/ansible/playbooks/groups/mailman.yml +- include: 
/srv/web/infra/ansible/playbooks/groups/mariadb-server.yml - include: /srv/web/infra/ansible/playbooks/groups/mirrorlist2.yml - include: /srv/web/infra/ansible/playbooks/groups/mirrormanager.yml - include: /srv/web/infra/ansible/playbooks/groups/memcached.yml @@ -60,20 +69,27 @@ - include: /srv/web/infra/ansible/playbooks/groups/notifs-backend.yml - include: /srv/web/infra/ansible/playbooks/groups/notifs-web.yml - include: /srv/web/infra/ansible/playbooks/groups/nuancier.yml +#- include: /srv/web/infra/ansible/playbooks/groups/openstack-compute-nodes.yml +- include: /srv/web/infra/ansible/playbooks/groups/osbs.yml - include: /srv/web/infra/ansible/playbooks/groups/packages.yml +- include: /srv/web/infra/ansible/playbooks/groups/pagure.yml - include: /srv/web/infra/ansible/playbooks/groups/paste.yml +- include: /srv/web/infra/ansible/playbooks/groups/people.yml - include: /srv/web/infra/ansible/playbooks/groups/pkgdb.yml - include: /srv/web/infra/ansible/playbooks/groups/pkgs.yml - include: /srv/web/infra/ansible/playbooks/groups/postgresql-server.yml - include: /srv/web/infra/ansible/playbooks/groups/proxies.yml - include: /srv/web/infra/ansible/playbooks/groups/qadevel.yml -#- include: /srv/web/infra/ansible/playbooks/groups/qadevel-stg.yml +- include: /srv/web/infra/ansible/playbooks/groups/qa-stg.yml - include: /srv/web/infra/ansible/playbooks/groups/resultsdb-prod.yml - include: /srv/web/infra/ansible/playbooks/groups/resultsdb-dev.yml - include: /srv/web/infra/ansible/playbooks/groups/resultsdb-stg.yml - include: /srv/web/infra/ansible/playbooks/groups/retrace.yml - include: /srv/web/infra/ansible/playbooks/groups/releng-compose.yml +- include: /srv/web/infra/ansible/playbooks/groups/secondary.yml - include: /srv/web/infra/ansible/playbooks/groups/smtp-mm.yml +- include: /srv/web/infra/ansible/playbooks/groups/sign-bridge.yml +- include: /srv/web/infra/ansible/playbooks/groups/statscache.yml - include: /srv/web/infra/ansible/playbooks/groups/summershum.yml - 
include: /srv/web/infra/ansible/playbooks/groups/sundries.yml - include: /srv/web/infra/ansible/playbooks/groups/tagger.yml @@ -83,6 +99,8 @@ - include: /srv/web/infra/ansible/playbooks/groups/taskotron-dev-clients.yml - include: /srv/web/infra/ansible/playbooks/groups/taskotron-stg.yml - include: /srv/web/infra/ansible/playbooks/groups/taskotron-stg-clients.yml +- include: /srv/web/infra/ansible/playbooks/groups/torrent.yml +- include: /srv/web/infra/ansible/playbooks/groups/twisted-buildbots.yml - include: /srv/web/infra/ansible/playbooks/groups/unbound.yml - include: /srv/web/infra/ansible/playbooks/groups/value.yml - include: /srv/web/infra/ansible/playbooks/groups/virthost.yml @@ -92,14 +110,14 @@ # host playbooks # -- include: /srv/web/infra/ansible/playbooks/hosts/artboard.cloud.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/blockerbugs-dev.cloud.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/bodhi.dev.fedoraproject.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/artboard.fedorainfracloud.org.yml - include: /srv/web/infra/ansible/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/elections-dev.cloud.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/fedocal.dev.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/koschei.cloud.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/lists-dev.cloud.fedoraproject.org.yml -- include: /srv/web/infra/ansible/playbooks/hosts/logserver.yml -- include: /srv/web/infra/ansible/playbooks/hosts/logstash-dev.cloud.fedoraproject.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/darkserver-dev.fedorainfracloud.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/dopr-dev.cloud.fedoraproject.org.yml +#- include: /srv/web/infra/ansible/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml +- include: 
/srv/web/infra/ansible/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/lists-dev.fedorainfracloud.org.yml - include: /srv/web/infra/ansible/playbooks/hosts/shogun-ca.cloud.fedoraproject.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/taiga.cloud.fedoraproject.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml +- include: /srv/web/infra/ansible/playbooks/hosts/shumgrepper-dev.fedorainfracloud.org.yml diff --git a/playbooks/check-for-updates.yml b/playbooks/check-for-updates.yml new file mode 100644 index 0000000000..8c80765fa7 --- /dev/null +++ b/playbooks/check-for-updates.yml @@ -0,0 +1,32 @@ +# +# simple playbook to check all hosts and see how many updates they have pending. +# It could be a lot faster if we didn't gather facts, but we need that for yum vs dnf checking +# +# If you want a pretty sorted list, you need to post process the output here with something +# like: +# +# time ansible-playbook check-for-updates.yml | grep msg\": | awk -F: '{print $2}' | sort +# + +- name: check for updates + hosts: all + gather_facts: true + user: root + + tasks: + + - name: check for updates (yum) + yum: list=updates update_cache=true + register: yumoutput + when: ansible_distribution_major_version|int < 22 + + - name: check for updates (dnf) + dnf: list=updates + register: dnfoutput + when: ansible_distribution_major_version|int > 21 + + - debug: msg="{{ inventory_hostname}} {{ yumoutput.results|length }}" + when: yumoutput is defined and yumoutput.results|length > 0 + + - debug: msg="{{ inventory_hostname}} {{ dnfoutput.results|length }}" + when: dnfoutput is defined and dnfoutput.results|length > 0 diff --git a/playbooks/el6_temp_instance.yml b/playbooks/el6_temp_instance.yml deleted file mode 100644 index 36a29d9cc3..0000000000 --- a/playbooks/el6_temp_instance.yml +++ /dev/null @@ -1,41 +0,0 @@ -# setup a transient el6 instance -# optionally can 
take --extra-vars="hostbase=hostnamebase root_auth_users='user1 user2 user3'" -# -# You might need to run it with `-c paramiko` for it to finish cleanly - -- name: check/create instance - hosts: lockbox01.phx2.fedoraproject.org - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - vars: - - keypair: fedora-admin-20130801 - - image: "{{ el6_qcow_id }}" - - instance_type: m1.small - - security_group: default - - region: nova - - tasks: - - include: "{{ tasks }}/transient_cloud.yml" - -- name: provision instance - hosts: tmp_just_created - user: root - gather_facts: True - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ tasks }}/growroot_cloud.yml" - - include: "{{ tasks }}/cloud_setup_basic.yml" - - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/el7_temp_instance.yml b/playbooks/el7_temp_instance.yml deleted file mode 100644 index 2b8076e014..0000000000 --- a/playbooks/el7_temp_instance.yml +++ /dev/null @@ -1,56 +0,0 @@ -# setup a transient fedora instance -# optionally can take --extra-vars="hostbase=hostnamebase root_auth_users='user1 user2 user3'" - -- name: check/create instance - hosts: lockbox01.phx2.fedoraproject.org - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - vars: - - keypair: fedora-admin-20130801 - - image: "{{ el7_qcow_id }}" - - instance_type: m1.small - - security_group: default - - region: nova - - tasks: - - include: "{{ tasks }}/transient_cloud.yml" - -- name: provision instance - hosts: tmp_just_created - gather_facts: True - user: fedora - sudo: True - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - 
tasks: - - name: install cloud-utils - yum: pkg=cloud-utils state=present - - - name: growpart /dev/vda1 partition (/) to full size - action: command growpart /dev/vda 1 - register: growpart - always_run: true - changed_when: "growpart.rc != 1" - failed_when: growpart.rc == 2 - - - name: resize the /dev/vda 1 fs - action: command xfs_growfs /dev/vda1 - when: growpart.rc == 0 - -# - name: put the mbr back - b/c the resize breaks booting otherwise -# action: shell cat /usr/share/syslinux/mbr.bin > /dev/vda -# when: growpart.rc == 0 - - - - include: "{{ tasks }}/cloud_setup_basic.yml" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/f19_temp_instance.yml b/playbooks/f19_temp_instance.yml deleted file mode 100644 index d696172387..0000000000 --- a/playbooks/f19_temp_instance.yml +++ /dev/null @@ -1,53 +0,0 @@ -# setup a transient fedora instance -# optionally can take --extra-vars="hostbase=hostnamebase root_auth_users='user1 user2 user3'" - -- name: check/create instance - hosts: lockbox01.phx2.fedoraproject.org - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - vars: - - keypair: fedora-admin-20130801 - - image: "{{ f19_qcow_id }}" - - instance_type: m1.small - - security_group: default - - region: nova - - tasks: - - include: "{{ tasks }}/transient_cloud.yml" - -- name: provision instance - hosts: tmp_just_created - user: fedora - gather_facts: True - sudo: yes - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - name: growpart /dev/vda1 partition (/) to full size - action: command growpart /dev/vda 1 - register: growpart - always_run: true - changed_when: "growpart.rc != 1" - failed_when: growpart.rc == 2 - - - name: resize the /dev/vda 1 fs - action: command resize2fs /dev/vda1 - when: growpart.rc == 0 - - - name: put the mbr 
back - b/c the resize breaks booting otherwise - action: shell cat /usr/share/syslinux/mbr.bin > /dev/vda - when: growpart.rc == 0 - - - - include: "{{ tasks }}/cloud_setup_basic.yml" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/f20_temp_instance.yml b/playbooks/f20_temp_instance.yml deleted file mode 100644 index 582d7f0b40..0000000000 --- a/playbooks/f20_temp_instance.yml +++ /dev/null @@ -1,53 +0,0 @@ -# setup a transient fedora instance -# optionally can take --extra-vars="hostbase=hostnamebase root_auth_users='user1 user2 user3'" - -- name: check/create instance - hosts: lockbox01.phx2.fedoraproject.org - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - vars: - - keypair: fedora-admin-20130801 - - image: "{{ f20_qcow_id }}" - - instance_type: m1.small - - security_group: default - - region: nova - - tasks: - - include: "{{ tasks }}/transient_cloud.yml" - -- name: provision instance - hosts: tmp_just_created - user: fedora - gather_facts: True - sudo: yes - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - name: growpart /dev/vda1 partition (/) to full size - action: command growpart /dev/vda 1 - register: growpart - always_run: true - changed_when: "growpart.rc != 1" - failed_when: growpart.rc == 2 - - - name: resize the /dev/vda 1 fs - action: command resize2fs /dev/vda1 - when: growpart.rc == 0 - - - name: put the mbr back - b/c the resize breaks booting otherwise - action: shell cat /usr/share/syslinux/mbr.bin > /dev/vda - when: growpart.rc == 0 - - - - include: "{{ tasks }}/cloud_setup_basic.yml" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/fedora_temp_instance.yml b/playbooks/fedora_temp_instance.yml deleted file mode 100644 index d4d927c6e6..0000000000 --- 
a/playbooks/fedora_temp_instance.yml +++ /dev/null @@ -1,37 +0,0 @@ -# setup a transient fedora instance -# optionally can take --extra-vars="hostbase=hostnamebase root_auth_users='user1 user2 user3'" - -- name: check/create instance - hosts: lockbox01.phx2.fedoraproject.org - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - vars: - - keypair: fedora-admin-20130801 - - image: "{{ f18_qcow_id }}" - - instance_type: m1.small - - security_group: default - - region: nova - - tasks: - - include: "{{ tasks }}/transient_cloud.yml" - -- name: provision instance - hosts: tmp_just_created - user: root - gather_facts: True - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ tasks }}/growroot_cloud.yml" - - include: "{{ tasks }}/cloud_setup_basic.yml" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/anitya.yml b/playbooks/groups/anitya.yml index 5c414aac6e..c6612136e2 100644 --- a/playbooks/groups/anitya.yml +++ b/playbooks/groups/anitya.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -58,10 +58,8 @@ - "/srv/private/ansible/vars.yml" - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - pre_tasks: - - include: "{{ tasks }}/apache.yml" - roles: + - apache - anitya/fedmsg - anitya/frontend - role: collectd/fedmsg-service diff --git a/playbooks/groups/arm-packager.yml b/playbooks/groups/arm-packager.yml index fafe82fd18..6c38559927 100644 --- a/playbooks/groups/arm-packager.yml +++ b/playbooks/groups/arm-packager.yml @@ -11,6 +11,9 @@ - "/srv/private/ansible/vars.yml" - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + 
pre_tasks: + - include: "{{ tasks }}/yumrepos.yml" + roles: - base - rkhunter @@ -21,8 +24,31 @@ tasks: # this is how you include other task lists - - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/motd.yml" + - name: install packager tools (yum) + action: yum state=present pkg={{ item }} + with_items: + - fedora-packager + when: ansible_distribution_major_version|int < 22 + tags: + - packages + + - name: install packager tools (dnf) + action: dnf state=present pkg={{ item }} + with_items: + - fedora-packager + when: ansible_distribution_major_version|int > 21 + tags: + - packages + + - name: allow packagers to use mock + lineinfile: dest=/etc/pam.d/mock line="{{ item }} sufficient pam_succeed_if.so user ingroup packager use_uid quiet" + with_items: + - account + - auth + tags: + - config + handlers: - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/arm-qa.yml b/playbooks/groups/arm-qa.yml index 7a34c2ca84..6cfb5d0c43 100644 --- a/playbooks/groups/arm-qa.yml +++ b/playbooks/groups/arm-qa.yml @@ -11,6 +11,9 @@ - "/srv/private/ansible/vars.yml" - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + pre_tasks: + - include: "{{ tasks }}/yumrepos.yml" + roles: - base - rkhunter @@ -21,7 +24,6 @@ tasks: # this is how you include other task lists - - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/motd.yml" handlers: diff --git a/playbooks/groups/ask.yml b/playbooks/groups/ask.yml index d186e91ad8..6501d79085 100644 --- a/playbooks/groups/ask.yml +++ b/playbooks/groups/ask.yml @@ -32,6 +32,7 @@ - hosts - fas_client - collectd/base + - apache - ask - fedmsg/base - rsyncd @@ -43,7 +44,6 @@ - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/autosign.yml b/playbooks/groups/autosign.yml index a27a374f21..6ed73e9263 100644 --- 
a/playbooks/groups/autosign.yml +++ b/playbooks/groups/autosign.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client diff --git a/playbooks/groups/backup-server.yml b/playbooks/groups/backup-server.yml index 24a7d23271..9a3cca75b3 100644 --- a/playbooks/groups/backup-server.yml +++ b/playbooks/groups/backup-server.yml @@ -4,7 +4,7 @@ # NOTE: most of these vars_path come from group_vars/backup_server or from hostvars - name: make backup server system - hosts: backup03.phx2.fedoraproject.org + hosts: backup01.phx2.fedoraproject.org user: root gather_facts: True @@ -22,6 +22,11 @@ - fas_client - sudo - collectd/base + - { role: nfs/client, + mnt_dir: '/fedora_backups', + nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3", + nfs_src_dir: 'fedora_backups' } + - openvpn/client tasks: - include: "{{ tasks }}/yumrepos.yml" @@ -33,7 +38,7 @@ user: name=gnomebackup state=present home=/fedora_backups/gnome/ createhome=yes shell=/sbin/nologin - name: Add a Directory for the Excludes list for each of the backed up GNOME machines - file: dest=/fedora/backups/gnome/excludes owner=gnomebackup group=gnomebackup state=directory + file: dest=/fedora_backups/gnome/excludes owner=gnomebackup group=gnomebackup state=directory - name: Install the GNOME SSH configuration file copy: src="{{ files }}/gnome/ssh_config" dest=/usr/local/etc/gnome_ssh_config mode=0600 owner=gnomebackup diff --git a/playbooks/groups/badges-backend.yml b/playbooks/groups/badges-backend.yml index 601b7ebf59..7605bb90d0 100644 --- a/playbooks/groups/badges-backend.yml +++ b/playbooks/groups/badges-backend.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - 
fas_client diff --git a/playbooks/groups/badges-web.yml b/playbooks/groups/badges-web.yml index 641b49bfe1..7a7ab3dfef 100644 --- a/playbooks/groups/badges-web.yml +++ b/playbooks/groups/badges-web.yml @@ -32,11 +32,12 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client - collectd/base + - apache - badges/frontend - fedmsg/base - rsyncd @@ -56,7 +57,6 @@ - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/bastion.yml b/playbooks/groups/bastion.yml index d2aa164f09..800829dbc8 100644 --- a/playbooks/groups/bastion.yml +++ b/playbooks/groups/bastion.yml @@ -1,5 +1,5 @@ - name: make the servers - hosts: bastion:!bastion-comm01.qa.fedoraproject.org + hosts: bastion user: root gather_facts: False @@ -15,7 +15,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: make the boxen be real for real - hosts: bastion:!bastion-comm01.qa.fedoraproject.org + hosts: bastion user: root gather_facts: True @@ -27,14 +27,13 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } - nagios_client - hosts - fas_client - sudo - collectd/base - - openvpn/server - - packager_alias + - { role: openvpn/server, when: not inventory_hostname.startswith('bastion-comm01') } + - { role: packager_alias, when: not inventory_hostname.startswith('bastion-comm01') } tasks: - include: "{{ tasks }}/yumrepos.yml" @@ -43,3 +42,14 @@ handlers: - include: "{{ handlers }}/restart_services.yml" + +- name: configure bastion-qa + hosts: bastion-comm01.qa.fedoraproject.org + user: root + gather_facts: True + + tasks: + - name: install needed packages + yum: pkg={{ item }} state=present + with_items: + - ipmitool diff --git 
a/playbooks/groups/beaker-stg.yml b/playbooks/groups/beaker-stg.yml new file mode 100644 index 0000000000..287ff34f79 --- /dev/null +++ b/playbooks/groups/beaker-stg.yml @@ -0,0 +1,69 @@ +# create a new beaker server +# NOTE: make sure there is room/space for this server on the vmhost +# NOTE: most of these vars_path come from group_vars/mirrorlist or from hostvars + +- name: make beaker server + hosts: beaker-stg + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - include: "{{ tasks }}/virt_instance_create.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + +- name: make the box be real + hosts: beaker-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + - denyhosts + - nagios_client + - hosts + - fas_client + - collectd/base + - sudo + - apache + + tasks: + # this is how you include other task lists + - include: "{{ tasks }}/yumrepos.yml" + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + +- name: configure beaker and required services + hosts: beaker-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - { role: mariadb_server, tags: ['mariadb'] } + - { role: beaker/base, tags: ['beakerbase'] } + - { role: beaker/labcontroller, tags: ['beakerlabcontroller'] } + - { role: beaker/server, tags: ['beakerserver'] } + + handlers: + - include: "{{ handlers }}/restart_services.yml" + diff --git a/playbooks/groups/beaker-virthosts.yml b/playbooks/groups/beaker-virthosts.yml new 
file mode 100644 index 0000000000..295e114df1 --- /dev/null +++ b/playbooks/groups/beaker-virthosts.yml @@ -0,0 +1,36 @@ +# create a new beaker virthost server system +# NOTE: should be used with --limit most of the time +# NOTE: most of these vars_path come from group_vars/backup_server or from hostvars +# This has an extra role that configures the virthost to be used with beaker for +# virtual machine clients + +- name: make virthost server system + hosts: beaker-virthosts + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } + - nagios_client + - hosts + - fas_client + - collectd/base + - { role: iscsi_client, when: datacenter == "phx2" } + - sudo + - { role: openvpn/client, when: datacenter != "phx2" } + - { role: beaker/virthost, tags: ['beakervirthost'] } + + tasks: + - include: "{{ tasks }}/yumrepos.yml" + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/blockerbugs.yml b/playbooks/groups/blockerbugs.yml index 99ad07cafc..70d66fa818 100644 --- a/playbooks/groups/blockerbugs.yml +++ b/playbooks/groups/blockerbugs.yml @@ -36,13 +36,13 @@ - rsyncd - { role: openvpn/client, when: env != "staging" } + - apache - blockerbugs tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/bodhi-backend.yml b/playbooks/groups/bodhi-backend.yml new file mode 100644 index 0000000000..2968ef81c6 --- /dev/null +++ b/playbooks/groups/bodhi-backend.yml @@ -0,0 +1,64 @@ +# create a new bodhi-backend system +# +# This group makes bodhi-backend 
servers. +# They are used by releng to push updates with bodhi. +# They also run some misc releng scripts. +# + +- name: make bodhi-backend systems + hosts: bodhi-backend:bodhi-backend-stg + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - include: "{{ tasks }}/virt_instance_create.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + +# Once the instance exists, configure it. + +- name: make bodhi-backend server system + hosts: bodhi-backend:bodhi-backend-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - nagios_client + - collectd/base + - hosts + - builder_repo + - fas_client + - sudo +# - role: nfs/client +# mnt_dir: '/pub/' +# nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/' + - role: nfs/client + mnt_dir: '/mnt/fedora_koji' + nfs_src_dir: 'fedora_koji' + when: datacenter != 'staging' + - role: nfs/client + mnt_dir: '/mnt/fedora_koji' + nfs_src_dir: 'fedora_koji' + when: datacenter == 'staging' + - bodhi2/backend + - fedmsg/base + + tasks: + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/yumrepos.yml" + - include: "{{ tasks }}/motd.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/bodhi.yml b/playbooks/groups/bodhi.yml index 6856bb2858..bec5ceb4d0 100644 --- a/playbooks/groups/bodhi.yml +++ b/playbooks/groups/bodhi.yml @@ -27,25 +27,23 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client - collectd/base - { role: bodhi/base, when: "inventory_hostname.startswith('bodhi01') or 
inventory_hostname.startswith('bodhi02.phx2')" } - - { role: bodhi/masher, jobrunner: true, when: "inventory_hostname.startswith('releng04')" } - - { role: bodhi/masher, epelmasher: true, when: "inventory_hostname.startswith('relepel01')" } - { role: fedmsg/base, when: "inventory_hostname.startswith('bodhi01') or inventory_hostname.startswith('bodhi02.phx2')" } - rsyncd - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/bodhi2.yml b/playbooks/groups/bodhi2.yml new file mode 100644 index 0000000000..991b8a17d7 --- /dev/null +++ b/playbooks/groups/bodhi2.yml @@ -0,0 +1,51 @@ +- name: make bodhi2 + hosts: bodhi2:bodhi2-stg + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - include: "{{ tasks }}/virt_instance_create.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + +- name: make the box be real + hosts: bodhi2:bodhi2-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } + - nagios_client + - hosts + - fas_client + - sudo + - collectd/base + - rsyncd + - { role: openvpn/client, + when: env != "staging" } + - apache + - { role: bodhi2/base, when: "inventory_hostname.startswith('bodhi0')" } + - { role: fedmsg/base, when: "inventory_hostname.startswith('bodhi0')" } + - { role: nfs/client, when: datacenter == 'staging', mnt_dir: '/mnt/fedora_koji', nfs_src_dir: 'fedora_koji' } + + tasks: + - include: "{{ 
tasks }}/yumrepos.yml" + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + - include: "{{ tasks }}/mod_wsgi.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/buildhw.yml b/playbooks/groups/buildhw.yml index 12f732e6a9..2b3f703076 100644 --- a/playbooks/groups/buildhw.yml +++ b/playbooks/groups/buildhw.yml @@ -3,7 +3,7 @@ # NOTE: most of these vars_path come from group_vars/buildhw or from hostvars - name: make koji builder(s) on raw hw - hosts: buildhw:buildppc:buildarm:buildaarch64:bkernel + hosts: buildhw:buildppc:buildarm:buildaarch64:buildppc64:bkernel remote_user: root gather_facts: True @@ -15,7 +15,10 @@ roles: - base - { role: nfs/client, when: inventory_hostname.startswith('build') , mnt_dir: '/mnt/fedora_koji', nfs_src_dir: 'fedora_koji' } + - { role: nfs/client, when: inventory_hostname.startswith('arm04-builder00') , mnt_dir: '/mnt/fedora_koji', nfs_src_dir: 'fedora_koji' } + - { role: nfs/client, when: inventory_hostname.startswith('arm04-builder01') , mnt_dir: '/mnt/fedora_koji', nfs_src_dir: 'fedora_koji' } - { role: nfs/client, when: inventory_hostname.startswith('aarch64') , mnt_dir: '/mnt/fedora_koji', nfs_src_dir: 'fedora_arm/data' } + - { role: nfs/client, when: inventory_hostname.startswith('ppc8') , mnt_dir: '/mnt/fedora_koji', nfs_src_dir: 'fedora_ppc/data' } - koji_builder - { role: bkernel, when: inventory_hostname.startswith('bkernel') } - hosts diff --git a/playbooks/groups/busgateway.yml b/playbooks/groups/busgateway.yml index 85f339d8db..eab0844bfc 100644 --- a/playbooks/groups/busgateway.yml +++ b/playbooks/groups/busgateway.yml @@ -27,7 +27,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -62,6 +62,7 @@ process: fedmsg-relay - role: collectd/fedmsg-service process: fedmsg-gateway + - role: 
collectd/fedmsg-activation vars_files: - /srv/web/infra/ansible/vars/global.yml diff --git a/playbooks/groups/copr-backend-newcloud.yml b/playbooks/groups/copr-backend-newcloud.yml deleted file mode 100644 index 35b190f3da..0000000000 --- a/playbooks/groups/copr-backend-newcloud.yml +++ /dev/null @@ -1,62 +0,0 @@ -- name: check/create instance - hosts: copr-back-stg - #hosts: copr-back:copr-back-stg - #hosts: copr-back-stg - user: fedora - sudo: True - #user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - /srv/private/ansible/vars.yml - - /srv/web/infra/ansible/vars/fedora-cloud.yml - - /srv/private/ansible/files/openstack/passwords.yml - tasks: - - include: "{{ tasks }}/persistent_cloud_new.yml" - - name: clean out old known_hosts for copr-be-dev - local_action: known_hosts path={{item}} host=copr-be-dev.cloud.fedoraproject.org state=absent - ignore_errors: True - with_items: - - /root/.ssh/known_hosts - - /etc/ssh/ssh_known_hosts - - include: "{{ tasks }}/growroot_cloud.yml" - -- name: cloud basic setup - hosts: copr-back-stg - #hosts: copr-be-dev2.cloud.fedoraproject.org - #hosts: copr-back:copr-back-stg - #hosts: copr-back-stg - user: fedora - sudo: True - gather_facts: True - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - /srv/private/ansible/vars.yml - - tasks: - - include: "{{ tasks }}/cloud_setup_basic.yml" - - - name: set hostname (required by some services, at least postfix need it) - shell: "hostname {{copr_hostbase}}.cloud.fedoraproject.org" - -- name: provision instance - hosts: copr-back-stg - #hosts: copr-back:copr-back-stg - #hosts: copr-back-stg - #user: root - gather_facts: True - user: fedora - sudo: True - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - /srv/private/ansible/vars.yml - - /srv/private/ansible/files/openstack/passwords.yml - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - # Roles are run first, before tasks, regardless of where you place them here. 
- roles: - - base - - copr/backend - - fedmsg/base diff --git a/playbooks/groups/copr-backend.yml b/playbooks/groups/copr-backend.yml index c317408f35..323dedead2 100644 --- a/playbooks/groups/copr-backend.yml +++ b/playbooks/groups/copr-backend.yml @@ -1,36 +1,41 @@ - name: check/create instance + #hosts: copr-back hosts: copr-back:copr-back-stg - #hosts: copr-back-stg user: root gather_facts: False vars_files: - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml tasks: - - include: "{{ tasks }}/persistent_cloud.yml" + - include: "{{ tasks }}/persistent_cloud_new.yml" - include: "{{ tasks }}/growroot_cloud.yml" - name: cloud basic setup hosts: copr-back:copr-back-stg - #hosts: copr-back-stg + user: root + gather_facts: True vars_files: - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" + - /srv/private/ansible/vars.yml tasks: - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{copr_hostbase}}.cloud.fedoraproject.org" + - name: provision instance hosts: copr-back:copr-back-stg - #hosts: copr-back-stg user: root - gather_facts: False + gather_facts: True vars_files: - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml # Roles are run first, before tasks, regardless of where you place them here. 
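# Aside on the recurring "|int" change above: Ansible facts such as
# ansible_distribution_major_version are strings, and Jinja2 comparisons follow
# Python semantics, so "21" > "7" compares character by character and gives the
# wrong answer for multi-digit versions. Casting with |int restores numeric
# comparison. A minimal plain-Python sketch of the same semantics (illustrative
# only, not playbook code):

```python
# Version facts arrive as strings; lexicographic comparison misorders
# multi-digit versions, which is what the |int filter in the hunks fixes.
major = "21"                 # e.g. ansible_distribution_major_version on F21

as_string = major > "7"      # False: "2" sorts before "7" character-wise
as_int = int(major) > 7      # True: the numeric comparison the plays intend

print(as_string, as_int)
```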
diff --git a/playbooks/groups/copr-frontend-newcloud.yml b/playbooks/groups/copr-dist-git.yml similarity index 63% rename from playbooks/groups/copr-frontend-newcloud.yml rename to playbooks/groups/copr-dist-git.yml index fa0191615b..9fdd9ad892 100644 --- a/playbooks/groups/copr-frontend-newcloud.yml +++ b/playbooks/groups/copr-dist-git.yml @@ -1,8 +1,8 @@ - name: check/create instance - #hosts: copr-front-stg:copr-front - hosts: copr-front-stg - user: fedora - sudo: True + hosts: copr-dist-git-stg:copr-dist-git + user: root + #user: centos + #sudo: True gather_facts: False vars_files: @@ -13,20 +13,14 @@ tasks: - include: "{{ tasks }}/persistent_cloud_new.yml" - - name: clean out old known_hosts for copr-fe-dev - local_action: known_hosts path={{item}} host=copr-fe-dev.cloud.fedoraproject.org state=absent - ignore_errors: True - with_items: - - /root/.ssh/known_hosts - - /etc/ssh/ssh_known_hosts - include: "{{ tasks }}/growroot_cloud.yml" - + # TODO: remove when copr-dist-git will be deployed to the persistent tenant - name: cloud basic setup - #hosts: copr-front-stg:copr-front - hosts: copr-front-stg - user: fedora - sudo: True + hosts: copr-dist-git-stg:copr-dist-git + user: root + #user: centos + #sudo: True gather_facts: True vars_files: - /srv/web/infra/ansible/vars/global.yml @@ -38,10 +32,10 @@ shell: "hostname {{copr_hostbase}}.cloud.fedoraproject.org" - name: provision instance - #hosts: copr-front:copr-front-stg - hosts: copr-front-stg - user: fedora - sudo: True + hosts: copr-dist-git-stg:copr-dist-git + user: root + # user: centos + # sudo: True gather_facts: True vars_files: @@ -51,4 +45,7 @@ roles: - base - - copr/frontend + - copr/dist_git + + handlers: + - include: "../../handlers/restart_services.yml" diff --git a/playbooks/groups/copr-frontend.yml b/playbooks/groups/copr-frontend.yml index 89eaacf13b..43839e018b 100644 --- a/playbooks/groups/copr-frontend.yml +++ b/playbooks/groups/copr-frontend.yml @@ -1,21 +1,22 @@ - name: check/create instance 
hosts: copr-front-stg:copr-front - #hosts: copr-front - user: root + # hosts: copr-front gather_facts: False vars_files: - /srv/web/infra/ansible/vars/global.yml - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml tasks: - - include: "{{ tasks }}/persistent_cloud.yml" + - include: "{{ tasks }}/persistent_cloud_new.yml" - include: "{{ tasks }}/growroot_cloud.yml" - - name: cloud basic setup hosts: copr-front-stg:copr-front - #hosts: copr-front + # hosts: copr-front + gather_facts: True vars_files: - /srv/web/infra/ansible/vars/global.yml - "/srv/private/ansible/vars.yml" @@ -27,8 +28,7 @@ - name: provision instance hosts: copr-front:copr-front-stg - #hosts: copr-front - user: root + # hosts: copr-front gather_facts: True vars_files: diff --git a/playbooks/groups/copr-keygen.yml b/playbooks/groups/copr-keygen.yml index 4405171e7a..869759bbba 100644 --- a/playbooks/groups/copr-keygen.yml +++ b/playbooks/groups/copr-keygen.yml @@ -1,20 +1,22 @@ - name: check/create instance - #hosts: copr-keygen:copr-keygen-stg - hosts: copr-keygen-stg - user: root + hosts: copr-keygen-stg:copr-keygen + #hosts: copr-keygen gather_facts: False vars_files: - /srv/web/infra/ansible/vars/global.yml - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml tasks: - - include: "{{ tasks }}/persistent_cloud.yml" + - include: "{{ tasks }}/persistent_cloud_new.yml" - include: "{{ tasks }}/growroot_cloud.yml" - name: cloud basic setup - #hosts: copr-keygen:copr-keygen-stg - hosts: copr-keygen-stg + hosts: copr-keygen-stg:copr-keygen + # hosts: copr-keygen + gather_facts: True vars_files: - /srv/web/infra/ansible/vars/global.yml - "/srv/private/ansible/vars.yml" @@ -25,15 +27,15 @@ shell: "hostname {{copr_hostbase}}.cloud.fedoraproject.org" - name: provision instance - #hosts: copr-keygen:copr-keygen-stg - hosts: copr-keygen-stg - 
gather_facts: False - user: root + hosts: copr-keygen:copr-keygen-stg + #hosts: copr-keygen + gather_facts: True + vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + roles: - base - copr/keygen - diff --git a/playbooks/groups/datagrepper.yml b/playbooks/groups/datagrepper.yml index eeca3fb546..126e07be34 100644 --- a/playbooks/groups/datagrepper.yml +++ b/playbooks/groups/datagrepper.yml @@ -29,7 +29,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -39,12 +39,12 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/docs-backend.yml b/playbooks/groups/docs-backend.yml index 3c7d59d166..bd260e65ba 100644 --- a/playbooks/groups/docs-backend.yml +++ b/playbooks/groups/docs-backend.yml @@ -36,14 +36,13 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: # this is how you include other task lists - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - # we want httpd for now, to examine the product directly - - include: "{{ tasks }}/apache.yml" handlers: - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/docs-dev.yml b/playbooks/groups/docs-dev.yml new file mode 100644 index 0000000000..35f908896f --- /dev/null +++ b/playbooks/groups/docs-dev.yml @@ -0,0 +1,27 @@ +- name: check/create instance + hosts: 
docs-dev + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + - include: "{{ tasks }}/growroot_cloud.yml" + +- name: setup all the things + hosts: docs-dev + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/groups/download.yml b/playbooks/groups/download.yml index 31c8749987..be4b3d1d54 100644 --- a/playbooks/groups/download.yml +++ b/playbooks/groups/download.yml @@ -45,13 +45,14 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client - collectd/base + - apache - download - - { role: mod_limitipconn, when: ansible_distribution_major_version != '7'} + - { role: mod_limitipconn, when: ansible_distribution_major_version|int != 7 } - rsyncd - { role: nfs/client, when: datacenter == "phx2", mnt_dir: '/srv/pub', nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' } - { role: nfs/client, when: datacenter == "rdu", mnt_dir: '/srv/pub', nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' } @@ -63,7 +64,6 @@ - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" when: env != "staging" - - include: "{{ tasks }}/apache.yml" - name: put in script for syncing action: copy src="{{ files }}/download/sync-up-downloads.sh" dest=/usr/local/bin/sync-up-downloads owner=root group=root mode=755 when: datacenter ==
'ibiblio' diff --git a/playbooks/groups/elections.yml b/playbooks/groups/elections.yml index 66a1e7703b..23da25e1cf 100644 --- a/playbooks/groups/elections.yml +++ b/playbooks/groups/elections.yml @@ -27,7 +27,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -35,12 +35,13 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache + - collectd/base tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/fas.yml b/playbooks/groups/fas.yml index 27113f0fb6..274494da2b 100644 --- a/playbooks/groups/fas.yml +++ b/playbooks/groups/fas.yml @@ -40,6 +40,7 @@ - collectd/base - rsyncd - memcached + - apache - fas_server - fedmsg/base - sudo @@ -51,7 +52,6 @@ - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/fedimg.yml b/playbooks/groups/fedimg.yml index 33e8deb0a2..4e5664040d 100644 --- a/playbooks/groups/fedimg.yml +++ b/playbooks/groups/fedimg.yml @@ -30,7 +30,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - fas_client - nagios_client - hosts diff --git a/playbooks/groups/fedoauth.yml b/playbooks/groups/fedoauth.yml index 02f1dc8a6e..16c87f4d18 100644 --- a/playbooks/groups/fedoauth.yml +++ b/playbooks/groups/fedoauth.yml @@ -4,7 +4,7 @@ # NOTE: most of these vars_path come from group_vars/fedoauth* or from hostvars - name: make fedoauth - hosts: fedoauth-stg:fedoauth + hosts: fedoauth user: root gather_facts: False 
@@ -20,7 +20,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: make the box be real - hosts: fedoauth-stg:fedoauth + hosts: fedoauth user: root gather_facts: True @@ -40,19 +40,20 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache + - collectd/base tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: - include: "{{ handlers }}/restart_services.yml" - name: deploy fedoauth itself - hosts: fedoauth-stg:fedoauth + hosts: fedoauth user: root gather_facts: True diff --git a/playbooks/groups/fedocal.yml b/playbooks/groups/fedocal.yml index 2974af521d..9f0252d136 100644 --- a/playbooks/groups/fedocal.yml +++ b/playbooks/groups/fedocal.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -40,12 +40,13 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache + - collectd/base tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/gallery.yml b/playbooks/groups/gallery.yml index 47358a9373..ec784d2063 100644 --- a/playbooks/groups/gallery.yml +++ b/playbooks/groups/gallery.yml @@ -38,12 +38,13 @@ - fas_client - fedmsg/base - sudo + - apache + - collectd/base tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" handlers: - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/github2fedmsg.yml b/playbooks/groups/github2fedmsg.yml index 85b22a33f1..de608aa51c 100644 --- 
a/playbooks/groups/github2fedmsg.yml +++ b/playbooks/groups/github2fedmsg.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -41,12 +41,12 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/hotness.yml b/playbooks/groups/hotness.yml index 01e817314c..0adca68120 100644 --- a/playbooks/groups/hotness.yml +++ b/playbooks/groups/hotness.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - collectd/base - hosts diff --git a/playbooks/groups/ipsilon.yml b/playbooks/groups/ipsilon.yml index 11690c23aa..ec1164a777 100644 --- a/playbooks/groups/ipsilon.yml +++ b/playbooks/groups/ipsilon.yml @@ -4,7 +4,7 @@ # NOTE: most of these vars_path come from group_vars/ipsilon* or from hostvars - name: make ipsilon - hosts: ipsilon-stg + hosts: ipsilon:ipsilon-stg user: root gather_facts: False @@ -20,7 +20,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: make the box be real - hosts: ipsilon-stg + hosts: ipsilon:ipsilon-stg user: root gather_facts: True @@ -40,19 +40,19 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: - include: "{{ handlers }}/restart_services.yml" - name: deploy ipsilon itself - hosts: ipsilon-stg + hosts: ipsilon:ipsilon-stg user: root 
gather_facts: True diff --git a/playbooks/groups/jenkins-cloud.yml b/playbooks/groups/jenkins-cloud.yml index 080d7fa8bd..9cc1fd7967 100644 --- a/playbooks/groups/jenkins-cloud.yml +++ b/playbooks/groups/jenkins-cloud.yml @@ -58,7 +58,7 @@ - java-1.8.0-openjdk tags: - packages - when: ansible_distribution_major_version != '7' + when: ansible_distribution_major_version|int != 7 - name: add jenkins proxy config file for apache action: copy src="{{ files }}/jenkins/master/jenkins-apache.conf" @@ -98,14 +98,14 @@ tags: - config - - name: make sure jenkins is stopped - action: service name=jenkins state=stopped +# - name: make sure jenkins is stopped +# action: service name=jenkins state=stopped - - name: clean any previous plugin deployments - action: file state=absent path=/var/lib/jenkins/plugins +# - name: clean any previous plugin deployments +# action: file state=absent path=/var/lib/jenkins/plugins - - name: mkdir dir for jenkins data - action: file state=directory path=/var/lib/jenkins/plugins/ owner=jenkins group=jenkins +# - name: mkdir dir for jenkins data +# action: file state=directory path=/var/lib/jenkins/plugins/ owner=jenkins group=jenkins # - name: Download jenkins plugins # get_url: url=https://updates.jenkins-ci.org/download/plugins/{{ item.name }}/{{ item.version }}/{{ item.name }}.hpi @@ -203,35 +203,7 @@ # - config - name: Install custom jenkins plugins (from ansible bigfiles) - action: copy src="{{ bigfiles }}/jenkins/{{ item }}.hpi" dest=/var/lib/jenkins/plugins/{{ item }}.hpi - with_items: - - fedmsg - - bazaar - - chucknorris - - cobertura - - cvs - - external-monitor-job - - git - - git-client - - instant-messaging - - ldap - - matrix-auth - - maven-plugin - - mercurial - - openid - - python - - scm-api - - ssh-agent - - subversion - - translation - - violations - - xunit - - multiple-scms - - credentials - - mailer - - javadoc - - warnings - - ghprb + synchronize: src="{{ bigfiles }}/jenkins/" dest=/var/lib/jenkins/plugins/ delete=yes 
notify: - restart jenkins tags: @@ -244,10 +216,6 @@ tags: - config - - name: Give the user jenkins the ownership of the /var/lib/jenkins - file: path=/var/lib/jenkins/ - owner=jenkins group=jenkins recurse=yes - - name: add jenkins ssh priv key so it can connect to clients action: copy src="{{ private }}/files/jenkins/ssh/jenkins_master" dest=/var/tmp/jenkins_master_id_rsa mode=600 owner=jenkins group=jenkins tags: @@ -258,18 +226,17 @@ tags: - config - - name: start jenkins itself - action: service name=jenkins state=running + - meta: flush_handlers - - name: wait for a dir to exist - this is just ugly - shell: while `true`; do [ -d /var/lib/jenkins/plugins/openid/WEB-INF/lib/ ] && break; sleep 5; done - async: 1800 - poll: 20 +# +# We have to wait here because if jenkins is restarting it takes a while to unpack all the +# plugins and create the directory we use below to sync the hotfix versions over the plugin versions +# for the openid plugin. +# + - wait_for: path=/var/lib/jenkins/plugins/openid/WEB-INF/lib/ - name: jenkins hotfix big file - copy: src={{ item }} dest=/var/lib/jenkins/plugins/openid/WEB-INF/lib/ group=jenkins mode=655 - with_fileglob: - - "{{ bigfiles }}/hotfixes/jenkins/openid/*.jar" + synchronize: src="{{ bigfiles }}/hotfixes/jenkins/openid/" dest=/var/lib/jenkins/plugins/openid/WEB-INF/lib/ notify: - restart jenkins @@ -289,8 +256,7 @@ - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml roles: - - role: fedmsg/base - fedmsg_fqdn: jenkins.cloud.fedoraproject.org + - fedmsg/base handlers: - include: "{{ handlers }}/restart_services.yml" @@ -298,6 +264,7 @@ ################################################### # jenkins slaves +# old cloud instance setup - name: check/create instance for jenkins-slaves hosts: jenkins-slaves user: root @@ -316,8 +283,22 @@ - include: "{{ tasks }}/persistent_cloud.yml" - include: "{{ tasks }}/growroot_cloud.yml" +# new cloud instance setup +- name: check/create instance + hosts: jenkins-slaves-newcloud + 
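The hunk above replaces the open-ended `while true` shell loop with the `wait_for` module. A small Python sketch of the difference: bounded polling that gives up after a timeout, instead of a loop that can hang a playbook run forever (timeout/interval values here are illustrative):

```python
import os
import time

def wait_for_dir(path, timeout=1800, interval=5):
    """Poll until `path` is a directory or `timeout` seconds elapse,
    like `wait_for: path=...` -- unlike the old shell loop, this
    cannot block indefinitely if the plugin never unpacks."""
    deadline = time.monotonic() + timeout
    while True:
        if os.path.isdir(path):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```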
gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + - name: provision workers - hosts: jenkins-slaves + hosts: jenkins-slaves:jenkins-slaves-newcloud user: root gather_facts: True tags: @@ -380,6 +361,12 @@ - openssl-devel # Required by bodhi/cffi/cryptography - redis # Required by copr - createrepo_c # Required by bodhi2 + - python-createrepo_c # Required by bodhi2 + - koji # Required by koschei (ticket #4852) + - python-hawkey # Required by koschei (ticket #4852) + - python-librepo # Required by koschei (ticket #4852) + - rpm-python # Required by koschei (ticket #4852) + - osbs # Required by pyrpkg (ticket 4838) tags: - packages @@ -391,13 +378,22 @@ - python-straight-plugin - pyflakes # Requested by user rholy (ticket #4175) #- dia # Required by javapackages-tools ticket #4279 - when: ansible_distribution_version != "7.0" + when: ansible_distribution_version[0] != "7" + tags: + - packages + + - name: install packages available on el7 builder + action: yum state=present pkg={{ item }} + with_items: + - python-pygit2 # Required for pagure + - python-webob1.4 # Required by bodhi2 + when: ansible_distribution_version[0] == "7" tags: - packages - name: install pkgs for jenkins for fedora systems > F19 action: yum state=present pkg={{ item }} - when: is_fedora is defined and ansible_distribution_major_version > 20 + when: is_fedora is defined and ansible_distribution_major_version|int >= 20 with_items: - sbt-extras @@ -480,24 +476,59 @@ - pwgen # Required for mpi4py - openmpi-devel # Required for mpi4py - mpich2-devel # Required for mpi4py - - python-openid # Required by Ipsilon - - python-openid-teams # Required by Ipsilon - - python-openid-cla # Required by Ipsilon - - python-cherrypy # Required by Ipsilon - - m2crypto # Required by Ipsilon 
- - lasso-python # Required by Ipsilon - - python-sqlalchemy # Required by Ipsilon - - python-ldap # Required by Ipsilon - - python-pam # Required by Ipsilon - - freeipa-python # Required by Ipsilon - - httpd # Required by Ipsilon - - mod_auth_mellon # Required by Ipsilon - - postgresql-server # Required by Ipsilon - - mod_wsgi # Required by Ipsilon - - python-jinja2 # Required by Ipsilon + - pylint # Required by Ipsilon + - python-pep8 + - nodejs-less + - python-openid + - python-openid-teams + - python-openid-cla + - python-cherrypy + - m2crypto + - lasso-python + - python-sqlalchemy + - python-ldap + - python-pam + - python-fedora + - freeipa-python + - httpd + - mod_auth_mellon + - postgresql-server + - openssl + - mod_wsgi + - python-jinja2 + - python-psycopg2 + - sssd + - libsss_simpleifp + - openldap-servers + - mod_auth_gssapi + - krb5-server + - socket_wrapper + - nss_wrapper + - python-requests-kerberos + - python-lesscpy # End requires for Ipsilon + - libxml2-python # Required by gimp-docs + - createrepo # Required by dnf tags: - packages + - name: setup jenkins_slave user + action: user name=jenkins_slave state=present createhome=yes system=no + tags: + - jenkinsuser + + - name: setup jenkins_slave ssh key + action: authorized_key user=jenkins_slave key="{{ item }}" + with_file: + - "{{ private }}/files/jenkins/ssh/jenkins_master.pub" + + - name: jenkins_slave to mock group + action: user name=jenkins_slave groups=mock + + - name: add .gitconfig for jenkins_slave user + action: copy src="{{ files }}/jenkins/gitconfig" dest=/home/jenkins_slave/.gitconfig owner=jenkins_slave group=jenkins_slave mode=664 + tags: + - config + - name: drop current android SDK when: is_fedora is defined action: file state=absent path=/var/android @@ -531,24 +562,6 @@ tags: - config - - name: setup jenkins_slave user - action: user name=jenkins_slave state=present createhome=yes system=no - tags: - - jenkinsuser - - - name: setup jenkins_slave ssh key - action: authorized_key 
user=jenkins_slave key="{{ item }}" - with_file: - - "{{ private }}/files/jenkins/ssh/jenkins_master.pub" - - - name: jenkins_slave to mock group - action: user name=jenkins_slave groups=mock - - - name: add .gitconfig for jenkins_slave user - action: copy src="{{ files }}/jenkins/gitconfig" dest=/home/jenkins_slave/.gitconfig owner=jenkins_slave group=jenkins_slave mode=664 - tags: - - config - - name: template sshd_config copy: src="{{ item }}" dest=/etc/ssh/sshd_config mode=0600 owner=root group=root with_first_found: diff --git a/playbooks/groups/jenkins-dev.yml b/playbooks/groups/jenkins-dev.yml new file mode 100644 index 0000000000..ef8fdcc5c9 --- /dev/null +++ b/playbooks/groups/jenkins-dev.yml @@ -0,0 +1,42 @@ +- name: check/create instance + hosts: jenkins-dev + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: jenkins-dev + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" + - include: "{{ tasks }}/yumrepos.yml" + +- name: provision instance + hosts: jenkins-dev + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - { role: jenkins/master, when: jenkins_master is defined } + - { role: jenkins/slave, when: jenkins_master is not defined } + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git 
a/playbooks/groups/kerneltest.yml b/playbooks/groups/kerneltest.yml index 861a57327d..354e12ebd5 100644 --- a/playbooks/groups/kerneltest.yml +++ b/playbooks/groups/kerneltest.yml @@ -40,12 +40,12 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/keyserver.yml b/playbooks/groups/keyserver.yml index 652c743b3d..60501f14fe 100644 --- a/playbooks/groups/keyserver.yml +++ b/playbooks/groups/keyserver.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -40,13 +40,13 @@ - collectd/base - { role: openvpn/client, when: env != "staging" } + - apache - keyserver tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" handlers: - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/koji-hub.yml b/playbooks/groups/koji-hub.yml index cb36f518db..be1843cf6e 100644 --- a/playbooks/groups/koji-hub.yml +++ b/playbooks/groups/koji-hub.yml @@ -3,7 +3,7 @@ # NOTE: most of these vars_path come from group_vars/koji-hub or from hostvars - name: make koji hub - hosts: koji-stg:koji01.phx2.fedoraproject.org:koji02.phx2.fedoraproject.org + hosts: koji-stg:koji01.phx2.fedoraproject.org:koji02.phx2.fedoraproject.org:s390-koji01.qa.fedoraproject.org user: root gather_facts: False @@ -21,7 +21,7 @@ # Once the instance exists, configure it. 
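In the koji-hub hunks that follow, the `nfs/client` role is applied several times, each gated on a combination of `env` and an `inventory_hostname` prefix. A hedged Python sketch of that selection logic (mount points copied from the diff; the function itself is just an illustration):

```python
def koji_nfs_mounts(env, hostname):
    """Mirror of the role `when:` conditions: which NFS mounts a
    koji-hub host gets, keyed on environment and hostname prefix."""
    mounts = []
    if env == "production" and hostname.startswith("koji"):
        mounts.append(("/mnt/fedora_koji", "fedora_koji"))
    if env == "production" and hostname.startswith("s390"):
        mounts.append(("/mnt/koji", "fedora_s390/data"))
    if env == "staging" and hostname.startswith("koji"):
        # staging mounts the production volume under a separate path
        # and symlinks it in as a read-only secondary volume
        mounts.append(("/mnt/fedora_koji_prod", "fedora_koji"))
    return mounts
```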
- name: make koji_hub server system - hosts: koji-stg:koji01.phx2.fedoraproject.org:koji02.phx2.fedoraproject.org + hosts: koji-stg:koji01.phx2.fedoraproject.org:koji02.phx2.fedoraproject.org:s390-koji01.qa.fedoraproject.org user: root gather_facts: True @@ -30,9 +30,6 @@ - "/srv/private/ansible/vars.yml" - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - pre_tasks: - - include: "{{ tasks }}/apache.yml" - roles: - base - rkhunter @@ -41,18 +38,37 @@ - fas_client - builder_repo - collectd/base + - apache - fedmsg/base - koji_hub - - { role: koji_builder, when: env == "staging" } + - { role: rsyncd, when: inventory_hostname.startswith('s390') } + - { role: koji_builder, when: env == "staging" or inventory_hostname.startswith('s390') } - { role: nfs/server, when: env == "staging" } - - { role: keepalived, when: env != "staging" } + - { role: keepalived, when: env == "production" and inventory_hostname.startswith('koji') } - role: nfs/client mnt_dir: '/mnt/fedora_koji' nfs_src_dir: 'fedora_koji' - when: env != 'staging' + when: env == 'production' and inventory_hostname.startswith('koji') + - role: nfs/client + mnt_dir: '/mnt/koji' + nfs_src_dir: 'fedora_s390/data' + when: env == 'production' and inventory_hostname.startswith('s390') + # In staging, we mount fedora_koji as read only (see nfs_mount_opts) + - role: nfs/client + mnt_dir: '/mnt/fedora_koji_prod' + nfs_src_dir: 'fedora_koji' + when: env == 'staging' and inventory_hostname.startswith('koji') - sudo tasks: + - name: create secondary volume dir for stg koji + file: dest=/mnt/koji/vol state=directory owner=apache group=apache mode=0755 + tags: koji_hub + when: env == 'staging' + - name: create symlink for stg/prod secondary volume + file: src=/mnt/fedora_koji_prod/koji dest=/mnt/koji/vol/prod state=link + tags: koji_hub + when: env == 'staging' - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" @@ -61,19 +77,19 @@ - include: "{{ handlers 
}}/restart_services.yml" -- name: Start the kojid builder daemon, but only on staging. - # Really -- this should never be set for prod. - hosts: koji-stg - user: root - gather_facts: True - - # XXX - should these just be included in koji_builder and koji_hub roles? - tasks: - - name: make sure kojid is running - service: name=kojid state=running - tags: - - kojid - - name: make sure kojira is running - service: name=kojira state=running - tags: - - kojira +#- name: Start the kojid builder daemon, but only on staging. +# # Really -- this should never be set for prod. +# hosts: koji-stg:s390-koji01.qa.fedoraproject.org +# user: root +# gather_facts: True +# +# # XXX - should these just be included in koji_builder and koji_hub roles? +# tasks: +# - name: make sure kojid is running +# service: name=kojid state=running +# tags: +# - kojid +# - name: make sure kojira is running +# service: name=kojira state=running +# tags: +# - kojira diff --git a/playbooks/groups/kojipkgs.yml b/playbooks/groups/kojipkgs.yml index 8b273486f0..388cebabec 100644 --- a/playbooks/groups/kojipkgs.yml +++ b/playbooks/groups/kojipkgs.yml @@ -32,6 +32,7 @@ - fas_client - sudo - collectd/base + - apache - kojipkgs - role: nfs/client mnt_dir: '/mnt/fedora_app/app' @@ -41,7 +42,6 @@ nfs_src_dir: 'fedora_koji' tasks: - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" diff --git a/playbooks/groups/koschei.yml b/playbooks/groups/koschei.yml new file mode 100644 index 0000000000..d836568914 --- /dev/null +++ b/playbooks/groups/koschei.yml @@ -0,0 +1,48 @@ +- name: make koschei + hosts: koschei:koschei-stg + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - include: "{{ tasks }}/virt_instance_create.yml" + + handlers: + - include: "{{ handlers 
}}/restart_services.yml" + +- name: install koschei + hosts: koschei:koschei-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + - nagios_client + - hosts + - fas_client + - builder_repo + - collectd/base + - apache + - koschei + - fedmsg/base + - sudo + - { role: openvpn/client, when: env != "staging" } + + tasks: + - include: "{{ tasks }}/yumrepos.yml" + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + - include: "{{ tasks }}/mod_wsgi.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/lockbox.yml b/playbooks/groups/lockbox.yml index 6dbcc708aa..84890225bc 100644 --- a/playbooks/groups/lockbox.yml +++ b/playbooks/groups/lockbox.yml @@ -33,6 +33,7 @@ - fas_client - ansible-server - sudo + - collectd/base tasks: - include: "{{ tasks }}/yumrepos.yml" @@ -42,7 +43,6 @@ handlers: - include: "{{ handlers }}/restart_services.yml" - - name: configure lockbox hosts: lockbox user: root diff --git a/playbooks/hosts/logserver.yml b/playbooks/groups/logserver.yml similarity index 96% rename from playbooks/hosts/logserver.yml rename to playbooks/groups/logserver.yml index 15a112dbef..43b49cd24c 100644 --- a/playbooks/hosts/logserver.yml +++ b/playbooks/groups/logserver.yml @@ -30,6 +30,7 @@ - nagios_client - hosts - fas_client + - apache - collectd/base - collectd/server - sudo @@ -37,7 +38,6 @@ tasks: - include: "{{ tasks }}/yumrepos.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - include: "{{ tasks }}/openvpn_client_7.yml" diff --git a/playbooks/groups/mailman.yml b/playbooks/groups/mailman.yml index f50df37ad4..9778ba82d5 100644 --- a/playbooks/groups/mailman.yml +++ b/playbooks/groups/mailman.yml @@ -31,7 +31,7 @@ roles: - base - rkhunter - - { role: 
denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -40,13 +40,13 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: # this is how you include other task lists - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: @@ -86,16 +86,11 @@ postgresql_user: name=hyperkittyadmin password={{ mailman_hk_admin_db_pass }} - name: hyperkitty DB user postgresql_user: name=hyperkittyapp password={{ mailman_hk_db_pass }} - - name: kittystore DB admin user - postgresql_user: name=kittystoreadmin password={{ mailman_ks_admin_db_pass }} - - name: kittystore DB user - postgresql_user: name=kittystoreapp password={{ mailman_ks_db_pass }} - name: databases creation postgresql_db: name={{ item }} owner="{{ item }}admin" encoding=UTF-8 with_items: - mailman - hyperkitty - - kittystore - name: test database creation postgresql_db: name=test_hyperkitty owner=hyperkittyadmin encoding=UTF-8 @@ -116,8 +111,6 @@ mailman_mailman_db_pass: "{{ mailman_mm_db_pass }}" mailman_hyperkitty_admin_db_pass: "{{ mailman_hk_admin_db_pass }}" mailman_hyperkitty_db_pass: "{{ mailman_hk_db_pass }}" - mailman_kittystore_admin_db_pass: "{{ mailman_ks_admin_db_pass }}" - mailman_kittystore_db_pass: "{{ mailman_ks_db_pass }}" mailman_hyperkitty_cookie_key: "{{ mailman_hk_cookie_key }}" - fedmsg/base @@ -126,7 +119,6 @@ yum: pkg={{ item }} state=present with_items: - tar - - mailman # transition from mailman2.1 tags: - packages diff --git a/playbooks/groups/mariadb-server.yml b/playbooks/groups/mariadb-server.yml index a313a4bff5..d3d7a5d736 100644 --- a/playbooks/groups/mariadb-server.yml +++ b/playbooks/groups/mariadb-server.yml @@ -3,7 +3,7 @@ # NOTE: most of these vars_path come from group_vars/backup_server or from 
hostvars - name: make mariadb-server instance - hosts: db-qa01.qa.fedoraproject.org + hosts: db03.phx2.fedoraproject.org:db03.stg.phx2.fedoraproject.org user: root gather_facts: False @@ -21,7 +21,7 @@ # Once the instance exists, configure it. - name: configure mariadb server system - hosts: db-qa01.qa.fedoraproject.org + hosts: db03.phx2.fedoraproject.org:db03.stg.phx2.fedoraproject.org user: root gather_facts: True @@ -33,7 +33,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - fas_client - nagios_client - hosts diff --git a/playbooks/groups/memcached.yml b/playbooks/groups/memcached.yml index 8a4a46a662..abe124ad92 100644 --- a/playbooks/groups/memcached.yml +++ b/playbooks/groups/memcached.yml @@ -1,5 +1,5 @@ - name: make memcached server - hosts: memcached + hosts: memcached:memcached-stg user: root gather_facts: False @@ -15,7 +15,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: make the box be real - hosts: memcached + hosts: memcached:memcached-stg user: root gather_facts: True diff --git a/playbooks/groups/mirrorlist2.yml b/playbooks/groups/mirrorlist2.yml index 8d2e5ad71f..2872a5cfb6 100644 --- a/playbooks/groups/mirrorlist2.yml +++ b/playbooks/groups/mirrorlist2.yml @@ -29,6 +29,38 @@ - "/srv/private/ansible/vars.yml" - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + pre_tasks: + - name: Install policycoreutils-python + yum: pkg=policycoreutils-python state=present + + - name: Create /srv/web/ for all the goodies. 
+ file: > + dest=/srv/web state=directory + owner=root group=root mode=0755 + tags: + - httpd + - httpd/website + + - name: check the selinux context of webdir + command: matchpathcon /srv/web + register: webdir + always_run: yes + changed_when: "1 != 1" + tags: + - config + - selinux + - httpd + - httpd/website + + - name: /srv/web file contexts + command: semanage fcontext -a -t httpd_sys_content_t "/srv/web(/.*)?" + when: webdir.stdout.find('httpd_sys_content_t') == -1 + tags: + - config + - selinux + - httpd + - httpd/website + roles: - base - rkhunter @@ -37,6 +69,35 @@ - hosts - fas_client - collectd/base + - apache + - httpd/mod_ssl + + - role: httpd/certificate + name: wildcard-2014.stg.fedoraproject.org + SSLCertificateChainFile: wildcard-2014.stg.fedoraproject.org.intermediate.cert + when: env == "staging" + + - role: httpd/website + name: mirrorlist-phx2.stg.phx2.fedoraproject.org + cert_name: wildcard-2014.stg.fedoraproject.org + SSLCertificateChainFile: wildcard-2014.stg.fedoraproject.org.intermediate.cert + when: env == "staging" + + - role: httpd/certificate + name: wildcard-2014.fedoraproject.org + SSLCertificateChainFile: wildcard-2014.fedoraproject.org.intermediate.cert + when: env != "staging" + + - role: httpd/website + name: mirrorlist-phx2.fedoraproject.org + cert_name: wildcard-2014.fedoraproject.org + server_aliases: + - mirrorlist-dedicatedsolutions.fedoraproject.org + - mirrorlist-host1plus.fedoraproject.org + - mirrorlist-ibiblio.fedoraproject.org + - mirrorlist-osuosl.fedoraproject.org + when: env != "staging" + - mirrormanager/mirrorlist2 - sudo - { role: openvpn/client, @@ -47,7 +108,6 @@ - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" diff --git a/playbooks/groups/mirrormanager.yml b/playbooks/groups/mirrormanager.yml index 190d10a17f..96cf275933 100644 --- a/playbooks/groups/mirrormanager.yml +++ 
b/playbooks/groups/mirrormanager.yml @@ -1,5 +1,5 @@ - name: make the servers - hosts: mm-stg + hosts: mm:mm-stg user: root gather_facts: False @@ -15,7 +15,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: make the boxe be real for real - hosts: mm-stg + hosts: mm:mm-stg user: root gather_facts: True @@ -32,7 +32,7 @@ - fas_client - sudo - collectd/base - - { role: openvpn/client, when: env != "staging" } + - { role: openvpn/client, when: env != "staging" and inventory_hostname.startswith('mm-frontend') } - { role: nfs/client, when: inventory_hostname.startswith('mm-backend01'), mnt_dir: '/srv/pub', nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub' } tasks: @@ -44,7 +44,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: Deploy the backend - hosts: mm-backend01.stg.phx2.fedoraproject.org + hosts: mm-backend:mm-backend-stg user: root gather_facts: True @@ -55,12 +55,13 @@ roles: - mirrormanager/backend + - s3-mirror handlers: - include: "{{ handlers }}/restart_services.yml" - name: Deploy the crawler - hosts: mm-crawler01.stg.phx2.fedoraproject.org + hosts: mm-crawler:mm-crawler-stg user: root gather_facts: True @@ -71,12 +72,14 @@ roles: - mirrormanager/crawler + - { role: rsyncd, + when: env != "staging" } handlers: - include: "{{ handlers }}/restart_services.yml" - name: Deploy the frontend (web-app) - hosts: mm-frontend01.stg.phx2.fedoraproject.org + hosts: mm-frontend:mm-frontend-stg user: root gather_facts: True @@ -90,3 +93,21 @@ handlers: - include: "{{ handlers }}/restart_services.yml" + +# Do this one last, since the mirrormanager user needs to exist so that it can +# own the fedmsg certs we put in place here. 
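The mirrormanager plays target unions of production and staging groups. In an Ansible host pattern, `:` is the union separator (`mm:mm-stg` means "hosts in mm or in mm-stg"); any other punctuation, such as a `;`, becomes part of a single literal name that matches nothing in inventory. A sketch of that pattern expansion against a hypothetical two-group inventory:

```python
# Hypothetical inventory: group name -> hosts in it.
inventory = {
    "mm": {"mm-backend01.phx2.fedoraproject.org"},
    "mm-stg": {"mm-backend01.stg.phx2.fedoraproject.org"},
}

def expand(pattern):
    """Union-only sketch of Ansible pattern matching: each
    ':'-separated term selects a group (literal host names and
    exclusions are not modeled here)."""
    hosts = set()
    for term in pattern.split(":"):
        hosts |= inventory.get(term, set())
    return hosts

print(len(expand("mm:mm-stg")))   # 2 -- both groups selected
print(len(expand("mm;mm-stg")))   # 0 -- one literal, nonexistent name
```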
+- name: Put fedmsg stuff in place + hosts: mm:mm-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - fedmsg/base + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/groups/noc.yml b/playbooks/groups/noc.yml index 9f7fa9d1e0..1d3befee38 100644 --- a/playbooks/groups/noc.yml +++ b/playbooks/groups/noc.yml @@ -27,7 +27,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -36,12 +36,12 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/notifs-backend.yml b/playbooks/groups/notifs-backend.yml index b7cefaf1c5..19b2ccfe64 100644 --- a/playbooks/groups/notifs-backend.yml +++ b/playbooks/groups/notifs-backend.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - hosts - fas_client - nagios_client @@ -57,6 +57,15 @@ user: root gather_facts: True + pre_tasks: + - name: tell nagios to shush w.r.t. 
the backend since it usually complains + nagios: action=downtime minutes=20 service=host host={{ inventory_hostname_short }}{{ env_suffix }} + delegate_to: noc01.phx2.fedoraproject.org + ignore_errors: true + tags: + - fedmsgdconfig + - notifs/backend + roles: - fedmsg/hub - notifs/backend diff --git a/playbooks/groups/notifs-web.yml b/playbooks/groups/notifs-web.yml index b3b8a98d21..6dca4fc3c7 100644 --- a/playbooks/groups/notifs-web.yml +++ b/playbooks/groups/notifs-web.yml @@ -32,20 +32,18 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client - collectd/base - fedmsg/base + - apache - notifs/frontend - sudo - { role: openvpn/client, when: env != "staging" } - pre_tasks: - - include: "{{ tasks }}/apache.yml" - tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" diff --git a/playbooks/groups/nuancier.yml b/playbooks/groups/nuancier.yml index 3f2814d33e..8c5524bd2a 100644 --- a/playbooks/groups/nuancier.yml +++ b/playbooks/groups/nuancier.yml @@ -32,7 +32,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client @@ -40,12 +40,12 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/osbs.yml b/playbooks/groups/osbs.yml new file mode 100644 index 0000000000..3c07ff0d82 --- /dev/null +++ b/playbooks/groups/osbs.yml @@ -0,0 +1,187 @@ +- name: make osbs server + hosts: osbs-stg + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - 
"/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - include: "{{ tasks }}/virt_instance_create.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + +# Once the instance exists, configure it. + +- name: make osbs server system + hosts: osbs-stg + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + - nagios_client + - hosts + - fas_client + - collectd/base + - sudo + + tasks: + - include: "{{ tasks }}/yumrepos.yml" + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + +- include: ../openshift_common/openshift-cluster/config.yml + vars: + g_etcd_group: "{{ 'etcd' }}" + g_masters_group: "{{ 'openshift_masters' }}" + g_nodes_group: "{{ 'openshift_nodes' }}" + openshift_cluster_id: "{{ cluster_id | default('default') }}" + openshift_debug_level: 0 + openshift_deployment_type: "{{ deployment_type }}" + tags: + - osbs-openshift + +- name: OpenShift post-install config + hosts: openshift_masters + user: root + gather_facts: True + + tasks: + # This is technically idempotent via the 'oc create' command, it will just + # exit 1 if the service account already exists + - name: add OpenShift router service account + shell: echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | /usr/bin/oc create -f - + ignore_errors: true + + - name: add OpenShift router + shell: /usr/bin/oadm router --create=true --credentials=/etc/openshift/master/openshift-router.kubeconfig --service-account=router + + - name: Create storage location for OpenShift internal registry + file: + path: /var/lib/openshift/docker-registry + state: directory + + # This is technically idempotent via the 'oc create' command, it will just + # exit 1 if the 
service account already exists + - name: add OpenShift internal registry + shell: echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"registry"}}' | /usr/bin/oc create -f - + ignore_errors: true + + - name: add OpenShift internal registry + shell: /usr/bin/oadm registry --create=true --credentials=/etc/openshift/master/openshift-registry.kubeconfig --mount-host=/var/lib/openshift/docker-registry --service-account=registry + + tags: + - osbs-openshift + - osbs-openshift-postinstall + +- name: docker-registry + hosts: openshift_masters + user: root + gather_facts: True + + tasks: + - name: Install docker-registry + yum: pkg=docker-registry state=installed + - name: Start/enable docker-registry service + service: + name: docker-registry + state: started + enabled: yes + + tags: + - osbs-openshift + - osbs-openshift-postinstall + +- name: atomic-reactor install and config + hosts: openshift_masters + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + + tasks: + - name: Configure the atomic-reactor COPR + copy: + src: "{{ files }}/osbs/atomic-reactor.repo" + dest: /etc/yum.repos.d/atomic-reactor.repo + + - name: Install atomic-reactor + yum: pkg=atomic-reactor state=present + + - name: Build atomic-reactor base image + shell: atomic-reactor create-build-image --reactor-tarball-path /usr/share/atomic-reactor/atomic-reactor.tar.gz /usr/share/atomic-reactor/images/dockerhost-builder buildroot + + tags: + - osbs-openshift + - osbs-openshift-postinstall + +- name: atomic-reactor install and config + hosts: openshift_masters + user: root + gather_facts: False + + tasks: + - name: Tag the buildroot for builder local registry + shell: docker tag buildroot localhost:5000/buildroot + + - name: Push the buildroot to builder local registry + shell: docker push localhost:5000/buildroot + + - name: Pull fedora docker image + shell: docker pull fedora + + - name: Tag fedora for builder local registry + shell: docker tag 
fedora localhost:5000/fedora + + - name: Push the fedora image to builder local registry + shell: docker push localhost:5000/fedora + + tags: + - osbs-openshift + - osbs-openshift-postinstall + +- name: OSBS Configuration - OpenShift Auth + hosts: openshift_masters + user: root + gather_facts: False + + tasks: + - name: Set role-to-group for OSBS system:unauthenticated + shell: oadm policy add-role-to-group edit system:unauthenticated system:authenticated + - name: Set role-to-group for OSBS system:authenticated + shell: oadm policy add-role-to-group edit system:authenticated + + tags: + - osbs-openshift + - osbs-openshift-postinstall + +- name: OSBS Client tools config + hosts: openshift_masters:openshift_nodes + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + + tasks: + - copy: + src: "{{ files }}/osbs/osbs.conf" + dest: /etc/osbs.conf + tags: + - osbs-openshift + - osbs-openshift-postinstall + diff --git a/playbooks/groups/packages.yml b/playbooks/groups/packages.yml index a25717b586..8b8e8344aa 100644 --- a/playbooks/groups/packages.yml +++ b/playbooks/groups/packages.yml @@ -41,12 +41,12 @@ - sudo - { role: openvpn/client, when: env != "staging" } + - apache tasks: - include: "{{ tasks }}/yumrepos.yml" - include: "{{ tasks }}/2fa_client.yml" - include: "{{ tasks }}/motd.yml" - - include: "{{ tasks }}/apache.yml" - include: "{{ tasks }}/mod_wsgi.yml" handlers: diff --git a/playbooks/groups/pagure.yml b/playbooks/groups/pagure.yml index 477fbf2b8f..8cc9ac7d36 100644 --- a/playbooks/groups/pagure.yml +++ b/playbooks/groups/pagure.yml @@ -1,5 +1,5 @@ - name: make the servers - hosts: pagure-stg + hosts: pagure:pagure-stg user: root gather_facts: False @@ -15,7 +15,7 @@ - include: "{{ handlers }}/restart_services.yml" - name: make the boxen be real for real - hosts: pagure-stg + hosts: pagure:pagure-stg user: root gather_facts: True @@ -34,7 +34,6 @@ - collectd/base - openvpn/client - postgresql_server - - git/server 
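The OSBS play above notes that its service-account tasks are "technically idempotent via the 'oc create' command": `oc create` exits non-zero when the object already exists, so the tasks pair it with `ignore_errors: true`. A hedged sketch of that create-or-tolerate pattern (the in-memory set stands in for the OpenShift API):

```python
def oc_create(store, name):
    """Stand-in for `oc create`: fails if the object already exists."""
    if name in store:
        raise RuntimeError("AlreadyExists")
    store.add(name)

def ensure_service_account(store, name):
    """What the task plus ignore_errors achieves: the account is
    present after any number of runs, and repeat runs are tolerated
    rather than fatal to the play."""
    try:
        oc_create(store, name)
    except RuntimeError:
        pass  # already there; the playbook keeps going

accounts = set()
ensure_service_account(accounts, "router")
ensure_service_account(accounts, "router")  # second run: no failure
```

This is weaker than true idempotency (a failed create for any other reason is also swallowed), which is presumably why the diff flags it with a comment rather than relying on it silently.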
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
@@ -45,7 +44,7 @@
   - include: "{{ handlers }}/restart_services.yml"
 
 - name: deploy pagure itself
-  hosts: pagure-stg
+  hosts: pagure:pagure-stg
   user: root
   gather_facts: True
 
@@ -54,8 +53,21 @@
   - "/srv/private/ansible/vars.yml"
   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
 
+  pre_tasks:
+  - name: install fedmsg-relay
+    yum: pkg=fedmsg-relay state=present
+    tags:
+    - pagure
+    - pagure/fedmsg
+  - name: and start it
+    service: name=fedmsg-relay state=started
+    tags:
+    - pagure
+    - pagure/fedmsg
+
   roles:
-  - pagure
+  - pagure/frontend
+  - pagure/fedmsg
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/paste.yml b/playbooks/groups/paste.yml
index 432a17133c..f17d292cf9 100644
--- a/playbooks/groups/paste.yml
+++ b/playbooks/groups/paste.yml
@@ -32,6 +32,7 @@
   - hosts
   - fas_client
   - collectd/base
+  - apache
   - paste
   - rsyncd
   - sudo
@@ -42,7 +43,6 @@
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/people.yml b/playbooks/groups/people.yml
index 18f8fc2915..3684fb0b69 100644
--- a/playbooks/groups/people.yml
+++ b/playbooks/groups/people.yml
@@ -2,11 +2,9 @@
 #
 #
 - name: make the people server
-  hosts: people02.fedoraproject.org
+  hosts: people01.fedoraproject.org
   user: root
   gather_facts: False
-  accelerate: "{{ accelerated }}"
-
   vars_files:
   - /srv/web/infra/ansible/vars/global.yml
@@ -20,16 +18,56 @@
   - include: "{{ handlers }}/restart_services.yml"
 
 - name: make the box be real
-  hosts: people02.fedoraproject.org
+  hosts: people01.fedoraproject.org
   user: root
   gather_facts: True
-  accelerate: "{{ accelerated }}"
 
   vars_files:
   - /srv/web/infra/ansible/vars/global.yml
   - "/srv/private/ansible/vars.yml"
   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
 
+  pre_tasks:
+
+  - name: mount project volume
+    mount: >
+      name=/project
+      src=/dev/mapper/GuestVolGroup00-project
+      fstype=xfs
+      opts="noatime,noexec,nosuid,nodev"
+      passno=0
+      dump=0
+      state=mounted
+    tags:
+    - mount
+
+  - name: mount srv volume
+    mount: >
+      name=/srv
+      src=/dev/mapper/GuestVolGroup00-srv
+      fstype=xfs
+      opts="usrquota,gqnoenforce,noatime,noexec,nosuid,nodev"
+      passno=0
+      dump=0
+      state=mounted
+    tags:
+    - mount
+
+  - name: create /srv/home directory
+    file: path=/srv/home state=directory owner=root group=root
+
+  - name: bind mount home volume
+    mount: >
+      name=/home
+      src=/srv/home
+      fstype=none
+      opts=bind
+      passno=0
+      dump=0
+      state=mounted
+    tags:
+    - mount
+
   roles:
   - base
   - collectd/base
@@ -39,43 +77,29 @@
   - rkhunter
   - rsyncd
   - sudo
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
   - { role: openvpn/client, when: env != "staging" }
-  - { role: collectd/fedmsg-service, process: fedmsg-hub }
-  - git/hooks
-  - git/make_checkout_seed
-  - git/server
-  - gitolite/base
-  - gitolite/check_fedmsg_hooks
   - cgit/base
   - cgit/clean_lock_cron
   - cgit/make_pkgs_list
   - clamav
-  - distgit
+  - planet
+  - fedmsg/base
+  - git/server
+  - role: apache
+
+  - role: httpd/mod_ssl
+
+  - role: httpd/certificate
+    name: wildcard-2014.fedorapeople.org
+    SSLCertificateChainFile: wildcard-2014.fedorapeople.org.intermediate.cert
+
+  - people
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
-
-  handlers:
-  - include: "{{ handlers }}/restart_services.yml"
-
-- name: setup fedmsg on people
-  hosts: people02.fedoraproject.org
-  user: root
-  gather_facts: True
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  roles:
-  - fedmsg/base
-  - fedmsg/hub
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/pkgdb.yml b/playbooks/groups/pkgdb.yml
index 3470ffe252..e46dd55958 100644
--- a/playbooks/groups/pkgdb.yml
+++ b/playbooks/groups/pkgdb.yml
@@ -32,7 +32,7 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - hosts
   - fas_client
@@ -40,12 +40,12 @@
   - sudo
   - { role: openvpn/client, when: env != "staging" }
+  - apache
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
   - include: "{{ tasks }}/mod_wsgi.yml"
 
   handlers:
diff --git a/playbooks/groups/pkgs.yml b/playbooks/groups/pkgs.yml
index bddcfb64b9..b9372eb18b 100644
--- a/playbooks/groups/pkgs.yml
+++ b/playbooks/groups/pkgs.yml
@@ -27,11 +27,12 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - fas_client
   - collectd/base
   - sudo
+  - apache
   - gitolite/base
   - cgit/base
   - cgit/clean_lock_cron
@@ -47,7 +48,6 @@
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
   - include: "{{ tasks }}/drbackupkey.yml"
   - include: "{{ tasks }}/2fa_client.yml"
diff --git a/playbooks/groups/postgresql-server.yml b/playbooks/groups/postgresql-server.yml
index a151cb58a4..5df0341368 100644
--- a/playbooks/groups/postgresql-server.yml
+++ b/playbooks/groups/postgresql-server.yml
@@ -3,7 +3,7 @@
 # NOTE: most of these vars_path come from group_vars/backup_server or from hostvars
 
 - name: make postgresql-server instance
-  hosts: db-datanommer02.phx2.fedoraproject.org:db-qa01.qa.fedoraproject.org:db-koji01.phx2.fedoraproject.org:db-fas01.stg.phx2.fedoraproject.org:db-fas01.phx2.fedoraproject.org:db01.phx2.fedoraproject.org:db01.stg.phx2.fedoraproject.org
+  hosts: db-datanommer02.phx2.fedoraproject.org:db-qa01.qa.fedoraproject.org:db-koji01.phx2.fedoraproject.org:db-fas01.stg.phx2.fedoraproject.org:db-fas01.phx2.fedoraproject.org:db01.phx2.fedoraproject.org:db01.stg.phx2.fedoraproject.org:db-s390-koji01.qa.fedoraproject.org
   user: root
   gather_facts: False
@@ -21,7 +21,7 @@
 # Once the instance exists, configure it.
 
 - name: configure postgresql server system
-  hosts: db-datanommer02.phx2.fedoraproject.org:db-qa01.qa.fedoraproject.org:db-koji01.phx2.fedoraproject.org:db-fas01.stg.phx2.fedoraproject.org:db-fas01.phx2.fedoraproject.org:db01.phx2.fedoraproject.org:db01.stg.phx2.fedoraproject.org
+  hosts: db-datanommer02.phx2.fedoraproject.org:db-qa01.qa.fedoraproject.org:db-koji01.phx2.fedoraproject.org:db-fas01.stg.phx2.fedoraproject.org:db-fas01.phx2.fedoraproject.org:db01.phx2.fedoraproject.org:db01.stg.phx2.fedoraproject.org:db-s390-koji01.qa.fedoraproject.org
   user: root
   gather_facts: True
@@ -33,7 +33,7 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - fas_client
   - nagios_client
   - hosts
diff --git a/playbooks/groups/proxies.yml b/playbooks/groups/proxies.yml
index a9b3e04044..35137fdc65 100644
--- a/playbooks/groups/proxies.yml
+++ b/playbooks/groups/proxies.yml
@@ -37,12 +37,12 @@
   - rsyncd
   - { role: openvpn/client, when: env != "staging" }
+  - apache
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
 # You might think we would want these tasks on the proxy nodes, but they
 # actually deliver a configuration that our proxy-specific roles below then go
diff --git a/playbooks/groups/qa-stg.yml b/playbooks/groups/qa-stg.yml
new file mode 100644
index 0000000000..81c2d5a53f
--- /dev/null
+++ b/playbooks/groups/qa-stg.yml
@@ -0,0 +1,132 @@
+---
+# create a new taskotron CI stg server
+# NOTE: make sure there is room/space for this server on the vmhost
+# NOTE: most of these vars_path come from group_vars/mirrorlist or from hostvars
+
+- name: make taskotron-ci staging
+  hosts: qa-stg
+  user: root
+  gather_facts: False
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - include: "{{ tasks }}/virt_instance_create.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+- name: make the box be real
+  hosts: qa-stg
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  roles:
+  - { role: base, tags: ['base'] }
+  - { role: rkhunter, tags: ['rkhunter'] }
+  - { role: nagios_client, tags: ['nagios_client'] }
+  - hosts
+  - { role: fas_client, tags: ['fas_client'] }
+  - { role: collectd/base, tags: ['collectd_base'] }
+  - { role: yum-cron, tags: ['yumcron'] }
+  - { role: sudo, tags: ['sudo'] }
+  - apache
+
+  tasks:
+  # this is how you include other task lists
+  - include: "{{ tasks }}/yumrepos.yml"
+  - include: "{{ tasks }}/2fa_client.yml"
+  - include: "{{ tasks }}/motd.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+- name: configure phabricator
+  hosts: qa-stg
+  user: root
+
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  roles:
+  - { role: mariadb_server, tags: ['mariadb'] }
+  - { role: phabricator, tags: ['phabricator'] }
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+
+- name: configure qa stg buildbot CI
+  hosts: qa-stg
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  roles:
+  - { role: taskotron/buildmaster, tags: ['buildmaster'] }
+  - { role: taskotron/buildmaster-configure, tags: ['buildmasterconfig'] }
+  - { role: taskotron/buildslave, tags: ['buildslave'] }
+  - { role: taskotron/buildslave-configure, tags: ['buildslaveconfig'] }
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+- name: configure static sites for qa-stg
+  hosts: qa-stg
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - name: ensure ServerName is set in ssl.conf
+    replace: dest=/etc/httpd/conf.d/ssl.conf regexp='^#ServerName .*$' replace='ServerName {{ external_hostname }}:443'
+    notify:
+    - restart httpd
+    tags:
+    - qastaticsites
+
+  - name: ensure ServerName is set in httpd.conf
+    replace: dest=/etc/httpd/conf/httpd.conf regexp='^#ServerName .*$' replace='ServerName {{ external_hostname }}:443'
+    notify:
+    - restart httpd
+    tags:
+    - qastaticsites
+
+  - name: create dirs for static sites
+    file: path={{ item.document_root }} state=directory owner=apache group=apache mode=1755
+    with_items: static_sites
+    tags:
+    - qastaticsites
+
+  - name: generate virtualhosts for static sites
+    template: src={{ files }}/httpd/newvirtualhost.conf.j2 dest=/etc/httpd/conf.d/{{ item.name }}.conf owner=root group=root mode=0644
+    with_items: static_sites
+    notify:
+    - restart httpd
+    tags:
+    - qastaticsites
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+
diff --git a/playbooks/groups/qadevel-stg.yml b/playbooks/groups/qadevel-stg.yml
deleted file mode 100644
index 23bd453e6e..0000000000
--- a/playbooks/groups/qadevel-stg.yml
+++ /dev/null
@@ -1,68 +0,0 @@
----
-# create a new taskotron CI stg server
-# NOTE: make sure there is room/space for this server on the vmhost
-# NOTE: most of these vars_path come from group_vars/mirrorlist or from hostvars
-
-- name: make taskotron-ci staging
-  hosts: qadevel-stg
-  user: root
-  gather_facts: False
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  tasks:
-  - include: "{{ tasks }}/virt_instance_create.yml"
-
-  handlers:
-  - include: "{{ handlers }}/restart_services.yml"
-
-- name: make the box be real
-  hosts: qadevel-stg
-  user: root
-  gather_facts: True
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  roles:
-  - { role: base, tags: ['base'] }
-  - { role: rkhunter, tags: ['rkhunter'] }
-  - { role: nagios_client, tags: ['nagios_client'] }
-  - hosts
-  - { role: fas_client, tags: ['fas_client'] }
-  - { role: collectd/base, tags: ['collectd_base'] }
-  - { role: yum-cron, tags: ['yumcron'] }
-  - { role: sudo, tags: ['sudo'] }
-  - { role: phabricator, tags: ['phabricator'] }
-
-  tasks:
-  # this is how you include other task lists
-  - include: "{{ tasks }}/yumrepos.yml"
-  - include: "{{ tasks }}/2fa_client.yml"
-  - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
-
-  handlers:
-  - include: "{{ handlers }}/restart_services.yml"
-
-#- name: configure taskotron-ci master
-#  hosts: qadevel-stg
-#  user: root
-#  gather_facts: True
-#
-#  vars_files:
-#  - /srv/web/infra/ansible/vars/global.yml
-#  - "/srv/private/ansible/vars.yml"
-#  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-#
-#  roles:
-#  - { role: taskotron/buildmaster, tags: ['buildmaster'] }
-#  - { role: taskotron/buildmaster-configure, tags: ['buildmasterconfig'] }
-#
-#  handlers:
-#  - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/qadevel.yml b/playbooks/groups/qadevel.yml
index a029d976ba..3f180dc56b 100644
--- a/playbooks/groups/qadevel.yml
+++ b/playbooks/groups/qadevel.yml
@@ -3,7 +3,7 @@
 # NOTE: make sure there is room/space for this server on the vmhost
 # NOTE: most of these vars_path come from group_vars/mirrorlist or from hostvars
 
-- name: make taskotron-ci staging
+- name: ensure qadevel instance is created
   hosts: qadevel
   user: root
   gather_facts: False
@@ -38,18 +38,37 @@
   - { role: collectd/base, tags: ['collectd_base'] }
   - { role: yum-cron, tags: ['yumcron'] }
   - { role: sudo, tags: ['sudo'] }
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
 
-- name: configure taskotron-ci master
+- name: configure phabricator
+  hosts: qadevel
+  user: root
+
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  roles:
+  - { role: mariadb_server, tags: ['mariadb'] }
+  - { role: phabricator, tags: ['phabricator'] }
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+
+- name: configure qadevel buildmaster
   hosts: qadevel
   user: root
   gather_facts: True
@@ -66,7 +85,7 @@
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
 
-- name: configure taskotron-ci local slave
+- name: configure qadevel local slave
   hosts: qadevel
   user: root
   gather_facts: True
@@ -82,3 +101,47 @@
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
+
+- name: configure static sites for qadevel
+  hosts: qadevel
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - name: ensure ServerName is set in ssl.conf
+    replace: dest=/etc/httpd/conf.d/ssl.conf regexp='^#ServerName .*$' replace='ServerName {{ external_hostname }}:443'
+    notify:
+    - restart httpd
+    tags:
+    - qastaticsites
+
+  - name: ensure ServerName is set in httpd.conf
+    replace: dest=/etc/httpd/conf/httpd.conf regexp='^#ServerName .*$' replace='ServerName {{ external_hostname }}:443'
+    notify:
+    - restart httpd
+    tags:
+    - qastaticsites
+
+  - name: create dirs for static sites
+    file: path={{ item.document_root }} state=directory owner=apache group=apache mode=1755
+    with_items: static_sites
+    tags:
+    - qastaticsites
+
+  - name: generate virtualhosts for static sites
+    template: src={{ files }}/httpd/newvirtualhost.conf.j2 dest=/etc/httpd/conf.d/{{ item.name }}.conf owner=root group=root mode=0644
+    with_items: static_sites
+    notify:
+    - restart httpd
+    tags:
+    - qastaticsites
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+
diff --git a/playbooks/groups/releng-compose.yml b/playbooks/groups/releng-compose.yml
index 9919a70fe5..c8830447bc 100644
--- a/playbooks/groups/releng-compose.yml
+++ b/playbooks/groups/releng-compose.yml
@@ -37,8 +37,8 @@
     mnt_dir: '/mnt/fedora_koji'
     nfs_src_dir: 'fedora_koji'
   - role: nfs/client
-    mnt_dir: '/pub/alt'
-    nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/alt'
+    mnt_dir: '/pub'
+    nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub'
     when: not inventory_hostname.startswith('arm01')
   - releng
diff --git a/playbooks/groups/resultsdb-dev.yml b/playbooks/groups/resultsdb-dev.yml
index e0ed269eef..94441cb100 100644
--- a/playbooks/groups/resultsdb-dev.yml
+++ b/playbooks/groups/resultsdb-dev.yml
@@ -38,13 +38,13 @@
   - { role: collectd/base, tags: ['collectd_base'] }
   - { role: yum-cron, tags: ['yumcron'] }
   - { role: sudo, tags: ['sudo'] }
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/resultsdb-prod.yml b/playbooks/groups/resultsdb-prod.yml
index a1d8cf79ee..0a09571948 100644
--- a/playbooks/groups/resultsdb-prod.yml
+++ b/playbooks/groups/resultsdb-prod.yml
@@ -39,13 +39,13 @@
   - { role: yum-cron, tags: ['yumcron'] }
   - { role: sudo, tags: ['sudo'] }
   - role: openvpn/client
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
@@ -63,6 +63,7 @@
   roles:
   - { role: taskotron/resultsdb-backend, tags: ['resultsdb-be'] }
   - { role: taskotron/resultsdb-frontend, tags: ['resultsdb-fe'] }
+  - { role: taskotron/execdb, tags: ['execdb'] }
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/resultsdb-stg.yml b/playbooks/groups/resultsdb-stg.yml
index 1ccbcd8844..4fdab5230d 100644
--- a/playbooks/groups/resultsdb-stg.yml
+++ b/playbooks/groups/resultsdb-stg.yml
@@ -38,13 +38,13 @@
   - { role: collectd/base, tags: ['collectd_base'] }
   - { role: yum-cron, tags: ['yumcron'] }
   - { role: sudo, tags: ['sudo'] }
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/retrace.yml b/playbooks/groups/retrace.yml
index 67519ce3e9..11375228b0 100644
--- a/playbooks/groups/retrace.yml
+++ b/playbooks/groups/retrace.yml
@@ -14,9 +14,10 @@
   - hosts
   - fas_client
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - sudo
+  - fedmsg/base
 
   tasks:
   - include: "{{ tasks }}/2fa_client.yml"
diff --git a/playbooks/groups/secondary.yml b/playbooks/groups/secondary.yml
new file mode 100644
index 0000000000..81479b635d
--- /dev/null
+++ b/playbooks/groups/secondary.yml
@@ -0,0 +1,77 @@
+- name: make secondary arch download
+  hosts: secondary
+  user: root
+
+  gather_facts: False
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - include: "{{ tasks }}/virt_instance_create.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+- name: setup secondary arch download server
+  hosts: secondary
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - "/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml"
+
+  roles:
+  - base
+  - rkhunter
+  - nagios_client
+  - hosts
+  - fas_client
+  - collectd/base
+  - download
+  - rsyncd
+  - sudo
+  - { role: nfs/client,
+      mnt_dir: '/srv/pub/archive',
+      nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/archive' }
+  - { role: nfs/client,
+      mnt_dir: '/srv/pub/alt',
+      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3",
+      nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/alt' }
+  - { role: nfs/client,
+      mnt_dir: '/srv/pub/fedora-secondary',
+      nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3",
+      nfs_src_dir: 'fedora_ftp/fedora.redhat.com/pub/fedora-secondary' }
+
+  - role: apache
+
+  - role: httpd/mod_ssl
+
+  - role: httpd/certificate
+    name: wildcard-2014.fedoraproject.org
+    SSLCertificateChainFile: wildcard-2014.fedoraproject.org.intermediate.cert
+
+  - role: httpd/website
+    name: secondary.fedoraproject.org
+    cert_name: "{{wildcard_cert_name}}"
+    server_aliases:
+    - alt.fedoraproject.org
+    - archive.fedoraproject.org
+    - archives.fedoraproject.org
+
+  tasks:
+  - include: "{{ tasks }}/yumrepos.yml"
+  - include: "{{ tasks }}/2fa_client.yml"
+  - include: "{{ tasks }}/motd.yml"
+
+  - name: Install some misc packages needed for various tasks
+    yum: pkg={{ item }} state=present
+    with_items:
+    - createrepo
+    - koji
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/manual/sign-bridge.yml b/playbooks/groups/sign-bridge.yml
similarity index 100%
rename from playbooks/manual/sign-bridge.yml
rename to playbooks/groups/sign-bridge.yml
diff --git a/playbooks/groups/smtp-mm.yml b/playbooks/groups/smtp-mm.yml
index 7332e64e08..7468f8b5bb 100644
--- a/playbooks/groups/smtp-mm.yml
+++ b/playbooks/groups/smtp-mm.yml
@@ -29,7 +29,7 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - hosts
   - fas_client
diff --git a/playbooks/groups/statscache.yml b/playbooks/groups/statscache.yml
new file mode 100644
index 0000000000..f84afda0b6
--- /dev/null
+++ b/playbooks/groups/statscache.yml
@@ -0,0 +1,64 @@
+- name: make statscache server
+  hosts: statscache;statscache-stg
+  user: root
+  gather_facts: False
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - include: "{{ tasks }}/virt_instance_create.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+- name: make the box be real
+  hosts: statscache;statscache-stg
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  roles:
+  - base
+  - rkhunter
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
+  - nagios_client
+  - hosts
+  - fas_client
+  - collectd/base
+  - rsyncd
+  - sudo
+  - apache
+  - fedmsg/base
+
+  tasks:
+  - include: "{{ tasks }}/yumrepos.yml"
+  - include: "{{ tasks }}/2fa_client.yml"
+  - include: "{{ tasks }}/motd.yml"
+  - include: "{{ tasks }}/mod_wsgi.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+#- name: dole out the service specific config
+#  hosts: statscache;statscache-stg
+#  user: root
+#  gather_facts: True
+#
+#  vars_files:
+#  - /srv/web/infra/ansible/vars/global.yml
+#  - "/srv/private/ansible/vars.yml"
+#  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+#
+#  roles:
+#  - statscache
+#
+#  handlers:
+#  - include: "{{ handlers }}/restart_services.yml"
+
diff --git a/playbooks/groups/summershum.yml b/playbooks/groups/summershum.yml
index 8bc1e16b24..a503723aa4 100644
--- a/playbooks/groups/summershum.yml
+++ b/playbooks/groups/summershum.yml
@@ -32,7 +32,7 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - collectd/base
   - hosts
diff --git a/playbooks/groups/sundries.yml b/playbooks/groups/sundries.yml
index e359f3f892..24ab8be5b2 100644
--- a/playbooks/groups/sundries.yml
+++ b/playbooks/groups/sundries.yml
@@ -37,6 +37,7 @@
   - hosts
   - fas_client
   - collectd/base
+  - apache
   - geoip
   - geoip-city-wsgi/app
   - role: koji_reminder
@@ -46,18 +47,26 @@
   - role: fedora_owner_change
     when: master_sundries_node and env != "staging"
   - rsyncd
-  - mirrormanager/frontend
   - freemedia
   - sudo
   - pager_server
   - { role: openvpn/client, when: env != "staging" }
+  - role: review-stats/build
+    when: master_sundries_node
+  - role: zanata
+    when: master_sundries_node
+  - role: fedora-web/build
+    when: master_sundries_node
+  - role: fedora-docs/build
+    when: master_sundries_node
+  - role: membership-map/build
+    when: master_sundries_node
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
   - include: "{{ tasks }}/mod_wsgi.yml"
 
   handlers:
diff --git a/playbooks/groups/tagger.yml b/playbooks/groups/tagger.yml
index 2ce0e26109..ed5b57277b 100644
--- a/playbooks/groups/tagger.yml
+++ b/playbooks/groups/tagger.yml
@@ -32,7 +32,7 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - hosts
   - fas_client
@@ -41,12 +41,12 @@
   - sudo
   - { role: openvpn/client, when: env != "staging" }
+  - apache
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
   - include: "{{ tasks }}/mod_wsgi.yml"
 
   handlers:
diff --git a/playbooks/groups/taskotron-dev.yml b/playbooks/groups/taskotron-dev.yml
index 25edd43a12..b88ee0cc3e 100644
--- a/playbooks/groups/taskotron-dev.yml
+++ b/playbooks/groups/taskotron-dev.yml
@@ -38,13 +38,13 @@
   - { role: collectd/base, tags: ['collectd_base'] }
   - { role: yum-cron, tags: ['yumcron'] }
   - { role: sudo, tags: ['sudo'] }
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/taskotron-prod.yml b/playbooks/groups/taskotron-prod.yml
index 6a04cceb70..02dc7dd7eb 100644
--- a/playbooks/groups/taskotron-prod.yml
+++ b/playbooks/groups/taskotron-prod.yml
@@ -40,13 +40,13 @@
   - { role: sudo, tags: ['sudo'] }
   - { role: openvpn/client,
       when: env != "staging", tags: ['openvpn_client'] }
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/taskotron-stg.yml b/playbooks/groups/taskotron-stg.yml
index a42abb41d0..3859c62bd1 100644
--- a/playbooks/groups/taskotron-stg.yml
+++ b/playbooks/groups/taskotron-stg.yml
@@ -38,13 +38,13 @@
   - { role: collectd/base, tags: ['collectd_base'] }
   - { role: yum-cron, tags: ['yumcron'] }
   - { role: sudo, tags: ['sudo'] }
+  - apache
 
   tasks:
   # this is how you include other task lists
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/torrent.yml b/playbooks/groups/torrent.yml
new file mode 100644
index 0000000000..92a9a42801
--- /dev/null
+++ b/playbooks/groups/torrent.yml
@@ -0,0 +1,57 @@
+- name: make torrent server
+  hosts: torrent
+  user: root
+  gather_facts: False
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - include: "{{ tasks }}/virt_instance_create.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+- name: make the box be real
+  hosts: torrent
+  user: root
+  gather_facts: True
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  roles:
+  - base
+  - hosts
+  - rkhunter
+  - nagios_client
+  - fas_client
+  - collectd/base
+  - rsyncd
+  - sudo
+  - openvpn/client
+  - torrent
+  - apache
+
+  - role: httpd/mod_ssl
+
+  - role: httpd/certificate
+    name: wildcard-2014.fedoraproject.org
+    SSLCertificateChainFile: wildcard-2014.fedoraproject.org.intermediate.cert
+
+  - role: httpd/website
+    name: torrent.fedoraproject.org
+    cert_name: "{{wildcard_cert_name}}"
+    sslonly: true
+
+  tasks:
+  - include: "{{ tasks }}/yumrepos.yml"
+  - include: "{{ tasks }}/2fa_client.yml"
+  - include: "{{ tasks }}/motd.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/twisted-buildbots.yml b/playbooks/groups/twisted-buildbots.yml
new file mode 100644
index 0000000000..53cf7078e5
--- /dev/null
+++ b/playbooks/groups/twisted-buildbots.yml
@@ -0,0 +1,36 @@
+- name: check/create instances
+  hosts: twisted-buildbots
+  gather_facts: False
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - /srv/private/ansible/vars.yml
+  - /srv/web/infra/ansible/vars/fedora-cloud.yml
+  - /srv/private/ansible/files/openstack/passwords.yml
+
+  tasks:
+  - include: "{{ tasks }}/persistent_cloud_new.yml"
+
+- name: setup all the things
+  hosts: twisted-buildbots
+  gather_facts: True
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - /srv/private/ansible/vars.yml
+  - /srv/private/ansible/files/openstack/passwords.yml
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  pre_tasks:
+  - include: "{{ tasks }}/cloud_setup_basic.yml"
+  - name: set hostname (required by some services, at least postfix need it)
+    shell: "hostname {{inventory_hostname}}"
+
+  tasks:
+
+  - name: add twisted key
+    authorized_key: user=root key="{{ item }}"
+    with_file:
+    - /srv/web/infra/ansible/files/twisted/ssh-pub-key
+    tags:
+    - config
+    - sshkeys
diff --git a/playbooks/groups/value.yml b/playbooks/groups/value.yml
index fef651590c..5589c12840 100644
--- a/playbooks/groups/value.yml
+++ b/playbooks/groups/value.yml
@@ -32,6 +32,7 @@
   - hosts
   - fas_client
   - collectd/base
+  - apache
   - fedmsg/base
   - fedmsg/irc
   - supybot
@@ -41,12 +42,12 @@
     when: env != "staging" }
   - role: collectd/fedmsg-service
     process: fedmsg-irc
+  - mote
 
   tasks:
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/groups/virthost.yml b/playbooks/groups/virthost.yml
index 12b1e55c8d..c93f09a33b 100644
--- a/playbooks/groups/virthost.yml
+++ b/playbooks/groups/virthost.yml
@@ -3,7 +3,7 @@
 # NOTE: most of these vars_path come from group_vars/backup_server or from hostvars
 
 - name: make virthost server system
-  hosts: virthost:bvirthost:buildvmhost:virthost-comm03.qa.fedoraproject.org:virthost-comm04.qa.fedoraproject.org:ibiblio01.fedoraproject.org:ibiblio04.fedoraproject.org:serverbeach07.fedoraproject.org:coloamer01.fedoraproject.org:osuosl03.fedoraproject.org:host1plus01.fedoraproject.org:dedicatedsolutions01.fedoraproject.org:tummy01.fedoraproject.org:osuosl01.fedoraproject.org:virthost-comm02.qa.fedoraproject.org:serverbeach06.fedoraproject.org:serverbeach08.fedoraproject.org:virthost-s390.qa.fedoraproject.org:osuosl02.fedoraproject.org:serverbeach10.fedoraproject.org
+  hosts: virthost:bvirthost:buildvmhost:virthost-comm03.qa.fedoraproject.org:virthost-comm04.qa.fedoraproject.org:ibiblio01.fedoraproject.org:ibiblio04.fedoraproject.org:ibiblio03.fedoraproject.org:serverbeach07.fedoraproject.org:coloamer01.fedoraproject.org:osuosl03.fedoraproject.org:host1plus01.fedoraproject.org:dedicatedsolutions01.fedoraproject.org:tummy01.fedoraproject.org:osuosl01.fedoraproject.org:virthost-comm02.qa.fedoraproject.org:serverbeach06.fedoraproject.org:serverbeach08.fedoraproject.org:virthost-s390.qa.fedoraproject.org:osuosl02.fedoraproject.org:serverbeach10.fedoraproject.org:serverbeach09.fedoraproject.org:ibiblio05.fedoraproject.org:virthost19.phx2.fedoraproject.org:virthost20.phx2.fedoraproject.org:virthost21.phx2.fedoraproject.org:virthost22.phx2.fedoraproject.org
   user: root
   gather_facts: True
@@ -15,7 +15,7 @@
   roles:
   - base
   - rkhunter
-  - { role: denyhosts, when: ansible_distribution_major_version != '7' }
+  - { role: denyhosts, when: ansible_distribution_major_version|int != 7 }
   - nagios_client
   - hosts
   - fas_client
diff --git a/playbooks/groups/wiki.yml b/playbooks/groups/wiki.yml
index 06d0423ad5..5adb880ba9 100644
--- a/playbooks/groups/wiki.yml
+++ b/playbooks/groups/wiki.yml
@@ -40,6 +40,7 @@
   - fedmsg/base
   - { role: nfs/client, when: env == "staging", mnt_dir: '/mnt/web/attachments', nfs_src_dir: 'fedora_app_staging/app/attachments' }
   - { role: nfs/client, when: env != "staging", mnt_dir: '/mnt/web/attachments', nfs_src_dir: 'fedora_app/app/attachments' }
+  - apache
   - mediawiki
   - sudo
   - { role: openvpn/client,
@@ -49,7 +50,6 @@
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/2fa_client.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
 
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/host_reboot.yml b/playbooks/host_reboot.yml
index a974b4155c..4f30a602ee 100644
--- a/playbooks/host_reboot.yml
+++ b/playbooks/host_reboot.yml
@@ -8,7 +8,7 @@
 
   tasks:
   - name: tell nagios to shush
-    nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}
+    nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
     delegate_to: noc01.phx2.fedoraproject.org
     ignore_errors: true
@@ -22,6 +22,6 @@
     command: ntpdate -u 66.187.233.4
 
   - name: tell nagios to unshush
-    nagios: action=unsilence service=host host={{ inventory_hostname }}
+    nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
     delegate_to: noc01.phx2.fedoraproject.org
     ignore_errors: true
diff --git a/playbooks/hosts/209.132.184.148.yml b/playbooks/hosts/209.132.184.148.yml
deleted file mode 100644
index dea2091249..0000000000
--- a/playbooks/hosts/209.132.184.148.yml
+++ /dev/null
@@ -1,28 +0,0 @@
-- name: check/create instance
-  hosts: 209.132.184.148
-  user: root
-  gather_facts: False
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-
-  tasks:
-  - include: "{{ tasks }}/persistent_cloud.yml"
-
-- name: provision instance
-  hosts: 209.132.184.148
-  user: root
-  gather_facts: True
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars//{{ ansible_distribution }}.yml
-
-  tasks:
-  - include: "{{ tasks }}/cloud_setup_basic.yml"
-  # fill in other actions/includes/etc here
-  #
-  # handlers:
-  # - include: "{{ handlers }}/restart_services.yml
diff --git a/playbooks/hosts/artboard.cloud.fedoraproject.org.yml b/playbooks/hosts/artboard.cloud.fedoraproject.org.yml
deleted file mode 100644
index 406981aded..0000000000
--- a/playbooks/hosts/artboard.cloud.fedoraproject.org.yml
+++ /dev/null
@@ -1,78 +0,0 @@
-- name: check/create instance
-  hosts: 209.132.184.143
-  user: root
-  gather_facts: False
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-
-  tasks:
-  - include: "{{ tasks }}/persistent_cloud.yml"
-  - include: "{{ tasks }}/growroot_cloud.yml"
-
-- name: provision instance
-  hosts: 209.132.184.143
-  user: root
-  gather_facts: True
-
-  vars_files:
-  - /srv/web/infra/ansible/vars/global.yml
-  - "/srv/private/ansible/vars.yml"
-  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  tasks:
-  - include: "{{ tasks }}/cloud_setup_basic.yml"
-  - include: "{{ tasks }}/postfix_basic.yml"
-
-  - name: mount up disk of persistent storage
-    action: mount name=/srv/persist src='LABEL=artboard' fstype=ext4 state=mounted
-
-  # open up ports (22, 80, 443)
-  - name: poke holes in the firewall
-    action: command lokkit {{ item }}
-    with_items:
-    - --service=ssh
-    - --service=https
-    - --service=http
-
-  # packages needed
-  - name: add packages
-    action: yum state=present name={{ item }}
-    with_items:
-    - rsync
-    - openssh-clients
-    - httpd
-    - httpd-tools
-    - php
-    - php-gd
-    - php-mysql
-    - cronie-noanacron
-
-  # packages needed to be gone
-  - name: erase packages
-    action: yum state=absent name={{ item }}
-    with_items:
-    - cronie-anacron
-
-  - name: artboard backup thing
-    action: copy src="{{ files }}/artboard/artboard-backup" dest=/etc/cron.daily/artboard-backup mode=0755
-
-  - name: make artboard subdir
-    action: file path=/srv/persist/artboard mode=0755 state=directory
-
-  - name: link artboard into /var/www/html
-    action: file state=link src=/srv/persist/artboard path=/var/www/html/artboard
-
-  - name: add apache confs
-    action: copy
src="{{ files }}/artboard/{{ item }}" dest="/etc/httpd/conf.d/{{ item }}" backup=true - with_items: - - artboard.conf - - redirect.conf - notify: restart httpd - - - name: startup apache - action: service name=httpd state=started - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/artboard.fedorainfracloud.org.yml b/playbooks/hosts/artboard.fedorainfracloud.org.yml new file mode 100644 index 0000000000..227255f12c --- /dev/null +++ b/playbooks/hosts/artboard.fedorainfracloud.org.yml @@ -0,0 +1,122 @@ +- name: check/create instance + hosts: artboard.fedorainfracloud.org + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: artboard.fedorainfracloud.org + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" + + tasks: + + - name: Install common scripts + copy: src={{ item }} dest=/usr/local/bin/ owner=root group=root mode=0755 + with_fileglob: + - "{{ roles }}/base/files/common-scripts/*" + tags: + - config + - base + - artboard + + - name: set sebooleans so artboard can talk to the db + seboolean: name=httpd_can_network_connect_db state=true persistent=true + tags: + - selinux + - artboard + + - name: mount up disk of persistent storage + mount: name=/srv/persist src='LABEL=artboard' fstype=ext4 state=mounted + tags: + - artboard + + - name: check the selinux context of the artboard dirs + command: 
matchpathcon "/srv/persist/artboard/(.*)" + register: webcontext + always_run: yes + changed_when: false + tags: + - config + - selinux + - artboard + + - name: set the SELinux policy for the artboard web dir + command: semanage fcontext -a -t httpd_sys_content_t "/srv/persist/artboard/(.*)" + when: webcontext.stdout.find('httpd_sys_content_t') == -1 + tags: + - config + - selinux + - artboard + + # packages needed + - name: add packages + yum: state=present name={{ item }} + with_items: + - rsync + - openssh-clients + - httpd + - httpd-tools + - php + - php-gd + - php-mysql + - cronie-noanacron + - mod_ssl + tags: + - artboard + + # packages needed to be gone + - name: erase packages + yum: state=absent name={{ item }} + with_items: + - cronie-anacron + tags: + - artboard + + - name: artboard backup thing + copy: src="{{ files }}/artboard/artboard-backup" dest=/etc/cron.daily/artboard-backup mode=0755 + tags: + - artboard + + - name: make artboard subdir + file: path=/srv/persist/artboard mode=0755 state=directory + tags: + - artboard + + - name: link artboard into /var/www/html + file: state=link src=/srv/persist/artboard path=/var/www/html/artboard + tags: + - artboard + + - name: add apache confs + copy: src="{{ files }}/artboard/{{ item }}" dest="/etc/httpd/conf.d/{{ item }}" backup=true + with_items: + - artboard.conf + - redirect.conf + notify: restart httpd + tags: + - artboard + + - name: startup apache + service: name=httpd state=started + tags: + - artboard + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/bodhi.dev.fedoraproject.org.yml b/playbooks/hosts/bodhi.dev.fedoraproject.org.yml deleted file mode 100644 index 41e8b3307a..0000000000 --- a/playbooks/hosts/bodhi.dev.fedoraproject.org.yml +++ /dev/null @@ -1,38 +0,0 @@ -- name: check/create instance - hosts: bodhi.dev.fedoraproject.org - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - 
"/srv/private/ansible/vars.yml" - - tasks: - - include: "{{ tasks }}/persistent_cloud.yml" - - include: "{{ tasks }}/growroot_cloud.yml" - -- name: provision instance - hosts: bodhi.dev.fedoraproject.org - user: root - gather_facts: True - vars: - - tcp_ports: [22, 443] - - udp_ports: [] - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ tasks }}/cloud_setup_basic.yml" - - include: "{{ tasks }}/postfix_basic.yml" - - # open up tcp ports - - name: poke holes in the firewall - action: command lokkit -p '{{ item }}:tcp' - with_items: - - "{{ tcp_ports }}" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml b/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml index 7895685aa9..00c36389ba 100644 --- a/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml +++ b/playbooks/hosts/cloud-noc01.cloud.fedoraproject.org.yml @@ -13,7 +13,7 @@ roles: - base - rkhunter - - { role: denyhosts, when: ansible_distribution_major_version != '7' } + - { role: denyhosts, when: ansible_distribution_major_version|int != 7 } - nagios_client - hosts - fas_client diff --git a/playbooks/hosts/communityblog.fedorainfracloud.org.yml b/playbooks/hosts/communityblog.fedorainfracloud.org.yml new file mode 100644 index 0000000000..1ee595851b --- /dev/null +++ b/playbooks/hosts/communityblog.fedorainfracloud.org.yml @@ -0,0 +1,26 @@ +- name: check/create instance + hosts: communityblog.fedorainfracloud.org + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: communityblog.fedorainfracloud.org + gather_facts: True + 
vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/darkserver-dev.fedorainfracloud.org.yml b/playbooks/hosts/darkserver-dev.fedorainfracloud.org.yml new file mode 100644 index 0000000000..cb1b22ca7b --- /dev/null +++ b/playbooks/hosts/darkserver-dev.fedorainfracloud.org.yml @@ -0,0 +1,27 @@ +- name: check/create instance + hosts: darkserver-dev.fedorainfracloud.org + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + - include: "{{ tasks }}/growroot_cloud_el7.yml" + +- name: setup all the things + hosts: darkserver-dev.fedorainfracloud.org + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml b/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml new file mode 100644 index 0000000000..2b85bcbb8f --- /dev/null +++ b/playbooks/hosts/data-analysis01.phx2.fedoraproject.org.yml @@ -0,0 +1,60 @@ +# This is a basic playbook + +- name: make basic box + hosts: data-analysis01.phx2.fedoraproject.org + user: root + gather_facts: True + + vars_files: + - 
/srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + - nagios_client + - hosts + - fas_client + - collectd/base + - sudo + + tasks: + - include: "{{ tasks }}/yumrepos.yml" + - include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + - include: "{{ handlers }}/semanage.yml" + +- name: get the system ready for data handling. + hosts: data-analysis01.phx2.fedoraproject.org + user: root + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - { role: nfs/client, mnt_dir: '/srv/fedora_stats', nfs_src_dir: 'fedora_stats' } + - geoip + + tasks: + - name: install needed packages + yum: pkg={{ item }} state=present + with_items: + - httpd + - mod_ssl + - awstats + - rsync + - httpd-tools + - openssh-clients + - emacs-nox + - emacs-git + - git + - bc + - python-geoip-geolite2 +## diff --git a/playbooks/hosts/devpi.cloud.fedoraproject.org.yml b/playbooks/hosts/devpi.cloud.fedoraproject.org.yml deleted file mode 100644 index d7831a80c2..0000000000 --- a/playbooks/hosts/devpi.cloud.fedoraproject.org.yml +++ /dev/null @@ -1,32 +0,0 @@ -- name: check/create instance - hosts: 209.132.184.166 - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - tasks: - - include: "{{ tasks }}/persistent_cloud.yml" - - include: "{{ tasks }}/growroot_cloud.yml" - -- name: provision instance - hosts: 209.132.184.166 - user: root - gather_facts: True - vars: - - tcp_ports: [22, 80, 443] - - udp_ports: [] - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ 
tasks }}/cloud_setup_basic.yml" - - include: "{{ tasks }}/postfix_basic.yml" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/dopr-dev.cloud.fedoraproject.org.yml b/playbooks/hosts/dopr-dev.cloud.fedoraproject.org.yml new file mode 100644 index 0000000000..dd0017bf70 --- /dev/null +++ b/playbooks/hosts/dopr-dev.cloud.fedoraproject.org.yml @@ -0,0 +1,38 @@ +#- name: clean known hosts +# hosts: dopr-stg +# remote_user: fedora +# sudo: True +# gather_facts: False +# +# tasks: +# - name: clean out old known_hosts for dopr-dev +# local_action: known_hosts path={{item}} host=dopr-dev.cloud.fedoraproject.org state=absent +# ignore_errors: True +# with_items: +# - /root/.ssh/known_hosts +# - /etc/ssh/ssh_known_hosts +# - name: clean out old known_hosts for dopr-dev ip +# local_action: known_hosts path={{item}} host=209.132.184.42 state=absent +# ignore_errors: True +# with_items: +# - /root/.ssh/known_hosts +# - /etc/ssh/ssh_known_hosts + +- name: provision dopr dev instance + hosts: dopr-stg + remote_user: fedora + sudo: True + gather_facts: True + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + + roles: + - base + - dopr + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/elections-dev.cloud.fedoraproject.org.yml b/playbooks/hosts/elections-dev.cloud.fedoraproject.org.yml deleted file mode 100644 index 5461c7d6e1..0000000000 --- a/playbooks/hosts/elections-dev.cloud.fedoraproject.org.yml +++ /dev/null @@ -1,56 +0,0 @@ -- name: check/create instance - hosts: 209.132.184.162 - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - tasks: - - include: "{{ tasks }}/persistent_cloud.yml" - - include: "{{ tasks }}/growroot_cloud.yml" - -- name: provision instance - hosts: 209.132.184.162 - user: root - 
gather_facts: True - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ tasks }}/cloud_setup_basic.yml" - - - name: mount up disk of persistent storage - action: mount name=/srv/persist src='LABEL=elections' fstype=ext4 state=mounted - - # open up ports (22, 80, 443) - - name: poke holes in the firewall - action: command lokkit {{ item }} - with_items: - - --service=ssh - - --service=https - - --service=http - - # packages needed - - name: add packages for repo - action: yum state=present name={{ item }} - with_items: - - rsync - - openssh-clients - - httpd - - httpd-tools - - cronie-noanacron - - postgresql-server - - python-psycopg2 - - python-sqlalchemy0.7 - - python-flask - - - name: startup apache - action: service name=httpd state=started - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/faitout.fedorainfracloud.org.yml b/playbooks/hosts/faitout.fedorainfracloud.org.yml new file mode 100644 index 0000000000..2ad2ffc408 --- /dev/null +++ b/playbooks/hosts/faitout.fedorainfracloud.org.yml @@ -0,0 +1,26 @@ +- name: check/create instance + hosts: faitout.fedorainfracloud.org + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: faitout.fedorainfracloud.org + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + 
shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml b/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml new file mode 100644 index 0000000000..5663bba216 --- /dev/null +++ b/playbooks/hosts/fas2-dev.fedorainfracloud.org.yml @@ -0,0 +1,26 @@ +- name: check/create instance + hosts: fas2-dev.fedorainfracloud.org + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: fas2-dev.fedorainfracloud.org + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml b/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml new file mode 100644 index 0000000000..b807d97f4f --- /dev/null +++ b/playbooks/hosts/fas3-dev.fedorainfracloud.org.yml @@ -0,0 +1,26 @@ +- name: check/create instance + hosts: fas3-dev.fedorainfracloud.org + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: fas3-dev.fedorainfracloud.org + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - 
/srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml b/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml index 69b38b4b72..9f4a18999e 100644 --- a/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml +++ b/playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml @@ -6,7 +6,8 @@ gather_facts: True vars_files: - - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/web/infra/ansible/vars/global.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml tasks: # This is in fact duplicate from compute nodes, just be sure in case we did not run @@ -16,7 +17,7 @@ - name: Create FS on Swift storage filesystem: fstype=ext4 dev=/dev/vg_server/swift_store - name: SSH authorized key for root user - authorized_key: user=root key="{{fed_cloud09_root_public_key}}" + authorized_key: user=root key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}" - name: deploy Open Stack controler hosts: fed-cloud09.cloud.fedoraproject.org @@ -32,11 +33,13 @@ vars_files: - /srv/web/infra/ansible/vars/global.yml - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - /srv/web/infra/ansible/vars/fedora-cloud.yml - /srv/private/ansible/files/openstack/passwords.yml roles: - #- rkhunter + - base + - rkhunter - nagios_client - fas_client - sudo @@ -57,8 +60,12 @@ - rootpw - name: Set the hostname action: hostname name={{ controller_hostname }} + - name: Deploy root private SSH key copy: src={{ private }}/files/openstack/fed-cloud09-root.key dest=/root/.ssh/id_rsa mode=600 owner=root group=root + - name: Deploy root public SSH key + copy: src={{ files }}/fedora-cloud/fed09-ssh-key.pub dest=/root/.ssh/id_rsa.pub mode=600 owner=root group=root + - authorized_key: user=root 
key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}" - name: install core pkgs action: yum state=present pkg={{ item }} @@ -91,7 +98,7 @@ # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-networking.html - service: name=NetworkManager state=stopped enabled=no - - service: name=network state=started enabled=yes + - service: name=network enabled=yes - service: name=firewalld state=stopped enabled=no ignore_errors: yes - service: name=iptables state=started enabled=yes @@ -151,12 +158,26 @@ # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-ntp.html - service: name=ntpd state=started enabled=yes - # http://docs.openstack.org/icehouse/install-guide/install/yum/content/basics-packages.html - - action: yum state=present name=https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm + # this two step can be done in one, but Ansible will then always show the action as changed + - name: download RDO release package + get_url: url=https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm dest=/root/ + - yum: state=present name=/root/rdo-release-icehouse-4.noarch.rpm + + - name: make sure epel-release is installed + get_url: url=http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm dest=/root/ + - yum: state=present name=/root/epel-release-latest-7.noarch.rpm + + - name: make sure latest openvswitch is installed + get_url: url=http://people.redhat.com/~lkellogg/rpms/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm dest=/root/ + - yum: state=present name=/root/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm + + - name: make sure latest openstack-utils is installed + get_url: url=https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-utils-2014.2-1.el7.centos.noarch.rpm dest=/root/ + - yum: state=present name=/root/openstack-utils-2014.2-1.el7.centos.noarch.rpm + - name: install basic openstack 
packages action: yum state=present name={{ item }} with_items: - - http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm - openstack-utils - openstack-selinux - openstack-packstack @@ -167,34 +188,34 @@ - openstack-neutron - openstack-nova-common - haproxy - - http://people.redhat.com/~lkellogg/rpms/openvswitch-2.3.1-2.git20150113.el7.x86_64.rpm - - https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-utils-2014.2-1.el7.centos.noarch.rpm - - yum: name=* state=latest - - name: add ssl cert - copy: src={{ private }}/files/openstack/fed-cloud09.pem dest=/etc/pki/tls/certs/fed-cloud09.pem mode=600 owner=rabbitmq group=root - - name: add ssl key - copy: src={{ private }}/files/openstack/fed-cloud09.key dest=/etc/pki/tls/private/fed-cloud09.key mode=600 owner=rabbitmq group=root - - name: add cert to ca-bundle.crt so plain curl works - copy: src={{ private }}/files/openstack/fed-cloud09.pem dest=/etc/pki/ca-trust/source/anchors/ mode=600 owner=root group=root - notify: - - update ca-trust + - name: install etckeeper + action: yum state=present name=etckeeper + - name: init etckeeper + shell: cd /etc && etckeeper init - - name: add ssl cert for keystone - copy: src={{ private }}/files/openstack/fed-cloud09.pem dest=/etc/pki/tls/certs/fed-cloud09-keystone.pem mode=644 owner=keystone group=root - - name: add ssl key for keystone - copy: src={{ private }}/files/openstack/fed-cloud09.key dest=/etc/pki/tls/private/fed-cloud09-keystone.key mode=600 owner=keystone group=root - - name: add ssl cert for neutron - copy: src={{ private }}/files/openstack/fed-cloud09.pem dest=/etc/pki/tls/certs/fed-cloud09-neutron.pem mode=600 owner=neutron group=root - - name: add ssl key for neutron - copy: src={{ private }}/files/openstack/fed-cloud09.key dest=/etc/pki/tls/private/fed-cloud09-neutron.key mode=600 owner=neutron group=root - - name: add ssl cert for nova - copy: src={{ private }}/files/openstack/fed-cloud09.pem 
dest=/etc/pki/tls/certs/fed-cloud09-nova.pem mode=600 owner=nova group=root - - name: add ssl key for nova - copy: src={{ private }}/files/openstack/fed-cloud09.key dest=/etc/pki/tls/private/fed-cloud09-nova.key mode=600 owner=nova group=root + + - name: add ssl cert files + copy: src={{ private }}/files/openstack/fedorainfracloud.org.{{item}} dest=/etc/pki/tls/certs/fedorainfracloud.org.{{item}} mode=0644 owner=root group=root + with_items: + - pem + - digicert.pem + - name: add ssl key file + copy: src={{ private }}/files/openstack/fedorainfracloud.org.key dest=/etc/pki/tls/private/fedorainfracloud.org.key mode=0600 owner=root group=root + + - name: allow services key access + acl: name=/etc/pki/tls/private/fedorainfracloud.org.key entity={{item}} etype=user permissions="r" state=present + with_items: + - keystone + - neutron + - nova + - rabbitmq + - cinder + - ceilometer + - swift - file: state=directory path=/var/www/pub mode=0755 - - copy: src={{ private }}/files/openstack/fed-cloud09.pem dest=/var/www/pub/ mode=644 + - copy: src={{ private }}/files/openstack/fedorainfracloud.org.pem dest=/var/www/pub/ mode=644 # http://docs.openstack.org/trunk/install-guide/install/yum/content/basics-database-controller.html - name: install mysql packages @@ -240,17 +261,20 @@ regexp="RABBITMQ_NODE_PORT" line=" 'RABBITMQ_NODE_PORTTTTT' => $port," backup=yes + - action: yum state=present pkg=mongodb-server + - ini_file: dest=/usr/lib/systemd/system/mongod.service section=Service option=PIDFile value=/var/run/mongodb/mongod.pid - lineinfile: dest=/usr/lib/python2.7/site-packages/packstack/puppet/templates/mongodb.pp regexp="pidfilepath" line=" pidfilepath => '/var/run/mongodb/mongod.pid'" insertbefore="^}" + - meta: flush_handlers # http://openstack.redhat.com/Quickstart - template: src={{ files }}/fedora-cloud/packstack-controller-answers.txt dest=/root/ owner=root mode=0600 - - authorized_key: user=root key="{{ lookup('file', files + '/fedora-cloud/fed09-ssh-key.pub') }}" - 
command: packstack --answer-file=/root/packstack-controller-answers.txt when: packstack_sucessfully_finished.stat.exists == False - file: path=/etc/packstack_sucessfully_finished state=touch + when: packstack_sucessfully_finished.stat.exists == False # FIXME we should really reboot here - name: Set shell to nova user to allow cold migrations @@ -273,196 +297,284 @@ # ceilometer - shell: source /root/keystonerc_admin && keystone service-list | grep ceilometer | awk '{print $2}' register: SERVICE_ID + always_run: yes + changed_when: false - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}' register: ENDPOINT_ID - - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8777' --adminurl 'https://{{ controller_hostname }}:8777' --internalurl 'https://{{ controller_hostname }}:8777' ) || true + always_run: yes + changed_when: false + - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8777' --adminurl 'https://{{ controller_publicname }}:8777' --internalurl 'https://{{ controller_publicname }}:8777' ) || true # cinder - shell: source /root/keystonerc_admin && keystone service-list | grep 'cinder ' | awk '{print $2}' register: SERVICE_ID + always_run: yes + changed_when: false - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}' register: ENDPOINT_ID - - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep 
-v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8776/v1/%(tenant_id)s' --adminurl 'https://{{ controller_hostname }}:8776/v1/%(tenant_id)s' --internalurl 'https://{{ controller_hostname }}:8776/v1/%(tenant_id)s' ) || true
+    always_run: yes
+    changed_when: false
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' --adminurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8776/v1/%(tenant_id)s' ) || true
   # cinderv2
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'cinderv2' | awk '{print $2}'
     register: SERVICE_ID
+    always_run: yes
+    changed_when: false
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8776/v2/%(tenant_id)s' --adminurl 'https://{{ controller_hostname }}:8776/v2/%(tenant_id)s' --internalurl 'https://{{ controller_hostname }}:8776/v2/%(tenant_id)s' ) || true
+    always_run: yes
+    changed_when: false
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' --adminurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8776/v2/%(tenant_id)s' ) || true
   # glance
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'glance' | awk '{print $2}'
     register: SERVICE_ID
+    always_run: yes
+    changed_when: false
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:9292' --adminurl 'https://{{ controller_hostname }}:9292' --internalurl 'https://{{ controller_hostname }}:9292' ) || true
+    always_run: yes
+    changed_when: false
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:9292' --adminurl 'https://{{ controller_publicname }}:9292' --internalurl 'https://{{ controller_publicname }}:9292' ) || true
   # neutron
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'neutron' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:9696/' --adminurl 'https://{{ controller_hostname }}:9696/' --internalurl 'https://{{ controller_hostname }}:9696/' ) || true
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:9696/' --adminurl 'https://{{ controller_publicname }}:9696/' --internalurl 'https://{{ controller_publicname }}:9696/' ) || true
   # nova
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'nova ' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8774/v2/%(tenant_id)s' --adminurl 'https://{{ controller_hostname }}:8774/v2/%(tenant_id)s' --internalurl 'https://{{ controller_hostname }}:8774/v2/%(tenant_id)s' ) || true
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' --adminurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' --internalurl 'https://{{ controller_publicname }}:8774/v2/%(tenant_id)s' ) || true
   # nova_ec2
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'nova_ec2' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8773/services/Cloud' --adminurl 'https://{{ controller_hostname }}:8773/services/Admin' --internalurl 'https://{{ controller_hostname }}:8773/services/Cloud' ) || true
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8773/services/Cloud' --adminurl 'https://{{ controller_publicname }}:8773/services/Admin' --internalurl 'https://{{ controller_publicname }}:8773/services/Cloud' ) || true
   # novav3
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'novav3' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8774/v3' --adminurl 'https://{{ controller_hostname }}:8774/v3' --internalurl 'https://{{ controller_hostname }}:8774/v3' ) || true
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8774/v3' --adminurl 'https://{{ controller_publicname }}:8774/v3' --internalurl 'https://{{ controller_publicname }}:8774/v3' ) || true
   # swift
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'swift ' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{controller_hostname}}:8080/v1/AUTH_%(tenant_id)s' --adminurl 'https://{{controller_hostname}}:8080' --internalurl 'https://{{controller_hostname}}:8080/v1/AUTH_%(tenant_id)s' ) || true
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{controller_publicname}}:8080/v1/AUTH_%(tenant_id)s' --adminurl 'https://{{controller_publicname}}:8080' --internalurl 'https://{{controller_publicname}}:8080/v1/AUTH_%(tenant_id)s' ) || true
   # swift_s3
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'swift_s3' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:8080' --adminurl 'https://{{ controller_hostname }}:8080' --internalurl 'https://{{ controller_hostname }}:8080' ) || true
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:8080' --adminurl 'https://{{ controller_publicname }}:8080' --internalurl 'https://{{ controller_publicname }}:8080' ) || true
   # keystone --- !!!!! we need to use ADMIN_TOKEN here - this MUST be last before we restart OS and set up haproxy
   - shell: source /root/keystonerc_admin && keystone service-list | grep 'keystone' | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: SERVICE_ID
   - shell: source /root/keystonerc_admin && keystone endpoint-list | grep {{SERVICE_ID.stdout}} | awk '{print $2}'
+    always_run: yes
+    changed_when: false
     register: ENDPOINT_ID
-  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=certfile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
-  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=keyfile value=/etc/pki/tls/private/fed-cloud09-keystone.key
-  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_hostname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone --os-token '{{ADMIN_TOKEN}}' --os-endpoint 'http://{{ controller_hostname }}:35357/v2.0' endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_hostname }}:5000/v2.0' --adminurl 'https://{{ controller_hostname }}:35357/v2.0' --internalurl 'https://{{ controller_hostname }}:5000/v2.0' ) || true
+  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=certfile value=/etc/haproxy/fedorainfracloud.org.combined
+  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=keyfile value=/etc/pki/tls/private/fedorainfracloud.org.key
+  - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=ca_certs value=/etc/pki/tls/private/fedorainfracloud.org.digicert.pem
+  - shell: source /root/keystonerc_admin && keystone endpoint-list |grep {{SERVICE_ID.stdout}} |grep -v {{ controller_publicname }} && (keystone endpoint-delete {{ENDPOINT_ID.stdout}} && keystone --os-token '{{ADMIN_TOKEN}}' --os-endpoint 'http://{{ controller_publicname }}:35357/v2.0' endpoint-create --region 'RegionOne' --service {{SERVICE_ID.stdout}} --publicurl 'https://{{ controller_publicname }}:5000/v2.0' --adminurl 'https://{{ controller_publicname }}:35357/v2.0' --internalurl 'https://{{ controller_publicname }}:5000/v2.0' ) || true
   - ini_file: dest=/etc/keystone/keystone.conf section=ssl option=enable value=True
-  - lineinfile: dest=/root/keystonerc_admin regexp="^export OS_AUTH_URL" line="export OS_AUTH_URL=https://{{ controller_hostname }}:5000/v2.0/"
-  - lineinfile: dest=/root/keystonerc_admin line="export OS_CACERT=/etc/pki/tls/certs/fed-cloud09-keystone.pem"
+  - lineinfile: dest=/root/keystonerc_admin regexp="^export OS_AUTH_URL" line="export OS_AUTH_URL=https://{{ controller_publicname }}:5000/v2.0/"
   # Setup sysconfig file for novncproxy
   - copy: src={{ files }}/fedora-cloud/openstack-nova-novncproxy dest=/etc/sysconfig/openstack-nova-novncproxy mode=644 owner=root group=root
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_base_url value=https://{{ controller_hostname }}:6080/vnc_auto.html
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_base_url value=https://{{ controller_publicname }}:6080/vnc_auto.html
   # set SSL for services
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_hostname }}:5000
+  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
   - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_host value={{ controller_hostname }}
-  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_admin_auth_url value=https://{{ controller_hostname }}:35357/v2.0
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_url value=https://{{ controller_hostname }}:9696
+  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+  - ini_file: dest=/etc/nova/nova.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_admin_auth_url value=https://{{ controller_publicname }}:35357/v2.0
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=neutron_url value=https://{{ controller_publicname }}:9696
   - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=osapi_compute_listen_port value=6774
   - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ec2_listen_port value=6773
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=glance_api_servers value=https://{{ controller_hostname }}:9292
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=cert value=/etc/pki/tls/certs/fed-cloud09-nova.pem
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=key value=/etc/pki/tls/private/fed-cloud09-nova.key
-  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_host value={{ controller_hostname }}
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=glance_api_servers value=https://{{ controller_publicname }}:9292
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=cert value=/etc/pki/tls/certs/fedorainfracloud.org.pem
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=key value=/etc/pki/tls/private/fedorainfracloud.org.key
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ca value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=novncproxy_host value={{ controller_publicname }}
   - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=ssl_only value=False
   - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=scheduler_default_filters value=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,DiskFilter
+  - ini_file: dest=/etc/nova/nova.conf section=DEFAULT option=default_floating_pool value=external
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_hostname }}:5000
+  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
   - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_host value={{ controller_hostname }}
-  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
+  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+  - ini_file: dest=/etc/glance/glance-api.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
   - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=bind_port value=7292
+  # configure Glance to use Swift as backend
+  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=default_store value=swift
+  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=stores value=glance.store.swift.Store
+  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_auth_address value=https://{{ controller_publicname }}:5000/v2.0
+  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_user value="services:swift"
+  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_key value="{{ SWIFT_PASS }}"
+  - ini_file: dest=/etc/glance/glance-api.conf section=DEFAULT option=swift_store_create_container_on_put value="True"
+  - shell: rsync /usr/share/glance/glance-api-dist-paste.ini /etc/glance/glance-api-paste.ini
+  - shell: rsync /usr/share/glance/glance-registry-dist-paste.ini /etc/glance/glance-registry-paste.ini
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_hostname }}:5000
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_host value={{ controller_hostname }}
+  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
+  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
   - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
+  - ini_file: dest=/etc/glance/glance-registry.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/glance/glance-cache.conf section=DEFAULT option=auth_url value=https://{{ controller_hostname }}:5000/v2.0
+  - ini_file: dest=/etc/glance/glance-cache.conf section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:5000/v2.0
-  - ini_file: dest=/etc/glance/glance-scrubber.conf section=DEFAULT option=auth_url value=https://{{ controller_hostname }}:5000/v2.0
+  - ini_file: dest=/etc/glance/glance-scrubber.conf section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:5000/v2.0
-  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_hostname }}:5000
+  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
   - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
-  - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=backup_swift_url value=https://{{ controller_hostname }}:8080/v1/AUTH_
+  - ini_file: dest=/etc/cinder/cinder.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+  - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=backup_swift_url value=https://{{ controller_publicname }}:8080/v1/AUTH_
   - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option=osapi_volume_listen_port value=6776
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_hostname }}:5000
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_hostname }}
+  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
   - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=auth_protocol value=https
   - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=service_protocol value=https
-  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_uri value=https://{{ controller_hostname }}:5000
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_host value={{ controller_hostname }}
+  - ini_file: dest=/etc/cinder/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_host value={{ controller_publicname }}
   - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=service_host value={{ controller_hostname }}
-  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
+  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=service_host value={{ controller_publicname }}
+  - ini_file: dest=/etc/cinder/api-paste.ini section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_hostname }}:5000
+  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
   - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_host value={{ controller_hostname }}
-  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_url value=https://{{ controller_hostname }}:8774/v2
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_admin_auth_url value=https://{{ controller_hostname }}:35357/v2.0
+  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+  - ini_file: dest=/etc/neutron/neutron.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_url value=https://{{ controller_publicname }}:8774/v2
+  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=nova_admin_auth_url value=https://{{ controller_publicname }}:35357/v2.0
   - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=use_ssl value=False
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_cert_file value=/etc/pki/tls/certs/fed-cloud09-neutron.pem
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_key_file value=/etc/pki/tls/private/fed-cloud09-neutron.key
-  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_ca_file value=/etc/pki/tls/certs/fed-cloud09-neutron.pem
+  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_cert_file value=/etc/pki/tls/certs/fedorainfracloud.org.pem
+  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_key_file value=/etc/pki/tls/private/fedorainfracloud.org.key
+  - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=ssl_ca_file value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
   - ini_file: dest=/etc/neutron/neutron.conf section=DEFAULT option=bind_port value=8696
+  - lineinfile: dest=/etc/neutron/neutron.conf regexp="^service_provider = LOADBALANCER" line="service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default" insertafter="\[service_providers]"
+  - lineinfile: dest=/etc/neutron/neutron.conf regexp="^service_provider = FIREWALL" line="service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default" insertafter="\[service_providers]"
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_hostname }}:5000
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_hostname }}
+  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
+  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
   - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
+  - ini_file: dest=/etc/neutron/api-paste.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
-  - ini_file: dest=/etc/neutron/metadata_agent.ini section="filter:authtoken" option=auth_url value=https://{{ controller_hostname }}:35357/v2.0
-  - ini_file: dest=/etc/neutron/metadata_agent.ini section=DEFAULT option=auth_url value=https://{{ controller_hostname }}:35357/v2.0
+  - ini_file: dest=/etc/neutron/metadata_agent.ini section="filter:authtoken" option=auth_url value=https://{{ controller_publicname }}:35357/v2.0
+  - ini_file: dest=/etc/neutron/metadata_agent.ini section=DEFAULT option=auth_url value=https://{{ controller_publicname }}:35357/v2.0
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_hostname }}:5000
+  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_uri value=https://{{ controller_publicname }}:5000
   - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_protocol value=https
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_host value={{ controller_hostname }}
-  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
+  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=auth_host value={{ controller_publicname }}
+  - ini_file: dest=/etc/swift/proxy-server.conf section="filter:authtoken" option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
   - ini_file: dest=/etc/swift/proxy-server.conf section=DEFAULT option=bind_port value=7080
   - ini_file: dest=/etc/swift/proxy-server.conf section=DEFAULT option=bind_ip value=127.0.0.1
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_hostname }}:5000
+  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_uri value=https://{{ controller_publicname }}:5000
   - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_protocol value=https
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_host value={{ controller_hostname }}
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fed-cloud09-keystone.pem
-  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=service_credentials option=os_auth_url value=https://{{ controller_hostname }}:35357/v2.0
+  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=auth_host value={{ controller_publicname }}
+  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=keystone_authtoken option=cafile value=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
+  - ini_file: dest=/etc/ceilometer/ceilometer.conf section=service_credentials option=os_auth_url value=https://{{ controller_publicname }}:35357/v2.0
   - ini_file: dest=/etc/ceilometer/ceilometer.conf section=api option=port value=6777
   # enable stunell to neutron
-  - shell: cat /etc/pki/tls/certs/fed-cloud09-keystone.pem /etc/pki/tls/private/fed-cloud09.key > /etc/haproxy/fed-cloud09.combined
-  - file: path=/etc/haproxy/fed-cloud09.combined owner=haproxy mode=644
+  - shell: cat /etc/pki/tls/certs/fedorainfracloud.org.pem /etc/pki/tls/certs/fedorainfracloud.org.digicert.pem /etc/pki/tls/private/fedorainfracloud.org.key > /etc/haproxy/fedorainfracloud.org.combined
+  - file: path=/etc/haproxy/fedorainfracloud.org.combined owner=haproxy mode=644
   - copy: src={{ files }}/fedora-cloud/haproxy.cfg dest=/etc/haproxy/haproxy.cfg mode=644 owner=root group=root
   # first OS have to free ports so haproxy can bind it, then we start OS on modified ports
   - shell: openstack-service stop
   - service: name=haproxy state=started enabled=yes
   - shell: openstack-service start
-  - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="^OPENSTACK_KEYSTONE_URL " line="OPENSTACK_KEYSTONE_URL = 'https://{{controller_hostname}}:5000/v2.0'"
+  - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="^OPENSTACK_KEYSTONE_URL " line="OPENSTACK_KEYSTONE_URL = 'https://{{controller_publicname}}:5000/v2.0'"
     notify:
     - restart httpd
-  - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="OPENSTACK_SSL_CACERT " line="OPENSTACK_SSL_CACERT = '/etc/pki/tls/certs/fed-cloud09-keystone.pem'"
+  - lineinfile: dest=/etc/openstack-dashboard/local_settings regexp="OPENSTACK_SSL_CACERT " line="OPENSTACK_SSL_CACERT = '/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem'"
    notify:
    - restart httpd
-
   # configure cider with multi back-end
   # https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html
   - ini_file: dest=/etc/cinder/cinder.conf section=DEFAULT option="enabled_backends" value="equallogic-1,lvmdriver-1"
+    notify:
+    - restart cinder
   # LVM
   - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_group" value="cinder-volumes"
+    notify:
+    - restart cinder
   - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_driver" value="cinder.volume.drivers.lvm.LVMISCSIDriver"
+    notify:
+    - restart cinder
   - ini_file: dest=/etc/cinder/cinder.conf section="lvmdriver-1" option="volume_backend_name" value="LVM_iSCSI"
+    notify:
+    - restart cinder
   # Dell EqualLogic - http://docs.openstack.org/trunk/config-reference/content/dell-equallogic-driver.html
   - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="volume_driver" value="cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver"
+    notify:
+    - restart cinder
   - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_ip" value="{{ IP_EQLX }}"
+    notify:
+    - restart cinder
   - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_login" value="{{ SAN_UNAME }}"
+    notify:
+    - restart cinder
   - name: set password for equallogic-1
     ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="san_password" value="{{ SAN_PW }}"
+    notify:
+    - restart cinder
   - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="eqlx_group_name" value="{{ EQLX_GROUP }}"
+    notify:
+    - restart cinder
   - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="eqlx_pool" value="{{ EQLX_POOL }}"
-  - service: name=openstack-cinder-api state=restarted
-  - service: name=openstack-cinder-scheduler state=restarted
-  - service: name=openstack-cinder-volume state=restarted
+    notify:
+    - restart cinder
+  - ini_file: dest=/etc/cinder/cinder.conf section="equallogic-1" option="volume_backend_name" value="equallogic"
+    notify:
+    - restart cinder
+
+  # flush handlers here in case cinder changes and we need to restart it.
+  - meta: flush_handlers
+
+  # create storage types
+  # note that existing keys can be retrieved using: cinder extra-specs-list
+  - shell: source /root/keystonerc_admin && cinder type-create lvm
+    ignore_errors: yes
+  - shell: source /root/keystonerc_admin && cinder type-key lvm set volume_backend_name=lvm
+  - shell: source /root/keystonerc_admin && cinder type-create equallogic
+    ignore_errors: yes
+  - shell: source /root/keystonerc_admin && cinder type-key equallogic set volume_backend_name=equallogic
   # http://docs.openstack.org/icehouse/install-guide/install/yum/content/glance-verify.html
   - file: path=/root/images state=directory
@@ -470,7 +582,7 @@
   - name: Add the cirros-0.3.2-x86_64 image
     glance_image: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-                 auth_url="https://{{controller_hostname}}:35357/v2.0"
+                 auth_url="https://{{controller_publicname}}:35357/v2.0"
                  name=cirros-0.3.2-x86_64
                  disk_format=qcow2
                  is_public=True
@@ -479,39 +591,93 @@
   - name: create non-standard flavor
     nova_flavor: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-                auth_url="https://{{controller_hostname}}:35357/v2.0"
-                name="m1.builder" ram="5120" disk="50" vcpus="2" swap="5120"
+                auth_url="https://{{controller_publicname}}:35357/v2.0"
+                name="{{item.name}}" ram="{{item.ram}}" disk="{{item.disk}}" vcpus="{{item.vcpus}}" swap="{{item.swap}}"
+    with_items:
+    - { name: m1.builder, ram: 5120, disk: 50, vcpus: 2, swap: 5120 }
+    - { name: ms2.builder, ram: 5120, disk: 20, vcpus: 2, swap: 50000 }
+    - { name: m2.prepare_builder, ram: 5120, disk: 20, vcpus: 2, swap: 0 }
+    # same as m.* but with swap
+    - { name: ms1.tiny, ram: 512, disk: 1, vcpus: 1, swap: 512 }
+    - { name: ms1.small, ram: 2048, disk: 20, vcpus: 1, swap: 2048 }
+    - { name: ms1.medium, ram: 4096, disk: 40, vcpus: 2, swap: 4096 }
+    - { name: ms1.large, ram: 8192, disk: 50, vcpus: 4, swap: 4096 }
+    - { name: ms1.xlarge, ram: 16384, disk: 160, vcpus: 8, swap: 16384 }
+ # inspired by http://aws.amazon.com/ec2/instance-types/ + - { name: c4.large, ram: 3072, disk: 0, vcpus: 2, swap: 0 } + - { name: c4.xlarge, ram: 7168, disk: 0, vcpus: 4, swap: 0 } + - { name: c4.2xlarge, ram: 14336, disk: 0, vcpus: 8, swap: 0 } + - { name: r3.large, ram: 16384, disk: 32, vcpus: 2, swap: 16384 } + ##### download common Images ##### - - get_url: url=http://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2 dest=/root/images/Fedora-x86_64-20-20140407-sda.qcow2 mode=0440 - - get_url: url=http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 dest=/root/images/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 mode=0440 - # RHEL6 can be downloaded from https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952 - # RHEL7 can be download from https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.0/x86_64/product-downloads + # restricted images (RHEL) are handled two steps below - name: Add the images glance_image: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin" - auth_url="https://{{controller_hostname}}:35357/v2.0" + auth_url="https://{{controller_publicname}}:35357/v2.0" name="{{ item.name }}" disk_format=qcow2 is_public=True - file="{{ item.file }}" + copy_from="{{ item.copy_from }}" with_items: - - name: fedora-cloud-64-20-20140407 - file: /root/images/Fedora-x86_64-20-20140407-sda.qcow2 - - name: Fedora-Cloud-Base-20141203-21 - file: /root/images/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 - # FIXME uncomment when you manualy download - #- name: rhel-guest-image-6.5-20140630.0.x86_64 - # file: /root/images/rhel-guest-image-6.5-20140630.0.x86_64.qcow2 - #- name: rhel-guest-image-7.0-20140618.1.x86_64 - # file: /root/images/rhel-guest-image-7.0-20140618.1.x86_64.qcow2 + - name: Fedora-x86_64-20-20131211.1 + copy_from: 
https://dl.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2 + - name: Fedora-x86_64-20-20140407 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2 + - name: Fedora-Cloud-Base-20141203-21.x86_64 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 + - name: Fedora-Cloud-Base-20141203-21.i386 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2 + - name: Fedora-Cloud-Atomic-22_Alpha-20150305.x86_64 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Alpha/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Alpha-20150305.x86_64.qcow2 + - name: Fedora-Cloud-Base-22_Alpha-20150305.x86_64 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Alpha/Cloud/x86_64/Images/Fedora-Cloud-Base-22_Alpha-20150305.x86_64.qcow2 + - name: Fedora-Cloud-Atomic-22_Beta-20150415.x86_64 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Beta-20150415.x86_64.qcow2 + - name: Fedora-Cloud-Base-22_Beta-20150415.x86_64 + copy_from: https://dl.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Base-22_Beta-20150415.x86_64.qcow2 + - name: Fedora-Cloud-Atomic-22-20150521.x86_64 + copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2 + - name: Fedora-Cloud-Base-22-20150521.x86_64 + copy_from: http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2 + - name: CentOS-7-x86_64-GenericCloud-1503 + copy_from: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2 + - name: CentOS-6-x86_64-GenericCloud-20141129_01 + 
copy_from: http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-20141129_01.qcow2
+
+ # RHEL6 can be downloaded from https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952
+ - stat: path=/root/images/rhel-guest-image-6.6-20141222.0.x86_64.qcow2
+   register: rhel6_image
+ - name: Add the RHEL6 image
+   glance_image:
+     login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+     auth_url="https://{{controller_publicname}}:35357/v2.0"
+     name="rhel-guest-image-6.6-20141222.0.x86_64"
+     disk_format=qcow2
+     is_public=True
+     file="/root/images/rhel-guest-image-6.6-20141222.0.x86_64.qcow2"
+   when: rhel6_image.stat.exists == True
+
+ # RHEL7 can be downloaded from https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.0/x86_64/product-downloads
+ - stat: path=/root/images/rhel-guest-image-7.0-20140930.0.x86_64.qcow2
+   register: rhel7_image
+ - name: Add the RHEL7 image
+   glance_image:
+     login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
+     auth_url="https://{{controller_publicname}}:35357/v2.0"
+     name="rhel-guest-image-7.0-20140930.0.x86_64"
+     disk_format=qcow2
+     is_public=True
+     file="/root/images/rhel-guest-image-7.0-20140930.0.x86_64.qcow2"
+   when: rhel7_image.stat.exists == True

##### PROJECTS ######
- name: Create tenants
  keystone_user:
    login_user="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-   endpoint="https://{{controller_hostname}}:35357/v2.0"
+   endpoint="https://{{controller_publicname}}:35357/v2.0"
    tenant="{{ item.name }}"
    tenant_description="{{ item.desc }}"
    state=present
@@ -532,7 +698,7 @@
- name: Create users
  keystone_user:
    login_user="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-   endpoint="https://{{controller_hostname}}:35357/v2.0"
+   endpoint="https://{{controller_publicname}}:35357/v2.0"
    user="{{ item.name }}"
    email="{{ item.email }}"
    tenant="{{ item.tenant }}"
@@ -554,41 +720,50 @@
- { name: nb, email:
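The RHEL image tasks above use a `stat` + `when` guard because the qcow2 files are license-restricted and must be downloaded to the controller by hand. A minimal sketch of the same guard with a hypothetical path; note that `stat.exists` is already a boolean, so the `== True` comparison in the play is redundant:

```yaml
- stat: path=/root/images/example.qcow2    # hypothetical file
  register: example_image

- name: add the image only when the file is present
  glance_image:
    login_username: admin
    login_password: "{{ ADMIN_PASS }}"
    login_tenant_name: admin
    auth_url: "https://{{ controller_publicname }}:35357/v2.0"
    name: example
    disk_format: qcow2
    is_public: True
    file: /root/images/example.qcow2
  when: example_image.stat.exists
```

When the file is absent the task is simply skipped, so the playbook stays runnable on a freshly installed controller.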
'nb@fedoraproject.org', tenant: infrastructure, password: "{{nb_password}}" } - { name: pingou, email: 'pingou@pingoured.fr', tenant: infrastructure, password: "{{pingou_password}}" } - { name: puiterwijk, email: 'puiterwijk@fedoraproject.org', tenant: infrastructure, password: "{{puiterwijk_password}}" } + - { name: kushal, email: 'kushal@fedoraproject.org', tenant: infrastructure, password: "{{kushal_password}}" } - { name: red, email: 'red@fedoraproject.org', tenant: infrastructure, password: "{{red_password}}" } - { name: samkottler, email: 'samkottler@fedoraproject.org', tenant: infrastructure, password: "{{samkottler_password}}" } - { name: tflink, email: 'tflink@fedoraproject.org', tenant: qa, password: "{{tflink_password}}" } - { name: twisted, email: 'buildbot@twistedmatrix.com', tenant: pythonbots, password: "{{twisted_password}}" } - { name: vgologuz, email: 'vgologuz@redhat.com', tenant: copr, password: "{{vgologuz_password}}" } - { name: roshi, email: 'roshi@fedoraproject.org', tenant: qa, password: "{{roshi_password}}" } + - { name: maxamillion, email: 'maxamillion@fedoraproject.org', tenant: infrastructure, password: "{{maxamillion_password}}" } + tags: + - openstack_users - name: upload SSH keys for users nova_keypair: - auth_url="https://{{controller_hostname}}:35357/v2.0" - login_username="{{ item.name }}" + auth_url="https://{{controller_publicname}}:35357/v2.0" + login_username="{{ item.username }}" login_password="{{ item.password }}" login_tenant_name="{{item.tenant}}" name="{{ item.name }}" public_key="{{ item.public_key }}" ignore_errors: yes no_log: True with_items: - - { name: anthomas, tenant: cloudintern, password: "{{anthomas_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas anthomas') }}" } - - { name: ausil, tenant: infrastructure, password: "{{ausil_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas ausil') }}" } - - { name: codeblock, tenant: 
infrastructure, password: "{{codeblock_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas codeblock') }}" } - - { name: buildsys, tenant: copr, password: "{{copr_password}}", public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeTO0ddXuhDZYM9HyM0a47aeV2yIVWhTpddrQ7/RAIs99XyrsicQLABzmdMBfiZnP0FnHBF/e+2xEkT8hHJpX6bX81jjvs2bb8KP18Nh8vaXI3QospWrRygpu1tjzqZT0Llh4ZVFscum8TrMw4VWXclzdDw6x7csCBjSttqq8F3iTJtQ9XM9/5tCAAOzGBKJrsGKV1CNIrfUo5CSzY+IUVIr8XJ93IB2ZQVASK34T/49egmrWlNB32fqAbDMC+XNmobgn6gO33Yq5Ly7Dk4kqTUx2TEaqDkZfhsVu0YcwV81bmqsltRvpj6bIXrEoMeav7nbuqKcPLTxWEY/2icePF" } - - { name: gholms, tenant: cloudintern, password: "{{gholms_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas gholms') }}" } - - { name: jskladan, tenant: qa, password: "{{jskladan_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jskladan') }}" } - - { name: kevin, tenant: infrastructure, password: "{{kevin_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kevin') }}" } - - { name: laxathom, tenant: infrastructure, password: "{{laxathom_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas laxathom') }}" } - - { name: mattdm, tenant: infrastructure, password: "{{mattdm_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas mattdm') }}" } - - { name: msuchy, tenant: copr, password: "{{msuchy_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}" } - - { name: nb, tenant: infrastructure, password: "{{nb_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas nb') }}" } - - { name: pingou, tenant: infrastructure, password: "{{pingou_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas pingou') }}" } - - { name: 
puiterwijk, tenant: infrastructure, password: "{{puiterwijk_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas puiterwijk') }}" } - - { name: red, tenant: infrastructure, password: "{{red_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas red') }}" } - - { name: roshi, tenant: qa, password: "{{roshi_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas roshi') }}" } - - { name: samkottler, tenant: infrastructure, password: "{{samkottler_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas skottler') }}" } - - { name: tflink, tenant: qa, password: "{{tflink_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas tflink') }}" } - - { name: atomic, tenant: scratch, password: "{{cockpit_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas walters') }}" } + - { username: anthomas, name: anthomas, tenant: cloudintern, password: "{{anthomas_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas anthomas') }}" } + - { username: ausil, name: ausil, tenant: infrastructure, password: "{{ausil_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas ausil') }}" } + - { username: codeblock, name: codeblock, tenant: infrastructure, password: "{{codeblock_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas codeblock') }}" } + - { username: buildsys, name: buildsys, tenant: copr, password: "{{copr_password}}", public_key: "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCeTO0ddXuhDZYM9HyM0a47aeV2yIVWhTpddrQ7/RAIs99XyrsicQLABzmdMBfiZnP0FnHBF/e+2xEkT8hHJpX6bX81jjvs2bb8KP18Nh8vaXI3QospWrRygpu1tjzqZT0Llh4ZVFscum8TrMw4VWXclzdDw6x7csCBjSttqq8F3iTJtQ9XM9/5tCAAOzGBKJrsGKV1CNIrfUo5CSzY+IUVIr8XJ93IB2ZQVASK34T/49egmrWlNB32fqAbDMC+XNmobgn6gO33Yq5Ly7Dk4kqTUx2TEaqDkZfhsVu0YcwV81bmqsltRvpj6bIXrEoMeav7nbuqKcPLTxWEY/2icePF" } + - { username: gholms, name: gholms, tenant: cloudintern, password: "{{gholms_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas gholms') }}" } + - { username: jskladan, name: jskladan, tenant: qa, password: "{{jskladan_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas jskladan') }}" } + - { username: kevin, name: kevin, tenant: infrastructure, password: "{{kevin_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kevin') }}" } + - { username: maxamillion, name: maxamillion, tenant: infrastructure, password: "{{maxamillion_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas maxamillion') }}" } + - { username: laxathom, name: laxathom, tenant: infrastructure, password: "{{laxathom_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas laxathom') }}" } + - { username: mattdm, name: mattdm, tenant: infrastructure, password: "{{mattdm_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas mattdm') }}" } + - { username: msuchy, name: msuchy, tenant: copr, password: "{{msuchy_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas msuchy') }}" } + - { username: nb, name: nb, tenant: infrastructure, password: "{{nb_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas nb') }}" } + - { username: pingou, name: pingou, tenant: infrastructure, password: "{{pingou_password}}", 
public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas pingou') }}" } + - { username: puiterwijk, name: puiterwijk, tenant: infrastructure, password: "{{puiterwijk_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas puiterwijk') }}" } + - { username: kushal, name: kushal, tenant: infrastructure, password: "{{kushal_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas kushal') }}" } + - { username: red, name: red, tenant: infrastructure, password: "{{red_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas red') }}" } + - { username: roshi, name: roshi, tenant: qa, password: "{{roshi_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas roshi') }}" } + - { username: samkottler, name: samkottler, tenant: infrastructure, password: "{{samkottler_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas skottler') }}" } + - { username: tflink, name: tflink, tenant: qa, password: "{{tflink_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas tflink') }}" } + - { username: atomic, name: atomic, tenant: scratch, password: "{{cockpit_password}}", public_key: "{{ lookup('pipe', '/srv/web/infra/ansible/scripts/auth-keys-from-fas walters') }}" } # - { name: twisted, tenant: pythonbots, password: "{{twisted_password}}", public_key: "" } - - { name: admin, tenant: admin, password: "{{ADMIN_PASS}}", public_key: "{{ lookup('file', files + '/fedora-cloud/fedora-admin-20130801.pub') }}" } + - { username: admin, name: fedora-admin-20130801, tenant: admin, password: "{{ADMIN_PASS}}", public_key: "{{ lookup('file', files + '/fedora-cloud/fedora-admin-20130801.pub') }}" } + - { username: admin, name: asamalik, tenant: scratch, password: "{{ADMIN_PASS}}", public_key: "{{ lookup('pipe', 
'/srv/web/infra/ansible/scripts/auth-keys-from-fas asamalik') }}" }
+ tags:
+ - openstack_users

- name: Create roles for additional tenants
  shell: source /root/keystonerc_admin && keystone role-list |grep ' {{item}} ' || keystone role-create --name {{ item }}
@@ -596,7 +771,7 @@
- name: Assign users to secondary tenants
  shell: source /root/keystonerc_admin && keystone user-role-list --user "{{item.user}}" --tenant "{{item.tenant}}" | grep ' {{item.tenant }} ' || keystone user-role-add --user {{item.user}} --role {{item.tenant}} --tenant {{item.tenant}} || true
  #keystone_user:
- #  endpoint="https://{{controller_hostname}}:35357/v2.0"
+ #  endpoint="https://{{controller_publicname}}:35357/v2.0"
  #  login_user="admin" login_password="{{ ADMIN_PASS }}"
  #  role=coprdev user={{ item }} tenant=coprdev
  with_items:
@@ -629,6 +804,7 @@
  - { user: msuchy, tenant: qa }
  - { user: msuchy, tenant: scratch }
  - { user: msuchy, tenant: transient }
+ - { user: pingou, tenant: persistent }
  - { user: puiterwijk, tenant: cloudintern }
  - { user: puiterwijk, tenant: cloudsig }
  - { user: puiterwijk, tenant: copr }
@@ -653,7 +829,7 @@
- name: Create an external network
  neutron_network:
    login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-   auth_url="https://{{controller_hostname}}:35357/v2.0"
+   auth_url="https://{{controller_publicname}}:35357/v2.0"
    name=external
    router_external=True
    provider_network_type=flat
@@ -662,7 +838,7 @@
- name: Create an external subnet
  neutron_subnet:
    login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-   auth_url="https://{{controller_hostname}}:35357/v2.0"
+   auth_url="https://{{controller_publicname}}:35357/v2.0"
    name=external-subnet
    network_name=external
    cidr="{{ public_interface_cidr }}"
@@ -694,21 +870,21 @@
- name: Create a router for all tenants
  neutron_router:
    login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin"
-   auth_url="https://{{controller_hostname}}:35357/v2.0"
+
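The networking section above stamps out one `ext-to-<tenant>` router per tenant with `with_items: all_tenants`, then points each router's gateway at the shared external network. A trimmed sketch of that pair, assuming `all_tenants` is a list of tenant names:

```yaml
- name: create a router for each tenant
  neutron_router:
    login_username: admin
    login_password: "{{ ADMIN_PASS }}"
    login_tenant_name: admin
    auth_url: "https://{{ controller_publicname }}:35357/v2.0"
    tenant_name: "{{ item }}"
    name: "ext-to-{{ item }}"
  with_items: all_tenants

- name: point each router's gateway at the external network
  neutron_router_gateway:
    login_username: admin
    login_password: "{{ ADMIN_PASS }}"
    login_tenant_name: admin
    auth_url: "https://{{ controller_publicname }}:35357/v2.0"
    router_name: "ext-to-{{ item }}"
    network_name: external
  with_items: all_tenants
```

Splitting router creation and gateway attachment into two looped tasks keeps each step idempotent on its own.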
auth_url="https://{{controller_publicname}}:35357/v2.0" tenant_name="{{ item }}" name="ext-to-{{ item }}" with_items: all_tenants - name: "Connect router's gateway to the external network" neutron_router_gateway: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin" - auth_url="https://{{controller_hostname}}:35357/v2.0" + auth_url="https://{{controller_publicname}}:35357/v2.0" router_name="ext-to-{{ item }}" network_name="external" with_items: all_tenants - name: Create a private network for all tenants neutron_network: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin" - auth_url="https://{{controller_hostname}}:35357/v2.0" + auth_url="https://{{controller_publicname}}:35357/v2.0" tenant_name="{{ item.name }}" name="{{ item.name }}-net" shared="{{ item.shared }}" @@ -726,7 +902,7 @@ - name: Create a subnet for all tenants neutron_subnet: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin" - auth_url="https://{{controller_hostname}}:35357/v2.0" + auth_url="https://{{controller_publicname}}:35357/v2.0" tenant_name="{{ item.name }}" network_name="{{ item.name }}-net" name="{{ item.name }}-subnet" @@ -747,7 +923,7 @@ - name: "Connect router's interface to the TENANT-subnet" neutron_router_interface: login_username="admin" login_password="{{ ADMIN_PASS }}" login_tenant_name="admin" - auth_url="https://{{controller_hostname}}:35357/v2.0" + auth_url="https://{{controller_publicname}}:35357/v2.0" tenant_name="{{ item }}" router_name="ext-to-{{ item }}" subnet_name="{{ item }}-subnet" @@ -761,7 +937,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'ssh-anywhere-{{item}}' description: "allow ssh from anywhere" @@ -780,7 +956,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" 
login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'allow-nagios-{{item}}' description: "allow nagios checks" @@ -804,7 +980,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'ssh-from-persistent-{{item}}' description: "allow ssh from persistent" @@ -826,7 +1002,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'ssh-internal-{{item.name}}' description: "allow ssh from {{item.name}}-network" @@ -855,7 +1031,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'web-80-anywhere-{{item}}' description: "allow web-80 from anywhere" @@ -874,7 +1050,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'web-443-anywhere-{{item}}' description: "allow web-443 from anywhere" @@ -893,7 +1069,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: "https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'wide-open-{{item}}' description: "allow anything from anywhere" @@ -912,7 +1088,7 @@ login_username: "admin" login_password: "{{ ADMIN_PASS }}" login_tenant_name: "admin" - auth_url: "https://{{controller_hostname}}:35357/v2.0" + auth_url: 
"https://{{controller_publicname}}:35357/v2.0" state: "present" name: 'all-icmp-{{item}}' description: "allow all ICMP traffic" @@ -924,6 +1100,50 @@ remote_ip_prefix: "0.0.0.0/0" with_items: all_tenants + - name: "Create 'keygen-persistent' security group" + neutron_sec_group: + login_username: "admin" + login_password: "{{ ADMIN_PASS }}" + login_tenant_name: "admin" + auth_url: "https://{{controller_publicname}}:35357/v2.0" + state: "present" + name: 'keygen-persistent' + description: "rules for copr-keygen" + tenant_name: "persistent" + rules: + - direction: "ingress" + port_range_min: "5167" + port_range_max: "5167" + ethertype: "IPv4" + protocol: "tcp" + remote_ip_prefix: "172.25.32.1/20" + - direction: "ingress" + port_range_min: "80" + port_range_max: "80" + ethertype: "IPv4" + protocol: "tcp" + remote_ip_prefix: "172.25.32.1/20" + + - name: "Create 'pg-5432-anywhere' security group" + neutron_sec_group: + login_username: "admin" + login_password: "{{ ADMIN_PASS }}" + login_tenant_name: "admin" + auth_url: "https://{{controller_publicname}}:35357/v2.0" + state: "present" + name: 'pg-5432-anywhere-{{item}}' + description: "allow postgresql-5432 from anywhere" + tenant_name: "{{item}}" + rules: + - direction: "ingress" + port_range_min: "5432" + port_range_max: "5432" + ethertype: "IPv4" + protocol: "tcp" + remote_ip_prefix: "0.0.0.0/0" + with_items: all_tenants + + # Update quota for Copr # SEE: # nova quota-defaults @@ -931,10 +1151,23 @@ # default is 10 instances, 20 cores, 51200 RAM, 10 floating IPs - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'copr ' | awk '{print $2}' register: TENANT_ID + always_run: true + changed_when: false - shell: source /root/keystonerc_admin && nova quota-update --instances 40 --cores 80 --ram 300000 --floating-ips 10 --security-groups 20 {{ TENANT_ID.stdout }} - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'coprdev ' | awk '{print $2}' + always_run: true + changed_when: false 
register: TENANT_ID - shell: source /root/keystonerc_admin && nova quota-update --instances 40 --cores 80 --ram 300000 --floating-ips 10 --security-groups 20 {{ TENANT_ID.stdout }} +# +# Note that we set manually the amount of volumes for this tenant to 20 in the web interface. +# nova quota-update cannot do so. +# + - shell: source /root/keystonerc_admin && keystone tenant-list | grep 'persistent ' | awk '{print $2}' + always_run: true + changed_when: false + register: TENANT_ID + - shell: source /root/keystonerc_admin && nova quota-update --instances 30 --cores 80 --ram 192200 --security-groups 20 {{ TENANT_ID.stdout }} diff --git a/playbooks/hosts/fedocal.dev.fedoraproject.org.yml b/playbooks/hosts/fedocal.dev.fedoraproject.org.yml deleted file mode 100644 index 20f179b749..0000000000 --- a/playbooks/hosts/fedocal.dev.fedoraproject.org.yml +++ /dev/null @@ -1,69 +0,0 @@ -- name: check/create instance - hosts: 209.132.184.147 - user: root - gather_facts: False - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - tasks: - - include: "{{ tasks }}/persistent_cloud.yml" - - include: "{{ tasks }}/growroot_cloud.yml" - -- name: provision instance - hosts: 209.132.184.147 - user: root - gather_facts: True - vars: - - tcp_ports: [22, 80, 443] - - udp_ports: [] - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ tasks }}/cloud_setup_basic.yml" - - include: "{{ tasks }}/postfix_basic.yml" - - # packages needed - - name: add packages for repo - action: yum state=present name={{ item }} - with_items: - - euca2ools - - rsync - - openssh-clients - - system-config-firewall-base - - - name: install dependencies of fedocal - action: yum state=present pkg={{ item }} - with_items: - - mod_wsgi - - mod_ssl - - python-flask - - python-flask-wtf - - mysql-server - - MySQL-python - - python-sqlalchemy - - 
python-kitchen - - python-fedora - - python-vobject - - pytz - - python-alembic - - python-fedora-flask - tags: - - packages - - - name: mount up disk of fedocal persistent storage - action: mount name=/srv/persist src='LABEL=fedocal.dev' fstype=ext4 state=mounted - - # open up tcp ports - - name: poke holes in the firewall - action: command lokkit -p '{{ item }}:tcp' - with_items: - - "{{ tcp_ports }}" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml b/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml new file mode 100644 index 0000000000..4129d8f4a7 --- /dev/null +++ b/playbooks/hosts/glittergallery-dev.fedorainfracloud.org.yml @@ -0,0 +1,30 @@ +- name: check/create instance + hosts: glittergallery-dev.fedorainfracloud.org + user: fedora + sudo: True + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: glittergallery-dev.fedorainfracloud.org + user: fedora + sudo: True + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/grafana.cloud.fedoraproject.org.yml b/playbooks/hosts/grafana.cloud.fedoraproject.org.yml new file mode 100644 index 0000000000..f6769cfd64 --- /dev/null +++ b/playbooks/hosts/grafana.cloud.fedoraproject.org.yml @@ -0,0 +1,49 @@ +- name: check/create instance + hosts: grafana.cloud.fedoraproject.org + 
user: fedora + sudo: True + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: grafana.cloud.fedoraproject.org + user: fedora + sudo: True + gather_facts: True + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/private/ansible/files/openstack/passwords.yml + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + roles: + - base + - rkhunter + #- { role: denyhosts, when: ansible_distribution_major_version|int != 7 } + - apache + - graphite/graphite + - graphite/statsd + - graphite/fedmsg2statsd + - graphite/grafana + + tasks: + - include: "{{ tasks }}/yumrepos.yml" + #- include: "{{ tasks }}/2fa_client.yml" + - include: "{{ tasks }}/motd.yml" + - include: "{{ tasks }}/mod_wsgi.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" + + pre_tasks: + - include: "{{ tasks }}/cloud_setup_basic.yml" + - name: set hostname (required by some services, at least postfix need it) + shell: "hostname {{inventory_hostname}}" diff --git a/playbooks/hosts/java-deptools.fedorainfracloud.org b/playbooks/hosts/java-deptools.fedorainfracloud.org new file mode 100644 index 0000000000..c46f321f8f --- /dev/null +++ b/playbooks/hosts/java-deptools.fedorainfracloud.org @@ -0,0 +1,30 @@ +- name: check/create instance + hosts: java-deptools.fedorainfracloud.org + user: fedora + sudo: True + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + + tasks: + - include: "{{ tasks }}/persistent_cloud_new.yml" + +- name: setup all the things + hosts: java-deptools.fedorainfracloud.org + user: fedora 
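The grafana playbook above follows the repo's usual host-playbook shape. Ansible runs play sections in a fixed order regardless of where they appear in the file: `pre_tasks`, then `roles`, then `tasks`, then notified `handlers` — so `cloud_setup_basic.yml` runs before the `base` role even though it is written at the bottom of the play. A minimal skeleton under the same conventions (hypothetical host name):

```yaml
- name: setup all the things
  hosts: example.fedorainfracloud.org    # hypothetical
  user: fedora
  sudo: True
  gather_facts: True

  vars_files:
    - /srv/web/infra/ansible/vars/global.yml

  pre_tasks:
    - include: "{{ tasks }}/cloud_setup_basic.yml"   # runs first

  roles:
    - base                                           # runs second

  tasks:
    - include: "{{ tasks }}/motd.yml"                # runs third

  handlers:
    - include: "{{ handlers }}/restart_services.yml" # on notify only
```

Keeping every host playbook in this shape is what lets the nightly check-diff cron iterate over playbooks/hosts uniformly.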
+  sudo: True
+  gather_facts: True
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/private/ansible/files/openstack/passwords.yml
+   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  pre_tasks:
+   - include: "{{ tasks }}/cloud_setup_basic.yml"
+   - name: set hostname (required by some services, at least postfix need it)
+     shell: "hostname {{inventory_hostname}}"
diff --git a/playbooks/hosts/junk01.phx2.fedoraproject.org b/playbooks/hosts/junk01.phx2.fedoraproject.org
deleted file mode 100644
index 612ac498ba..0000000000
--- a/playbooks/hosts/junk01.phx2.fedoraproject.org
+++ /dev/null
@@ -1,29 +0,0 @@
-# This is a basic playbook
-
-- name: make basic box
-  hosts: junk01.phx2.fedoraproject.org
-  user: root
-  gather_facts: True
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-
-  tasks:
-  - include: "{{ tasks }}/yumrepos.yml"
-  - include: "{{ tasks }}/2fa_client.yml"
-  - include: "{{ tasks }}/motd.yml"
-
-  handlers:
-  - include: "{{ handlers }}/restart_services.yml"
-  - include: "{{ handlers }}/semanage.yml"
diff --git a/playbooks/hosts/junk02.phx2.fedoraproject.org.yml b/playbooks/hosts/junk02.phx2.fedoraproject.org.yml
deleted file mode 100644
index 5d7c301418..0000000000
--- a/playbooks/hosts/junk02.phx2.fedoraproject.org.yml
+++ /dev/null
@@ -1,29 +0,0 @@
-# This is a basic playbook
-
-- name: make basic box
-  hosts: junk02.phx2.fedoraproject.org
-  user: root
-  gather_facts: True
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  roles:
-  - base
-  - rkhunter
-  - nagios_client
-  - hosts
-  - fas_client
-  - collectd/base
-  - sudo
-
-  tasks:
-  - include: "{{ tasks }}/yumrepos.yml"
-  - include: "{{ tasks }}/2fa_client.yml"
-  - include: "{{ tasks }}/motd.yml"
-
-  handlers:
-  - include: "{{ handlers }}/restart_services.yml"
-  - include: "{{ handlers }}/semanage.yml"
diff --git a/playbooks/hosts/koschei.cloud.fedoraproject.org.yml b/playbooks/hosts/koschei.cloud.fedoraproject.org.yml
deleted file mode 100644
index 200896779f..0000000000
--- a/playbooks/hosts/koschei.cloud.fedoraproject.org.yml
+++ /dev/null
@@ -1,129 +0,0 @@
-- name: check/create instance
-  hosts: koschei.cloud.fedoraproject.org
-  user: root
-  gather_facts: False
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-
-  tasks:
-   - include: "{{ tasks }}/persistent_cloud.yml"
-
-- name: provision instance
-  hosts: koschei.cloud.fedoraproject.org
-  gather_facts: True
-  user: fedora
-  sudo: yes
-  tags: koschei
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  vars:
-    packages:
-    - koschei
-    services:
-    - koschei-polling
-    - koschei-resolver
-    - koschei-scheduler
-    - koschei-watcher
-    # httpd is here temporarly only, it will be removed once koschei
-    # implements "base" role
-    - httpd
-    # flag controlling whether koji PEM private key and certificate
-    # should be deployed by playbook
-    cert: false
-
-  tasks:
-
-   - include: "{{ tasks }}/growroot_cloud.yml"
-   - include: "{{ tasks }}/cloud_setup_basic.yml"
-   - include: "{{ tasks }}/postfix_basic.yml"
-
-   # Temporary yum repo hosted on fedorapeople, it will be replaced by
-   # Fedora infra repo once Koschei completes RFR. Copr can't be used
-   # because of limitations of Fedora cloud routing -- machines in
-   # different networks can't access each other, even through public IP
-   - name: add koschei yum repo
-     action: copy src="{{ files }}/koschei/koschei.repo" dest="/etc/yum.repos.d/koschei.repo"
-
-   - name: yum update koschei package
-     yum: name={{item}} state=latest
-     with_items: "{{packages}}"
-     register: yumupdate
-     # TODO: restart httpd
-     tags:
-     - packages
-
-   - name: stop koschei
-     action: service name={{item}} state=stopped
-     with_items: "{{services}}"
-     when: yumupdate.changed
-
-   - name: install /etc/koschei/config.cfg file
-     template: src="{{ files }}/koschei/config.cfg.j2" dest="/etc/koschei/config.cfg"
-     notify:
-     - restart koschei
-     # TODO: restart httpd
-     tags:
-     - config
-
-   - name: install koschei.pem koji key and cert
-     copy: >
-       src="{{ private }}/files/koschei/koschei.pem"
-       dest="/etc/koschei/koschei.pem"
-       owner=koschei
-       group=koschei
-       mode=0400
-     when: cert
-     tags:
-     - config
-
-   - name: install koji ca cert
-     copy: >
-       src="{{ puppet_private }}/fedora-ca.cert"
-       dest="/etc/koschei/fedora-ca.cert"
-       owner=root
-       group=root
-       mode=0644
-     tags:
-     - config
-
-   - name: run koschei migration
-     command: alembic -c /usr/share/koschei/alembic.ini upgrade head
-     sudo_user: koschei
-     when: yumupdate.changed
-
-   - name: enable koschei to start
-     action: service name={{item}} state=running enabled=true
-     with_items: "{{services}}"
-     tags:
-     - service
-
-  handlers:
-   - include: "{{ handlers }}/restart_services.yml"
-
-   - name: restart koschei
-     action: service name={{item}} state=restarted
-     with_items: "{{services}}"
-
-- name: setup fedmsg
-  hosts: koschei.cloud.fedoraproject.org
-  user: root
-  gather_facts: True
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  roles:
-  - nagios_client
-  - role: fedmsg/base
-    fedmsg_fqdn: koschei.cloud.fedoraproject.org
-
-  handlers:
-   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/hosts/lists-dev.cloud.fedoraproject.org.yml b/playbooks/hosts/lists-dev.fedorainfracloud.org.yml
similarity index 65%
rename from playbooks/hosts/lists-dev.cloud.fedoraproject.org.yml
rename to playbooks/hosts/lists-dev.fedorainfracloud.org.yml
index 2780daa572..532c9754d6 100644
--- a/playbooks/hosts/lists-dev.cloud.fedoraproject.org.yml
+++ b/playbooks/hosts/lists-dev.fedorainfracloud.org.yml
@@ -1,57 +1,61 @@
 - name: check/create instance
-  hosts: lists-dev.cloud.fedoraproject.org
-  user: root
+  hosts: lists-dev.fedorainfracloud.org
   gather_facts: False

-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/web/infra/ansible/vars/fedora-cloud.yml
+   - /srv/private/ansible/files/openstack/passwords.yml

   tasks:
-   - include: "{{ tasks }}/persistent_cloud.yml"
+   - include: "{{ tasks }}/persistent_cloud_new.yml"

-- name: provisions basics onto system/setup paths
-  hosts: lists-dev.cloud.fedoraproject.org
-  user: root
+- name: setup all the things
+  hosts: lists-dev.fedorainfracloud.org
   gather_facts: True
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/private/ansible/files/openstack/passwords.yml
+   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml

   vars:
-  - mailman_vardir: /srv/persist/mailman
   - tcp_ports: [22, 25, 80, 443]
   - udp_ports: []
-  - postfix_maincf: "{{ roles }}/base/files/postfix/main.cf/main.cf.lists-dev.cloud.fedoraproject.org"
+  - postfix_maincf: "{{ roles }}/base/files/postfix/main.cf/main.cf.{{ inventory_hostname }}"
+
+  pre_tasks:
+   - include: "{{ tasks }}/cloud_setup_basic.yml"
+   - name: set hostname (required by some services, at least postfix need it)
+     shell: "hostname {{inventory_hostname}}"

   roles:
   - sudo
   - hosts
+  - apache
+  - base

   tasks:
-  - include: "{{ tasks }}/cloud_setup_basic.yml"
   - include: "{{ tasks }}/postfix_basic.yml"
   - include: "{{ tasks }}/yumrepos.yml"
   - include: "{{ tasks }}/motd.yml"
-  - include: "{{ tasks }}/apache.yml"
   - include: "{{ tasks }}/mod_wsgi.yml"

-  - name: mount up disk of persistent storage
-    action: mount name=/srv/persist src='LABEL=lists-dev' fstype=ext4 state=mounted
+  # Basic Apache config
+  - name: install mod_ssl
+    yum: name=mod_ssl state=present

-  - name: selinux status
-    selinux: policy=targeted state=enforcing
+  - name: copy ssl.conf
+    copy: src="{{ files }}/lists-dev/ssl.conf" dest=/etc/httpd/conf.d/ssl.conf
+          owner=root group=root mode=0644
+    notify:
+    - restart httpd

-  # /srv/persist
-  - name: mount up bind mount for postgres
-    action: mount src=/srv/persist/pgsqldb name=/var/lib/pgsql fstype=auto opts=bind state=mounted
-  - name: mount up bind mount for mailman
-    action: mount src=/srv/persist/mailman name=/var/lib/mailman3 fstype=auto opts=bind state=mounted
-
-  - name: get the repo file
-    get_url: url=https://repos.fedorapeople.org/repos/abompard/hyperkitty/hyperkitty.repo
-             dest=/etc/yum.repos.d/hyperkitty.repo mode=0444
+  - name: basic apache virtualhost config
+    template: src="{{ files }}/lists-dev/apache.conf.j2" dest=/etc/httpd/conf.d/lists-dev.conf
+              owner=root group=root mode=0644
+    notify:
+    - restart httpd

   # Database
   - name: install postgresql server packages
@@ -72,7 +76,7 @@
     - restart postgresql

   - name: start postgresql
-    service: state=started name=postgresql
+    service: state=started enabled=yes name=postgresql

   - name: allow running sudo commands as postgresql for ansible
     copy: src="{{ files }}/lists-dev/sudoers-norequiretty-postgres" dest=/etc/sudoers.d/norequiretty-postgres
@@ -85,8 +89,12 @@

-- name: setup db users/passwords for hyperkitty
-  hosts: hyperkitty-stg
+#
+# Database setup
+#
+
+- name: setup db users/passwords for mailman and hyperkitty
+  hosts: lists-dev.fedorainfracloud.org
   gather_facts: no
   sudo: yes
   sudo_user: postgres
@@ -94,8 +102,6 @@
   - /srv/web/infra/ansible/vars/global.yml
   - "/srv/private/ansible/vars.yml"
   - "{{ vars_path }}/{{ ansible_distribution }}.yml"
-  vars:
-  - mailman_vardir: /srv/persist/mailman

   tasks:

@@ -111,10 +117,13 @@
     with_items:
     - mailman
     - hyperkitty
+
+  - name: test database creation
+    postgresql_db: name=test_hyperkitty owner=hyperkittyadmin encoding=UTF-8

 - name: setup mailman and hyperkitty
-  hosts: hyperkitty-stg
-  gather_facts: no
+  hosts: lists-dev.fedorainfracloud.org
+  gather_facts: True
   vars_files:
   - /srv/web/infra/ansible/vars/global.yml
   - "/srv/private/ansible/vars.yml"
@@ -151,6 +160,13 @@
     notify:
     - reload aliases

+  - name: start services
+    service: state=started enabled=yes name={{ item }}
+    with_items:
+    - httpd
+    - mailman3
+    - postfix
+
   handlers:
   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/hosts/logstash-dev.cloud.fedoraproject.org.yml b/playbooks/hosts/logstash-dev.cloud.fedoraproject.org.yml
deleted file mode 100644
index 45caf15572..0000000000
--- a/playbooks/hosts/logstash-dev.cloud.fedoraproject.org.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-- name: check/create instance
-  hosts: 209.132.184.146
-  user: root
-  gather_facts: False
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-
-  tasks:
-   - include: "{{ tasks }}/persistent_cloud.yml"
-   - include: "{{ tasks }}/growroot_cloud.yml"
-
-- name: provision instance
-  hosts: 209.132.184.146
-  user: root
-  gather_facts: True
-
-  vars_files:
-   - /srv/web/infra/ansible/vars/global.yml
-   - "/srv/private/ansible/vars.yml"
-   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-  tasks:
-   - include: "{{ tasks }}/cloud_setup_basic.yml"
-
-   # packages needed
-   - name: add packages for repo
-     action: yum state=present name={{ item }}
-     with_items:
-     - rsync
-     - openssh-clients
-     - httpd
-     - httpd-tools
-     - cronie-noanacron
-
-
-   - name: mount up disk of persistent storage
-     action: mount name=/srv/persist src='LABEL=logstash01' fstype=ext4 state=mounted
-     tags:
-     - mount_disk
-
-  handlers:
-   - include: "{{ handlers }}/restart_services.yml"
diff --git a/playbooks/hosts/shumgrepper-dev.fedorainfracloud.org.yml b/playbooks/hosts/shumgrepper-dev.fedorainfracloud.org.yml
new file mode 100644
index 0000000000..9ca73fb916
--- /dev/null
+++ b/playbooks/hosts/shumgrepper-dev.fedorainfracloud.org.yml
@@ -0,0 +1,26 @@
+- name: check/create instance
+  hosts: shumgrepper-dev.fedorainfracloud.org
+  gather_facts: False
+
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/web/infra/ansible/vars/fedora-cloud.yml
+   - /srv/private/ansible/files/openstack/passwords.yml
+
+  tasks:
+   - include: "{{ tasks }}/persistent_cloud_new.yml"
+
+- name: setup all the things
+  hosts: shumgrepper-dev.fedorainfracloud.org
+  gather_facts: True
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/private/ansible/files/openstack/passwords.yml
+   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  pre_tasks:
+   - include: "{{ tasks }}/cloud_setup_basic.yml"
+   - name: set hostname (required by some services, at least postfix need it)
+     shell: "hostname {{inventory_hostname}}"
diff --git a/playbooks/hosts/taiga.cloud.fedoraproject.org.yml b/playbooks/hosts/taiga.cloud.fedoraproject.org.yml
new file mode 100644
index 0000000000..5ab335fda5
--- /dev/null
+++ b/playbooks/hosts/taiga.cloud.fedoraproject.org.yml
@@ -0,0 +1,33 @@
+- name: check/create instance
+  hosts: taiga.cloud.fedoraproject.org
+  user: fedora
+  sudo: True
+  gather_facts: False
+
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/web/infra/ansible/vars/fedora-cloud.yml
+   - /srv/private/ansible/files/openstack/passwords.yml
+
+  tasks:
+   - include: "{{ tasks }}/persistent_cloud_new.yml"
+
+- name: setup all the things
+  hosts: taiga.cloud.fedoraproject.org
+  user: fedora
+  sudo: True
+  gather_facts: True
+  vars_files:
+   - /srv/web/infra/ansible/vars/global.yml
+   - /srv/private/ansible/vars.yml
+   - /srv/private/ansible/files/openstack/passwords.yml
+   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  pre_tasks:
+   - include: "{{ tasks }}/cloud_setup_basic.yml"
+   - name: set hostname (required by some services, at least postfix need it)
+     shell: "hostname {{inventory_hostname}}"
+
+  roles:
+  - taiga
diff --git a/playbooks/include/proxies-fedora-web.yml b/playbooks/include/proxies-fedora-web.yml
index c82a540bb5..2170e4288d 100644
--- a/playbooks/include/proxies-fedora-web.yml
+++ b/playbooks/include/proxies-fedora-web.yml
@@ -31,6 +31,10 @@
       website: fedoramagazine.org
     - role: fedora-web/getfedora
       website: getfedora.org
+    - role: fedora-web/labs
+      website: labs.fedoraproject.org
+    - role: fedora-web/arm
+      website: arm.fedoraproject.org

   # Some other static content, not strictly part of "fedora-web" goes below here
     - role: fedora-docs/proxy
diff --git a/playbooks/include/proxies-redirects.yml b/playbooks/include/proxies-redirects.yml
index dff18045dc..d4ad30f0bd 100644
--- a/playbooks/include/proxies-redirects.yml
+++ b/playbooks/include/proxies-redirects.yml
@@ -23,13 +23,13 @@
     name: community
     website: admin.fedoraproject.org
     path: /community
-    target: http://apps.fedoraproject.org/packages
+    target: https://apps.fedoraproject.org/packages

   - role: httpd/redirect
     name: docs
     website: fedoraproject.org
     path: /docs
-    target: http://docs.fedoraproject.org/
+    target: https://docs.fedoraproject.org/

   - role: httpd/redirect
     name: elections
@@ -55,19 +55,31 @@
   - role: httpd/redirect
     name: get-fedora
     website: get.fedoraproject.org
-    target: http://fedoraproject.org/get-fedora
+    target: https://getfedora.org/
+    status: 302
+
+  - role: httpd/redirect
+    name: flocktofedora
+    website: flocktofedora.org
+    target: http://www.flocktofedora.org/
+    status: 302
+
+  - role: httpd/redirect
+    name: fedoramy
+    website: fedora.my
+    target: http://www.fedora.my/
     status: 302

   - role: httpd/redirect
     name: join-fedora
     website: join.fedoraproject.org
-    target: http://fedoraproject.org/wiki/Join
+    target: https://fedoraproject.org/wiki/Join
     status: 302

   - role: httpd/redirect
     name: get-help
     website: help.fedoraproject.org
-    target: http://fedoraproject.org/get-help
+    target: https://fedoraproject.org/get-help
     status: 302

   - role: httpd/redirect
@@ -119,6 +131,25 @@
     path: /static/js/release-counter-ext.js
     target: https://getfedora.org/static/js/release-counter-ext.js

+#
+# When there is no prerelease we redirect the prerelease urls
+# back to the main release.
+# This should be disabled when there is a prerelease
+#
+#  - role: httpd/redirectmatch
+#    name: prerelease-to-final
+#    website: getfedora.org
+#    regex: /(.*)/prerelease.*$
+#    target: https://stg.getfedora.org/$1
+#    when: env == 'staging'
+
+#  - role: httpd/redirectmatch
+#    name: prerelease-to-final
+#    website: getfedora.org
+#    regex: /(.*)/prerelease.*$
+#    target: https://getfedora.org/$1
+#    when: env != 'staging'
+
   - role: httpd/redirect
     name: store
     website: store.fedoraproject.org
@@ -143,7 +174,7 @@
   - role: httpd/redirect
     name: site
     website: fedoraproject.com
-    target: http://fedoraproject.org/
+    target: https://getfedora.org/


 # Planet/people convenience
@@ -151,19 +182,19 @@
     name: infofeed
     website: fedoraproject.org
     path: /infofeed
-    target: http://planet.fedoraproject.org/infofeed
+    target: http://fedoraplanet.org/infofeed

   - role: httpd/redirect
     name: people
     website: fedoraproject.org
     path: /people
-    target: http://planet.fedoraproject.org/
+    target: http://fedoraplanet.org/

   - role: httpd/redirect
     name: fedorapeople
     website: fedoraproject.org
     path: /fedorapeople
-    target: http://planet.fedoraproject.org/
+    target: http://fedoraplanet.org/


 # QA
@@ -191,7 +222,7 @@
   - role: httpd/redirect
     name: kde
     website: kde.fedoraproject.org
-    target: http://spins.fedoraproject.org/kde/
+    target: https://spins.fedoraproject.org/kde/
     status: 302

@@ -211,7 +242,7 @@
   - role: httpd/redirect
     name: cloud-front-page
     website: cloud.fedoraproject.org
-    target: http://fedoraproject.org/en/get-fedora#clouds
+    target: https://getfedora.org/cloud/

   - role: httpd/redirectmatch
     name: redirect-cloudstart
@@ -220,117 +251,149 @@
     target: https://$1

 ## Cloud image redirects
+
+  # Redirects/pointers for fedora 22 BASE cloud images
+  - role: httpd/redirect
+    name: cloud-base-64bit-22
+    website: cloud.fedoraproject.org
+    path: /fedora-22.x86_64.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+  - role: httpd/redirect
+    name: cloud-base-64bit-22-raw
+    website: cloud.fedoraproject.org
+    path: /fedora-22.x86_64.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.raw.xz
+
+  - role: httpd/redirect
+    name: cloud-base-32bit-22-raw
+    website: cloud.fedoraproject.org
+    path: /fedora-22.i386.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.raw.xz
+
+  - role: httpd/redirect
+    name: cloud-base-32bit-22
+    website: cloud.fedoraproject.org
+    path: /fedora-22.i386.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.qcow2
+
+  # Redirects/pointers for fedora 22 ATOMIC cloud images
+  - role: httpd/redirect
+    name: cloud-atomic-64bit-22
+    website: cloud.fedoraproject.org
+    path: /fedora-atomic-22.x86_64.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
+
+  - role: httpd/redirect
+    name: cloud-atomic-64bit-22-raw
+    website: cloud.fedoraproject.org
+    path: /fedora-atomic-22.x86_64.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.raw.xz

   # Redirects/pointers for fedora 21 BASE cloud images
   - role: httpd/redirect
     name: cloud-base-64bit-21
     website: cloud.fedoraproject.org
     path: /fedora-21.x86_64.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

   - role: httpd/redirect
     name: cloud-base-64bit-21-raw
     website: cloud.fedoraproject.org
     path: /fedora-21.x86_64.raw.xz
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.raw.xz

   - role: httpd/redirect
     name: cloud-base-32bit-21-raw
     website: cloud.fedoraproject.org
     path: /fedora-21.i386.raw.xz
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.raw.xz

   - role: httpd/redirect
     name: cloud-base-32bit-21
     website: cloud.fedoraproject.org
     path: /fedora-21.i386.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2

   # Redirects/pointers for fedora 21 ATOMIC cloud images
   - role: httpd/redirect
     name: cloud-atomic-64bit-21
     website: cloud.fedoraproject.org
     path: /fedora-atomic-21.x86_64.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2

   - role: httpd/redirect
     name: cloud-atomic-64bit-21-raw
     website: cloud.fedoraproject.org
     path: /fedora-atomic-21.x86_64.raw.xz
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.raw.xz

   # Except, there are no 32bit atomic images atm.
   #- role: httpd/redirect
   #  name: cloud-atomic-32bit-21-raw
   #  website: cloud.fedoraproject.org
   #  path: /fedora-atomic-21.i386.raw.xz
-  #  target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.raw.xz
+  #  target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.raw.xz

   #- role: httpd/redirect
   #  name: cloud-atomic-32bit-21
   #  website: cloud.fedoraproject.org
   #  path: /fedora-atomic-21.i386.qcow2
-  #  target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.qcow2
+  #  target: https://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.qcow2

   # Redirects/pointers for fedora 20 cloud images
   - role: httpd/redirect
     name: cloud-64bit-20
     website: cloud.fedoraproject.org
     path: /fedora-20.x86_64.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.qcow2

   - role: httpd/redirect
     name: cloud-32bit-20
     website: cloud.fedoraproject.org
     path: /fedora-20.i386.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.qcow2

   - role: httpd/redirect
     name: cloud-64bit-20-raw
     website: cloud.fedoraproject.org
     path: /fedora-20.x86_64.raw.xz
-    target: http://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/x86_64/Fedora-x86_64-20-20140407-sda.raw.xz

   - role: httpd/redirect
     name: cloud-32bit-20-raw
     website: cloud.fedoraproject.org
     path: /fedora-20.i386.raw.xz
-    target: http://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.raw.xz
+    target: https://download.fedoraproject.org/pub/fedora/linux/updates/20/Images/i386/Fedora-i386-20-20140407-sda.raw.xz

   # Redirects/pointers for fedora 19 cloud images
   - role: httpd/redirect
     name: cloud-64bit-19
     website: cloud.fedoraproject.org
     path: /fedora-19.x86_64.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/x86_64/Fedora-x86_64-19-20140407-sda.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/x86_64/Fedora-x86_64-19-20140407-sda.qcow2

   - role: httpd/redirect
     name: cloud-32bit-19
     website: cloud.fedoraproject.org
     path: /fedora-19.i386.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/i386/Fedora-i386-19-20140407-sda.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/updates/19/Images/i386/Fedora-i386-19-20140407-sda.qcow2

   # Redirects/pointers for latest fedora cloud images.
   - role: httpd/redirect
     name: cloud-64bit-latest
     website: cloud.fedoraproject.org
     path: /fedora-latest.x86_64.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2

   - role: httpd/redirect
     name: cloud-32bit-latest
     website: cloud.fedoraproject.org
     path: /fedora-latest.i386.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/i386/Images/Fedora-Cloud-Base-22-20150521.i386.qcow2

   - role: httpd/redirect
     name: cloud-atomic-64bit-latest
     website: cloud.fedoraproject.org
     path: /fedora-atomic-latest.x86_64.qcow2
-    target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.qcow2
-
-  # At this time, we are not producing 32bit atomic images.
-  #- role: httpd/redirect
-  #  name: cloud-atomic-32bit-latest
-  #  website: cloud.fedoraproject.org
-  #  path: /fedora-atomic-latest.i386.qcow2
-  #  target: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Atomic-20141203-21.i386.qcow2
+    target: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22-20150521.x86_64.qcow2
diff --git a/playbooks/include/proxies-reverseproxy.yml b/playbooks/include/proxies-reverseproxy.yml
index ef84bbbaee..516222ace4 100644
--- a/playbooks/include/proxies-reverseproxy.yml
+++ b/playbooks/include/proxies-reverseproxy.yml
@@ -14,6 +14,20 @@
   vars:
   - varnish_url: http://localhost:6081

+  pre_tasks:
+
+  - name: Remove some crusty files from bygone eras
+    file: dest=/etc/httpd/conf.d/{{item}} state=absent
+    with_items:
+    - meetbot.fedoraproject.org/reversepassproxy.conf
+    - meetbot.fedoraproject.org/meetbot.conf
+    notify:
+    - restart apache
+    tags:
+    - httpd
+    - httpd/reverseproxy
+
+
   roles:

   - role: httpd/reverseproxy
@@ -50,13 +64,22 @@
       destname: mailman3
     when: env == "staging"

+  # The place for the raw originals
   - role: httpd/reverseproxy
-    website: meetbot.fedoraproject.org
+    website: meetbot-raw.fedoraproject.org
     destname: meetbot
     remotepath: /meetbot/
     # Talk directly to the app server, not haproxy
     proxyurl: http://value01

+  # The place for the fancy mote view
+  - role: httpd/reverseproxy
+    website: meetbot.fedoraproject.org
+    destname: mote
+    #remotepath: /mote/
+    # Talk directly to the app server, not haproxy
+    proxyurl: http://value01
+
   - role: httpd/reverseproxy
     website: apps.fedoraproject.org
     destname: gallery
@@ -214,13 +237,22 @@
     remotepath: /updates
     localpath: /updates
     proxyurl: http://localhost:10009
+    when: env != "staging"
+
+  - role: httpd/reverseproxy
+    website: admin.fedoraproject.org
+    destname: bodhi
+    remotepath: /
+    localpath: /updates
+    proxyurl: http://localhost:10010
+    when: env == "staging"

   - role: httpd/reverseproxy
     website: admin.fedoraproject.org
     destname: mirrormanager
     remotepath: /mirrormanager
     localpath: /mirrormanager
-    proxyurl: http://localhost:10008
+    proxyurl: "{{ varnish_url }}"

   - role: httpd/reverseproxy
     website: mirrors.fedoraproject.org
@@ -233,12 +265,11 @@
     proxyurl: http://localhost:10002

   - role: httpd/reverseproxy
-    website: admin.fedoraproject.org
-    destname: mirrormanager2
-    localpath: /mirrormanager2
-    remotepath: /mirrormanager2
-    proxyurl: http://localhost:10039
-    when: env == "staging"
+    website: apps.fedoraproject.org
+    destname: koschei
+    localpath: /koschei
+    remotepath: /koschei
+    proxyurl: "{{ varnish_url }}"

   - role: httpd/reverseproxy
     website: admin.fedoraproject.org
@@ -301,7 +332,7 @@
     # Talk directly to the app server, not haproxy
     proxyurl: http://log01

-  ### Three entries for taskotron for production
+  ### Four entries for taskotron for production
   - role: httpd/reverseproxy
     website: taskotron.fedoraproject.org
     destname: taskotron
@@ -324,7 +355,15 @@
     # Talk directly to the app server, not haproxy
     proxyurl: http://resultsdb01.vpn.fedoraproject.org

-  ### And three entries for taskotron for staging
+  - role: httpd/reverseproxy
+    website: taskotron.fedoraproject.org
+    destname: taskotron-execdb
+    localpath: /execdb
+    remotepath: /execdb
+    # Talk directly to the app server, not haproxy
+    proxyurl: http://resultsdb01.vpn.fedoraproject.org
+
+  ### And four entries for taskotron for staging
   - role: httpd/reverseproxy
     website: taskotron.stg.fedoraproject.org
     destname: taskotron
@@ -359,6 +398,15 @@
     proxyurl: http://resultsdb-stg01.qa.fedoraproject.org
     when: env == "staging"

+  ### Beaker staging
+  - role: httpd/reverseproxy
+    website: beaker.stg.fedoraproject.org
+    destname: beaker-stg
+    # Talk directly to the app server, not haproxy
+    proxyurl: http://beaker-stg01.qa.fedoraproject.org
+    when: env == "staging"
+
+
 # This one gets its own role (instead of httpd/reverseproxy) so that it can
 # copy in some silly static resources (globe.png, index.html)
   - role: geoip-city-wsgi/proxy
diff --git a/playbooks/include/proxies-rewrites.yml b/playbooks/include/proxies-rewrites.yml
index 6aac442223..c72f73bb56 100644
--- a/playbooks/include/proxies-rewrites.yml
+++ b/playbooks/include/proxies-rewrites.yml
@@ -28,21 +28,21 @@
     website: admin.fedoraproject.org
     path: ^/favicon.ico$
     status: 301
-    target: http://fedoraproject.org/static/images/favicon.ico
+    target: https://fedoraproject.org/static/images/favicon.ico

   - role: httpd/domainrewrite
     destname: 00-docs
     website: docs.fedoraproject.org
     path: ^/favicon.ico$
     status: 301
-    target: http://fedoraproject.org/static/images/favicon.ico
+    target: https://fedoraproject.org/static/images/favicon.ico

   - role: httpd/domainrewrite
     destname: 00-start
     website: start.fedoraproject.org
     path: ^/favicon.ico$
     status: 301
-    target: http://fedoraproject.org/static/images/favicon.ico
+    target: https://fedoraproject.org/static/images/favicon.ico

   - role: httpd/domainrewrite
     destname: translate
@@ -55,4 +55,4 @@
     website: translate.fedoraproject.org
     path: ^/favicon.ico$
     status: 301
-    target: http://fedoraproject.org/static/images/favicon.ico
+    target: https://fedoraproject.org/static/images/favicon.ico
diff --git a/playbooks/include/proxies-websites.yml b/playbooks/include/proxies-websites.yml
index efc46d5515..a71cd66ffc 100644
--- a/playbooks/include/proxies-websites.yml
+++ b/playbooks/include/proxies-websites.yml
@@ -49,10 +49,12 @@
   - role: httpd/website
     name: fedoraproject.org
     cert_name: "{{wildcard_cert_name}}"
-    server_aliases: [stg.fedoraproject.org]
+    server_aliases:
+    - stg.fedoraproject.org
+    - localhost

   # This is for all the other domains we own
-  # that redirect to http://fedoraproject.org
+  # that redirect to https://fedoraproject.org
   - role: httpd/website
     name: fedoraproject.com
     cert_name: "{{wildcard_cert_name}}"
@@ -179,6 +181,18 @@
     - spins-test.fedoraproject.org
     cert_name: "{{wildcard_cert_name}}"

+  - role: httpd/website
+    name: labs.fedoraproject.org
+    server_aliases:
+    - labs.stg.fedoraproject.org
+    cert_name: "{{wildcard_cert_name}}"
+
+  - role: httpd/website
+    name: arm.fedoraproject.org
+    server_aliases:
+    - arm.stg.fedoraproject.org
+    cert_name: "{{wildcard_cert_name}}"
+
   - role: httpd/website
     name: boot.fedoraproject.org
     server_aliases: [boot.stg.fedoraproject.org]
@@ -210,6 +224,20 @@
     server_aliases: [bodhi.stg.fedoraproject.org]
     cert_name: "{{wildcard_cert_name}}"

+  - role: httpd/website
+    name: flocktofedora.org
+    server_aliases:
+    - flocktofedora.org
+    - flocktofedora.net
+    - flocktofedora.com
+    ssl: false
+
+  - role: httpd/website
+    name: fedora.my
+    server_aliases:
+    - fedora.my
+    ssl: false
+
   - role: httpd/website
     name: bugz.fedoraproject.org
     server_aliases: [bugz.stg.fedoraproject.org]
@@ -296,7 +324,15 @@
     - www.389tcp.org
     ssl: false
     cert_name: "{{wildcard_cert_name}}"
+
+  - role: httpd/website
+    name: whatcanidoforfedora.org
+    server_aliases:
+    - www.whatcanidoforfedora.org
+    ssl: false
+    cert_name: "{{wildcard_cert_name}}"
+
   - role: httpd/website
     name: fedoramagazine.org
     server_aliases: [www.fedoramagazine.org stg.fedoramagazine.org]
@@ -320,6 +356,11 @@
     server_aliases: [meetbot.stg.fedoraproject.org]
     cert_name: "{{wildcard_cert_name}}"

+  - role: httpd/website
+    name: meetbot-raw.fedoraproject.org
+    server_aliases: [meetbot-raw.stg.fedoraproject.org]
+    cert_name: "{{wildcard_cert_name}}"
+
   - role: httpd/website
     name: fudcon.fedoraproject.org
     server_aliases: [fudcon.stg.fedoraproject.org]
@@ -429,3 +470,13 @@
     server_aliases: [geoip.stg.fedoraproject.org]
     sslonly: true
     cert_name: "{{wildcard_cert_name}}"
+
+  - role: httpd/website
+    name: beaker.stg.fedoraproject.org
+    server_aliases: [beaker.stg.fedoraproject.org]
+    # Set this explicitly to stg here.. as per the original puppet config.
+    SSLCertificateChainFile: wildcard-2014.stg.fedoraproject.org.intermediate.cert
+    sslonly: true
+    cert_name: "{{wildcard_cert_name}}"
+    when: env == "staging"
+
diff --git a/playbooks/manual/nagios/shush-fmn.yml b/playbooks/manual/nagios/shush-fmn.yml
new file mode 100644
index 0000000000..5bb9d4ae9c
--- /dev/null
+++ b/playbooks/manual/nagios/shush-fmn.yml
@@ -0,0 +1,13 @@
+- name: be quiet please...
+  hosts: notifs-backend;notifs-backend-stg
+  user: root
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - name: tell nagios to shush.
+    nagios: action=downtime minutes=15 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
+    delegate_to: noc01.phx2.fedoraproject.org
+    ignore_errors: true
diff --git a/playbooks/manual/rebuild/fedora-packages.yml b/playbooks/manual/rebuild/fedora-packages.yml
index 1a5150ee6b..5c53a4bc19 100644
--- a/playbooks/manual/rebuild/fedora-packages.yml
+++ b/playbooks/manual/rebuild/fedora-packages.yml
@@ -23,7 +23,7 @@
     when: install_packages_indexer

   - name: tell nagios to shush for these hosts
-    nagios: action=downtime minutes=300 service=host host={{ inventory_hostname }}
+    nagios: action=downtime minutes=300 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
     delegate_to: noc01.phx2.fedoraproject.org
     ignore_errors: true
@@ -91,6 +91,6 @@
     - fcomm-cache-worker

   - name: tell nagios to start bothering us again
-    nagios: action=unsilence service=host host={{ inventory_hostname }}
+    nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }}
     delegate_to: noc01.phx2.fedoraproject.org
     ignore_errors: true
diff --git a/playbooks/manual/rebuild/mote.yml b/playbooks/manual/rebuild/mote.yml
new file mode 100644
index 0000000000..9bbb9d875b
--- /dev/null
+++ b/playbooks/manual/rebuild/mote.yml
@@ -0,0 +1,14 @@
+- name: Nuke the mote cache and restart the services to rebuild it.
+  hosts: value;value-stg
+  user: root
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+  tasks:
+  - file: dest=/var/cache/httpd/mote/cache.json state=absent
+  - service: name="httpd" state=restarted
+  - service: name="mote-updater" state=restarted
diff --git a/playbooks/manual/restart-fedmsg-services.yml b/playbooks/manual/restart-fedmsg-services.yml
index 5af80b64be..0e3f93a99a 100644
--- a/playbooks/manual/restart-fedmsg-services.yml
+++ b/playbooks/manual/restart-fedmsg-services.yml
@@ -59,7 +59,7 @@
   tasks:
   - name: schedule a 15 minute downtime.  give notifs backend time to start up.
-    nagios: action=downtime minutes=15 service=host host={{ inventory_hostname }}
+    nagios: action=downtime minutes=15 service=host host={{ inventory_hostname_short }}{{ env_suffix }}
     delegate_to: noc01.phx2.fedoraproject.org
     ignore_errors: true
diff --git a/playbooks/manual/sign-and-import.yml b/playbooks/manual/sign-and-import.yml
index c268a43d23..fac69b722d 100644
--- a/playbooks/manual/sign-and-import.yml
+++ b/playbooks/manual/sign-and-import.yml
@@ -16,7 +16,7 @@

 - name: batch sign and import a directory full of rpms
   user: root
-  hosts: lockbox01.phx2.fedoraproject.org
+  hosts: localhost
   connection: local

   # Toggle this variable to import to the testing repo as opposed to the staging
diff --git a/playbooks/manual/sign-vault.yml b/playbooks/manual/sign-vault.yml
index 7ebb7c83b6..c4d09f36df 100644
--- a/playbooks/manual/sign-vault.yml
+++ b/playbooks/manual/sign-vault.yml
@@ -6,6 +6,22 @@
 # Access is via management interface only. This playbook does initial setup.
 # Please check with rel-eng before doing anything here.

+- name: make sign-vault server vm (secondary only)
+  hosts: secondary-vault01.qa.fedoraproject.org
+  user: root
+  gather_facts: False
+
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+
+  tasks:
+  - include: "{{ tasks }}/virt_instance_create.yml"
+
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
 - name: make sign vault server
   hosts: sign-vault
   user: root
diff --git a/playbooks/manual/staging-sync/koji.yml b/playbooks/manual/staging-sync/koji.yml
new file mode 100644
index 0000000000..5e3a472505
--- /dev/null
+++ b/playbooks/manual/staging-sync/koji.yml
@@ -0,0 +1,111 @@
+# This playbook syncs the production koji instance with staging and manages all
+# the steps we need to keep our setup intact.
+#
+# For a description of what we're doing, see
+# https://lists.fedoraproject.org/pipermail/infrastructure/2015-June/016377.html
+# For a description of the koji 'secondary volumes' feature, see
+# https://lists.fedoraproject.org/pipermail/buildsys/2012-May/003892.html
+# For a description of the sql migration we do, see
+# https://lists.fedoraproject.org/pipermail/buildsys/2015-June/004779.html
+
+
+- name: grab the latest production backup
+  hosts: db-koji01.phx2.fedoraproject.org
+  user: root
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+  tasks:
+  - fetch: src=/backups/koji-{{ansible_date_time['date']}}.dump.xz
+           dest=/var/tmp/prod-koji-dump/
+           fail_on_missing=yes
+
+- name: sync config and sql migration script
+  hosts: koji-stg
+  user: root
+  vars_files:
+  - /srv/web/infra/ansible/vars/global.yml
+  - "/srv/private/ansible/vars.yml"
+  - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
+  handlers:
+  - include: "{{ handlers }}/restart_services.yml"
+
+  roles:
+  - 
koji_hub + +- name: bring staging services down + hosts: koji-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + handlers: + - include: "{{ handlers }}/restart_services.yml" + + tasks: + - service: name=httpd state=stopped + - service: name=kojid state=stopped + - service: name=kojira state=stopped + + +- name: drop and re-create the staging db entirely + hosts: db01.stg.phx2.fedoraproject.org + user: root + become: yes + become_user: postgres + become_method: sudo + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + handlers: + - include: "{{ handlers }}/restart_services.yml" + + tasks: + - template: src=templates/koji-reset-staging.sql dest=/var/lib/pgsql/koji-reset-staging.sql + - copy: src=/var/tmp/prod-koji-dump/db-koji01.phx2.fedoraproject.org/backups/koji-{{ansible_date_time['date']}}.dump.xz + dest=/var/tmp/koji-{{ansible_date_time['date']}}.dump.xz + owner=postgres group=postgres + - command: unxz /var/tmp/koji-{{ansible_date_time['date']}}.dump.xz + creates=/var/tmp/koji-{{ansible_date_time['date']}}.dump + - command: dropdb koji + - command: createdb -O koji koji + - name: Import the prod db. This will take quite a while. Go get a snack! 
+ shell: cat /var/tmp/koji-{{ansible_date_time['date']}}.dump | psql koji + - name: repoint all the prod rpm entries at the secondary volume (and other stuff) + shell: psql koji < /var/lib/pgsql/koji-reset-staging.sql + +# TODO -- nuke old staging content in /mnt/fedora_koji/koji/ + +- name: bring staging services up + hosts: koji-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + handlers: + - include: "{{ handlers }}/restart_services.yml" + + tasks: + - service: name=httpd state=started + - service: name=kojid state=started + - service: name=kojira state=started + +- name: Nuke the prod db dump that we cached on lockbox + hosts: lockbox + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - name: Nuke the prod db dump that we cached on lockbox + file: dest=/var/tmp/prod-koji-dump/ state=absent diff --git a/playbooks/manual/staging-sync/lookaside.yml b/playbooks/manual/staging-sync/lookaside.yml new file mode 100644 index 0000000000..1a51bdd43a --- /dev/null +++ b/playbooks/manual/staging-sync/lookaside.yml @@ -0,0 +1,61 @@ +# This playbook syncs a *subset* of the production lookaside cache to stg. +# bochecha asked for this in 2015 -- implemented by ralph. +# +# Presently, we only do the packages that start with 'a', because there's just +# too much data otherwise. 
+ + +- name: tar up a subset of the prod lookaside cache + hosts: pkgs + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + vars: + tarball: /var/tmp/prod-lookaside-subset.tar.xz + intermediary: /var/tmp/prod-lookaside + target: /srv/cache/lookaside/pkgs + + tasks: + - shell: tar -cJvf {{tarball}} {{target}}/ad* + creates={{tarball}} + - fetch: src={{tarball}} + dest={{intermediary}} + fail_on_missing=yes + +- name: copy and expand that subset to staging lookaside + hosts: pkgs-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + vars: + tarball: /var/tmp/prod-lookaside-subset.tar.xz + intermediary: /var/tmp/prod-lookaside + target: /srv/cache/lookaside/pkgs + + tasks: + - unarchive: src={{intermediary}}/pkgs02.phx2.fedoraproject.org/{{tarball}} dest=/ + +- name: finish cleaning up after ourselves + hosts: lockbox01.phx2.fedoraproject.org;pkgs;pkgs-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + vars: + tarball: /var/tmp/prod-lookaside-subset.tar.xz + intermediary: /var/tmp/prod-lookaside + target: /srv/cache/lookaside/pkgs + + tasks: + - file: dest={{intermediary}} state=absent + tags: cleanup + - file: dest={{tarball}} state=absent + tags: cleanup diff --git a/playbooks/manual/staging-sync/templates/koji-reset-staging.sql b/playbooks/manual/staging-sync/templates/koji-reset-staging.sql new file mode 100644 index 0000000000..cb20653c33 --- /dev/null +++ b/playbooks/manual/staging-sync/templates/koji-reset-staging.sql @@ -0,0 +1,98 @@ +-- The following commands tweak a koji db snapshot for use with the koji test environment +-- In addition to this script, the following actions may also need to be 
taken (generally afterward) +-- * apply any needed schema upgrades +-- * reset the koji-test fs volume +-- +-- Example commands for db reset: +-- % su - postgres +-- % dropdb koji +-- % createdb -O koji koji +-- % pg_restore -c -d koji koji.dmp +-- % psql koji koji < koji-reset-staging.sql +-- +-- Alternate example for shorter downtime: +-- % su - postgres +-- restore to a different db first +-- % createdb -O koji koji-new +-- % pg_restore -c -d koji-new koji.dmp +-- % psql koji-new koji < koji-reset-staging.sql +-- [apply db updates if needed] +-- [set kojihub ServerOffline setting] +-- => alter database koji rename to koji_save_YYYYMMDD; +-- => alter database koji-new rename to koji; +-- [reset koji-test fs] +-- [unset kojihub ServerOffline setting] + + +BEGIN; + +-- bump sequences (not strictly needed anymore) +select now() as time, 'bumping sequences' as msg; +alter sequence task_id_seq restart with 90000000; +alter sequence repo_id_seq restart with 9000000; +alter sequence imageinfo_id_seq restart with 900000; + +-- truncate sessions +select now() as time, 'truncating sessions' as msg; +truncate table sessions; + +-- prod volume +select now() as time, 'setting up prod volume' as msg; +insert into volume(name) values('prod'); +update build set volume_id=(select id from volume where name='prod') where volume_id=0; + +-- cancel any open tasks +select now() as time, 'canceling open tasks' as msg; +update task set state=3 where state in (0,1,4); + +-- cancel any builds in progress +select now() as time, 'canceling builds in progress' as msg; +update build set state=4, completion_time=now() where state=0; + +-- expire any active buildroots +select now() as time, 'expiring active buildroots' as msg; +update buildroot set state=3, retire_event=get_event() where state=0; + +-- enable/disable hosts +update host set enabled=False; + +-- fix host_channels +truncate host_channels; + +-- expire all the repos +select now() as time, 'expiring repos' as msg; +update repo set state
= 3 where state in (0, 1, 2); + + +COMMIT; + + +BEGIN; + +-- add our staging builders, dynamically pulled from ansible inventory +select now() as time, 'adding staging host(s)' as msg; + +{% for host in groups['buildvm-stg'] + groups['koji-stg'] %} +insert into users (name, usertype, status) values ('{{ host }}', 1, 0); +insert into host (user_id, name, arches) values ( + (select id from users where name='{{host}}'), '{{host}}', 'i386 x86_64'); +{% for channel in [ 'default', 'createrepo', 'maven', 'appliance', 'livecd', 'vm', 'secure-boot', 'compose', 'eclipse', 'images', 'image'] %} +insert into host_channels (host_id, channel_id) values ( + (select id from host where name='{{host}}'), (select id from channels where name='{{channel}}')); +{% endfor %} +{% endfor %} + +-- Add some people to be admins, only in staging. Feel free to grow this list.. +select now() as time, 'adding staging admin(s)' as msg; + +{% for username in ['ralph', 'imcleod'] %} +insert into user_perms (user_id, perm_id, active, creator_id) values ( + (select id from users where name='{{username}}'), + (select id from permissions where name='admin'), + True, + (select id from users where name='{{username}}')); +{% endfor %} + +COMMIT; + +VACUUM ANALYZE; diff --git a/playbooks/manual/upgrade/badges.yml b/playbooks/manual/upgrade/badges.yml index ae60a12d4a..1a90883bd5 100644 --- a/playbooks/manual/upgrade/badges.yml +++ b/playbooks/manual/upgrade/badges.yml @@ -68,7 +68,7 @@ pre_tasks: - name: tell nagios to shush w.r.t. the frontend - nagios: action=downtime minutes=15 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=15 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -90,7 +90,7 @@ pre_tasks: - name: tell nagios to shush w.r.t. 
the backend - nagios: action=downtime minutes=15 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=15 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -105,13 +105,14 @@ command: /usr/bin/alembic -c /usr/share/tahrir_api/alembic.ini upgrade head args: chdir: /usr/share/tahrir_api/ + ignore_errors: true - name: And... start the backend again service: name="fedmsg-hub" state=started post_tasks: - name: tell nagios to unshush w.r.t. the backend - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -130,6 +131,6 @@ post_tasks: - name: tell nagios to unshush w.r.t. the frontend - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true diff --git a/playbooks/manual/upgrade/datanommer.yml b/playbooks/manual/upgrade/datanommer.yml index fb12cf7177..8af88e3450 100644 --- a/playbooks/manual/upgrade/datanommer.yml +++ b/playbooks/manual/upgrade/datanommer.yml @@ -40,7 +40,7 @@ - include: "{{ handlers }}/restart_services.yml" pre_tasks: - name: tell nagios to shush - nagios: action=downtime minutes=120 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=120 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true roles: @@ -59,7 +59,7 @@ - include: "{{ handlers }}/restart_services.yml" pre_tasks: - name: tell nagios to shush - nagios: action=downtime minutes=120 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=120 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: 
noc01.phx2.fedoraproject.org ignore_errors: true roles: @@ -78,7 +78,7 @@ - include: "{{ handlers }}/restart_services.yml" pre_tasks: - name: tell nagios to shush - nagios: action=downtime minutes=120 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=120 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true roles: @@ -97,7 +97,7 @@ - include: "{{ handlers }}/restart_services.yml" pre_tasks: - name: tell nagios to shush - nagios: action=downtime minutes=120 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=120 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true roles: @@ -118,7 +118,7 @@ post_tasks: - name: tell nagios to unshush - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -137,7 +137,7 @@ - service: name="httpd" state=started post_tasks: - name: tell nagios to unshush - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -154,6 +154,6 @@ - service: name="fedmsg-hub" state=started post_tasks: - name: tell nagios to unshush - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true diff --git a/playbooks/manual/upgrade/fedmsg.yml b/playbooks/manual/upgrade/fedmsg.yml index ef8a10ac40..fd7874c5e1 100644 --- a/playbooks/manual/upgrade/fedmsg.yml +++ b/playbooks/manual/upgrade/fedmsg.yml @@ -31,6 +31,7 @@ packages: - fedmsg - 
python-fedmsg-meta-fedora-infrastructure + - python-moksha-hub handlers: - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/manual/upgrade/fmn.yml b/playbooks/manual/upgrade/fmn.yml index 70e5f22746..ef53369184 100644 --- a/playbooks/manual/upgrade/fmn.yml +++ b/playbooks/manual/upgrade/fmn.yml @@ -33,7 +33,7 @@ pre_tasks: - name: tell nagios to shush w.r.t. the frontend - nagios: action=downtime minutes=15 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=15 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -55,7 +55,7 @@ pre_tasks: - name: tell nagios to shush w.r.t. the backend - nagios: action=downtime minutes=15 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=15 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -78,7 +78,7 @@ # up anyways, so just let the downtime expire. #post_tasks: #- name: tell nagios to unshush w.r.t. the backend - # nagios: action=unsilence service=host host={{ inventory_hostname }} + # nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} # delegate_to: noc01.phx2.fedoraproject.org # ignore_errors: true @@ -97,6 +97,6 @@ post_tasks: - name: tell nagios to unshush w.r.t. 
the frontend - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true diff --git a/playbooks/manual/upgrade/hotness.yml b/playbooks/manual/upgrade/hotness.yml index 8517d90965..fe5244401c 100644 --- a/playbooks/manual/upgrade/hotness.yml +++ b/playbooks/manual/upgrade/hotness.yml @@ -33,7 +33,7 @@ pre_tasks: - name: tell nagios to shush - nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true @@ -43,6 +43,6 @@ post_tasks: - service: name="fedmsg-hub" state=restarted - name: tell nagios to unshush - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true diff --git a/playbooks/manual/upgrade/mote.yml b/playbooks/manual/upgrade/mote.yml new file mode 100644 index 0000000000..5bc2977886 --- /dev/null +++ b/playbooks/manual/upgrade/mote.yml @@ -0,0 +1,49 @@ +- name: push packages out + hosts: value;value-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + vars: + testing: False + handlers: + - include: "{{ handlers }}/restart_services.yml" + + tasks: + - name: clean all metadata {%if testing%}(with infrastructure-testing on){%endif%} + command: yum clean all {%if testing%} --enablerepo=infrastructure-testing {%endif%} + always_run: yes + - name: yum update mote packages from main repo + yum: name="mote" state=latest + when: not testing + - name: yum update mote packages from testing repo + yum: name="mote" state=latest 
enablerepo=infrastructure-testing + when: testing + +- name: verify the config and restart it + hosts: value;value-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + handlers: + - include: "{{ handlers }}/restart_services.yml" + + pre_tasks: + - name: tell nagios to shush + nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }} + delegate_to: noc01.phx2.fedoraproject.org + ignore_errors: true + + roles: + - mote + + post_tasks: + - service: name="httpd" state=restarted + - service: name="mote-updater" state=restarted + - name: tell nagios to unshush + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} + delegate_to: noc01.phx2.fedoraproject.org + ignore_errors: true diff --git a/playbooks/manual/upgrade/tagger.yml b/playbooks/manual/upgrade/tagger.yml new file mode 100644 index 0000000000..509b6a3f38 --- /dev/null +++ b/playbooks/manual/upgrade/tagger.yml @@ -0,0 +1,57 @@ +- name: push packages out + hosts: tagger;tagger-stg + user: root + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + vars: + testing: False + handlers: + - include: "{{ handlers }}/restart_services.yml" + + tasks: + - name: clean all metadata {%if testing%}(with infrastructure-testing on){%endif%} + command: yum clean all {%if testing%} --enablerepo=infrastructure-testing {%endif%} + always_run: yes + - name: yum update fedora-tagger packages from main repo + yum: name="fedora-tagger" state=latest + when: not testing + - name: yum update fedora-tagger packages from testing repo + yum: name="fedora-tagger" state=latest enablerepo=infrastructure-testing + when: testing + +- name: verify the config and restart it + hosts: tagger;tagger-stg + user: root + vars_files: + - 
/srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + handlers: + - include: "{{ handlers }}/restart_services.yml" + + pre_tasks: + - name: tell nagios to shush + nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }} + delegate_to: noc01.phx2.fedoraproject.org + ignore_errors: true + + roles: + - tagger + + post_tasks: + - service: name="httpd" state=stopped + + - name: Upgrade the database (only on one of the two nodes...) + command: /usr/bin/alembic -c /usr/share/fedoratagger/alembic.ini upgrade head + args: + chdir: /usr/share/fedoratagger/ + when: inventory_hostname.startswith('tagger01') + + - service: name="httpd" state=started + + - name: tell nagios to unshush + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} + delegate_to: noc01.phx2.fedoraproject.org + ignore_errors: true diff --git a/playbooks/openshift_common/README b/playbooks/openshift_common/README new file mode 100644 index 0000000000..f338a70377 --- /dev/null +++ b/playbooks/openshift_common/README @@ -0,0 +1,20 @@ +This directory contains playbooks imported from the upstream OpenShift Ansible +GitHub repository[0]. + +To re-import/update these playbooks: + + # This can really be anywhere, just outside this git tree + $ cd /tmp/ + + $ git clone https://github.com/openshift/openshift-ansible.git + + # Assuming your local copy of this git repo lives in ~/src/fedora-ansible/ + $ cp -r \ + openshift-ansible/playbooks/common/* \ + ~/src/fedora-ansible/playbooks/openshift_common/ + +There are relative symlinks involved, and at the time of this writing the +directory structure of this git repository matches the upstream repository +where appropriate, making this mostly a clean import.
+ +[0] - https://github.com/openshift/openshift-ansible.git diff --git a/playbooks/openshift_common/openshift-cluster/config.yml b/playbooks/openshift_common/openshift-cluster/config.yml new file mode 100644 index 0000000000..4c74f96db0 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/config.yml @@ -0,0 +1,74 @@ +--- +- name: Populate config host groups + hosts: localhost + gather_facts: no + tasks: + - fail: + msg: This playbook requires g_etcd_group to be set + when: g_etcd_group is not defined + + - fail: + msg: This playbook requires g_masters_group to be set + when: g_masters_group is not defined + + - fail: + msg: This playbook requires g_nodes_group to be set + when: g_nodes_group is not defined + + - name: Evaluate oo_etcd_to_config + add_host: + name: "{{ item }}" + groups: oo_etcd_to_config + ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" + ansible_sudo: "{{ g_sudo | default(omit) }}" + with_items: groups[g_etcd_group] | default([]) + + - name: Evaluate oo_masters_to_config + add_host: + name: "{{ item }}" + groups: oo_masters_to_config + ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" + ansible_sudo: "{{ g_sudo | default(omit) }}" + with_items: groups[g_masters_group] | default([]) + + - name: Evaluate oo_nodes_to_config + add_host: + name: "{{ item }}" + groups: oo_nodes_to_config + ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" + ansible_sudo: "{{ g_sudo | default(omit) }}" + with_items: groups[g_nodes_group] | default([]) + + - name: Evaluate oo_nodes_to_config + add_host: + name: "{{ item }}" + groups: oo_nodes_to_config + ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" + ansible_sudo: "{{ g_sudo | default(omit) }}" + with_items: groups[g_masters_group] | default([]) + when: g_nodeonmaster is defined and g_nodeonmaster == true + + - name: Evaluate oo_first_etcd + add_host: + name: "{{ groups[g_etcd_group][0] }}" + groups: oo_first_etcd + ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" + ansible_sudo: "{{ g_sudo
| default(omit) }}" + when: g_etcd_group in groups and (groups[g_etcd_group] | length) > 0 + + - name: Evaluate oo_first_master + add_host: + name: "{{ groups[g_masters_group][0] }}" + groups: oo_first_master + ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" + ansible_sudo: "{{ g_sudo | default(omit) }}" + when: g_masters_group in groups and (groups[g_masters_group] | length) > 0 + +- include: ../openshift-etcd/config.yml + +- include: ../openshift-master/config.yml + +- include: ../openshift-node/config.yml + vars: + osn_cluster_dns_domain: "{{ hostvars[groups.oo_first_master.0].openshift.dns.domain }}" + osn_cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].openshift.dns.ip }}" diff --git a/playbooks/openshift_common/openshift-cluster/create_services.yml b/playbooks/openshift_common/openshift-cluster/create_services.yml new file mode 100644 index 0000000000..e70709d191 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/create_services.yml @@ -0,0 +1,8 @@ +--- +- name: Deploy OpenShift Services + hosts: "{{ g_svc_master }}" + connection: ssh + gather_facts: yes + roles: + - openshift_registry + - openshift_router diff --git a/playbooks/openshift_common/openshift-cluster/filter_plugins b/playbooks/openshift_common/openshift-cluster/filter_plugins new file mode 120000 index 0000000000..99a95e4ca3 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/filter_plugins @@ -0,0 +1 @@ +../../../filter_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-cluster/lookup_plugins b/playbooks/openshift_common/openshift-cluster/lookup_plugins new file mode 120000 index 0000000000..ac79701db8 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/lookup_plugins @@ -0,0 +1 @@ +../../../lookup_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-cluster/roles b/playbooks/openshift_common/openshift-cluster/roles new file mode 120000 index 0000000000..20c4c58cfa --- 
/dev/null +++ b/playbooks/openshift_common/openshift-cluster/roles @@ -0,0 +1 @@ +../../../roles \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-cluster/set_etcd_launch_facts_tasks.yml b/playbooks/openshift_common/openshift-cluster/set_etcd_launch_facts_tasks.yml new file mode 100644 index 0000000000..1a6580795b --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/set_etcd_launch_facts_tasks.yml @@ -0,0 +1,13 @@ +--- +- set_fact: k8s_type="etcd" + +- name: Generate etcd instance name(s) + set_fact: + scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ '%05x' | format(1048576 | random) }}" + register: etcd_names_output + with_sequence: count={{ num_etcd }} + +- set_fact: + etcd_names: "{{ etcd_names_output.results | default([]) + | oo_collect('ansible_facts') + | oo_collect('scratch_name') }}" diff --git a/playbooks/openshift_common/openshift-cluster/set_master_launch_facts_tasks.yml b/playbooks/openshift_common/openshift-cluster/set_master_launch_facts_tasks.yml new file mode 100644 index 0000000000..36d7b78704 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/set_master_launch_facts_tasks.yml @@ -0,0 +1,13 @@ +--- +- set_fact: k8s_type="master" + +- name: Generate master instance name(s) + set_fact: + scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ '%05x' | format(1048576 | random) }}" + register: master_names_output + with_sequence: count={{ num_masters }} + +- set_fact: + master_names: "{{ master_names_output.results | default([]) + | oo_collect('ansible_facts') + | oo_collect('scratch_name') }}" diff --git a/playbooks/openshift_common/openshift-cluster/set_node_launch_facts_tasks.yml b/playbooks/openshift_common/openshift-cluster/set_node_launch_facts_tasks.yml new file mode 100644 index 0000000000..278942f8b0 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/set_node_launch_facts_tasks.yml @@ -0,0 +1,15 @@ +--- +- set_fact: k8s_type=node +- set_fact: sub_host_type="{{ type }}" +-
set_fact: number_nodes="{{ count }}" + +- name: Generate node instance name(s) + set_fact: + scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ sub_host_type }}-{{ '%05x' | format(1048576 | random) }}" + register: node_names_output + with_sequence: count={{ number_nodes }} + +- set_fact: + node_names: "{{ node_names_output.results | default([]) + | oo_collect('ansible_facts') + | oo_collect('scratch_name') }}" diff --git a/playbooks/openshift_common/openshift-cluster/update_repos_and_packages.yml b/playbooks/openshift_common/openshift-cluster/update_repos_and_packages.yml new file mode 100644 index 0000000000..190e2d8622 --- /dev/null +++ b/playbooks/openshift_common/openshift-cluster/update_repos_and_packages.yml @@ -0,0 +1,12 @@ +--- +- hosts: oo_hosts_to_update + vars: + openshift_deployment_type: "{{ deployment_type }}" + roles: + - role: rhel_subscribe + when: deployment_type == "enterprise" and + ansible_distribution == "RedHat" and + lookup('oo_option', 'rhel_skip_subscription') | default(rhsub_skip, True) | + default('no', True) | lower in ['no', 'false'] + - openshift_repos + - os_update_latest diff --git a/playbooks/openshift_common/openshift-etcd/config.yml b/playbooks/openshift_common/openshift-etcd/config.yml new file mode 100644 index 0000000000..3cc561ba00 --- /dev/null +++ b/playbooks/openshift_common/openshift-etcd/config.yml @@ -0,0 +1,96 @@ +--- +- name: Set etcd facts needed for generating certs + hosts: oo_etcd_to_config + roles: + - openshift_facts + tasks: + - openshift_facts: + role: "{{ item.role }}" + local_facts: "{{ item.local_facts }}" + with_items: + - role: common + local_facts: + hostname: "{{ openshift_hostname | default(None) }}" + public_hostname: "{{ openshift_public_hostname | default(None) }}" + deployment_type: "{{ openshift_deployment_type }}" + - name: Check status of etcd certificates + stat: + path: "{{ item }}" + with_items: + - /etc/etcd/server.crt + - /etc/etcd/peer.crt + - /etc/etcd/ca.crt + register:
g_etcd_server_cert_stat_result + - set_fact: + etcd_server_certs_missing: "{{ g_etcd_server_cert_stat_result.results | map(attribute='stat.exists') + | list | intersect([false])}}" + etcd_cert_subdir: etcd-{{ openshift.common.hostname }} + etcd_cert_config_dir: /etc/etcd + etcd_cert_prefix: + +- name: Create temp directory for syncing certs + hosts: localhost + connection: local + sudo: false + gather_facts: no + tasks: + - name: Create local temp directory for syncing certs + local_action: command mktemp -d /tmp/openshift-ansible-XXXXXXX + register: g_etcd_mktemp + changed_when: False + +- name: Configure etcd certificates + hosts: oo_first_etcd + vars: + etcd_generated_certs_dir: /etc/etcd/generated_certs + etcd_needing_server_certs: "{{ hostvars + | oo_select_keys(groups['oo_etcd_to_config']) + | oo_filter_list(filter_attr='etcd_server_certs_missing') }}" + sync_tmpdir: "{{ hostvars.localhost.g_etcd_mktemp.stdout }}" + roles: + - etcd_certificates + post_tasks: + - name: Create a tarball of the etcd certs + command: > + tar -czvf {{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }}.tgz + -C {{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }} . 
+ args: + creates: "{{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }}.tgz" + with_items: etcd_needing_server_certs + - name: Retrieve the etcd cert tarballs + fetch: + src: "{{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }}.tgz" + dest: "{{ sync_tmpdir }}/" + flat: yes + fail_on_missing: yes + validate_checksum: yes + with_items: etcd_needing_server_certs + +- name: Configure etcd hosts + hosts: oo_etcd_to_config + vars: + sync_tmpdir: "{{ hostvars.localhost.g_etcd_mktemp.stdout }}" + etcd_url_scheme: https + etcd_peer_url_scheme: https + etcd_peers_group: oo_etcd_to_config + pre_tasks: + - name: Ensure certificate directory exists + file: + path: "{{ etcd_cert_config_dir }}" + state: directory + - name: Unarchive the tarball on the etcd host + unarchive: + src: "{{ sync_tmpdir }}/{{ etcd_cert_subdir }}.tgz" + dest: "{{ etcd_cert_config_dir }}" + when: etcd_server_certs_missing + roles: + - etcd + +- name: Delete temporary directory on localhost + hosts: localhost + connection: local + sudo: false + gather_facts: no + tasks: + - file: name={{ g_etcd_mktemp.stdout }} state=absent + changed_when: False diff --git a/playbooks/openshift_common/openshift-etcd/filter_plugins b/playbooks/openshift_common/openshift-etcd/filter_plugins new file mode 120000 index 0000000000..99a95e4ca3 --- /dev/null +++ b/playbooks/openshift_common/openshift-etcd/filter_plugins @@ -0,0 +1 @@ +../../../filter_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-etcd/lookup_plugins b/playbooks/openshift_common/openshift-etcd/lookup_plugins new file mode 120000 index 0000000000..ac79701db8 --- /dev/null +++ b/playbooks/openshift_common/openshift-etcd/lookup_plugins @@ -0,0 +1 @@ +../../../lookup_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-etcd/roles b/playbooks/openshift_common/openshift-etcd/roles new file mode 120000 index 0000000000..e2b799b9d7 --- /dev/null +++ 
b/playbooks/openshift_common/openshift-etcd/roles @@ -0,0 +1 @@ +../../../roles/ \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-etcd/service.yml b/playbooks/openshift_common/openshift-etcd/service.yml new file mode 100644 index 0000000000..0bf69b22fa --- /dev/null +++ b/playbooks/openshift_common/openshift-etcd/service.yml @@ -0,0 +1,18 @@ +--- +- name: Populate g_service_etcd host group if needed + hosts: localhost + gather_facts: no + tasks: + - fail: msg="new_cluster_state is required to be injected in this playbook" + when: new_cluster_state is not defined + + - name: Evaluate g_service_etcd + add_host: name={{ item }} groups=g_service_etcd + with_items: oo_host_group_exp | default([]) + +- name: Change etcd state on etcd instance(s) + hosts: g_service_etcd + connection: ssh + gather_facts: no + tasks: + - service: name=etcd state="{{ new_cluster_state }}" diff --git a/playbooks/openshift_common/openshift-master/config.yml b/playbooks/openshift_common/openshift-master/config.yml new file mode 100644 index 0000000000..904ad2dab8 --- /dev/null +++ b/playbooks/openshift_common/openshift-master/config.yml @@ -0,0 +1,233 @@ +--- +- name: Set master facts and determine if external etcd certs need to be generated + hosts: oo_masters_to_config + pre_tasks: + - set_fact: + openshift_master_etcd_port: "{{ (etcd_client_port | default('2379')) if (groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config) else none }}" + openshift_master_etcd_hosts: "{{ hostvars + | oo_select_keys(groups['oo_etcd_to_config'] + | default([])) + | oo_collect('openshift.common.hostname') + | default(none, true) }}" + roles: + - openshift_facts + post_tasks: + - openshift_facts: + role: "{{ item.role }}" + local_facts: "{{ item.local_facts }}" + with_items: + - role: common + local_facts: + hostname: "{{ openshift_hostname | default(None) }}" + public_hostname: "{{ openshift_public_hostname | default(None) }}" + deployment_type: "{{
openshift_deployment_type }}" + - role: master + local_facts: + api_port: "{{ openshift_master_api_port | default(None) }}" + api_url: "{{ openshift_master_api_url | default(None) }}" + api_use_ssl: "{{ openshift_master_api_use_ssl | default(None) }}" + public_api_url: "{{ openshift_master_public_api_url | default(None) }}" + cluster_hostname: "{{ openshift_master_cluster_hostname | default(None) }}" + cluster_public_hostname: "{{ openshift_master_cluster_public_hostname | default(None) }}" + cluster_defer_ha: "{{ openshift_master_cluster_defer_ha | default(None) }}" + console_path: "{{ openshift_master_console_path | default(None) }}" + console_port: "{{ openshift_master_console_port | default(None) }}" + console_url: "{{ openshift_master_console_url | default(None) }}" + console_use_ssl: "{{ openshift_master_console_use_ssl | default(None) }}" + public_console_url: "{{ openshift_master_public_console_url | default(None) }}" + - name: Check status of external etcd certificates + stat: + path: "/etc/openshift/master/{{ item }}" + with_items: + - master.etcd-client.crt + - master.etcd-ca.crt + register: g_external_etcd_cert_stat_result + - set_fact: + etcd_client_certs_missing: "{{ g_external_etcd_cert_stat_result.results + | map(attribute='stat.exists') + | list | intersect([false])}}" + etcd_cert_subdir: openshift-master-{{ openshift.common.hostname }} + etcd_cert_config_dir: /etc/openshift/master + etcd_cert_prefix: master.etcd- + when: groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config + +- name: Create temp directory for syncing certs + hosts: localhost + connection: local + sudo: false + gather_facts: no + tasks: + - name: Create local temp directory for syncing certs + local_action: command mktemp -d /tmp/openshift-ansible-XXXXXXX + register: g_master_mktemp + changed_when: False + +- name: Configure etcd certificates + hosts: oo_first_etcd + vars: + etcd_generated_certs_dir: /etc/etcd/generated_certs + etcd_needing_client_certs: "{{ hostvars +
| oo_select_keys(groups['oo_masters_to_config']) + | oo_filter_list(filter_attr='etcd_client_certs_missing') }}" + sync_tmpdir: "{{ hostvars.localhost.g_master_mktemp.stdout }}" + roles: + - etcd_certificates + post_tasks: + - name: Create a tarball of the etcd certs + command: > + tar -czvf {{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }}.tgz + -C {{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }} . + args: + creates: "{{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }}.tgz" + with_items: etcd_needing_client_certs + - name: Retrieve the etcd cert tarballs + fetch: + src: "{{ etcd_generated_certs_dir }}/{{ item.etcd_cert_subdir }}.tgz" + dest: "{{ sync_tmpdir }}/" + flat: yes + fail_on_missing: yes + validate_checksum: yes + with_items: etcd_needing_client_certs + +- name: Copy the external etcd certs to the masters + hosts: oo_masters_to_config + vars: + sync_tmpdir: "{{ hostvars.localhost.g_master_mktemp.stdout }}" + tasks: + - name: Ensure certificate directory exists + file: + path: /etc/openshift/master + state: directory + when: etcd_client_certs_missing is defined and etcd_client_certs_missing + - name: Unarchive the tarball on the master + unarchive: + src: "{{ sync_tmpdir }}/{{ etcd_cert_subdir }}.tgz" + dest: "{{ etcd_cert_config_dir }}" + when: etcd_client_certs_missing is defined and etcd_client_certs_missing + - file: + path: "{{ etcd_cert_config_dir }}/{{ item }}" + owner: root + group: root + mode: 0600 + with_items: + - master.etcd-client.crt + - master.etcd-client.key + - master.etcd-ca.crt + when: etcd_client_certs_missing is defined and etcd_client_certs_missing + +- name: Determine if master certificates need to be generated + hosts: oo_masters_to_config + tasks: + - set_fact: + openshift_master_certs_no_etcd: + - admin.crt + - master.kubelet-client.crt + - master.server.crt + - openshift-master.crt + - openshift-registry.crt + - openshift-router.crt + - etcd.server.crt + openshift_master_certs_etcd: + - 
master.etcd-client.crt + - set_fact: + openshift_master_certs: "{{ (openshift_master_certs_no_etcd | union(openshift_master_certs_etcd)) if (groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config) else openshift_master_certs_no_etcd }}" + + - name: Check status of master certificates + stat: + path: "/etc/openshift/master/{{ item }}" + with_items: openshift_master_certs + register: g_master_cert_stat_result + - set_fact: + master_certs_missing: "{{ g_master_cert_stat_result.results + | map(attribute='stat.exists') + | list | intersect([false])}}" + master_cert_subdir: master-{{ openshift.common.hostname }} + master_cert_config_dir: /etc/openshift/master + +- name: Configure master certificates + hosts: oo_first_master + vars: + master_generated_certs_dir: /etc/openshift/generated-configs + masters_needing_certs: "{{ hostvars + | oo_select_keys(groups['oo_masters_to_config'] | difference(groups['oo_first_master'])) + | oo_filter_list(filter_attr='master_certs_missing') }}" + sync_tmpdir: "{{ hostvars.localhost.g_master_mktemp.stdout }}" + roles: + - openshift_master_certificates + post_tasks: + - name: Remove generated etcd client certs when using external etcd + file: + path: "{{ master_generated_certs_dir }}/{{ item.0.master_cert_subdir }}/{{ item.1 }}" + state: absent + when: groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config + with_nested: + - masters_needing_certs + - - master.etcd-client.crt + - master.etcd-client.key + + - name: Create a tarball of the master certs + command: > + tar -czvf {{ master_generated_certs_dir }}/{{ item.master_cert_subdir }}.tgz + -C {{ master_generated_certs_dir }}/{{ item.master_cert_subdir }} . 
+ args: + creates: "{{ master_generated_certs_dir }}/{{ item.master_cert_subdir }}.tgz" + with_items: masters_needing_certs + - name: Retrieve the master cert tarball from the master + fetch: + src: "{{ master_generated_certs_dir }}/{{ item.master_cert_subdir }}.tgz" + dest: "{{ sync_tmpdir }}/" + flat: yes + fail_on_missing: yes + validate_checksum: yes + with_items: masters_needing_certs + +- name: Configure master instances + hosts: oo_masters_to_config + vars: + sync_tmpdir: "{{ hostvars.localhost.g_master_mktemp.stdout }}" + openshift_master_ha: "{{ groups.oo_masters_to_config | length > 1 }}" + pre_tasks: + - name: Ensure certificate directory exists + file: + path: /etc/openshift/master + state: directory + when: master_certs_missing and 'oo_first_master' not in group_names + - name: Unarchive the tarball on the master + unarchive: + src: "{{ sync_tmpdir }}/{{ master_cert_subdir }}.tgz" + dest: "{{ master_cert_config_dir }}" + when: master_certs_missing and 'oo_first_master' not in group_names + roles: + - openshift_master + - role: fluentd_master + when: openshift.common.use_fluentd | bool + post_tasks: + - name: Create group for deployment type + group_by: key=oo_masters_deployment_type_{{ openshift.common.deployment_type }} + changed_when: False + +- name: Additional master configuration + hosts: oo_first_master + vars: + openshift_master_ha: "{{ groups.oo_masters_to_config | length > 1 }}" + omc_cluster_hosts: "{{ groups.oo_masters_to_config | join(' ')}}" + roles: + - role: openshift_master_cluster + when: openshift_master_ha | bool + - openshift_examples + +# Additional instance config for online deployments +- name: Additional instance config + hosts: oo_masters_deployment_type_online + roles: + - pods + - os_env_extras + +- name: Delete temporary directory on localhost + hosts: localhost + connection: local + sudo: false + gather_facts: no + tasks: + - file: name={{ g_master_mktemp.stdout }} state=absent + changed_when: False diff --git 
a/playbooks/openshift_common/openshift-master/filter_plugins b/playbooks/openshift_common/openshift-master/filter_plugins new file mode 120000 index 0000000000..99a95e4ca3 --- /dev/null +++ b/playbooks/openshift_common/openshift-master/filter_plugins @@ -0,0 +1 @@ +../../../filter_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-master/lookup_plugins b/playbooks/openshift_common/openshift-master/lookup_plugins new file mode 120000 index 0000000000..ac79701db8 --- /dev/null +++ b/playbooks/openshift_common/openshift-master/lookup_plugins @@ -0,0 +1 @@ +../../../lookup_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-master/roles b/playbooks/openshift_common/openshift-master/roles new file mode 120000 index 0000000000..e2b799b9d7 --- /dev/null +++ b/playbooks/openshift_common/openshift-master/roles @@ -0,0 +1 @@ +../../../roles/ \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-master/service.yml b/playbooks/openshift_common/openshift-master/service.yml new file mode 100644 index 0000000000..5636ad156d --- /dev/null +++ b/playbooks/openshift_common/openshift-master/service.yml @@ -0,0 +1,18 @@ +--- +- name: Populate g_service_masters host group if needed + hosts: localhost + gather_facts: no + tasks: + - fail: msg="new_cluster_state is required to be injected in this playbook" + when: new_cluster_state is not defined + + - name: Evaluate g_service_masters + add_host: name={{ item }} groups=g_service_masters + with_items: oo_host_group_exp | default([]) + +- name: Change openshift-master state on master instance(s) + hosts: g_service_masters + connection: ssh + gather_facts: no + tasks: + - service: name=openshift-master state="{{ new_cluster_state }}" diff --git a/playbooks/openshift_common/openshift-node/config.yml b/playbooks/openshift_common/openshift-node/config.yml new file mode 100644 index 0000000000..6ef375bbb0 --- /dev/null +++ 
b/playbooks/openshift_common/openshift-node/config.yml @@ -0,0 +1,142 @@ +--- +- name: Gather and set facts for node hosts + hosts: oo_nodes_to_config + roles: + - openshift_facts + tasks: + # Since the master is generating the node certificates before they are + # configured, we need to make sure to set the node properties beforehand if + # we do not want the defaults + - openshift_facts: + role: "{{ item.role }}" + local_facts: "{{ item.local_facts }}" + with_items: + - role: common + local_facts: + hostname: "{{ openshift_hostname | default(None) }}" + public_hostname: "{{ openshift_public_hostname | default(None) }}" + deployment_type: "{{ openshift_deployment_type }}" + - role: node + local_facts: + labels: "{{ openshift_node_labels | default(None) }}" + annotations: "{{ openshift_node_annotations | default(None) }}" + - name: Check status of node certificates + stat: + path: "/etc/openshift/node/{{ item }}" + with_items: + - "system:node:{{ openshift.common.hostname }}.crt" + - "system:node:{{ openshift.common.hostname }}.key" + - "system:node:{{ openshift.common.hostname }}.kubeconfig" + - ca.crt + - server.key + - server.crt + register: stat_result + - set_fact: + certs_missing: "{{ stat_result.results | map(attribute='stat.exists') + | list | intersect([false])}}" + node_subdir: node-{{ openshift.common.hostname }} + config_dir: /etc/openshift/generated-configs/node-{{ openshift.common.hostname }} + node_cert_dir: /etc/openshift/node + +- name: Create temp directory for syncing certs + hosts: localhost + connection: local + sudo: false + gather_facts: no + tasks: + - name: Create local temp directory for syncing certs + local_action: command mktemp -d /tmp/openshift-ansible-XXXXXXX + register: mktemp + changed_when: False + +- name: Create node certificates + hosts: oo_first_master + vars: + nodes_needing_certs: "{{ hostvars + | oo_select_keys(groups['oo_nodes_to_config'] + | default([])) + | oo_filter_list(filter_attr='certs_missing') }}" + sync_tmpdir: 
"{{ hostvars.localhost.mktemp.stdout }}" + roles: + - openshift_node_certificates + post_tasks: + - name: Create a tarball of the node config directories + command: > + tar -czvf {{ item.config_dir }}.tgz + --transform 's|system:{{ item.node_subdir }}|node|' + -C {{ item.config_dir }} . + args: + creates: "{{ item.config_dir }}.tgz" + with_items: nodes_needing_certs + + - name: Retrieve the node config tarballs from the master + fetch: + src: "{{ item.config_dir }}.tgz" + dest: "{{ sync_tmpdir }}/" + flat: yes + fail_on_missing: yes + validate_checksum: yes + with_items: nodes_needing_certs + +- name: Configure node instances + hosts: oo_nodes_to_config + vars: + sync_tmpdir: "{{ hostvars.localhost.mktemp.stdout }}" + openshift_node_master_api_url: "{{ hostvars[groups.oo_first_master.0].openshift.master.api_url }}" + pre_tasks: + - name: Ensure certificate directory exists + file: + path: "{{ node_cert_dir }}" + state: directory + + # TODO: notify restart openshift-node + # possibly test service started time against certificate/config file + # timestamps in openshift-node to trigger notify + - name: Unarchive the tarball on the node + unarchive: + src: "{{ sync_tmpdir }}/{{ node_subdir }}.tgz" + dest: "{{ node_cert_dir }}" + when: certs_missing + roles: + - openshift_node + - role: fluentd_node + when: openshift.common.use_fluentd | bool + tasks: + - name: Create group for deployment type + group_by: key=oo_nodes_deployment_type_{{ openshift.common.deployment_type }} + changed_when: False + +- name: Delete temporary directory on localhost + hosts: localhost + connection: local + sudo: false + gather_facts: no + tasks: + - file: name={{ mktemp.stdout }} state=absent + changed_when: False + +# Additional config for online type deployments +- name: Additional instance config + hosts: oo_nodes_deployment_type_online + gather_facts: no + roles: + - os_env_extras + - os_env_extras_node + +- name: Set schedulability + hosts: oo_first_master + vars: + openshift_nodes: "{{
hostvars + | oo_select_keys(groups['oo_nodes_to_config']) + | oo_collect('openshift.common.hostname') }}" + openshift_unscheduleable_nodes: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] | default([])) + | oo_collect('openshift.common.hostname', {'openshift_scheduleable': False}) }}" + pre_tasks: + - set_fact: + openshift_scheduleable_nodes: "{{ hostvars + | oo_select_keys(groups['oo_nodes_to_config'] | default([])) + | oo_collect('openshift.common.hostname') + | difference(openshift_unscheduleable_nodes) }}" + + roles: + - openshift_manage_node diff --git a/playbooks/openshift_common/openshift-node/filter_plugins b/playbooks/openshift_common/openshift-node/filter_plugins new file mode 120000 index 0000000000..99a95e4ca3 --- /dev/null +++ b/playbooks/openshift_common/openshift-node/filter_plugins @@ -0,0 +1 @@ +../../../filter_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-node/lookup_plugins b/playbooks/openshift_common/openshift-node/lookup_plugins new file mode 120000 index 0000000000..ac79701db8 --- /dev/null +++ b/playbooks/openshift_common/openshift-node/lookup_plugins @@ -0,0 +1 @@ +../../../lookup_plugins \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-node/roles b/playbooks/openshift_common/openshift-node/roles new file mode 120000 index 0000000000..e2b799b9d7 --- /dev/null +++ b/playbooks/openshift_common/openshift-node/roles @@ -0,0 +1 @@ +../../../roles/ \ No newline at end of file diff --git a/playbooks/openshift_common/openshift-node/service.yml b/playbooks/openshift_common/openshift-node/service.yml new file mode 100644 index 0000000000..f76df089f3 --- /dev/null +++ b/playbooks/openshift_common/openshift-node/service.yml @@ -0,0 +1,18 @@ +--- +- name: Populate g_service_nodes host group if needed + hosts: localhost + gather_facts: no + tasks: + - fail: msg="new_cluster_state is required to be injected in this playbook" + when: new_cluster_state is not defined + + - name: 
Evaluate g_service_nodes + add_host: name={{ item }} groups=g_service_nodes + with_items: oo_host_group_exp | default([]) + +- name: Change openshift-node state on node instance(s) + hosts: g_service_nodes + connection: ssh + gather_facts: no + tasks: + - service: name=openshift-node state="{{ new_cluster_state }}" diff --git a/playbooks/rdiff-backup.yml b/playbooks/rdiff-backup.yml index 9accda60b1..0627d0af2a 100644 --- a/playbooks/rdiff-backup.yml +++ b/playbooks/rdiff-backup.yml @@ -20,11 +20,11 @@ tasks: - name: run rdiff-backup hitting all the global targets - local_action: "shell rdiff-backup --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@fedoraproject.org -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@fedoraproject.org" + local_action: "shell rdiff-backup --remote-schema 'ssh -p {{ ansible_ssh_port|default(22) }} -C %s rdiff-backup --server' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@fedoraproject.org -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@fedoraproject.org" with_items: global_backup_targets when: global_backup_targets is defined - name: run rdiff-backup hitting all the host targets - local_action: "shell rdiff-backup --exclude='**git-seed*' --exclude='**git_seed' --exclude='**.snapshot' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@fedoraproject.org -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@fedoraproject.org" + local_action: "shell rdiff-backup --remote-schema 'ssh -p {{ ansible_ssh_port|default(22) }} -C %s rdiff-backup --server' --exclude='**git-seed*' --exclude='**git_seed' 
--exclude='**.snapshot' --create-full-path --print-statistics {{ inventory_hostname }}::{{ item }} /fedora_backups/{{ inventory_hostname }}/`basename {{ item }}` | mail -r sysadmin-backup-members@fedoraproject.org -s 'rdiff-backup: {{ inventory_hostname }}:{{ item }}' sysadmin-backup-members@fedoraproject.org" with_items: host_backup_targets when: host_backup_targets is defined diff --git a/playbooks/run_fasClient.yml b/playbooks/run_fasClient.yml index da855ee669..3c35ff56b5 100644 --- a/playbooks/run_fasClient.yml +++ b/playbooks/run_fasClient.yml @@ -14,7 +14,7 @@ when: inventory_hostname_short.startswith('bastion0') - name: run fasClient on people and hosted and pkgs first as these are the ones most people want updated - hosts: people03.fedoraproject.org:pkgs02.phx2.fedoraproject.org:hosted03.fedoraproject.org + hosts: people01.fedoraproject.org:pkgs02.phx2.fedoraproject.org:hosted03.fedoraproject.org user: root gather_facts: False diff --git a/playbooks/transient_cloud_instance.yml b/playbooks/transient_cloud_instance.yml new file mode 100644 index 0000000000..cffa7cac44 --- /dev/null +++ b/playbooks/transient_cloud_instance.yml @@ -0,0 +1,68 @@ +# +# setup a transient instance in the Fedora infrastructure private cloud +# +# This playbook is used to spin up a transient instance for someone to test something. +# In particular transient instances will all be terminated at least by the next +# maint window for the cloud, but ideally people will terminate instances they +# are done using. +# +# If you have an application or longer term item that should always be around +# please use the persistent playbook instead. 
+# +# You MUST pass a name to it, ie: -e 'name=somethingdescriptive' +# You can optionally override defaults by passing any of the following: +# image=imagename (default is centos70_x86_64) +# instance_type=some instance type (default is m1.small) +# root_auth_users='user1 user2 user3' (default is sysadmin-main group) +# +# Note: if you run this playbook with the same name= multiple times +# openstack is smart enough to just return the current ip of that instance +# and go on. This way you can re-run if you want to reconfigure it without +# reprovisioning it. +# + +- name: check/create instance + hosts: lockbox01.phx2.fedoraproject.org + user: root + gather_facts: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - /srv/private/ansible/vars.yml + - /srv/web/infra/ansible/vars/fedora-cloud.yml + - /srv/private/ansible/files/openstack/passwords.yml + vars: + image: "{{ centos70_x86_64 }}" + instance_type: m1.small + + tasks: + - name: fail when name is not provided + fail: msg="Please specify the name of the instance" + when: name is not defined + + - include: "{{ tasks }}/transient_cloud.yml" + +- name: provision instance + hosts: tmp_just_created + gather_facts: True + environment: + ANSIBLE_HOST_KEY_CHECKING: False + + vars_files: + - /srv/web/infra/ansible/vars/global.yml + - "/srv/private/ansible/vars.yml" + - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml + + tasks: + - name: install cloud-utils (yum) + yum: pkg=cloud-utils state=present + when: ansible_distribution_major_version|int < 22 + + - name: install cloud-utils (dnf) + command: dnf install -y cloud-utils + when: ansible_distribution_major_version|int > 21 and ansible_cmdline.ostree is not defined + + - include: "{{ tasks }}/cloud_setup_basic.yml" + + handlers: + - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/transient_post_provision.yml b/playbooks/transient_post_provision.yml deleted file mode 100644 index 0543525f86..0000000000 --- 
a/playbooks/transient_post_provision.yml +++ /dev/null @@ -1,25 +0,0 @@ -- name: add to group - hosts: lockbox01.phx2.fedoraproject.org - user: root - gather_facts: False - - tasks: - - name: add it to the special group - local_action: add_host hostname={{ target }} groupname=tmp_just_created - -- name: provision instance - hosts: tmp_just_created - user: root - gather_facts: True - - vars_files: - - /srv/web/infra/ansible/vars/global.yml - - "/srv/private/ansible/vars.yml" - - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml - - tasks: - - include: "{{ tasks }}/growroot_cloud.yml" - - include: "{{ tasks }}/cloud_setup_basic.yml" - - handlers: - - include: "{{ handlers }}/restart_services.yml" diff --git a/playbooks/vhost_reboot.yml b/playbooks/vhost_reboot.yml index 21b3c99046..b0ad4c3cc0 100644 --- a/playbooks/vhost_reboot.yml +++ b/playbooks/vhost_reboot.yml @@ -2,6 +2,7 @@ # This playbook lets you safely reboot a virthost and all it's guests. # # requires --extra-vars="target=somevhost fqdn" +# Might add nodns=true or nonagios=true to the extra vars #General overview: # talk to the vhost @@ -34,6 +35,7 @@ # Call out to another playbook. 
Disable any proxies that may live here - include: update-proxy-dns.yml status=disable proxies=myvms_new:&proxies + when: nodns is not defined or not "true" in nodns - name: halt instances hosts: myvms_new @@ -43,16 +45,10 @@ tasks: - name: schedule regular host downtime - nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }} + nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true - when: inventory_hostname.find('.stg.') == -1 - - - name: schedule stg host downtime - nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}.stg - delegate_to: noc01.phx2.fedoraproject.org - ignore_errors: true - when: inventory_hostname.find('.stg.') != -1 + when: nonagios is not defined or not "true" in nonagios - name: halt the vm instances - to poweroff command: /sbin/shutdown -h 1 @@ -75,9 +71,10 @@ tasks: - name: tell nagios to shush - nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }} + nagios: action=downtime minutes=60 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true + when: nonagios is not defined or not "true" in nonagios - name: reboot the virthost command: /sbin/shutdown -r 1 @@ -100,12 +97,14 @@ when: inventory_hostname_short.startswith('serverbeach') - name: tell nagios to unshush - nagios: action=unsilence service=host host={{ inventory_hostname }} + nagios: action=unsilence service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true + when: nonagios is not defined or not "true" in nonagios # Call out to that dns playbook. 
Put proxies back in now that they're back - include: update-proxy-dns.yml status=enable proxies=myvms_new:&proxies + when: nodns is not defined or not "true" in nodns - name: Fix unbound if necessary # intersection - hosts that are in our dynamic group and also in unbound-dns diff --git a/playbooks/vhost_update.yml b/playbooks/vhost_update.yml index 2ddfa21d5f..942732c722 100644 --- a/playbooks/vhost_update.yml +++ b/playbooks/vhost_update.yml @@ -1,6 +1,7 @@ # This playboook updates a virthost and all it's guests. # # requires --extra-vars="target=somevhostname yumcommand=update" +# Might add nodns=true or nonagios=true to the extra vars # - name: find instances @@ -18,7 +19,7 @@ with_items: vmlist.list_vms # Call out to another playbook. Disable any proxies that may live here -- include: update-proxy-dns.yml status=disable proxies=myvms_new:&proxies +#- include: update-proxy-dns.yml status=disable proxies=myvms_new:&proxies - name: update the system hosts: "{{ target }}:myvms_new" @@ -27,16 +28,10 @@ tasks: - name: schedule regular host downtime - nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }} + nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}{{ env_suffix }} delegate_to: noc01.phx2.fedoraproject.org ignore_errors: true - when: inventory_hostname.find('.stg.') == -1 - - - name: schedule stg host downtime - nagios: action=downtime minutes=30 service=host host={{ inventory_hostname_short }}.stg - delegate_to: noc01.phx2.fedoraproject.org - ignore_errors: true - when: inventory_hostname.find('.stg.') != -1 + when: nonagios is not defined or not "true" in nonagios - name: expire-caches command: yum clean expire-cache diff --git a/roles/README b/roles/README index e701682478..4129a1f348 100644 --- a/roles/README +++ b/roles/README @@ -1,2 +1,61 @@ Space for our ansible roles - ansible 1.2 and above only + + +Notes About OpenShift Ansible Roles +----------------------------------- +The following roles are "imported" at
face value from the upstream +OpenShift Ansible project[0] for use by OSBS[1][2][3] + +This is currently required by the playbooks/groups/osbs.yml playbook + +To re-import/update the OpenShift Ansible roles: + + # This can be anywhere, just not in this git tree + $ cd /tmp/ + + $ git clone https://github.com/openshift/openshift-ansible.git + $ cd openshift-ansible/roles/ + + $ oo_roles=( + etcd + etcd_ca + etcd_certificates + fluentd_master + fluentd_node + openshift_common + openshift_examples + openshift_facts + openshift_manage_node + openshift_master + openshift_master_ca + openshift_master_certificates + openshift_master_cluster + openshift_node + openshift_node_certificates + openshift_repos + os_env_extras + os_env_extras_node + os_firewall + pods + ) + + # This assumes your local branch of this git repo exists in + # ~/src/fedora-ansible/ but replace that with the actual path + $ for role in ${oo_roles[@]} + do + cp -r $role ~/src/fedora-ansible/roles/ + done + + # Inspect the changes + $ cd ~/src/fedora-ansible + $ git diff + + # If you're happy with things, then + $ git commit -m "re-import/update openshift roles from upstream" + $ git push + +[0] - https://github.com/openshift/openshift-ansible +[1] - https://github.com/projectatomic/osbs-client +[2] - https://github.com/release-engineering/koji-containerbuild +[3] - https://github.com/projectatomic/atomic-reactor diff --git a/roles/anitya/backend/files/anitya.cron b/roles/anitya/backend/files/anitya.cron index 8822454d85..d208839ae8 100644 --- a/roles/anitya/backend/files/anitya.cron +++ b/roles/anitya/backend/files/anitya.cron @@ -1,3 +1,3 @@ # Checks bi-daily for new versions # -10 */12 * * * root ANITYA_WEB_CONFIG=/etc/anitya/anitya.cfg /usr/local/bin/lock-wrapper anitya /usr/share/anitya/anitya_cron.py +10 */12 * * * root time ANITYA_WEB_CONFIG=/etc/anitya/anitya.cfg /usr/local/bin/lock-wrapper anitya /usr/share/anitya/anitya_cron.py diff --git a/roles/anitya/backend/tasks/main.yml 
b/roles/anitya/backend/tasks/main.yml index 903c9d9354..7fad7048c9 100644 --- a/roles/anitya/backend/tasks/main.yml +++ b/roles/anitya/backend/tasks/main.yml @@ -103,6 +103,8 @@ command: /usr/bin/python2 /usr/share/anitya/anitya_createdb.py environment: ANITYA_WEB_CONFIG: /etc/anitya/anitya.cfg + tags: + - anitya_backend - name: Install the configuration file of anitya template: src={{ item.file }} diff --git a/roles/anitya/fedmsg/tasks/main.yml b/roles/anitya/fedmsg/tasks/main.yml index 4086bd6abf..b5c8fbc8de 100644 --- a/roles/anitya/fedmsg/tasks/main.yml +++ b/roles/anitya/fedmsg/tasks/main.yml @@ -11,6 +11,7 @@ - policycoreutils-python # This is in the kickstart now. Here for old hosts. tags: - packages + - anitya/fedmsg # We use setgid here so that the monitoring sockets created by fedmsg services # are accessible to the nrpe group. @@ -21,10 +22,13 @@ owner=fedmsg group=nrpe state=directory + tags: + - anitya/fedmsg - name: setup /etc/fedmsg.d directory file: path=/etc/fedmsg.d owner=root group=root mode=0755 state=directory tags: + - anitya/fedmsg - config # Any files that change need to restart any services that depend on them. 
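The "anitya/fedmsg" tag being added throughout these tasks lets an admin run only this role's tasks with "-t anitya/fedmsg". Conceptually, tag selection is just a membership test over each task's tag list; a rough shell sketch of that idea (illustrative only, not ansible's actual implementation):

```shell
# Sketch of how "-t <tag>" selects tasks: a task runs when the requested
# tag appears in its (here, space-separated) tag list. Tag values below
# are illustrative.
has_tag() {
    # $1: requested tag, $2: tags set on a task
    case " $2 " in
        *" $1 "*) return 0 ;;
        *) return 1 ;;
    esac
}
```

For example, a task tagged "config anitya/fedmsg" would run under -t anitya/fedmsg, while one tagged only "packages" would be skipped.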
A @@ -48,6 +52,7 @@ tags: - config - fedmsgdconfig + - anitya/fedmsg notify: - restart httpd - restart fedmsg-relay @@ -59,6 +64,7 @@ tags: - config - fedmsgdconfig + - anitya/fedmsg notify: - restart httpd - restart fedmsg-relay @@ -67,6 +73,7 @@ file: path=/etc/pki/fedmsg owner=root group=root mode=0755 state=directory tags: - config + - anitya/fedmsg - name: install fedmsg ca.cert copy: > @@ -77,6 +84,7 @@ mode=0644 tags: - config + - anitya/fedmsg - name: fedmsg certs copy: > @@ -90,6 +98,7 @@ when: fedmsg_certs != [] tags: - config + - anitya/fedmsg - name: fedmsg keys copy: > @@ -103,15 +112,22 @@ when: fedmsg_certs != [] tags: - config + - anitya/fedmsg # Three tasks for handling our custom selinux module - name: ensure a directory exists for our custom selinux module file: dest=/usr/local/share/fedmsg state=directory + tags: + - anitya/fedmsg - name: copy over our custom selinux module copy: src=selinux/fedmsg.pp dest=/usr/local/share/fedmsg/fedmsg.pp register: selinux_module + tags: + - anitya/fedmsg - name: install our custom selinux module command: semodule -i /usr/local/share/fedmsg/fedmsg.pp when: selinux_module|changed + tags: + - anitya/fedmsg diff --git a/roles/anitya/fedmsg/templates/base.py.j2 b/roles/anitya/fedmsg/templates/base.py.j2 index 6aa831b3e0..8a9bcee4c3 100644 --- a/roles/anitya/fedmsg/templates/base.py.j2 +++ b/roles/anitya/fedmsg/templates/base.py.j2 @@ -1,7 +1,6 @@ config = dict( - topic_prefix="org.release-monitoring", - - environment="prod", + topic_prefix="{{ fedmsg_prefix }}", + environment="{{ fedmsg_env }}", # This used to be set to 1 for safety, but it turns out it was # excessive. 
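The base.py.j2 hunk above replaces hard-coded production values with the fedmsg_prefix and fedmsg_env inventory variables, so one template can render both production and staging configs. Roughly what the substitution yields (variable names are from the diff; the staging values are assumptions):

```shell
# Sketch of what templating base.py.j2 produces for given inventory
# values. The output format mirrors the config dict keys in the template.
render_base_py() {
    prefix="$1"; env="$2"
    printf 'topic_prefix="%s", environment="%s"\n' "$prefix" "$env"
}
```

A production inventory would render with org.release-monitoring/prod (the values the diff removes), and a staging host could plausibly use org.release-monitoring.stg/stg.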
It is the number of seconds that fedmsg should sleep diff --git a/roles/apache/defaults/main.yml b/roles/apache/defaults/main.yml new file mode 100644 index 0000000000..372da7ba0c --- /dev/null +++ b/roles/apache/defaults/main.yml @@ -0,0 +1,2 @@ +--- +collectd_apache: true diff --git a/tasks/apache.yml b/roles/apache/tasks/main.yml similarity index 93% rename from tasks/apache.yml rename to roles/apache/tasks/main.yml index 1b941a98e5..d8222bb7ba 100644 --- a/tasks/apache.yml +++ b/roles/apache/tasks/main.yml @@ -31,7 +31,7 @@ - name: hotfix - copy over new httpd sysconfig (el6) copy: src="{{ files }}/hotfix/httpd/httpd.sysconfig" dest=/etc/sysconfig/httpd - when: ansible_distribution_major_version == '6' + when: ansible_distribution_major_version|int == 6 notify: - restart apache tags: @@ -41,7 +41,7 @@ - name: hotfix - copy over new httpd sysconfig (el7) copy: src="{{ files }}/hotfix/httpd/httpd.sysconfig" dest=/etc/sysconfig/httpd - when: ansible_distribution_major_version == '7' + when: ansible_distribution_major_version|int == 7 notify: - restart apache tags: diff --git a/roles/apps-fp-o/files/apps.yaml b/roles/apps-fp-o/files/apps.yaml index 50c88ff7d0..a67f2509d5 100644 --- a/roles/apps-fp-o/files/apps.yaml +++ b/roles/apps-fp-o/files/apps.yaml @@ -1,7 +1,5 @@ %YAML 1.2 --- - - name: Fedora Apps data: description: > @@ -9,295 +7,6 @@ data: huge; this page details only the public facing portion of it all. Explore! children: -- name: In Development - data: - description: > - These are the apps that we're working on, but that aren't quite - ready for prime-time yet. Try and use them, and report bugs when - they're broken -- it's a big help!. - Check back here from time to time, as this section will change. - children: - - name: Koschei - data: - url: http://koschei.cloud.fedoraproject.org - description: > - Koschei is a continuous integration system for RPM packages. 
It - tracks dependency changes done in Koji repositories and rebuilds - packages whose dependencies change. It can help packagers to - detect failures early and provide relevant information to narrow - down the cause. - - name: Release Monitoring - data: - url: http://release-monitoring.org - description: > - Code named anitya, this - project is slated to replace the - old wiki page for Upstream Release Monitoring. It will - track upstream tarball locations and publish notifications to - the fedmsg bus when new ones are found. Other daemons will - then be responsible for filing bugs, attempting to - automatically build packages, perform some preliminary QA - checks, etc.. - - name: Jenkins - data: - url: http://jenkins.cloud.fedoraproject.org - description: > - Our own continuous integration (CI) service! It works now and - you can use it.. we just don't yet give it the same kind of - guarantees that we give our other apps. Look forwards to us - promoting it soon.. - - name: faitout - data: - url: http://209.132.184.152/faitout/ - description: > - Provides access to temporary postgresql databases. This - database can be used for unit-test thus reducing the - differences between testing and production environment. - -- name: Infrastructure - data: - description: > - Tools for sysadmins -- the people who run the servers that run - Fedora (and otherwise). - children: - - name: GeoIP - data: - url: https://geoip.fedoraproject.org - description: > - A simple web service running geoip-city-wsgi - that will return geoip information to you. - - name: Easyfix - data: - url: http://fedoraproject.org/easyfix - description: > - A list of easy-to-fix problems for the different projects in - Fedora. Interested in getting into helping out with sysadmin - work or web application development? This should be useful - to you. 
- - name: DataGrepper - icon: fedmsg.png - data: - url: https://apps.fedoraproject.org/datagrepper - status_mappings: ['fedmsg'] - description: > - DataGrepper is an HTTP API for querying the datanommer - database. You can use it to dig into the history of the - fedmsg message bus. You - can grab events by username, by package, by message - source, by topic... you name it. - - - name: Status - icon: status-good.png - data: - url: http://status.fedoraproject.org - description: > - Sometimes the Fedora Infrastructure team messes up (or - lightning strikes our datacenter(s)). Sorry about that. - You can use this website to check the status. Is it - "down for everyone, or just me?"
Notice the favicon - in your browser tab. It changes based on the status, - so if you keep this open you can check back to it at a - glance. - - name: MirrorManager - icon: downloads.png - data: - url: http://mirrors.fedoraproject.org - status_mappings: ['mirrormanager', 'mirrorlist'] - description: > - Fedora is distributed to millions of systems globally. - This would not be possible without the donations of time, - disk space, and bandwidth by hundreds of volunteer system - administrators and their companies or institutions. Your - fast download experience is made possible by these - donations. The list on the MirrorManager - site is dynamically generated every hour, listing only - up-to-date mirrors. - - name: Nagios - icon: nagios-logo.png - data: - url: http://admin.fedoraproject.org/nagios - description: > - "Is telia down?" The answer can most definitively be - found here (and in detail). The Fedora Infrastructure - team uses Nagios to monitor the servers that serve - Fedora. Accessing most details requires membership - in the sysadmin group. - - name: Collectd - icon: collectd.png - data: - url: http://admin.fedoraproject.org/collectd/ - description: > - Tracks and displays statistics on the Fedora - Infrastructure machines over time. Useful for debugging - ineffeciencies and problems. - - name: HAProxy - data: - url: http://admin.fedoraproject.org/haproxy/proxy1 - description: > - Shows the health of our proxies. How many bytes? - Concurrent sessions? Health checks? -- name: QA - data: - description: > - Tools for testers -- the people who tell us its broken so we can - fix it. - children: - - name: Taskotron - data: - url: https://taskotron.fedoraproject.org - description: > - Taskotron is a framework for automated task execution and is in - the very early stages of development with the objective to - replace AutoQA for automating selected QA tasks in Fedora. 
- - name: Releng-Dash - data: - url: https://apps.fedoraproject.org/releng-dash/ - description: > - Track the status of the Fedora Release Engineering process. - Did the latest rawhide get rsynced out? How about for the - secondary arches? This read-only dashboard can help you - make a quick check. - - name: Problem Tracker - data: - url: https://retrace.fedoraproject.org - description: > - The Problem Tracker is a platform for collecting and - analyzing package crashes reported via ABRT (Automatic Bug - Reporting Tool). It makes it easy to see what problems - users are hitting the most, and allows you to filter them - by Fedora release, associate, or component. - - name: Blocker Bugs - data: - url: http://qa.fedoraproject.org/blockerbugs - status_mappings: ['blockerbugs'] - description: > - The Fedora Blocker Bug Tracker tracks release blocking bugs - and related updates in Fedora releases currently under - development. - - name: Bugzilla - icon: bugzilla.png - data: - url: http://bugzilla.redhat.com - description: > - The Fedora Community makes use of a bugzilla instance - run by Red Hat. Notice something wrong with a Fedora - package? You can file an official bug here. - - name: Review Status - data: - url: http://fedoraproject.org/PackageReviewStatus/ - description: > - These pages contain periodically generated reports with - information on the current state of all Fedora package review - tickets -- a super useful window on bugzilla. - - name: Kerneltest - icon: tux.png - data: - url: https://apps.fedoraproject.org/kerneltest - description: > - As part of the kernel - testing initiative we provide a webapp where users and - automated systems can upload test results. If you have - access to hardware where we could catch tricky driver - issues, your assistance here would be much appreciated. -- name: Coordination - data: - description: > - Tools for people -- so we can talk to each other and share content - and ideas. 
- children: - - name: Paste - data: - url: http://paste.fedoraproject.org - status_mappings: ['fedorapaste'] - description: > - Our very own pastebin server. If you yum install the - fpaste command, it will use this site - automatically. - - name: Elections - data: - url: http://admin.fedoraproject.org/voting - status_mappings: ['elections'] - description: > - As a member of the community, you can now vote for the - different steering committees and for this you will use the - Election application. Voting is a right and a duty as a member - of the community; it is one of the things you can do to - influence the development of Fedora. - - name: Nuancier - icon: nuancier.png - data: - url: https://apps.fedoraproject.org/nuancier - description: > - Nuancier is a simple voting application for the - supplementary wallpapers included in Fedora. - - name: The Mailing lists - icon: mail.png - data: - url: http://lists.fedoraproject.org - status_mappings: ['mailinglists'] - description: > - Mailing lists are used for communication within the community. - There are lists for generic topics and lists more dedicated - to a specific topic, there is for sure one for you. - - name: FedoCal - icon: fedocal.png - data: - url: https://apps.fedoraproject.org/calendar - status_mappings: ['fedocal'] - description: > - The Fedora Calendar (or fedocal), you might - have already guessed, is a public calendar service. You can - create your own calendar, or subscribe to others. Want to - be kept abrest of releases, freezes, and events? This is - the tool for you. - - name: Meetbot - icon: meetbot.png - data: - url: https://meetbot.fedoraproject.org - status_mappings: ['zodbot'] - description: > - Fedora Infrastructure runs a friendly IRC bot that you may - know named zodbot. - Among its many and varied functions is logging IRC meetings, - the archives of which you can find here. - -- name: Upstream - data: - description: > - Tools for upstream - developers -- because we love you. 
- - children: - - name: github2fedmsg - icon: github.png - data: - url: https://apps.fedoraproject.org/github2fedmsg - status_mappings: ['fedmsg'] - description: > - github2fedmsg is a web service that bridges upstream - development activity from GitHub into the Fedora Infrastructure message - bus. Visit the self-service dashboard to toggle the - status of your repositories. - - name: Fedora Hosted - icon: trac.png - data: - url: http://fedorahosted.org - status_mappings: ['fedorahosted'] - description: > - Fedora is dedicated to open source software. This - commitment can extend beyond regular Fedora offerings.
- Fedora Hosted is our most feature rich - hosting solution. It includes an scm, trac instance, - release dir, account system for access control, etc. - This is our most common hosting option. When most groups - want hosting, this is what they want. - name: Accounts data: description: > @@ -306,7 +15,8 @@ children: children: - name: Ambassadors Map data: - url: http://fedoraproject.org/membership-map/ambassadors.html + url: https://fedoraproject.org/membership-map/ambassadors.html + # TODO -- add source and bugs urls for this. description: > Ambassadors are the representatives of Fedora. Ambassadors ensure the public understand Fedora's principles and the work @@ -323,6 +33,7 @@ children: - name: FedoraPeople data: url: https://fedorapeople.org + user_url: https://{user}.fedorapeople.org status_mappings: ['people'] description: > Being a community member you gain access to fedorapeople which @@ -330,24 +41,32 @@ children: files to share them with the community. - name: FAS data: - url: http://admin.fedoraproject.org/accounts + url: https://admin.fedoraproject.org/accounts + user_url: https://admin.fedoraproject.org/accounts/user/view/{user} + source_url: https://github.com/fedora-infra/fas/ + bugs_url: https://github.com/fedora-infra/fas/issues/ status_mappings: ['fas'] description: > The Fedora Account System. Update your profile information and apply for membership in groups. - name: Notifications - icon: fedmsg.png data: + icon: fedmsg.png url: https://apps.fedoraproject.org/notifications + source_url: https://github.com/fedora-infra/fmn/ + bugs_url: https://github.com/fedora-infra/fmn/issues/ status_mappings: ['fedmsg'] description: > Centrally managed preferences for Fedora Infrastructure notifications to your inbox, irc client, and mobile device. 
- name: Badges - icon: badges.png status_mappings: ['badges'] data: + icon: badges.png url: https://badges.fedoraproject.org + user_url: https://badges.fedoraproject.org/user/{user} + source_url: https://github.com/fedora-infra/tahrir/ + bugs_url: https://github.com/fedora-infra/tahrir/issues/ description: > An achievements system for Fedora Contributors! "Badges" are awarded based on activity in the community. Can you @@ -363,32 +82,37 @@ children: more.. children: - name: Ask Fedora - icon: ask_fedora.png data: + icon: ask_fedora.png url: https://ask.fedoraproject.org/ + source_url: https://github.com/askbot/askbot-devel + bugs_url: https://github.com/askbot/askbot-devel/issues/ status_mappings: ['ask'] description: > Any question at all about Fedora? Ask it here. - name: The Wiki - icon: mediawiki.png data: - url: http://fedoraproject.org/wiki + icon: mediawiki.png + url: https://fedoraproject.org/wiki + user_url: https://fedoraproject.org/wiki/User:{user} + source_url: https://www.mediawiki.org/ + bugs_url: https://www.mediawiki.org/wiki/Phabricator#Get_started status_mappings: ['wiki'] description: > Maintain your own user profile page, contribute to documents about features, process, and governance. - name: Fedora Magazine - icon: magazine.png data: + icon: magazine.png url: http://fedoramagazine.org description: > Fedora Magazine is a WordPress-based site which delivers all the news of the Fedora Community. (It replaces the previous Fedora Weekly News.) - name: The Planet - icon: planet_logo.png data: - url: http://planet.fedoraproject.org + icon: planet_logo.png + url: http://fedoraplanet.org description: > The planet is a blog aggregator, a space accessible to you as a community member where you can express your opinion and @@ -403,6 +127,159 @@ children: including the changes between releases (and a big kudos to the translation teams to keep this resource up to date in the different languages!) 
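The user_url and package_url fields introduced in these hunks carry {user} and {package} placeholders that consumers of apps.yaml are expected to fill in. A minimal sketch of that substitution (hypothetical helper, not the actual apps-fp-o/fedmenu code):

```shell
# Expand the {user} and {package} placeholders used by apps.yaml URL
# templates. Template, user, and package values are illustrative.
expand_url() {
    # $1: URL template, $2: user name, $3: package name
    printf '%s\n' "$1" | sed -e "s/{user}/$2/g" -e "s/{package}/$3/g"
}
```

So a template like https://{user}.fedorapeople.org becomes https://alice.fedorapeople.org for user "alice".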
+- name: QA + data: + description: > + Tools for testers -- the people who tell us its broken so we can + fix it. + children: + - name: Taskotron + data: + url: https://taskotron.fedoraproject.org + package_url: https://taskotron.fedoraproject.org/resultsdb/results?item={package} + description: > + Taskotron is a framework for automated task execution and is in + the very early stages of development with the objective to + replace AutoQA for automating selected QA tasks in Fedora. + - name: Releng-Dash + data: + url: https://apps.fedoraproject.org/releng-dash/ + description: > + Track the status of the Fedora Release Engineering process. + Did the latest rawhide get rsynced out? How about for the + secondary arches? This read-only dashboard can help you + make a quick check. + - name: Problem Tracker + data: + url: https://retrace.fedoraproject.org + package_url: https://retrace.fedoraproject.org/faf/reports/?component_names={package} + description: > + The Problem Tracker is a platform for collecting and + analyzing package crashes reported via ABRT (Automatic Bug + Reporting Tool). It makes it easy to see what problems + users are hitting the most, and allows you to filter them + by Fedora release, associate, or component. + - name: Blocker Bugs + data: + url: https://qa.fedoraproject.org/blockerbugs + status_mappings: ['blockerbugs'] + description: > + The Fedora Blocker Bug Tracker tracks release blocking bugs + and related updates in Fedora releases currently under + development. + - name: Bugzilla + data: + icon: bugzilla.png + url: https://bugzilla.redhat.com + package_url: https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&product=Fedora&product=Fedora%20EPEL&query_format=advanced&component={package} + description: > + The Fedora Community makes use of a bugzilla instance + run by Red Hat. Notice something wrong with a Fedora + package? You can file an official bug here. 
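The package_url values added in this QA section are only useful if they actually contain the {package} placeholder; a tiny lint like the following could catch omissions when editing apps.yaml (hypothetical check, not part of this repo):

```shell
# Succeed iff a package_url template contains the {package} placeholder
# expected by consumers of apps.yaml.
check_package_url() {
    case "$1" in
        *"{package}"*) return 0 ;;
        *) return 1 ;;
    esac
}
```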
+ - name: Review Status + data: + url: https://fedoraproject.org/PackageReviewStatus/ + package_url: https://bugzilla.redhat.com/buglist.cgi?component=Package%20Review&query_format=advanced&short_desc_type=allwordssubstr&short_desc={package} + description: > + These pages contain periodically generated reports with + information on the current state of all Fedora package review + tickets -- a super useful window on bugzilla. + - name: Kerneltest + data: + icon: tux.png + url: https://apps.fedoraproject.org/kerneltest + description: > + As part of the kernel + testing initiative we provide a webapp where users and + automated systems can upload test results. If you have + access to hardware where we could catch tricky driver + issues, your assistance here would be much appreciated. + - name: Koschei + data: + icon: koschei.png + url: https://apps.fedoraproject.org/koschei/ + user_url: https://apps.fedoraproject.org/koschei/user/{user} + package_url: https://apps.fedoraproject.org/koschei/package/{package} + status_mappings: ['koschei'] + description: > + Koschei is a continuous integration system for RPM packages. It + tracks dependency changes done in Koji repositories and rebuilds + packages whose dependencies change. It can help packagers to + detect failures early and provide relevant information to narrow + down the cause. +- name: Coordination + data: + description: > + Tools for people -- so we can talk to each other and share content + and ideas. + children: + - name: Asknot + data: + url: http://whatcanidoforfedora.org + status_mappings: [] + description: > + Ask not what Fedora can do for you, but what you can do for + Fedora? This site is a starting place for brand new + contributors to help them figure out where they can + hop on board! + - name: Paste + data: + url: https://paste.fedoraproject.org + status_mappings: ['fedorapaste'] + description: > + Our very own pastebin server. 
If you yum install the + fpaste command, it will use this site + automatically. + - name: Elections + data: + url: https://admin.fedoraproject.org/voting + status_mappings: ['elections'] + description: > + As a member of the community, you can now vote for the + different steering committees and for this you will use the + Election application. Voting is a right and a duty as a member + of the community; it is one of the things you can do to + influence the development of Fedora. + - name: Nuancier + data: + icon: nuancier.png + url: https://apps.fedoraproject.org/nuancier + description: > + Nuancier is a simple voting application for the + supplementary wallpapers included in Fedora. + - name: The Mailing lists + data: + icon: mail.png + url: https://lists.fedoraproject.org + status_mappings: ['mailinglists'] + description: > + Mailing lists are used for communication within the community. + There are lists for generic topics and lists more dedicated + to a specific topic, there is for sure one for you. + - name: FedoCal + data: + icon: fedocal.png + url: https://apps.fedoraproject.org/calendar + status_mappings: ['fedocal'] + description: > + The Fedora Calendar (or fedocal), you might + have already guessed, is a public calendar service. You can + create your own calendar, or subscribe to others. Want to + be kept abrest of releases, freezes, and events? This is + the tool for you. + - name: Meetbot + data: + icon: meetbot.png + url: https://meetbot.fedoraproject.org + status_mappings: ['zodbot'] + description: > + Fedora Infrastructure runs a friendly IRC bot that you may + know named zodbot. + Among its many and varied functions is logging IRC meetings, + the archives of which you can find here. 
+ - name: Packaging data: description: > @@ -413,6 +290,7 @@ children: - name: Packages data: url: https://apps.fedoraproject.org/packages + package_url: https://apps.fedoraproject.org/packages/{package} status_mappings: ['packages'] description: > A meta-app over the other packaging apps; the best place to @@ -424,18 +302,20 @@ children: It is sometimes called "Fedora Community v2" after the old Fedora Community site. - name: Tagger - icon: tagger.png data: + icon: tagger.png url: https://apps.fedoraproject.org/tagger + package_url: https://apps.fedoraproject.org/tagger/{package} status_mappings: ['tagger'] description: > Help build a tag cloud of all our packages.. It's actually really useful. It'll help improve the search of the "Packages" webapp. - name: COPR - icon: copr.png data: + icon: copr.png url: https://copr.fedoraproject.org + user_url: https://copr.fedoraproject.org/coprs/{user}/ status_mappings: ['copr'] description: > Copr is an easy-to-use automatic build system providing a @@ -443,13 +323,17 @@ children: - name: PkgDB data: url: https://admin.fedoraproject.org/pkgdb + user_url: https://admin.fedoraproject.org/pkgdb/packager/{user}/ + package_url: https://admin.fedoraproject.org/pkgdb/package/{package}/ status_mappings: ['pkgdb'] description: > Manage ACLs of your packages. - name: Koji - icon: koji.png data: + icon: koji.png url: http://koji.fedoraproject.org/koji + package_url: http://koji.fedoraproject.org/koji/search?match=glob&type=package&terms={package} + user_url: http://koji.fedoraproject.org/koji/userinfo?userID={user} status_mappings: ['koji'] description: > Koji is the software that builds RPM packages for the @@ -457,9 +341,11 @@ children: environments to perform builds that are both safe and trusted. 
- name: Bodhi - icon: bodhi.png data: + icon: bodhi.png url: https://admin.fedoraproject.org/updates + package_url: https://admin.fedoraproject.org/updates/{package} + user_url: https://admin.fedoraproject.org/updates/user/{user} status_mappings: ['bodhi'] description: > The tool you will use to push your packages to the Fedora @@ -468,9 +354,10 @@ children: (repository: updates). Behold -- the Magic Cabbage. - name: SCM - icon: git-logo.png data: + icon: git-logo.png url: http://pkgs.fedoraproject.org/cgit + package_url: http://pkgs.fedoraproject.org/cgit/{package}.git status_mappings: ['pkgs'] description: > Ever wonder exactly what is in the new release @@ -491,3 +378,159 @@ children: You can read more about why you might want to use it or you can just click below to... +- name: Upstream + data: + description: > + Tools for upstream + developers -- because we love you. + + children: + - name: Release Monitoring + data: + url: https://release-monitoring.org + package_url: https://release-monitoring.org/projects/search/?pattern={package} + description: > + Code named anitya, this + project is slated to replace the + old wiki page for Upstream Release Monitoring. It will + track upstream tarball locations and publish notifications to + the fedmsg bus when new ones are found. Other daemons will + then be responsible for filing bugs, attempting to + automatically build packages, perform some preliminary QA + checks, etc.. + - name: github2fedmsg + data: + icon: github.png + url: https://apps.fedoraproject.org/github2fedmsg + status_mappings: ['fedmsg'] + description: > + github2fedmsg is a web service that bridges upstream + development activity from GitHub into the Fedora Infrastructure message + bus. Visit the self-service dashboard to toggle the + status of your repositories. + - name: Fedora Hosted + data: + icon: trac.png + url: https://fedorahosted.org + status_mappings: ['fedorahosted'] + description: > + Fedora is dedicated to open source software. 
This + commitment can extend beyond regular Fedora offerings.
+ Fedora Hosted is our most feature rich + hosting solution. It includes an scm, trac instance, + release dir, account system for access control, etc. + This is our most common hosting option. When most groups + want hosting, this is what they want. +- name: Infrastructure + data: + description: > + Tools for sysadmins -- the people who run the servers that run + Fedora (and otherwise). + children: + - name: GeoIP + data: + url: https://geoip.fedoraproject.org + description: > + A simple web service running geoip-city-wsgi + that will return geoip information to you. + - name: Easyfix + data: + url: https://fedoraproject.org/easyfix + description: > + A list of easy-to-fix problems for the different projects in + Fedora. Interested in getting into helping out with sysadmin + work or web application development? This should be useful + to you. + - name: DataGrepper + data: + icon: fedmsg.png + url: https://apps.fedoraproject.org/datagrepper + package_url: https://apps.fedoraproject.org/datagrepper/raw?package={package} + user_url: https://apps.fedoraproject.org/datagrepper/raw?user={user} + status_mappings: ['fedmsg'] + description: > + DataGrepper is an HTTP API for querying the datanommer + database. You can use it to dig into the history of the + fedmsg message bus. You + can grab events by username, by package, by message + source, by topic... you name it. + + - name: Status + data: + icon: status-good.png + url: http://status.fedoraproject.org + description: > + Sometimes the Fedora Infrastructure team messes up (or + lightning strikes our datacenter(s)). Sorry about that. + You can use this website to check the status. Is it + "down for everyone, or just me?"
Notice the favicon + in your browser tab. It changes based on the status, + so if you keep this open you can check back to it at a + glance. + - name: MirrorManager + data: + icon: downloads.png + url: https://mirrors.fedoraproject.org + status_mappings: ['mirrormanager', 'mirrorlist'] + description: > + Fedora is distributed to millions of systems globally. + This would not be possible without the donations of time, + disk space, and bandwidth by hundreds of volunteer system + administrators and their companies or institutions. Your + fast download experience is made possible by these + donations. The list on the MirrorManager + site is dynamically generated every hour, listing only + up-to-date mirrors. + - name: Nagios + data: + icon: nagios-logo.png + url: https://admin.fedoraproject.org/nagios + description: > + "Is telia down?" The answer can most definitively be + found here (and in detail). The Fedora Infrastructure + team uses Nagios to monitor the servers that serve + Fedora. Accessing most details requires membership + in the sysadmin group. + - name: Collectd + data: + icon: collectd.png + url: https://admin.fedoraproject.org/collectd/ + description: > + Tracks and displays statistics on the Fedora + Infrastructure machines over time. Useful for debugging + ineffeciencies and problems. + - name: HAProxy + data: + url: https://admin.fedoraproject.org/haproxy/proxy1 + description: > + Shows the health of our proxies. How many bytes? + Concurrent sessions? Health checks? +- name: In Development + data: + description: > + These are the apps that we're working on, but that aren't quite + ready for prime-time yet. Try and use them, and report bugs when + they're broken -- it's a big help!. + Check back here from time to time, as this section will change. + children: + - name: Jenkins + data: + url: http://jenkins.cloud.fedoraproject.org + description: > + Our own continuous integration (CI) service! It works now and + you can use it.. 
we just don't yet give it the same kind of + guarantees that we give our other apps. Look forwards to us + promoting it soon.. + - name: faitout + data: + url: http://209.132.184.152/faitout/ + description: > + Provides access to temporary postgresql databases. This + database can be used for unit-test thus reducing the + differences between testing and production environment. diff --git a/roles/apps-fp-o/files/fedmenu-staging/css/demo.css b/roles/apps-fp-o/files/fedmenu-staging/css/demo.css new file mode 100644 index 0000000000..e8fd8fc1fa --- /dev/null +++ b/roles/apps-fp-o/files/fedmenu-staging/css/demo.css @@ -0,0 +1,3 @@ +.fm-intro { + padding-bottom: 15px; +} diff --git a/roles/apps-fp-o/files/fedmenu-staging/css/fedmenu.css b/roles/apps-fp-o/files/fedmenu-staging/css/fedmenu.css index 436f25478c..ae9b9746ba 100644 --- a/roles/apps-fp-o/files/fedmenu-staging/css/fedmenu.css +++ b/roles/apps-fp-o/files/fedmenu-staging/css/fedmenu.css @@ -7,27 +7,27 @@ .fedmenu-bottom-left { position: fixed; - bottom: 10; - left: 10; + bottom: 10px; + left: 10px; } .fedmenu-bottom-right { position: fixed; - bottom: 10; - right: 10; + bottom: 10px; + right: 10px; } .fedmenu-bottom-left:hover .fedmenu-bottom-left.fedmenu-active { - bottom: 2; - left: 2; + bottom: 2px; + left: 2px; } .fedmenu-bottom-right:hover .fedmenu-bottom-right.fedmenu-active { - bottom: 2; - right 2; + bottom: 2px; + right: 2px; } #fedmenu-tray { width: 48px; - z-index: 101; + z-index: 1010; } .fedmenu-button { @@ -35,8 +35,8 @@ margin-right: auto; margin-top: 0px; - margin-bottom: 0px; - padding-bottom: 8px; + margin-bottom: 8px; + padding-bottom: 0px; width: 32px; height: 32px; @@ -51,7 +51,7 @@ } .fedmenu-button:hover, .fedmenu-button.fedmenu-active { margin-top: -8px; - margin-bottom: -8px; + margin-bottom: 0px; padding-bottom: -8px; width: 48px; @@ -61,14 +61,14 @@ filter: drop-shadow(4px 4px 6px #222); } -.fedmenu-button .img { +.fedmenu-button div.img { width: 100%; height: 100%; background-repeat: 
no-repeat; background-size: 100% 100%; } -#fedmenu-main-button .img { background-image: url("../img/logo.png"); } -#fedmenu-user-button .img { border-radius: 50%; /* Make avatars into a circle */ } +#fedmenu-main-button div.img { background-image: url("../img/logo.png"); } +#fedmenu-user-button div.img { border-radius: 50%; /* Make avatars into a circle */ } #fedmenu-wrapper { position: fixed; @@ -76,7 +76,7 @@ width: 100%; top: 0; left: 0; - z-index: 98; + z-index: 1008; display: none; } @@ -143,7 +143,7 @@ -webkit-box-shadow: 0px 3px 10px rgba(0, 0, 0, 0.23); box-shadow: 0px 3px 10px rgba(0, 0, 0, 0.23); - z-index: 100; + z-index: 1009; } .fedmenu-content.fedmenu-active { @@ -158,18 +158,23 @@ font-family: Montserrat; font-weight: bold; border-bottom: 2px solid; + color: #294172; } .fedmenu-panel { font-family: Montserrat; display: inline-block; vertical-align: top; - font-size: 8pt; + font-size: 12pt; padding: 15px; } +.fedmenu-panel ul { + margin-top: 10px; + padding-left: 0px; +} .fedmenu-panel li { list-style: none; - margin-left: -40px; + margin-left: 0px; } .fedmenu-panel a, diff --git a/roles/apps-fp-o/files/fedmenu-staging/js/fedmenu.js b/roles/apps-fp-o/files/fedmenu-staging/js/fedmenu.js index 71d5135602..5c78cab8cd 100644 --- a/roles/apps-fp-o/files/fedmenu-staging/js/fedmenu.js +++ b/roles/apps-fp-o/files/fedmenu-staging/js/fedmenu.js @@ -3,46 +3,50 @@ var fedmenu = function(options) { $(document).ready(function() { 'url': 'https://apps.fedoraproject.org/js/data.js', 'mimeType': undefined, // Only needed for local development + 'context': document, // Alternatively, parent.document for iframes. + 'position': 'bottom-left', 'user': null, 'package': null, - } + }; + // Our options object is called 'o' for shorthand var o = $.extend({}, defaults, options || {}); + // Also, hang on to the selector context with a shorthand 'c' + var c = o['context']; + var buttons = ""; - if (o['user'] != null) buttons += '
'; - if (o['package'] != null) buttons += '
'; + if (o.user !== null) buttons += '
'; + if (o.package !== null) buttons += '
'; buttons += '
'; - $('body').append(''); - $('body').append( + var script = $("script[src$='fedmenu.js']").attr('src'); + var base = script.slice(0, -13); + + // Add a section if one doesn't exist. + // https://github.com/fedora-infra/fedmenu/issues/6 + if ($('head', c).length == 0) $('html', c).append(''); + $('head', c).append(''); + + $('body', c).append( '
' + buttons + '
'); - $('body').append('
'); + $('body', c).append('
'); - $('body').append('
'); - $('#fedmenu-main-content').append(""); - $('#fedmenu-main-content').append("

Fedora Infrastructure Apps

"); - - if (o['user'] != null) { - var imgurl = libravatar.url(o['user']); - $('#fedmenu-user-button .img').css('background-image', 'url("' + imgurl + '")'); - $('body').append('
'); - $('#fedmenu-user-content').append(""); - $('#fedmenu-user-content').append("

View " + o['user'] + " in other apps

"); + var imgurl; + if (o.user !== null) { + imgurl = libravatar.url(o.user); + $('#fedmenu-user-button .img', c).css('background-image', 'url("' + imgurl + '")'); } - if (o['package'] != null) { + if (o.package !== null) { /* This icon is not always going to exist, so we should put in an * apache rule that redirects to a default icon if this file * isn't there. */ - var imgurl = 'https://apps.fedoraproject.org/packages/images/icons/' + o['package'] + '.png'; - $('#fedmenu-package-button .img').css('background-image', 'url("' + imgurl + '")'); - $('body').append('
'); - $('#fedmenu-package-content').append(""); - $('#fedmenu-package-content').append("

View the " + o['package'] + " package elsewhere

"); + imgurl = 'https://apps.fedoraproject.org/packages/images/icons/' + o.package + '.png'; + $('#fedmenu-package-button .img', c).css('background-image', 'url("' + imgurl + '")'); } // Define three functions used to generate the content of the menu panes @@ -57,7 +61,13 @@ var fedmenu = function(options) { $(document).ready(function() { ""; }); html = html + ""; - $("#fedmenu-main-content").append(html); + + if ($('#fedmenu-main-content').length == 0) { + $('body', c).append('
'); + $('#fedmenu-main-content', c).append(""); + $('#fedmenu-main-content', c).append("

Fedora Infrastructure Apps

"); + } + $("#fedmenu-main-content", c).append(html); }; var make_user_content_html = function(i, node) { @@ -66,9 +76,9 @@ var fedmenu = function(options) { $(document).ready(function() { var found = false; $.each(node.children, function(j, leaf) { - if (leaf.data.user_url != undefined) { + if (leaf.data.user_url !== undefined) { found = true; - var url = leaf.data.user_url.replace('{user}', o['user']) + var url = leaf.data.user_url.replace('{user}', o.user); html = html + "
  • " + $("

    " + leaf.name + "

    ").text() + @@ -76,8 +86,13 @@ var fedmenu = function(options) { $(document).ready(function() { } }); if (found) { + if ($('#fedmenu-user-content').length == 0) { + $('body', c).append('
    '); + $('#fedmenu-user-content', c).append(""); + $('#fedmenu-user-content', c).append("

    View " + o.user + " in other apps

    "); + } html = html + ""; - $("#fedmenu-user-content").append(html); + $("#fedmenu-user-content", c).append(html); } }; @@ -87,9 +102,9 @@ var fedmenu = function(options) { $(document).ready(function() { var found = false; $.each(node.children, function(j, leaf) { - if (leaf.data.package_url != undefined) { + if (leaf.data.package_url !== undefined) { found = true; - var url = leaf.data.package_url.replace('{package}', o['package']) + var url = leaf.data.package_url.replace('{package}', o.package); html = html + "
  • " + $("

    " + leaf.name + "

    ").text() + @@ -98,24 +113,66 @@ var fedmenu = function(options) { $(document).ready(function() { }); if (found) { html = html + ""; - $("#fedmenu-package-content").append(html); + if ($('#fedmenu-package-content').length == 0) { + $('body', c).append('
    '); + $('#fedmenu-package-content', c).append(""); + $('#fedmenu-package-content', c).append("

    View the " + o.package + " package elsewhere

    "); + } + $("#fedmenu-package-content", c).append(html); } }; + // A handy lookup for those functions we just defined. + var content_makers = { + 'main': make_main_content_html, + 'user': make_user_content_html, + 'package': make_package_content_html, + }; + + var normalize = function(url) { + return url.slice(url.indexOf('://') + 3).replace(/\/$/, ""); + } + + // Figure out the current site that we're on, if possible, and return the + // data we have on it from the json we loaded. + var get_current_site = function() { + var found = null; + var ours = normalize(window.location.toString()); + $.each(master_data, function(i, node) { + $.each(node.children, function(j, leaf) { + var theirs = normalize(leaf.data.url); + if (ours.indexOf(theirs) === 0) found = leaf; + }) + }); + return found; + } + + // Try to construct a little footer for the menus. + var add_footer_links = function() { + var site = get_current_site(); + var content = ""; + if (site != null && site.data.bugs_url != undefined && site.data.source_url != undefined) { + content = content + "Problems with " + site.name + + "? Please
    file bugs or check out the source."; + } + content = content + "
    Powered by fedmenu."; + $(".fedmenu-content").append("

    " + content + "

    "); + } + $.ajax({ url: o.url, mimeType: o.mimeType, dataType: 'script', + cache: true, error: function(err) { console.log('Error getting ' + o.url); console.log(err); }, success: function(script) { - $.each(json.children, make_main_content_html); - if (o['user'] != null) - $.each(json.children, make_user_content_html); - if (o['package'] != null) - $.each(json.children, make_package_content_html); + // Save this for later... + master_data = json.children; }, }); @@ -124,27 +181,37 @@ var fedmenu = function(options) { $(document).ready(function() { "#fedmenu-" + t + "-button," + "#fedmenu-" + t + "-content"; }; - var activate = function(t) { $(selector(t)).addClass('fedmenu-active'); }; - var deactivate = function(t) { $(selector(t)).removeClass('fedmenu-active'); }; + + var activate = function(t) { + $.each(master_data, content_makers[t]); + $(".fedmenu-exit", c).click(function() {deactivate(t);}); + add_footer_links(); + setTimeout(function() {$(selector(t), c).addClass('fedmenu-active');}, 50); + }; + var deactivate = function(t) { + $(selector(t), c).removeClass('fedmenu-active'); + $('.fedmenu-content', c).remove(); // destroy content. + }; var click_factory = function(t) { return function() { - if ($(this).hasClass('fedmenu-active')) + if ($(this).hasClass('fedmenu-active')) { deactivate(t); - else + } else { // Deactivate any others that may be active before starting anew. 
deactivate('main'); deactivate('user'); deactivate('package'); activate(t); + } };}; - $("#fedmenu-main-button").click(click_factory('main')); - $("#fedmenu-user-button").click(click_factory('user')); - $("#fedmenu-package-button").click(click_factory('package')); - $("#fedmenu-wrapper,.fedmenu-exit").click(function() { + $("#fedmenu-main-button", c).click(click_factory('main')); + $("#fedmenu-user-button", c).click(click_factory('user')); + $("#fedmenu-package-button", c).click(click_factory('package')); + $("#fedmenu-wrapper,.fedmenu-exit", c).click(function() { deactivate('main'); deactivate('user'); deactivate('package'); }); $(document).keydown(function(event) { - if (event.key == 'Esc'){ + if (event.key == 'Escape' || event.key == 'Esc'){ deactivate('main'); deactivate('user'); deactivate('package'); diff --git a/roles/apps-fp-o/files/fedmenu/css/demo.css b/roles/apps-fp-o/files/fedmenu/css/demo.css new file mode 100644 index 0000000000..e8fd8fc1fa --- /dev/null +++ b/roles/apps-fp-o/files/fedmenu/css/demo.css @@ -0,0 +1,3 @@ +.fm-intro { + padding-bottom: 15px; +} diff --git a/roles/apps-fp-o/files/fedmenu/css/fedmenu.css b/roles/apps-fp-o/files/fedmenu/css/fedmenu.css new file mode 100644 index 0000000000..ae9b9746ba --- /dev/null +++ b/roles/apps-fp-o/files/fedmenu/css/fedmenu.css @@ -0,0 +1,212 @@ +@font-face { + font-family: 'Montserrat'; + font-style: normal; + font-weight: 400; + src: local('Montserrat-Regular'), url(../fonts/montserrat.ttf) format('truetype'); +} + +.fedmenu-bottom-left { + position: fixed; + bottom: 10px; + left: 10px; +} + +.fedmenu-bottom-right { + position: fixed; + bottom: 10px; + right: 10px; +} +.fedmenu-bottom-left:hover .fedmenu-bottom-left.fedmenu-active { + bottom: 2px; + left: 2px; +} +.fedmenu-bottom-right:hover .fedmenu-bottom-right.fedmenu-active { + bottom: 2px; + right: 2px; +} + +#fedmenu-tray { + width: 48px; + z-index: 1010; +} + +.fedmenu-button { + margin-left: auto; + margin-right: auto; + + margin-top: 0px; + 
margin-bottom: 8px; + padding-bottom: 0px; + + width: 32px; + height: 32px; + + -webkit-filter: drop-shadow(4px 4px 4px #222); + filter: drop-shadow(4px 4px 4px #222); + + /* Awesome */ + transition-property: width, height, bottom, left, background-size, filter, margin-top, margin-bottom, padding-bottom; + transition-duration: 0.10s; + transition-timing-function: linear; +} +.fedmenu-button:hover, .fedmenu-button.fedmenu-active { + margin-top: -8px; + margin-bottom: 0px; + padding-bottom: -8px; + + width: 48px; + height: 48px; + + -webkit-filter: drop-shadow(4px 4px 6px #222); + filter: drop-shadow(4px 4px 6px #222); +} + +.fedmenu-button div.img { + width: 100%; + height: 100%; + background-repeat: no-repeat; + background-size: 100% 100%; +} +#fedmenu-main-button div.img { background-image: url("../img/logo.png"); } +#fedmenu-user-button div.img { border-radius: 50%; /* Make avatars into a circle */ } + +#fedmenu-wrapper { + position: fixed; + height: 100%; + width: 100%; + top: 0; + left: 0; + z-index: 1008; + display: none; +} + +#fedmenu-wrapper.fedmenu-active,.fedmenu-exit { + cursor: pointer; +} + +.fedmenu-exit { + float: right; + width: 26px; + height: 26px; + text-align: center; + line-height: 28px; + font-family: sans-serif; + font-size: 16pt; + color: #FFF; + background-color: #db3279; + border-radius: 50%; + + -webkit-filter: drop-shadow(4px 4px 4px #222); + filter: drop-shadow(4px 4px 4px #222); + + /* Awesome */ + transition-property: width, height, margin, line-height, font-size; + transition-duration: 0.10s; + transition-timing-function: linear; +} + +.fedmenu-exit:hover { + margin-top: -6px; + margin-right: -6px; + + width: 38px; + height: 38px; + line-height: 40px; + font-size: 22pt; + + -webkit-filter: drop-shadow(4px 4px 6px #222); + filter: drop-shadow(4px 4px 6px #222); +} + +#fedmenu-wrapper.fedmenu-active { + display: block; + background-color: rgba(0,0,0,0.3); + -webkit-box-shadow: inset 0 0 400px rgba(0,0,0,1); + box-shadow: inset 0 0 
400px rgba(0,0,0,1); + +} + +.fedmenu-content { + position: fixed; + top: 100%; + + /* Vary these two parameters for responsive-ness */ + left: 30%; + width: 40%; + + background: url('../img/panda-wave.png') no-repeat right bottom; + + background-color: #FFF; + margin-bottom: 20px; + padding: 19px; + border: 0px none; + + -webkit-box-shadow: 0px 3px 10px rgba(0, 0, 0, 0.23); + box-shadow: 0px 3px 10px rgba(0, 0, 0, 0.23); + z-index: 1009; +} + +.fedmenu-content.fedmenu-active { + top: 15%; + transition-property: top; + transition-duration: 0.4s; + transition-delay: 0.3s; + transition-timing-function: ease-out; +} + +.fedmenu-content > h1 { + font-family: Montserrat; + font-weight: bold; + border-bottom: 2px solid; + color: #294172; +} + +.fedmenu-panel { + font-family: Montserrat; + display: inline-block; + vertical-align: top; + font-size: 12pt; + padding: 15px; +} +.fedmenu-panel ul { + margin-top: 10px; + padding-left: 0px; +} +.fedmenu-panel li { + list-style: none; + margin-left: 0px; +} + +.fedmenu-panel a, +.fedmenu-panel a:hover { + color: #3C6EB4; +} + +.fedmenu-header { + display: inline; + font-weight: bold; + font-size: 12pt; + color: #294172; + border-bottom: 1px solid; +} + +/* Responsive stuff... 
*/ +@media all and (max-width: 1450px) { .fedmenu-content { left: 25%; width: 50%; }} +@media all and (max-width: 1200px) { .fedmenu-content { left: 20%; width: 60%; }} +@media all and (max-width: 1000px) { .fedmenu-content { left: 15%; width: 70%; }} +@media all and (max-width: 860px) { .fedmenu-content { left: 10%; width: 80%; }} +@media all and (max-width: 740px) { .fedmenu-content { left: 5%; width: 85%; }} +@media all and (max-width: 600px) { .fedmenu-content { left: 0%; width:100%; }} +@media all and (max-width: 800px) { .fedmenu-content.fedmenu-active { top: 10%; }} +@media all and (max-width: 700px) { .fedmenu-content.fedmenu-active { top: 10%; }} +@media all and (max-width: 600px) { .fedmenu-content.fedmenu-active { top: 5%; }} +@media all and (max-width: 500px) { + .fedmenu-content.fedmenu-active { top: 0%; } + .fedmenu-content > h1 { font-size: 80%; padding: 5px } + .fedmenu-panel .fedmenu-header { font-size: 90%; } + .fedmenu-panel { padding: 7px; font-size: 5pt; } + .fedmenu-content { + padding: 9px; + background-image: none; + } +} diff --git a/roles/apps-fp-o/files/fedmenu/fonts/montserrat.ttf b/roles/apps-fp-o/files/fedmenu/fonts/montserrat.ttf new file mode 100644 index 0000000000..68df4ba660 Binary files /dev/null and b/roles/apps-fp-o/files/fedmenu/fonts/montserrat.ttf differ diff --git a/roles/apps-fp-o/files/fedmenu/img/logo.png b/roles/apps-fp-o/files/fedmenu/img/logo.png new file mode 100644 index 0000000000..debc431245 Binary files /dev/null and b/roles/apps-fp-o/files/fedmenu/img/logo.png differ diff --git a/roles/apps-fp-o/files/fedmenu/img/panda-wave.png b/roles/apps-fp-o/files/fedmenu/img/panda-wave.png new file mode 100644 index 0000000000..9e6423e8bb Binary files /dev/null and b/roles/apps-fp-o/files/fedmenu/img/panda-wave.png differ diff --git a/roles/apps-fp-o/files/fedmenu/js/fedmenu.js b/roles/apps-fp-o/files/fedmenu/js/fedmenu.js new file mode 100644 index 0000000000..5c78cab8cd --- /dev/null +++ 
b/roles/apps-fp-o/files/fedmenu/js/fedmenu.js @@ -0,0 +1,220 @@ +var fedmenu = function(options) { $(document).ready(function() { + var defaults = { + 'url': 'https://apps.fedoraproject.org/js/data.js', + 'mimeType': undefined, // Only needed for local development + + 'context': document, // Alternatively, parent.document for iframes. + + 'position': 'bottom-left', + + 'user': null, + 'package': null, + }; + + // Our options object is called 'o' for shorthand + var o = $.extend({}, defaults, options || {}); + + // Also, hang on to the selector context with a shorthand 'c' + var c = o['context']; + + var buttons = ""; + if (o.user !== null) buttons += '
    '; + if (o.package !== null) buttons += '
    '; + buttons += '
    '; + + var script = $("script[src$='fedmenu.js']").attr('src'); + var base = script.slice(0, -13); + + // Add a section if one doesn't exist. + // https://github.com/fedora-infra/fedmenu/issues/6 + if ($('head', c).length == 0) $('html', c).append(''); + $('head', c).append(''); + + $('body', c).append( + '
    ' + + buttons + '
    '); + + $('body', c).append('
    '); + + var imgurl; + if (o.user !== null) { + imgurl = libravatar.url(o.user); + $('#fedmenu-user-button .img', c).css('background-image', 'url("' + imgurl + '")'); + } + if (o.package !== null) { + /* This icon is not always going to exist, so we should put in an + * apache rule that redirects to a default icon if this file + * isn't there. */ + imgurl = 'https://apps.fedoraproject.org/packages/images/icons/' + o.package + '.png'; + $('#fedmenu-package-button .img', c).css('background-image', 'url("' + imgurl + '")'); + } + + // Define three functions used to generate the content of the menu panes + var make_main_content_html = function(i, node) { + var html = "
    " + + node.name + "
    "; + + if ($('#fedmenu-main-content').length == 0) { + $('body', c).append('
    '); + $('#fedmenu-main-content', c).append(""); + $('#fedmenu-main-content', c).append("

    Fedora Infrastructure Apps

    "); + } + $("#fedmenu-main-content", c).append(html); + }; + + var make_user_content_html = function(i, node) { + var html = "
    " + + node.name + "
      "; + + var found = false; + $.each(node.children, function(j, leaf) { + if (leaf.data.user_url !== undefined) { + found = true; + var url = leaf.data.user_url.replace('{user}', o.user); + html = html + + "
    • " + + $("

      " + leaf.name + "

      ").text() + + "
    • "; + } + }); + if (found) { + if ($('#fedmenu-user-content').length == 0) { + $('body', c).append('
      '); + $('#fedmenu-user-content', c).append(""); + $('#fedmenu-user-content', c).append("

      View " + o.user + " in other apps

      "); + } + html = html + "
    "; + $("#fedmenu-user-content", c).append(html); + } + }; + + var make_package_content_html = function(i, node) { + var html = "
    " + + node.name + "
      "; + + var found = false; + $.each(node.children, function(j, leaf) { + if (leaf.data.package_url !== undefined) { + found = true; + var url = leaf.data.package_url.replace('{package}', o.package); + html = html + + "
    • " + + $("

      " + leaf.name + "

      ").text() + + "
    • "; + } + }); + if (found) { + html = html + "
    "; + if ($('#fedmenu-package-content').length == 0) { + $('body', c).append('
    '); + $('#fedmenu-package-content', c).append(""); + $('#fedmenu-package-content', c).append("

    View the " + o.package + " package elsewhere

    "); + } + $("#fedmenu-package-content", c).append(html); + } + }; + + // A handy lookup for those functions we just defined. + var content_makers = { + 'main': make_main_content_html, + 'user': make_user_content_html, + 'package': make_package_content_html, + }; + + var normalize = function(url) { + return url.slice(url.indexOf('://') + 3).replace(/\/$/, ""); + } + + // Figure out the current site that we're on, if possible, and return the + // data we have on it from the json we loaded. + var get_current_site = function() { + var found = null; + var ours = normalize(window.location.toString()); + $.each(master_data, function(i, node) { + $.each(node.children, function(j, leaf) { + var theirs = normalize(leaf.data.url); + if (ours.indexOf(theirs) === 0) found = leaf; + }) + }); + return found; + } + + // Try to construct a little footer for the menus. + var add_footer_links = function() { + var site = get_current_site(); + var content = ""; + if (site != null && site.data.bugs_url != undefined && site.data.source_url != undefined) { + content = content + "Problems with " + site.name + + "? Please file bugs or check out the source."; + } + content = content + "
    Powered by fedmenu."; + $(".fedmenu-content").append("

    " + content + "

    "); + } + + $.ajax({ + url: o.url, + mimeType: o.mimeType, + dataType: 'script', + cache: true, + error: function(err) { + console.log('Error getting ' + o.url); + console.log(err); + }, + success: function(script) { + // Save this for later... + master_data = json.children; + }, + }); + + var selector = function(t) { + return "#fedmenu-wrapper," + + "#fedmenu-" + t + "-button," + + "#fedmenu-" + t + "-content"; + }; + + var activate = function(t) { + $.each(master_data, content_makers[t]); + $(".fedmenu-exit", c).click(function() {deactivate(t);}); + add_footer_links(); + setTimeout(function() {$(selector(t), c).addClass('fedmenu-active');}, 50); + }; + var deactivate = function(t) { + $(selector(t), c).removeClass('fedmenu-active'); + $('.fedmenu-content', c).remove(); // destroy content. + }; + + var click_factory = function(t) { return function() { + if ($(this).hasClass('fedmenu-active')) { + deactivate(t); + } else { + // Deactivate any others that may be active before starting anew. 
+ deactivate('main'); deactivate('user'); deactivate('package'); + activate(t); + } + };}; + $("#fedmenu-main-button", c).click(click_factory('main')); + $("#fedmenu-user-button", c).click(click_factory('user')); + $("#fedmenu-package-button", c).click(click_factory('package')); + $("#fedmenu-wrapper,.fedmenu-exit", c).click(function() { + deactivate('main'); + deactivate('user'); + deactivate('package'); + }); + $(document).keydown(function(event) { + if (event.key == 'Escape' || event.key == 'Esc'){ + deactivate('main'); + deactivate('user'); + deactivate('package'); + } + }); +}); }; diff --git a/roles/apps-fp-o/files/fedmenu/js/fedora-libravatar.js b/roles/apps-fp-o/files/fedmenu/js/fedora-libravatar.js new file mode 100644 index 0000000000..0d7b52f9ce --- /dev/null +++ b/roles/apps-fp-o/files/fedmenu/js/fedora-libravatar.js @@ -0,0 +1,148 @@ +/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -*/ +/* Libravatar retrieval for Fedora FAS usernames */ +/* (c) Ralph Bean 2015 / MIT License */ +/* Original SHA-256 implementation in JavaScript */ +/* (c) Chris Veness 2002-2014 / MIT Licence */ +/* */ +/* - see http://csrc.nist.gov/groups/ST/toolkit/secure_hashing.html */ +/* http://csrc.nist.gov/groups/ST/toolkit/examples.html */ +/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -*/ + +'use strict'; + + +var sha256 = {}; +var libravatar = {}; + +/** + * Generates SHA-256 hash of string. 
+ * + * @param {string} msg - String to be hashed + * @returns {string} Hash of msg as hex character string + */ +sha256.hash = function(msg) { + // convert string to UTF-8, as SHA only deals with byte-streams + msg = msg.utf8Encode(); + + // constants [§4.2.2] + var K = [ + 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, + 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, + 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, + 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, + 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, + 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, + 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, + 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 ]; + // initial hash value [§5.3.1] + var H = [ + 0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a, 0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19 ]; + + // PREPROCESSING + + msg += String.fromCharCode(0x80); // add trailing '1' bit (+ 0's padding) to string [§5.1.1] + + // convert string msg into 512-bit/16-integer blocks arrays of ints [§5.2.1] + var l = msg.length/4 + 2; // length (in 32-bit integers) of msg + ‘1’ + appended length + var N = Math.ceil(l/16); // number of 16-integer-blocks required to hold 'l' ints + var M = new Array(N); + + for (var i=0; i>> 32, but since JS converts + // bitwise-op args to 32 bits, we need to simulate this by arithmetic operators + M[N-1][14] = ((msg.length-1)*8) / Math.pow(2, 32); M[N-1][14] = Math.floor(M[N-1][14]); + M[N-1][15] = ((msg.length-1)*8) & 0xffffffff; + + + // HASH COMPUTATION [§6.1.2] + + var W = new Array(64); var a, b, c, d, e, f, g, h; + for (var i=0; i>> n) | (x << (32-n)); 
+}; + +sha256.BigSigma0 = function(x) { return sha256.ROTR(2, x) ^ sha256.ROTR(13, x) ^ sha256.ROTR(22, x); }; +sha256.BigSigma1 = function(x) { return sha256.ROTR(6, x) ^ sha256.ROTR(11, x) ^ sha256.ROTR(25, x); }; +sha256.SmallSigma0 = function(x) { return sha256.ROTR(7, x) ^ sha256.ROTR(18, x) ^ (x>>>3); }; +sha256.SmallSigma1 = function(x) { return sha256.ROTR(17, x) ^ sha256.ROTR(19, x) ^ (x>>>10); }; +sha256.Ch = function(x, y, z) { return (x & y) ^ (~x & z); }; +sha256.Maj = function(x, y, z) { return (x & y) ^ (x & z) ^ (y & z); }; + +sha256.toHexStr = function(n) { + // note can't use toString(16) as it is implementation-dependant, + // and in IE returns signed numbers when used on full words + var s="", v; + for (var i=7; i>=0; i--) { v = (n>>>(i*4)) & 0xf; s += v.toString(16); } + return s; +}; + + +if (typeof String.prototype.utf8Encode == 'undefined') { + String.prototype.utf8Encode = function() { + return unescape( encodeURIComponent( this ) ); + }; +} +if (typeof String.prototype.utf8Decode == 'undefined') { + String.prototype.utf8Decode = function() { + try { + return decodeURIComponent( escape( this ) ); + } catch (e) { + return this; // invalid UTF-8? return as-is + } + }; +} + +/* This is all that we need the sha256 code for... */ +libravatar.url = function(username, s, d) { + if (s === undefined) s = 64; + if (d === undefined) d = 'retro'; + + var openid = 'http://' + username + '.id.fedoraproject.org/' + var hash = sha256.hash(openid); + + return 'https://seccdn.libravatar.org/avatar/' + hash + '?s=' + s + '&d=' + d; +} diff --git a/roles/apps-fp-o/files/fedmenu/js/jquery-1.11.2.min.js b/roles/apps-fp-o/files/fedmenu/js/jquery-1.11.2.min.js new file mode 100644 index 0000000000..e6a051d0d1 --- /dev/null +++ b/roles/apps-fp-o/files/fedmenu/js/jquery-1.11.2.min.js @@ -0,0 +1,4 @@ +/*! jQuery v1.11.2 | (c) 2005, 2014 jQuery Foundation, Inc. 
| jquery.org/license */ +!function(a,b){"object"==typeof module&&"object"==typeof module.exports?module.exports=a.document?b(a,!0):function(a){if(!a.document)throw new Error("jQuery requires a window with a document");return b(a)}:b(a)}("undefined"!=typeof window?window:this,function(a,b){var c=[],d=c.slice,e=c.concat,f=c.push,g=c.indexOf,h={},i=h.toString,j=h.hasOwnProperty,k={},l="1.11.2",m=function(a,b){return new m.fn.init(a,b)},n=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,o=/^-ms-/,p=/-([\da-z])/gi,q=function(a,b){return b.toUpperCase()};m.fn=m.prototype={jquery:l,constructor:m,selector:"",length:0,toArray:function(){return d.call(this)},get:function(a){return null!=a?0>a?this[a+this.length]:this[a]:d.call(this)},pushStack:function(a){var b=m.merge(this.constructor(),a);return b.prevObject=this,b.context=this.context,b},each:function(a,b){return m.each(this,a,b)},map:function(a){return this.pushStack(m.map(this,function(b,c){return a.call(b,c,b)}))},slice:function(){return this.pushStack(d.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(a){var b=this.length,c=+a+(0>a?b:0);return this.pushStack(c>=0&&b>c?[this[c]]:[])},end:function(){return this.prevObject||this.constructor(null)},push:f,sort:c.sort,splice:c.splice},m.extend=m.fn.extend=function(){var a,b,c,d,e,f,g=arguments[0]||{},h=1,i=arguments.length,j=!1;for("boolean"==typeof g&&(j=g,g=arguments[h]||{},h++),"object"==typeof g||m.isFunction(g)||(g={}),h===i&&(g=this,h--);i>h;h++)if(null!=(e=arguments[h]))for(d in e)a=g[d],c=e[d],g!==c&&(j&&c&&(m.isPlainObject(c)||(b=m.isArray(c)))?(b?(b=!1,f=a&&m.isArray(a)?a:[]):f=a&&m.isPlainObject(a)?a:{},g[d]=m.extend(j,f,c)):void 0!==c&&(g[d]=c));return g},m.extend({expando:"jQuery"+(l+Math.random()).replace(/\D/g,""),isReady:!0,error:function(a){throw new 
d,e,f,g,h,i=m.contains(a.ownerDocument,a);if(k.html5Clone||m.isXMLDoc(a)||!gb.test("<"+a.nodeName+">")?f=a.cloneNode(!0):(tb.innerHTML=a.outerHTML,tb.removeChild(f=tb.firstChild)),!(k.noCloneEvent&&k.noCloneChecked||1!==a.nodeType&&11!==a.nodeType||m.isXMLDoc(a)))for(d=ub(f),h=ub(a),g=0;null!=(e=h[g]);++g)d[g]&&Bb(e,d[g]);if(b)if(c)for(h=h||ub(a),d=d||ub(f),g=0;null!=(e=h[g]);g++)Ab(e,d[g]);else Ab(a,f);return d=ub(f,"script"),d.length>0&&zb(d,!i&&ub(a,"script")),d=h=e=null,f},buildFragment:function(a,b,c,d){for(var e,f,g,h,i,j,l,n=a.length,o=db(b),p=[],q=0;n>q;q++)if(f=a[q],f||0===f)if("object"===m.type(f))m.merge(p,f.nodeType?[f]:f);else if(lb.test(f)){h=h||o.appendChild(b.createElement("div")),i=(jb.exec(f)||["",""])[1].toLowerCase(),l=rb[i]||rb._default,h.innerHTML=l[1]+f.replace(ib,"<$1>")+l[2],e=l[0];while(e--)h=h.lastChild;if(!k.leadingWhitespace&&hb.test(f)&&p.push(b.createTextNode(hb.exec(f)[0])),!k.tbody){f="table"!==i||kb.test(f)?""!==l[1]||kb.test(f)?0:h:h.firstChild,e=f&&f.childNodes.length;while(e--)m.nodeName(j=f.childNodes[e],"tbody")&&!j.childNodes.length&&f.removeChild(j)}m.merge(p,h.childNodes),h.textContent="";while(h.firstChild)h.removeChild(h.firstChild);h=o.lastChild}else p.push(b.createTextNode(f));h&&o.removeChild(h),k.appendChecked||m.grep(ub(p,"input"),vb),q=0;while(f=p[q++])if((!d||-1===m.inArray(f,d))&&(g=m.contains(f.ownerDocument,f),h=ub(o.appendChild(f),"script"),g&&zb(h),c)){e=0;while(f=h[e++])ob.test(f.type||"")&&c.push(f)}return h=null,o},cleanData:function(a,b){for(var d,e,f,g,h=0,i=m.expando,j=m.cache,l=k.deleteExpando,n=m.event.special;null!=(d=a[h]);h++)if((b||m.acceptData(d))&&(f=d[i],g=f&&j[f])){if(g.events)for(e in g.events)n[e]?m.event.remove(d,e):m.removeEvent(d,e,g.handle);j[f]&&(delete j[f],l?delete d[i]:typeof d.removeAttribute!==K?d.removeAttribute(i):d[i]=null,c.push(f))}}}),m.fn.extend({text:function(a){return V(this,function(a){return void 
0===a?m.text(this):this.empty().append((this[0]&&this[0].ownerDocument||y).createTextNode(a))},null,a,arguments.length)},append:function(){return this.domManip(arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=wb(this,a);b.appendChild(a)}})},prepend:function(){return this.domManip(arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=wb(this,a);b.insertBefore(a,b.firstChild)}})},before:function(){return this.domManip(arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this)})},after:function(){return this.domManip(arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this.nextSibling)})},remove:function(a,b){for(var c,d=a?m.filter(a,this):this,e=0;null!=(c=d[e]);e++)b||1!==c.nodeType||m.cleanData(ub(c)),c.parentNode&&(b&&m.contains(c.ownerDocument,c)&&zb(ub(c,"script")),c.parentNode.removeChild(c));return this},empty:function(){for(var a,b=0;null!=(a=this[b]);b++){1===a.nodeType&&m.cleanData(ub(a,!1));while(a.firstChild)a.removeChild(a.firstChild);a.options&&m.nodeName(a,"select")&&(a.options.length=0)}return this},clone:function(a,b){return a=null==a?!1:a,b=null==b?a:b,this.map(function(){return m.clone(this,a,b)})},html:function(a){return V(this,function(a){var b=this[0]||{},c=0,d=this.length;if(void 0===a)return 1===b.nodeType?b.innerHTML.replace(fb,""):void 0;if(!("string"!=typeof a||mb.test(a)||!k.htmlSerialize&&gb.test(a)||!k.leadingWhitespace&&hb.test(a)||rb[(jb.exec(a)||["",""])[1].toLowerCase()])){a=a.replace(ib,"<$1>");try{for(;d>c;c++)b=this[c]||{},1===b.nodeType&&(m.cleanData(ub(b,!1)),b.innerHTML=a);b=0}catch(e){}}b&&this.empty().append(a)},null,a,arguments.length)},replaceWith:function(){var a=arguments[0];return this.domManip(arguments,function(b){a=this.parentNode,m.cleanData(ub(this)),a&&a.replaceChild(b,this)}),a&&(a.length||a.nodeType)?this:this.remove()},detach:function(a){return 
this.remove(a,!0)},domManip:function(a,b){a=e.apply([],a);var c,d,f,g,h,i,j=0,l=this.length,n=this,o=l-1,p=a[0],q=m.isFunction(p);if(q||l>1&&"string"==typeof p&&!k.checkClone&&nb.test(p))return this.each(function(c){var d=n.eq(c);q&&(a[0]=p.call(this,c,d.html())),d.domManip(a,b)});if(l&&(i=m.buildFragment(a,this[0].ownerDocument,!1,this),c=i.firstChild,1===i.childNodes.length&&(i=c),c)){for(g=m.map(ub(i,"script"),xb),f=g.length;l>j;j++)d=i,j!==o&&(d=m.clone(d,!0,!0),f&&m.merge(g,ub(d,"script"))),b.call(this[j],d,j);if(f)for(h=g[g.length-1].ownerDocument,m.map(g,yb),j=0;f>j;j++)d=g[j],ob.test(d.type||"")&&!m._data(d,"globalEval")&&m.contains(h,d)&&(d.src?m._evalUrl&&m._evalUrl(d.src):m.globalEval((d.text||d.textContent||d.innerHTML||"").replace(qb,"")));i=c=null}return this}}),m.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){m.fn[a]=function(a){for(var c,d=0,e=[],g=m(a),h=g.length-1;h>=d;d++)c=d===h?this:this.clone(!0),m(g[d])[b](c),f.apply(e,c.get());return this.pushStack(e)}});var Cb,Db={};function Eb(b,c){var d,e=m(c.createElement(b)).appendTo(c.body),f=a.getDefaultComputedStyle&&(d=a.getDefaultComputedStyle(e[0]))?d.display:m.css(e[0],"display");return e.detach(),f}function Fb(a){var b=y,c=Db[a];return c||(c=Eb(a,b),"none"!==c&&c||(Cb=(Cb||m("