Merge branch 'master' of /git/ansible

Conflicts:
    roles/rsyncd/files/rsyncd.conf.download-ibiblio
    roles/rsyncd/files/rsyncd.conf.download-phx2
    roles/rsyncd/files/rsyncd.conf.download-rdu

commit 8c4986ce16
1452 changed files with 75505 additions and 6620 deletions

README (230 lines changed)
@@ -15,6 +15,12 @@ library - library of custom local ansible modules

playbooks - collections of plays we want to run on systems

    groups: groups of hosts configured from one playbook.

    hosts: playbooks for single hosts.

    manual: playbooks that are only run manually by an admin as needed.

tasks - snippets of tasks that should be included in plays

roles - specific roles to be used in playbooks.

@@ -22,6 +28,13 @@ roles - specific roles to be used in playbooks.

filter_plugins - Jinja filters

master.yml - This is the master playbook, consisting of all
             current group and host playbooks. Note that the
             daily cron doesn't run this; it runs even over
             playbooks that are not yet included in master.
             This playbook is useful for making changes over
             multiple groups/hosts, usually with -t (tag).

== Paths ==

public path for everything is:
@@ -36,212 +49,23 @@ In general to run any ansible playbook you will want to run:

 sudo -i ansible-playbook /path/to/playbook.yml

-== Cloud information ==
+== Scheduled check-diff ==

-cloud instances:
-to start up a new cloud instance and configure for basic server use run (as
-root):
+Every night a cron job runs over all playbooks under playbooks/{groups}{hosts}
+with the ansible --check --diff options. A report from this is sent to
+sysadmin-logs. In the ideal state this report would be empty.

-el6:
-sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/el6_temp_instance.yml
+== Idempotency ==

-f19:
-sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/f19_temp_instance.yml
+All playbooks should be idempotent. I.e., if run once they should bring the
+machine(s) to the desired state, and if run again N times after that they should
+make 0 changes (because the machine(s) are in the desired state).
+Please make sure your playbooks are idempotent.

+== Can be run anytime ==

-The -i is important - ansible's tools need access to root's sshagent as well
-as the cloud credentials to run the above playbooks successfully.
-
-This will set up a new instance, provision it and email sysadmin-main that
-the instance was created, its instance id (for terminating it, attaching
-volumes, etc) and its ip address.
-
-You will then be able to login, as root.
-
-You can add various extra vars to the above commands to change the instance
-you've just spun up.
-
-variables to define:
-    instance_type=c1.medium
-    security_group=default
-    root_auth_users='username1 username2 @groupname'
-    hostbase=basename for hostname - will have instance id appended to it
-
-define these with:
-
-    --extra-vars="varname=value varname1=value varname2=value"
-
-    Name       Memory_MB  Disk  VCPUs
-    m1.tiny    512        0     1
-    m1.small   2048       20    1
-    m1.medium  4096       40    2
-    m1.large   8192       80    4
-    m1.xlarge  16384      160   8
-    m1.builder 5120       50    3
-
-Setting up a new persistent cloud host:
-1. select an ip:
-   source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-   euca-describe-addresses
-   - pick an ip from the list that is not assigned anywhere
-   - add it into dns - normally in the cloud.fedoraproject.org domain, but it
-     doesn't have to be
-
-2. If needed create a persistent storage disk for the instance:
-   source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-   euca-create-volume -z nova -s <size in gigabytes>
-
-3. set up the host/ip in ansible host inventory
-   - add to ansible/inventory/inventory under [persistent-cloud]
-   - either the ip itself or the hostname you want to refer to it as
-
-4. setup the host_vars
-   - create file named by the hostname or ip you used in the inventory
-   - for adding persistent volumes add an entry like this into the host_vars file
-
-     volumes: ['-d /dev/vdb vol-BCA33FCD', '-d /dev/vdc vol-DC833F48']
-
-     for each volume you want to attach to the instance.
-
-     The device names matter - they start at /dev/vdb and increment. However,
-     they are not reliable IN the instance. You should find the device,
-     partition it, format it and label the formatted device, then mount the
-     device by label or by UUID. Do not count on the device name being the
-     same each time.
-
-   Contents should look like this (remove all the comments)
-
-   ---
-   # 2cpus, 3GB of ram, 20GB of ephemeral space
-   instance_type: m1.large
-   # image id - see global vars. You can also use euca-describe-images to find other images as well
-   image: "{{ el6_qcow_id }}"
-   keypair: fedora-admin-20130801
-   # what security group to add the host to
-   security_group: webserver
-   zone: fedoracloud
-   # instance id will be appended
-   hostbase: hostname_base-
-   # ip should be in the 209.132.184.XXX range
-   public_ip: $ip_you_selected
-   # users/groups who should have root ssh access
-   root_auth_users: skvidal bkabrda
-   description: some description so someone else can know what this is
-
-   The available images can be found by running:
-   source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-   euca-describe-images | grep ami
-
-5. setup a host playbook ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
-   Note: the name of this file doesn't really matter but it should normally
-   be the hostname of the host you're setting up.
-
-   - name: check/create instance
-     hosts: $YOUR_HOSTNAME/IP HERE
-     user: root
-     gather_facts: False
-
-     vars_files:
-     - /srv/web/infra/ansible/vars/global.yml
-     - "{{ private }}/vars.yml"
-
-     tasks:
-     - include: "{{ tasks }}/persistent_cloud.yml"
-
-   - name: provision instance
-     hosts: $YOUR_HOSTNAME/IP HERE
-     user: root
-     gather_facts: True
-
-     vars_files:
-     - /srv/web/infra/ansible/vars/global.yml
-     - "{{ private }}/vars.yml"
-     - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
-
-     tasks:
-     - include: "{{ tasks }}/cloud_setup_basic.yml"
-     # fill in other actions/includes/etc here
-
-     handlers:
-     - include: "{{ handlers }}/restart_services.yml"
-
-6. add/commit the above to the git repo and push your changes
-
-7. set it up:
-   sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
-
-8. login, etc
-
-You should be able to run that playbook over and over again safely, it will
-only setup/create a new instance if the ip is not up/responding.
-
-SECURITY GROUPS
-- to edit security groups you must either have your own cloud account or
-  be a member of sysadmin-main
-
-This gives you the credentials to change things in the persistent tenant:
-- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-
-This lists all security groups in that tenant:
-- euca-describe-groups | grep GROUP
-
-the output will look like this:
-euca-describe-groups | grep GROUP
-GROUP   d4e664a10e2c4210839150be09c46e5e   default     default
-GROUP   d4e664a10e2c4210839150be09c46e5e   jenkins     jenkins instance group
-GROUP   d4e664a10e2c4210839150be09c46e5e   logstash    logstash security group
-GROUP   d4e664a10e2c4210839150be09c46e5e   smtpserver  list server group. needs web and smtp
-GROUP   d4e664a10e2c4210839150be09c46e5e   webserver   webserver security group
-GROUP   d4e664a10e2c4210839150be09c46e5e   wideopen    wideopen
-
-This lets you list the rules in a specific group:
-- euca-describe-group groupname
-
-the output will look like this:
-
-euca-describe-group wideopen
-GROUP        d4e664a10e2c4210839150be09c46e5e   wideopen   wideopen
-PERMISSION   d4e664a10e2c4210839150be09c46e5e   wideopen   ALLOWS   tcp    1    65535   FROM   CIDR   0.0.0.0/0
-PERMISSION   d4e664a10e2c4210839150be09c46e5e   wideopen   ALLOWS   icmp   -1   -1      FROM   CIDR   0.0.0.0/0
-
-To create a new group:
-euca-create-group -d "group description here" groupname
-
-To add a rule to a group:
-euca-authorize -P tcp -p 22 groupname
-euca-authorize -P icmp -t -1:-1 groupname
-
-To delete a rule from a group:
-euca-revoke -P tcp -p 22 groupname
-
-Notes:
-- Be careful removing or adding rules to existing groups b/c you could be
-  impacting other instances using that security group.
-
-- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
-  and traceroute and friends).
-
-TERMINATING INSTANCES
-
-For transient:
-1. source /srv/private/ansible/files/openstack/transient-admin/ec2rc.sh
-
-- OR -
-
-For persistent:
-1. source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-
-2. euca-describe-instances | grep <ip of your instance>
-
-3. euca-terminate-instances <the id, something like i-00000295>
+When a playbook or change is checked into ansible you should assume
+that it could be run at ANY TIME. Always make sure the checked in state
+is the desired state. Always test changes when they land so they don't
+surprise you later.
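
The idempotency rule added above is easiest to meet by declaring desired state
with modules rather than running raw commands. A minimal sketch of such a play
(package and service names here are illustrative assumptions, not from this
repo):

    - name: example of an idempotent play (hypothetical names)
      hosts: webservers
      tasks:
      # state=present / state=started describe the desired end state, so
      # re-running the play reports 0 changes once the machine is converged
      - yum: name=httpd state=present
      - service: name=httpd state=started enabled=yes

A play built from shell/command tasks would instead report a change on every
run and show up in the nightly check-diff report.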

README.cloud (new file, 181 lines)

@@ -0,0 +1,181 @@
== Cloud information ==

The dashboard for the production cloud instance is:
https://fedorainfracloud.org/dashboard/

You can download credentials via the dashboard (under security and access)

=== Transient instances ===

Transient instances are short term use instances for Fedora
contributors. They can be terminated at any time and shouldn't be
relied on for any production use. If you have an application
or longer term item that should always be around
please create a persistent playbook instead. (see below)

To start up a new transient cloud instance and configure for basic
server use run (as root):

    sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=somename'

The -i is important - ansible's tools need access to root's sshagent as well
as the cloud credentials to run the above playbooks successfully.

This will set up a new instance, provision it and email sysadmin-main that
the instance was created and its ip address.

You will then be able to login, as root if you are in the sysadmin-main group.
(If you are making the instance for another user, see below)

You MUST pass a name to it, ie: -e 'name=somethingdescriptive'
You can optionally override defaults by passing any of the following:

    image=imagename (default is centos70_x86_64)
    instance_type=some instance type (default is m1.small)
    root_auth_users='user1 user2 user3 @group1' (default always includes sysadmin-main group)

Note: if you run this playbook with the same name= multiple times
openstack is smart enough to just return the current ip of that instance
and go on. This way you can re-run if you want to reconfigure it without
reprovisioning it.


Size options
------------

    Name       Memory_MB  Disk  VCPUs
    m1.tiny    512        0     1
    m1.small   2048       20    1
    m1.medium  4096       40    2
    m1.large   8192       80    4
    m1.xlarge  16384      160   8
    m1.builder 5120       50    3


=== Persistent cloud instances ===

Persistent cloud instances are ones that we want to always have up and
configured. These are things like dev instances for various applications,
proof of concept servers for evaluating something, etc. They will be
reprovisioned after a reboot/maint window for the cloud.

Setting up a new persistent cloud host:

1) Select an available floating IP

    source /srv/private/ansible/files/openstack/novarc
    nova floating-ip-list

   Note that an "available floating IP" is one that has only a "-" in the
   Fixed IP column of the above `nova` command. Ignore the fact that the
   "Server Id" column is completely blank for all instances. If there are no
   IPs with "-", use:

    nova floating-ip-create

   and retry the list.

2) Add that IP addr to dns (typically as foo.fedorainfracloud.org)

3) Create persistent storage disk for the instance (if necessary.. you might
   not need this).

    nova volume-create --display-name SOME_NAME SIZE_IN_GB

4) Add to ansible inventory in the persistent-cloud group.
   You should use the FQDN for this and not the IP. Names are good.

5) Setup the host_vars file. It should look something like this::

    instance_type: m1.medium
    image:
    keypair: fedora-admin-20130801
    security_group: default  # NOTE: security_group MUST contain default.
    zone: nova
    tcp_ports: [22, 80, 443]

    inventory_tenant: persistent
    inventory_instance_name: taiga
    hostbase: taiga
    public_ip: 209.132.184.50
    root_auth_users: ralph maxamillion
    description: taiga frontend server

    volumes:
      - volume_id: VOLUME_UUID_GOES_HERE
        device: /dev/vdc

    cloud_networks:
      # persistent-net
      - net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"

6) Setup the host playbook

7) Run the playbook:

    sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml

You should be able to run that playbook over and over again safely, it will
only setup/create a new instance if the ip is not up/responding.

=== SECURITY GROUPS ===

FIXME: needs work for new cloud.

- to edit security groups you must either have your own cloud account or
  be a member of sysadmin-main

This gives you the credentials to change things in the persistent tenant:
- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh

This lists all security groups in that tenant:
- euca-describe-groups | grep GROUP

the output will look like this:
euca-describe-groups | grep GROUP
GROUP   d4e664a10e2c4210839150be09c46e5e   default     default
GROUP   d4e664a10e2c4210839150be09c46e5e   jenkins     jenkins instance group
GROUP   d4e664a10e2c4210839150be09c46e5e   logstash    logstash security group
GROUP   d4e664a10e2c4210839150be09c46e5e   smtpserver  list server group. needs web and smtp
GROUP   d4e664a10e2c4210839150be09c46e5e   webserver   webserver security group
GROUP   d4e664a10e2c4210839150be09c46e5e   wideopen    wideopen

This lets you list the rules in a specific group:
- euca-describe-group groupname

the output will look like this:

euca-describe-group wideopen
GROUP        d4e664a10e2c4210839150be09c46e5e   wideopen   wideopen
PERMISSION   d4e664a10e2c4210839150be09c46e5e   wideopen   ALLOWS   tcp    1    65535   FROM   CIDR   0.0.0.0/0
PERMISSION   d4e664a10e2c4210839150be09c46e5e   wideopen   ALLOWS   icmp   -1   -1      FROM   CIDR   0.0.0.0/0

To create a new group:
euca-create-group -d "group description here" groupname

To add a rule to a group:
euca-authorize -P tcp -p 22 groupname
euca-authorize -P icmp -t -1:-1 groupname

To delete a rule from a group:
euca-revoke -P tcp -p 22 groupname

Notes:
- Be careful removing or adding rules to existing groups b/c you could be
  impacting other instances using that security group.

- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
  and traceroute and friends).

=== TERMINATING INSTANCES ===

For transient:
1. source /srv/private/ansible/files/openstack/transient-admin/keystonerc.sh

- OR -

For persistent:
1. source /srv/private/ansible/files/openstack/novarc

2. nova list | grep <ip of your instance or name of your instance>

3. nova delete <name of instance or ID of instance>
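
For a volume attached via the volumes: entry in step 5, the device name seen
inside the guest can change between boots, so mounting by filesystem label (set
once at mkfs time, e.g. mkfs.ext4 -L data /dev/vdc) is safer than hardcoding
the device. A hypothetical /etc/fstab entry along those lines (label and mount
point are illustrative assumptions):

    # mount the attached volume by its filesystem label, not its device name
    LABEL=data  /srv/data  ext4  defaults  0 0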
@@ -3,7 +3,14 @@

     AllowOverride All

-    Order allow,deny
-    Allow from all
+    <IfModule mod_authz_core.c>
+      # Apache 2.4
+      Require all granted
+    </IfModule>
+    <IfModule !mod_authz_core.c>
+      # Apache 2.2
+      Order deny,allow
+      Allow from all
+    </IfModule>

 </Directory>
@@ -8,7 +8,7 @@
 RSYNC='/usr/bin/rsync'
 RS_OPT="-avSHP --numeric-ids"
 RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
-ALT_EXCLUDES="--exclude deltaisos/archive --exclude 21_Alpha* --exclude 21-Alpha* --exclude 21_Beta* --exclude=F21a-TC1"
+ALT_EXCLUDES="--exclude deltaisos/archive --exclude 22_Alpha* --exclude 22_Beta*"
 EPL_EXCLUDES=""
 FED_EXCLUDES=""
@@ -1 +1,2 @@
 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZAnj+U9s4Xn36DpQYOfAhW2Q1ZQqkKASvG3rJNsOCpRlvWGcmxrvjUr5mQ3ZMapEu0IaaQUq40JvP8iqJ1HIq4C8UXLBq9SFEfeNYh5qRqEpEn5CRcjrJPwFf6jLpr3bN+F98Vo3E/FMgJ3MzBsynZoT+A6d02oitoxV6DomDB7gXU08Pfz7oQYXBzAVe3+BP4IaeUWbjHDv57LGBa/Xfw5SKrgk+/IKXIGk2Rkxn7sShtHzkpkI4waNl4gqUzwsJ/Y+FJxpI1DvWxHuzlx1uOLupxYA9p+ejJo5sXGZtO2Ynx2NFEjIzqmBljaiy+wmDYvZz2JdIFwSAjPbaFjtF root@fed-cloud09.cloud.fedoraproject.org
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCv8WqXOuL78Rd7ZvDqoi84M7uRV3uueXTXtvlPdyNQBzIBmxh+spw9IhtoR+FlzgQQ1MN4B7YVLTGki6QDxWDM5jgTVfzxTh/HTg7kJ31HbM1/jDuBK7HMfay2BGx/HCqS2oxIBgIBwIMQAU93jBZUxNyYWvO+5TiU35IHEkYOtHyGYtTtuGCopYRQoAAOIVIIzzDbPvopojCBF5cMYglR/G02YgWM7hMpQ9IqEttLctLmpg6ckcp/sDTHV/8CbXbrSN6pOYxn1YutOgC9MHNmxC1joMH18qkwvSnzXaeVNh4PBWnm1f3KVTSZXKuewPThc3fk2sozgM9BH6KmZoKl
|
@ -35,6 +35,10 @@ global
|
|||
# turn on stats unix socket
|
||||
stats socket /var/lib/haproxy/stats
|
||||
|
||||
tune.ssl.default-dh-param 1024
|
||||
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
|
||||
|
||||
|
||||
#---------------------------------------------------------------------
|
||||
# common defaults that all the 'listen' and 'backend' sections will
|
||||
# use if not designated in their block
|
||||
|
@@ -62,32 +66,46 @@ defaults
 #frontend keystone_admin *:35357
 #  default_backend keystone_admin
 frontend neutron
-    bind 0.0.0.0:9696 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:9696 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend neutron
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 frontend cinder
-    bind 0.0.0.0:8776 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:8776 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend cinder
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 frontend swift
-    bind 0.0.0.0:8080 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:8080 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend swift
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 frontend nova
-    bind 0.0.0.0:8774 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:8774 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend nova
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 frontend ceilometer
-    bind 0.0.0.0:8777 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:8777 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend ceilometer
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 frontend ec2
-    bind 0.0.0.0:8773 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:8773 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend ec2
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 frontend glance
-    bind 0.0.0.0:9292 ssl crt /etc/haproxy/fed-cloud09.combined
+    bind 0.0.0.0:9292 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
     default_backend glance
+    # HSTS (15768000 seconds = 6 months)
+    rspadd Strict-Transport-Security:\ max-age=15768000

 backend neutron
     server neutron 127.0.0.1:8696 check
@@ -21,4 +21,4 @@
 209.132.181.6 infrastructure infrastructure.fedoraproject.org
 209.132.181.32 fas-all.phx2.fedoraproject.org

-{{ controller_private_ip }} fed-cloud09.cloud.fedoraproject.org
+{{ controller_private_ip }} fed-cloud09.cloud.fedoraproject.org fedorainfracloud.org
@@ -96,11 +96,11 @@ CONFIG_AMQP_SSL_PORT=5671

 # The filename of the certificate that the AMQP service is going to
 # use
-CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/fed-cloud09.pem
+CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/fedorainfracloud.org.pem

 # The filename of the private key that the AMQP service is going to
 # use
-CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/fed-cloud09.key
+CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/fedorainfracloud.org.key

 # Auto Generates self signed SSL certificate and key
 CONFIG_AMQP_SSL_SELF_SIGNED=n
@@ -198,10 +198,10 @@ CONFIG_NOVA_COMPUTE_PRIVIF=lo
 CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

 # Public interface on the Nova network server
-CONFIG_NOVA_NETWORK_PUBIF={{ controller_public_ip }}
+CONFIG_NOVA_NETWORK_PUBIF=eth0

 # Private interface for network manager on the Nova network server
-CONFIG_NOVA_NETWORK_PRIVIF=lo
+CONFIG_NOVA_NETWORK_PRIVIF=eth1

 # IP Range for network manager
 CONFIG_NOVA_NETWORK_FIXEDRANGE={{ internal_interface_cidr }}
@@ -214,7 +214,7 @@ CONFIG_NOVA_NETWORK_FLOATRANGE={{ public_interface_cidr }}
 CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=external

 # Automatically assign a floating IP to new instances
-CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=y
+CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

 # First VLAN for private networks
 CONFIG_NOVA_NETWORK_VLAN_START=100
@@ -258,6 +258,16 @@ CONFIG_NEUTRON_L2_PLUGIN=ml2
 # metadata agent
 CONFIG_NEUTRON_METADATA_PW={{ NEUTRON_PASS }}

+# Set to 'y' if you would like Packstack to install Neutron LBaaS
+CONFIG_LBAAS_INSTALL=y
+
+# Set to 'y' if you would like Packstack to install Neutron L3
+# Metering agent
+CONFIG_NEUTRON_METERING_AGENT_INSTALL=y
+
+# Whether to configure neutron Firewall as a Service
+CONFIG_NEUTRON_FWAAS=y
+
 # A comma separated list of network type driver entrypoints to be
 # loaded from the neutron.ml2.type_drivers namespace.
 CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local,flat,gre
@@ -350,14 +360,14 @@ CONFIG_HORIZON_SSL=y
 # PEM encoded certificate to be used for ssl on the https server,
 # leave blank if one should be generated, this certificate should not
 # require a passphrase
-CONFIG_SSL_CERT=/etc/pki/tls/certs/fed-cloud09.pem
+CONFIG_SSL_CERT=/etc/pki/tls/certs/fedorainfracloud.org.pem

 # PEM encoded CA certificates from which the certificate chain of the
 # server certificate can be assembled.
-CONFIG_SSL_CACHAIN=/etc/pki/tls/certs/fed-cloud09.pem
+CONFIG_SSL_CACHAIN=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem

 # Keyfile corresponding to the certificate if one was entered
-CONFIG_SSL_KEY=/etc/pki/tls/private/fed-cloud09.key
+CONFIG_SSL_KEY=/etc/pki/tls/private/fedorainfracloud.key

 # The password to use for the Swift to authenticate with Keystone
 CONFIG_SWIFT_KS_PW={{ SWIFT_PASS }}
@@ -443,7 +453,7 @@ CONFIG_CEILOMETER_SECRET={{ CEILOMETER_SECRET }}
 CONFIG_CEILOMETER_KS_PW={{ CEILOMETER_PASS }}

 # The IP address of the server on which to install mongodb
-CONFIG_MONGODB_HOST={{ controller_public_ip }}
+CONFIG_MONGODB_HOST=127.0.0.1

 # The password of the nagiosadmin user on the Nagios server
 CONFIG_NAGIOS_PW=
files/httpd/newvirtualhost.conf.j2 (new file, 75 lines)

@@ -0,0 +1,75 @@
<VirtualHost *:443>
    # Change this to the domain which points to your host.
    ServerName {{ item.name }}

    # Use separate log files for the SSL virtual host; note that LogLevel
    # is not inherited from httpd.conf.
    ErrorLog logs/{{ item.name }}_error_log
    TransferLog logs/{{ item.name }}_access_log
    LogLevel warn

    # SSL Engine Switch:
    # Enable/Disable SSL for this virtual host.
    SSLEngine on

    # SSL Protocol support:
    # List the enabled protocol levels with which clients will be able to
    # connect. Disable SSLv2 access by default:
    SSLProtocol all -SSLv2

    # SSL Cipher Suite:
    # List the ciphers that the client is permitted to negotiate.
    # See the mod_ssl documentation for a complete list.
    #SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5

    # Server Certificate:
    # Point SSLCertificateFile at a PEM encoded certificate. If
    # the certificate is encrypted, then you will be prompted for a
    # pass phrase. Note that a kill -HUP will prompt again. A new
    # certificate can be generated using the genkey(1) command.
    SSLCertificateFile /etc/pki/tls/certs/{{ sslcertfile }}

    # Server Private Key:
    # If the key is not combined with the certificate, use this
    # directive to point at the key file. Keep in mind that if
    # you've both a RSA and a DSA private key you can configure
    # both in parallel (to also allow the use of DSA ciphers, etc.)
    SSLCertificateKeyFile /etc/pki/tls/private/{{ sslkeyfile }}

    # Server Certificate Chain:
    # Point SSLCertificateChainFile at a file containing the
    # concatenation of PEM encoded CA certificates which form the
    # certificate chain for the server certificate. Alternatively
    # the referenced file can be the same as SSLCertificateFile
    # when the CA certificates are directly appended to the server
    # certificate for convenience.
    #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt
{% if sslintermediatecertfile != '' %}
    SSLCertificateChainFile /etc/pki/tls/certs/{{ sslintermediatecertfile }}
{% endif %}

    # Certificate Authority (CA):
    # Set the CA certificate verification path where to find CA
    # certificates for client authentication or alternatively one
    # huge file containing all of them (file must be PEM encoded)
    #SSLCACertificateFile /etc/pki/tls/certs/ca-bundle.crt

    DocumentRoot {{ item.document_root }}

    Options Indexes FollowSymLinks

</VirtualHost>


<VirtualHost *:80>
    # Change this to the domain which points to your host.
    ServerName {{ item.name }}
{% if sslonly %}
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [NE]
{% else %}
    Options Indexes FollowSymLinks
{% endif %}
</VirtualHost>
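
This template expects item.name, item.document_root, sslcertfile, sslkeyfile,
sslintermediatecertfile and sslonly to come from the calling play. A minimal
sketch of a task that could render it per-site (task name, vhost name and
document root are illustrative assumptions, not taken from this repo):

    - name: deploy virtualhost config (hypothetical usage)
      template: src=newvirtualhost.conf.j2 dest=/etc/httpd/conf.d/{{ item.name }}.conf
      with_items:
      - { name: 'app.fedorainfracloud.org', document_root: '/var/www/app' }
      notify:
      - reload httpd

Each item renders one vhost file, which is why the ServerName and log paths in
the template key off item.name.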
@@ -46,6 +46,22 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
     <label></label>
     <nodeProperties/>
   </slave>
+  <slave>
+    <name>Fedora22</name>
+    <description></description>
+    <remoteFS>/mnt/jenkins/</remoteFS>
+    <numExecutors>2</numExecutors>
+    <mode>NORMAL</mode>
+    <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
+    <launcher class="hudson.plugins.sshslaves.SSHLauncher"
+              plugin="ssh-slaves@0.21">
+      <host>jenkins-f22.fedorainfracloud.org</host>
+      <port>22</port>
+      <credentialsId>950d5dd7-acb2-402a-8670-21f152d04928</credentialsId>
+    </launcher>
+    <label></label>
+    <nodeProperties/>
+  </slave>
   <slave>
     <name>Fedora20</name>
     <description></description>

@@ -63,7 +79,7 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
     <nodeProperties/>
   </slave>
   <slave>
-    <name>EL7-beta</name>
+    <name>EL7</name>
     <description></description>
     <remoteFS>/mnt/jenkins/</remoteFS>
     <numExecutors>2</numExecutors>

@@ -71,7 +87,7 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
     <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
     <launcher class="hudson.plugins.sshslaves.SSHLauncher"
              plugin="ssh-slaves@0.21">
-      <host>172.16.5.14</host>
+      <host>172.16.5.27</host>
      <port>22</port>
      <credentialsId>950d5dd7-acb2-402a-8670-21f152d04928</credentialsId>
    </launcher>
@@ -1,58 +0,0 @@
# This is a config file for Koschei that can override values in default
# configuration in /usr/share/koschei/config.cfg. It is a python file expecting
# assignment to config dictionary which will be recursively merged with the
# default one.
config = {
    "database_config": {
        "username": "koschei",
        "password": "{{ koschei_pgsql_password }}",
        "database": "koschei"
    },
    "koji_config": {
        "cert": "/etc/koschei/koschei.pem",
        "ca": "/etc/koschei/fedora-ca.cert",
        "server_ca": "/etc/koschei/fedora-ca.cert",
    },
    "flask": {
        "SECRET_KEY": "{{ koschei_flask_secret_key }}",
    },
    "logging": {
        "loggers": {
            "": {
                "level": "DEBUG",
                "handlers": ["stderr", "email"],
            },
        },
        "handlers": {
            "email": {
                "class": "logging.handlers.SMTPHandler",
                "level": "WARN",
                "mailhost": "localhost",
                "fromaddr": "koschei@fedoraproject.org",
                "toaddrs": ['msimacek@redhat.com', 'mizdebsk@redhat.com'],
                "subject": "Koschei warning",
            },
        },
    },
    "fedmsg-publisher": {
        "enabled": True,
        "modname": "koschei",
    },
#    "services": {
#        "polling": {
#            "interval": 60,
#        },
#    },
    "dependency": {
        "repo_chache_items": 5,
        "keep_build_deps_for": 2
    },
    "koji_config": {
        "max_builds": 30
    },
}

# Local Variables:
# mode: Python
# End:
# vi: ft=python
@@ -1,13 +0,0 @@
[koschei-mizdebsk]
name=Koschei repo
baseurl=https://mizdebsk.fedorapeople.org/koschei/repo/
enabled=1
gpgcheck=0
metadata_expire=60

[koschei-msimacek]
name=Koschei repo
baseurl=https://msimacek.fedorapeople.org/koschei/repo/
enabled=1
gpgcheck=0
metadata_expire=60
17  files/lists-dev/apache.conf.j2  Normal file
@@ -0,0 +1,17 @@
<VirtualHost *:80>
    ServerAdmin admin@fedoraproject.org
    ServerName {{ ansible_hostname }}
</VirtualHost>
<VirtualHost *:443>
    ServerAdmin admin@fedoraproject.org
    ServerName {{ ansible_hostname }}

    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/localhost.crt
    SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
    #SSLCertificateChainFile /etc/pki/tls/cert.pem
    SSLHonorCipherOrder On
    SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
    SSLProtocol -All +TLSv1 +TLSv1.1 +TLSv1.2
</VirtualHost>
@@ -3,7 +3,7 @@
     sharedscripts
     su mailman mailman
     postrotate
-        /bin/kill -HUP `cat /run/mailman3/master.pid 2>/dev/null` 2>/dev/null || true
+        /bin/kill -HUP `cat {{ mailman_webui_basedir }}/var/master.pid 2>/dev/null` 2>/dev/null || true
         # Don't run "mailman3 reopen" with SELinux on here in the logrotate
         # context, it will be blocked
         #/usr/bin/mailman3 reopen >/dev/null 2>&1 || true
@@ -1,3 +1,2 @@
*:*:mailman:mailmanadmin:{{ lists_dev_mm_db_pass }}
*:*:hyperkitty:hyperkittyadmin:{{ lists_dev_hk_db_pass }}
*:*:kittystore:kittystoreadmin:{{ lists_dev_ks_db_pass }}
2  files/lists-dev/ssl.conf  Normal file
@@ -0,0 +1,2 @@
LoadModule ssl_module modules/mod_ssl.so
Listen 443
@@ -1,25 +0,0 @@
#!/bin/bash

HKCONFDIR="/etc/hyperkitty/sites/default"
MMDIR=$1
DOMAIN=$2

if [ -z "$MMDIR" ]; then
    echo "Usage: $0 <mailman-lib-directory>"
    exit 2
fi

[ -z "$DOMAIN" ] && DOMAIN=lists.fedoraproject.org

existinglists=`mktemp`
trap "rm -f $existinglists" EXIT
sudo -u mailman mailman3 lists -q > $existinglists

for listname in `ls $MMDIR/lists`; do
    listaddr="$listname@$DOMAIN"
    if ! grep -qs $listaddr $existinglists; then
        echo "sudo -u mailman mailman3 create -d $listaddr"
        echo "sudo -u mailman PYTHONPATH=/usr/lib/mailman mailman3 import21 $listaddr $MMDIR/lists/$listname/config.pck"
    fi
    echo "sudo kittystore-import -p $HKCONFDIR -s settings_admin -l $listaddr --continue $MMDIR/archives/private/${listname}.mbox/${listname}.mbox"
done
@@ -1,7 +1,5 @@
*:*:mailman:mailman:{{ mailman_mm_db_pass }}
*:*:hyperkitty:hyperkittyapp:{{ mailman_hk_db_pass }}
*:*:hyperkitty:hyperkittyadmin:{{ mailman_hk_admin_db_pass }}
*:*:kittystore:kittystoreapp:{{ mailman_ks_db_pass }}
*:*:kittystore:kittystoreadmin:{{ mailman_ks_admin_db_pass }}
*:*:postorius:postoriusapp:{{ mailman_ps_db_pass }}
*:*:postorius:postoriusadmin:{{ mailman_ps_admin_db_pass }}
8  files/osbs/atomic-reactor.repo  Normal file
@@ -0,0 +1,8 @@
[atomic-reactor]
name=Copr repo for atomic-reactor owned by maxamillion
baseurl=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/epel-7-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/pubkey.gpg
enabled=1
enabled_metadata=1
18  files/osbs/osbs.conf  Normal file
@@ -0,0 +1,18 @@
[general]
build_json_dir = /usr/share/osbs/

[default]
openshift_uri = https://losbs.example.com:8443/
# if you want to get packages from koji (koji plugin in dock)
# you need to setup koji hub and root
# this sample is for fedora
koji_root = http://koji.fedoraproject.org/
koji_hub = http://koji.fedoraproject.org/kojihub
# in case of using artifacts plugin, you should provide a command
# how to fetch artifacts
sources_command = fedpkg sources
# from where should be images pulled and where should be pushed?
# registry_uri = your.example.registry
registry_uri = localhost:5000
verify_ssl = false
build_type = simple
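The osbs.conf above is plain INI, with per-instance settings under `[default]` and globals under `[general]`. A minimal Python sketch shows how such a file parses (the config string here is a trimmed-down, hypothetical copy of the file above, not the deployed one):

```python
# Sketch: parsing an osbs.conf-style INI file with the stdlib.
import configparser

OSBS_CONF = """
[general]
build_json_dir = /usr/share/osbs/

[default]
openshift_uri = https://losbs.example.com:8443/
registry_uri = localhost:5000
verify_ssl = false
build_type = simple
"""

parser = configparser.ConfigParser()
parser.read_string(OSBS_CONF)

# Instance-level settings live in [default]; global ones in [general].
print(parser.get("default", "openshift_uri"))      # → https://losbs.example.com:8443/
print(parser.getboolean("default", "verify_ssl"))  # → False
```

Note that `getboolean` accepts the lowercase `false` used in the file, so a consumer does not need to normalize the value itself.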
@@ -26,4 +26,5 @@ unix-group: sigul
 nss-dir: /var/lib/sigul
 # Password for accessing the NSS database. If not specified, the bridge will
 # ask on startup
-; nss-password:
+# Currently no password is used
+nss-password:
45  files/sign/bridge.conf.secondary.j2  Normal file
@@ -0,0 +1,45 @@
# This is a configuration for the sigul bridge.
#
[bridge]
# Nickname of the bridge's certificate in the NSS database specified below
bridge-cert-nickname: secondary-signer
# Port on which the bridge expects client connections
client-listen-port: 44334
# Port on which the bridge expects server connections
server-listen-port: 44333
# A Fedora account system group required for access to the signing server. If
# empty, no Fedora account check is done.
; required-fas-group:
# User name and password for an account on the Fedora account system that can
# be used to verify group memberships
; fas-user-name:
; fas-password:
#
[koji]
# Config file used to connect to the Koji hub
# ; koji-config: ~/.koji/config
# # Recognized alternative instances
koji-instances: ppc s390 arm sparc
#
# # Example configuration of alternative instances:
# # koji-instances: ppc64 s390
# # Configuration paths for alternative instances:
koji-config-ppc: /etc/koji-ppc.conf
koji-config-s390: /etc/koji-s390.conf
koji-config-arm: /etc/koji-arm.conf
koji-config-sparc: /etc/koji-sparc.conf
#
#
[daemon]
# The user to run as
unix-user: sigul
# The group to run as
unix-group: sigul
#
[nss]
# Path to a directory containing a NSS database
nss-dir: /var/lib/sigul
# Password for accessing the NSS database. If not specified, the bridge will
# ask on startup
# Currently no password is used
nss-password:
@@ -8,16 +8,20 @@ server = http://arm.koji.fedoraproject.org/kojihub
;url of web interface
weburl = http://arm.koji.fedoraproject.org/koji

;url of package download site
topurl = http://armpkgs.fedoraproject.org/

;path to the koji top directory
;topdir = /mnt/koji

;configuration for SSL authentication

;client certificate
;cert = ~/.koji/client.crt
cert = ~/.fedora.cert

;certificate of the CA that issued the client certificate
;ca = ~/.koji/clientca.crt
ca = ~/.fedora-upload-ca.cert

;certificate of the CA that issued the HTTP server certificate
;serverca = ~/.koji/serverca.crt
serverca = ~/.fedora-server-ca.cert
27  files/sign/koji-ppc.conf  Normal file
@@ -0,0 +1,27 @@
[koji]

;configuration for koji cli tool

;url of XMLRPC server
server = http://ppc.koji.fedoraproject.org/kojihub

;url of web interface
weburl = http://ppc.koji.fedoraproject.org/koji

;url of package download site
topurl = http://ppc.koji.fedoraproject.org/

;path to the koji top directory
;topdir = /mnt/koji

;configuration for SSL authentication

;client certificate
cert = ~/.fedora.cert

;certificate of the CA that issued the client certificate
ca = ~/.fedora-upload-ca.cert

;certificate of the CA that issued the HTTP server certificate
serverca = ~/.fedora-server-ca.cert
27  files/sign/koji-s390.conf  Normal file
@@ -0,0 +1,27 @@
[koji]

;configuration for koji cli tool

;url of XMLRPC server
server = http://s390.koji.fedoraproject.org/kojihub

;url of web interface
weburl = http://s390.koji.fedoraproject.org/koji

;url of package download site
topurl = http://s390pkgs.fedoraproject.org/

;path to the koji top directory
;topdir = /mnt/koji

;configuration for SSL authentication

;client certificate
cert = ~/.fedora.cert

;certificate of the CA that issued the client certificate
ca = ~/.fedora-upload-ca.cert

;certificate of the CA that issued the HTTP server certificate
serverca = ~/.fedora-server-ca.cert
51  files/sign/server.conf.secondary  Normal file
@@ -0,0 +1,51 @@
# This is a configuration for the sigul server.

# FIXME: remove my data

[server]
# Host name of the publicly accessible bridge to clients
bridge-hostname: secondary-signer
# Port on which the bridge expects server connections
; bridge-port: 44333
# Maximum accepted size of payload stored on disk
max-file-payload-size: 2073741824
# Maximum accepted size of payload stored in server's memory
max-memory-payload-size: 1048576
# Nickname of the server's certificate in the NSS database specified below
server-cert-nickname: secondary-signer-server

signing-timeout: 4000

[database]
# Path to a SQLite database
; database-path: /var/lib/sigul/server.conf

[gnupg]
# Path to a directory containing GPG configuration and keyrings
gnupg-home: /var/lib/sigul/gnupg
# Default primary key type for newly created keys
gnupg-key-type: RSA
# Default primary key length for newly created keys
gnupg-key-length: 4096
# Default subkey type for newly created keys, empty for no subkey
#gnupg-subkey-type: ELG-E
# Default subkey length for newly created keys if gnupg-subkey-type is not empty
# gnupg-subkey-length: 4096
# Default key usage flags for newly created keys
gnupg-key-usage: encrypt, sign
# Length of key passphrases used for newly created keys
; passphrase-length: 64

[daemon]
# The user to run as
unix-user: sigul
# The group to run as
unix-group: sigul

[nss]
# Path to a directory containing a NSS database
nss-dir: /var/lib/sigul
# Password for accessing the NSS database. If not specified, the server will
# ask on startup
; nss-password is not specified by default
1  files/twisted/ssh-pub-key  Normal file
@@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDxFYkms3bEGIGpH0Dg0JgvsdHI/pWkS4ynCn/poyVcKc+SL1StKoOFPzzFh7dGVIeQ6q8MLbm246H8Swx57R13Q3bhtTs5Gpy5bNC7HejkWbrrMEJuKxVKhIintbC+tT04OFBFklVePuxacsc3EBdTHSnz9o41MfJnjv58VxJB5bwfgsV7FMDLHnBpujlPPH1hZG5A0fwD8VgCwaRVirIs9Kw35yKEUm8D76vOxjAqm7UTexEcPNFb4tYGzI00hbPS374FzoO4ZuXxv1ymakw9iyL54Hwbyj8JxBbgfZ6TvgLSSN9OU+KRqz1NqfepSj+y8up0Q+W8J5UObvf02VZrJKVgnIVe5gw4iDx/5E7F4qmf8qa5YUlJnP3LWRz6jhtQE+m6Ro7zItnoqPR3EtQZ9rMgaS1+/qPX7hcB35hlGZbhj0IDY+HE98ehUivUuxSoLOp8c+COaJ2b5+wSQigi9jRYx0qPeCOCCtA8vF8z4SOmD3I6IsPzlCiejeC5y3tWoQqJPR430TPBJ7CMNbbHPNF8GyzM7vFukqSpgacLq1f/YgBwqiRLVk+ktgUM/+fHuE6mUDMdE+Ag2lfwHnLI7DOwaJdr7JoAoSi6R+uTRhx1d4AET1sMv/HXKD+4Abu0WyaT3l/xO+hBABz+KO33gPUdCsKOw7lvJFZRC+OSyQ==
39  filter_plugins/fedmsg.py  Normal file
@@ -0,0 +1,39 @@
import operator


def invert_fedmsg_policy(groups, vars, env):
    """ Given hostvars that map hosts -> topics, invert that
    and return a dict that maps topics -> hosts.

    Really, returns a list of tuples -- not a dict.
    """

    if env == 'staging':
        hosts = groups['staging']
    else:
        hosts = [h for h in groups['all'] if h not in groups['staging']]

    inverted = {}
    for host in hosts:
        prefix = '.'.join([vars[host]['fedmsg_prefix'],
                           vars[host]['fedmsg_env']])
        fqdn = vars[host].get('fedmsg_fqdn', host)

        for cert in vars[host]['fedmsg_certs']:
            for topic in cert.get('can_send', []):
                key = prefix + '.' + topic
                inverted[key] = inverted.get(key, [])
                inverted[key].append(cert['service'] + '-' + fqdn)

    result = inverted.items()
    # Sort things so they come out in a reliable order (idempotence)
    [inverted[key].sort() for key in inverted]
    result.sort(key=operator.itemgetter(0))
    return result


class FilterModule(object):
    def filters(self):
        return {
            "invert_fedmsg_policy": invert_fedmsg_policy,
        }
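The inversion this new filter performs is easiest to see with toy data. The sketch below ports the same logic to Python 3 (the plugin itself targets the Python 2 interpreter Ansible used at the time, hence its list-returning `.items()` and in-place `.sort()`); the host names and certs are invented for illustration:

```python
# Python 3 sketch of invert_fedmsg_policy: a hosts -> topics mapping
# becomes topics -> hosts. All host/cert data below is made up.
def invert_fedmsg_policy(groups, hostvars, env):
    if env == 'staging':
        hosts = groups['staging']
    else:
        hosts = [h for h in groups['all'] if h not in groups['staging']]

    inverted = {}
    for host in hosts:
        prefix = '.'.join([hostvars[host]['fedmsg_prefix'],
                           hostvars[host]['fedmsg_env']])
        fqdn = hostvars[host].get('fedmsg_fqdn', host)
        for cert in hostvars[host]['fedmsg_certs']:
            for topic in cert.get('can_send', []):
                key = prefix + '.' + topic
                inverted.setdefault(key, []).append(cert['service'] + '-' + fqdn)

    # Sort for a reliable (idempotent) ordering, as the plugin does.
    return sorted((k, sorted(v)) for k, v in inverted.items())

groups = {'all': ['app01', 'badges01', 'app01.stg'], 'staging': ['app01.stg']}
hostvars = {
    'app01': {'fedmsg_prefix': 'org.fedoraproject', 'fedmsg_env': 'prod',
              'fedmsg_certs': [{'service': 'shell', 'can_send': ['logger.log']}]},
    'badges01': {'fedmsg_prefix': 'org.fedoraproject', 'fedmsg_env': 'prod',
                 'fedmsg_fqdn': 'badges01.phx2',
                 'fedmsg_certs': [{'service': 'badges',
                                   'can_send': ['badge.award', 'logger.log']}]},
}
policy = invert_fedmsg_policy(groups, hostvars, 'prod')
# Each topic now lists every cert allowed to publish to it.
```

Staging hosts are excluded from the production view, and topics published by several hosts (like `logger.log` above) accumulate every authorized `service-fqdn` pair.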
315  filter_plugins/oo_filters.py  Normal file
@@ -0,0 +1,315 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# vim: expandtab:tabstop=4:shiftwidth=4
'''
Custom filters for use in openshift-ansible
'''

from ansible import errors
from operator import itemgetter
import pdb
import re
import json


class FilterModule(object):
    ''' Custom ansible filters '''

    @staticmethod
    def oo_pdb(arg):
        ''' This pops you into a pdb instance where arg is the data passed in
            from the filter.
            Ex: "{{ hostvars | oo_pdb }}"
        '''
        pdb.set_trace()
        return arg

    @staticmethod
    def get_attr(data, attribute=None):
        ''' This looks up dictionary attributes of the form a.b.c and returns
            the value.
            Ex: data = {'a': {'b': {'c': 5}}}
                attribute = "a.b.c"
                returns 5
        '''
        if not attribute:
            raise errors.AnsibleFilterError("|failed expects attribute to be set")

        ptr = data
        for attr in attribute.split('.'):
            ptr = ptr[attr]

        return ptr

    @staticmethod
    def oo_flatten(data):
        ''' This filter plugin will flatten a list of lists
        '''
        if not issubclass(type(data), list):
            raise errors.AnsibleFilterError("|failed expects to flatten a List")

        return [item for sublist in data for item in sublist]

    @staticmethod
    def oo_collect(data, attribute=None, filters=None):
        ''' This takes a list of dict and collects all attributes specified into a
            list. If filter is specified then we will include all items that
            match _ALL_ of filters. If a dict entry is missing the key in a
            filter it will be excluded from the match.
            Ex: data = [ {'a':1, 'b':5, 'z': 'z'}, # True, return
                         {'a':2, 'z': 'z'},        # True, return
                         {'a':3, 'z': 'z'},        # True, return
                         {'a':4, 'z': 'b'},        # FAILED, obj['z'] != obj['z']
                       ]
                attribute = 'a'
                filters   = {'z': 'z'}
                returns [1, 2, 3]
        '''
        if not issubclass(type(data), list):
            raise errors.AnsibleFilterError("|failed expects to filter on a List")

        if not attribute:
            raise errors.AnsibleFilterError("|failed expects attribute to be set")

        if filters is not None:
            if not issubclass(type(filters), dict):
                raise errors.AnsibleFilterError("|failed expects filter to be a"
                                                " dict")
            retval = [FilterModule.get_attr(d, attribute) for d in data if (
                all([d.get(key, None) == filters[key] for key in filters]))]
        else:
            retval = [FilterModule.get_attr(d, attribute) for d in data]

        return retval

    @staticmethod
    def oo_select_keys(data, keys):
        ''' This returns a list, which contains the value portions for the keys
            Ex: data = { 'a':1, 'b':2, 'c':3 }
                keys = ['a', 'c']
                returns [1, 3]
        '''

        if not issubclass(type(data), dict):
            raise errors.AnsibleFilterError("|failed expects to filter on a dict")

        if not issubclass(type(keys), list):
            raise errors.AnsibleFilterError("|failed expects first param is a list")

        # Gather up the values for the list of keys passed in
        retval = [data[key] for key in keys]

        return retval

    @staticmethod
    def oo_prepend_strings_in_list(data, prepend):
        ''' This takes a list of strings and prepends a string to each item in the
            list
            Ex: data = ['cart', 'tree']
                prepend = 'apple-'
                returns ['apple-cart', 'apple-tree']
        '''
        if not issubclass(type(data), list):
            raise errors.AnsibleFilterError("|failed expects first param is a list")
        if not all(isinstance(x, basestring) for x in data):
            raise errors.AnsibleFilterError("|failed expects first param is a list"
                                            " of strings")
        retval = [prepend + s for s in data]
        return retval

    @staticmethod
    def oo_combine_key_value(data, joiner='='):
        '''Take a list of dict in the form of { 'key': 'value'} and
           arrange them as a list of strings ['key=value']
        '''
        if not issubclass(type(data), list):
            raise errors.AnsibleFilterError("|failed expects first param is a list")

        rval = []
        for item in data:
            rval.append("%s%s%s" % (item['key'], joiner, item['value']))

        return rval

    @staticmethod
    def oo_ami_selector(data, image_name):
        ''' This takes a list of amis and an image name and attempts to return
            the latest ami.
        '''
        if not issubclass(type(data), list):
            raise errors.AnsibleFilterError("|failed expects first param is a list")

        if not data:
            return None
        else:
            if image_name is None or not image_name.endswith('_*'):
                ami = sorted(data, key=itemgetter('name'), reverse=True)[0]
                return ami['ami_id']
            else:
                ami_info = [(ami, ami['name'].split('_')[-1]) for ami in data]
                ami = sorted(ami_info, key=itemgetter(1), reverse=True)[0][0]
                return ami['ami_id']

    @staticmethod
    def oo_ec2_volume_definition(data, host_type, docker_ephemeral=False):
        ''' This takes a dictionary of volume definitions and returns a valid ec2
            volume definition based on the host_type and the values in the
            dictionary.
            The dictionary should look similar to this:
                { 'master':
                    { 'root':
                        { 'volume_size': 10, 'device_type': 'gp2',
                          'iops': 500
                        }
                    },
                  'node':
                    { 'root':
                        { 'volume_size': 10, 'device_type': 'io1',
                          'iops': 1000
                        },
                      'docker':
                        { 'volume_size': 40, 'device_type': 'gp2',
                          'iops': 500, 'ephemeral': 'true'
                        }
                    }
                }
        '''
        if not issubclass(type(data), dict):
            raise errors.AnsibleFilterError("|failed expects first param is a dict")
        if host_type not in ['master', 'node', 'etcd']:
            raise errors.AnsibleFilterError("|failed expects etcd, master or node"
                                            " as the host type")

        root_vol = data[host_type]['root']
        root_vol['device_name'] = '/dev/sda1'
        root_vol['delete_on_termination'] = True
        if root_vol['device_type'] != 'io1':
            root_vol.pop('iops', None)
        if host_type == 'node':
            docker_vol = data[host_type]['docker']
            docker_vol['device_name'] = '/dev/xvdb'
            docker_vol['delete_on_termination'] = True
            if docker_vol['device_type'] != 'io1':
                docker_vol.pop('iops', None)
            if docker_ephemeral:
                docker_vol.pop('device_type', None)
                docker_vol.pop('delete_on_termination', None)
                docker_vol['ephemeral'] = 'ephemeral0'
            return [root_vol, docker_vol]
        elif host_type == 'etcd':
            etcd_vol = data[host_type]['etcd']
            etcd_vol['device_name'] = '/dev/xvdb'
            etcd_vol['delete_on_termination'] = True
            if etcd_vol['device_type'] != 'io1':
                etcd_vol.pop('iops', None)
            return [root_vol, etcd_vol]
        return [root_vol]

    @staticmethod
    def oo_split(string, separator=','):
        ''' This splits the input string into a list
        '''
        return string.split(separator)

    @staticmethod
    def oo_filter_list(data, filter_attr=None):
        ''' This returns a list, which contains all items where filter_attr
            evaluates to true
            Ex: data = [ { a: 1, b: True },
                         { a: 3, b: False },
                         { a: 5, b: True } ]
                filter_attr = 'b'
                returns [ { a: 1, b: True },
                          { a: 5, b: True } ]
        '''
        if not issubclass(type(data), list):
            raise errors.AnsibleFilterError("|failed expects to filter on a list")

        if not issubclass(type(filter_attr), str):
            raise errors.AnsibleFilterError("|failed expects filter_attr is a str")

        # Gather up the values for the list of keys passed in
        return [x for x in data if x[filter_attr]]

    @staticmethod
    def oo_parse_heat_stack_outputs(data):
        ''' Formats the HEAT stack output into a usable form

            The goal is to transform something like this:

            +---------------+-------------------------------------------------+
            | Property      | Value                                           |
            +---------------+-------------------------------------------------+
            | capabilities  | []                                              |
            | creation_time | 2015-06-26T12:26:26Z                            |
            | description   | OpenShift cluster                               |
            | ...           | ...                                             |
            | outputs       | [                                               |
            |               |   {                                             |
            |               |     "output_value": "value_A"                   |
            |               |     "description": "This is the value of Key_A" |
            |               |     "output_key": "Key_A"                       |
            |               |   },                                            |
            |               |   {                                             |
            |               |     "output_value": [                           |
            |               |       "value_B1",                               |
            |               |       "value_B2"                                |
            |               |     ],                                          |
            |               |     "description": "This is the value of Key_B" |
            |               |     "output_key": "Key_B"                       |
            |               |   },                                            |
            |               | ]                                               |
            | parameters    | {                                               |
            | ...           | ...                                             |
            +---------------+-------------------------------------------------+

            into something like this:

            {
              "Key_A": "value_A",
              "Key_B": [
                "value_B1",
                "value_B2"
              ]
            }
        '''

        # Extract the "outputs" JSON snippet from the pretty-printed array
        in_outputs = False
        outputs = ''

        line_regex = re.compile(r'\|\s*(.*?)\s*\|\s*(.*?)\s*\|')
        for line in data['stdout_lines']:
            match = line_regex.match(line)
            if match:
                if match.group(1) == 'outputs':
                    in_outputs = True
                elif match.group(1) != '':
                    in_outputs = False
                if in_outputs:
                    outputs += match.group(2)

        outputs = json.loads(outputs)

        # Revamp the "outputs" to put it in the form of a "Key: value" map
        revamped_outputs = {}
        for output in outputs:
            revamped_outputs[output['output_key']] = output['output_value']

        return revamped_outputs

    def filters(self):
        ''' returns a mapping of filters to methods '''
        return {
            "oo_select_keys": self.oo_select_keys,
            "oo_collect": self.oo_collect,
            "oo_flatten": self.oo_flatten,
            "oo_pdb": self.oo_pdb,
            "oo_prepend_strings_in_list": self.oo_prepend_strings_in_list,
            "oo_ami_selector": self.oo_ami_selector,
            "oo_ec2_volume_definition": self.oo_ec2_volume_definition,
            "oo_combine_key_value": self.oo_combine_key_value,
            "oo_split": self.oo_split,
            "oo_filter_list": self.oo_filter_list,
            "oo_parse_heat_stack_outputs": self.oo_parse_heat_stack_outputs
        }
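The `oo_collect`/`get_attr` pair is the workhorse of this plugin. A standalone sketch, with the `AnsibleFilterError` plumbing stripped so it runs outside Ansible, reproduces the docstring's own example:

```python
# Minimal standalone versions of get_attr and oo_collect from the plugin
# above, without the Ansible error-handling boilerplate.
def get_attr(data, attribute):
    # Walk a dotted path like 'a.b.c' through nested dicts.
    ptr = data
    for attr in attribute.split('.'):
        ptr = ptr[attr]
    return ptr

def oo_collect(data, attribute, filters=None):
    # Collect `attribute` from every dict; if filters is given, keep only
    # dicts matching ALL filter key/value pairs (missing keys fail the match).
    if filters is not None:
        return [get_attr(d, attribute) for d in data
                if all(d.get(key) == filters[key] for key in filters)]
    return [get_attr(d, attribute) for d in data]

data = [{'a': 1, 'b': 5, 'z': 'z'},
        {'a': 2, 'z': 'z'},
        {'a': 3, 'z': 'z'},
        {'a': 4, 'z': 'b'}]   # excluded: 'z' does not match the filter

print(oo_collect(data, 'a', {'z': 'z'}))          # → [1, 2, 3]
print(get_attr({'a': {'b': {'c': 5}}}, 'a.b.c'))  # → 5
```

In a playbook the same call would look like `{{ my_list | oo_collect('a', {'z': 'z'}) }}`.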
79  filter_plugins/oo_zabbix_filters.py  Normal file
@@ -0,0 +1,79 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# vim: expandtab:tabstop=4:shiftwidth=4
'''
Custom zabbix filters for use in openshift-ansible
'''

import pdb


class FilterModule(object):
    ''' Custom zabbix ansible filters '''

    @staticmethod
    def create_data(data, results, key, new_key):
        '''Take a dict, filter through results and add results['key'] to dict
        '''
        new_list = [app[key] for app in results]
        data[new_key] = new_list
        return data

    @staticmethod
    def oo_set_zbx_trigger_triggerid(item, trigger_results):
        '''Set zabbix trigger id from trigger results
        '''
        if isinstance(trigger_results, list):
            item['triggerid'] = trigger_results[0]['triggerid']
            return item

        item['triggerid'] = trigger_results['triggerids'][0]
        return item

    @staticmethod
    def oo_set_zbx_item_hostid(item, template_results):
        ''' Set zabbix host id from template results
        '''
        if isinstance(template_results, list):
            item['hostid'] = template_results[0]['templateid']
            return item

        item['hostid'] = template_results['templateids'][0]
        return item

    @staticmethod
    def oo_pdb(arg):
        ''' This pops you into a pdb instance where arg is the data passed in
            from the filter.
            Ex: "{{ hostvars | oo_pdb }}"
        '''
        pdb.set_trace()
        return arg

    @staticmethod
    def select_by_name(ans_data, data):
        ''' test
        '''
        for zabbix_item in data:
            if ans_data['name'] == zabbix_item:
                data[zabbix_item]['params']['hostid'] = ans_data['templateid']
                return data[zabbix_item]['params']
        return None

    @staticmethod
    def oo_build_zabbix_list_dict(values, string):
        ''' Build a list of dicts with string as key for each value
        '''
        rval = []
        for value in values:
            rval.append({string: value})
        return rval

    def filters(self):
        ''' returns a mapping of filters to methods '''
        return {
            "select_by_name": self.select_by_name,
            "oo_set_zbx_item_hostid": self.oo_set_zbx_item_hostid,
            "oo_set_zbx_trigger_triggerid": self.oo_set_zbx_trigger_triggerid,
            "oo_build_zabbix_list_dict": self.oo_build_zabbix_list_dict,
            "create_data": self.create_data,
        }
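These zabbix helpers mostly reshape API results for reuse in later tasks. A standalone sketch shows the two response shapes `oo_set_zbx_trigger_triggerid` accepts; the trigger payloads below are invented stand-ins, not real Zabbix API responses:

```python
# Standalone sketch of two helpers from the plugin above; the payloads
# are hypothetical examples of what the Zabbix API might return.
def oo_build_zabbix_list_dict(values, string):
    # ['host1', 'host2'] with key 'host' -> [{'host': 'host1'}, {'host': 'host2'}]
    return [{string: value} for value in values]

def oo_set_zbx_trigger_triggerid(item, trigger_results):
    # A "get" style call yields a list of trigger objects; a "create"
    # style call yields a dict carrying a 'triggerids' list.
    if isinstance(trigger_results, list):
        item['triggerid'] = trigger_results[0]['triggerid']
        return item
    item['triggerid'] = trigger_results['triggerids'][0]
    return item

print(oo_build_zabbix_list_dict(['host1', 'host2'], 'host'))
# → [{'host': 'host1'}, {'host': 'host2'}]

print(oo_set_zbx_trigger_triggerid({}, [{'triggerid': '13083'}]))
# → {'triggerid': '13083'}
print(oo_set_zbx_trigger_triggerid({}, {'triggerids': ['13084']}))
# → {'triggerid': '13084'}
```

Normalizing both shapes into a single `triggerid` field means downstream tasks never need to care which API call produced the result.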
@@ -47,6 +47,18 @@
 - name: restart kojid
   action: service name=kojid state=restarted

+- name: restart koschei-polling
+  action: service name=koschei-polling state=restarted
+
+- name: restart koschei-resolver
+  action: service name=koschei-resolver state=restarted
+
+- name: restart koschei-scheduler
+  action: service name=koschei-scheduler state=restarted
+
+- name: restart koschei-watcher
+  action: service name=koschei-watcher state=restarted
+
 - name: restart libvirtd
   action: service name=libvirtd state=restarted
@@ -73,11 +85,11 @@
   action: service name=openvpn@openvpn state=restarted

 - name: restart openvpn (RHEL6)
-  when: ansible_distribution == "RedHat" and ansible_distribution_major_version == "6"
+  when: ansible_distribution == "RedHat" and ansible_distribution_major_version|int == 6
   action: service name=openvpn state=restarted

 - name: restart openvpn (RHEL7)
-  when: ansible_distribution == "RedHat" and ansible_distribution_major_version == "7"
+  when: ansible_distribution == "RedHat" and ansible_distribution_major_version|int == 7
   action: service name=openvpn@openvpn state=restarted

 - name: restart postfix
@@ -137,10 +149,10 @@
 - name: restart bridge
   shell: /usr/lib/systemd/systemd-sysctl --prefix=/proc/sys/net/bridge

-- name: hup libvirtd
-  command: pkill -HUP libvirtd
+- name: reload libvirtd
+  service: name=libvirtd state=reloaded
   ignore_errors: true
-  when: inventory_hostname.startswith('buildhw')
+  when: ansible_virtualization_role == 'host'

 - name: restart fcomm-cache-worker
   service: name=fcomm-cache-worker state=restarted
@@ -168,3 +180,8 @@

 - name: restart stunnel
   service: name=stunnel state=restarted
+
+- name: restart cinder
+  service: name=openstack-cinder-api state=restarted
+  service: name=openstack-cinder-scheduler state=restarted
+  service: name=openstack-cinder-volume state=restarted
@@ -2,22 +2,24 @@
# This is the list of clients we backup with rdiff-backup.
#
[backup_clients]
collab04.fedoraproject.org
collab03.fedoraproject.org
db01.phx2.fedoraproject.org
db05.phx2.fedoraproject.org
db03.phx2.fedoraproject.org
db-datanommer02.phx2.fedoraproject.org
db-fas01.phx2.fedoraproject.org
hosted04.fedoraproject.org
hosted03.fedoraproject.org
hosted-lists01.fedoraproject.org
lockbox01.phx2.fedoraproject.org
people03.fedoraproject.org
pagure01.fedoraproject.org
people01.fedoraproject.org
pkgs02.phx2.fedoraproject.org
log01.phx2.fedoraproject.org
qadevel.cloud.fedoraproject.org
qadevel.qa.fedoraproject.org:222
db-qa01.qa.fedoraproject.org
db-koji01.phx2.fedoraproject.org
copr-be.cloud.fedoraproject.org
copr-fe.cloud.fedoraproject.org
copr-keygen.cloud.fedoraproject.org
value01.phx2.fedoraproject.org
taiga.cloud.fedoraproject.org
taskotron01.qa.fedoraproject.org
@@ -55,9 +55,20 @@ buildhw-12.phx2.fedoraproject.org
buildppc-01.phx2.fedoraproject.org
buildppc-02.phx2.fedoraproject.org

[buildppc64]
ppc8-01.qa.fedoraproject.org

[buildaarch64]
aarch64-03a.arm.fedoraproject.org
aarch64-04a.arm.fedoraproject.org
aarch64-05a.arm.fedoraproject.org
aarch64-06a.arm.fedoraproject.org
aarch64-07a.arm.fedoraproject.org
aarch64-08a.arm.fedoraproject.org
aarch64-09a.arm.fedoraproject.org
aarch64-10a.arm.fedoraproject.org
aarch64-11a.arm.fedoraproject.org
aarch64-12a.arm.fedoraproject.org

[bkernel]
bkernel01.phx2.fedoraproject.org

@@ -186,9 +197,20 @@ arm04-builder21.arm.fedoraproject.org
arm04-builder22.arm.fedoraproject.org
arm04-builder23.arm.fedoraproject.org

# These hosts get the runroot plugin installed.
# They should be added to their own 'compose' channel in the koji db
# .. and they should not appear in the default channel for builds.
[runroot]
buildvm-01.stg.phx2.fedoraproject.org
buildvm-01.phx2.fedoraproject.org
buildhw-01.phx2.fedoraproject.org
arm04-builder00.arm.fedoraproject.org
arm04-builder01.arm.fedoraproject.org

[builders:children]
buildhw
buildvm
buildppc
buildarm
buildaarch64
buildppc64

3 inventory/group_vars/OSv3 Normal file

@@ -0,0 +1,3 @@
---
ansible_ssh_user: root
deployment_type: origin

@@ -55,6 +55,22 @@ fedmsg_certs: []
# By default, fedmsg should not log debug info. Groups can override this.
fedmsg_loglevel: INFO

# By default, fedmsg hosts are in passive mode. External hosts are typically
# active.
fedmsg_active: False

# Other defaults for fedmsg environments
fedmsg_prefix: org.fedoraproject
fedmsg_env: prod

# These are used to:
# 1) configure mod_wsgi
# 2) open iptables rules for fedmsg (per wsgi thread)
# 3) declare enough fedmsg endpoints for the service
#wsgi_fedmsg_service: bodhi
#wsgi_procs: 4
#wsgi_threads: 4

# By default, nodes don't backup any dbs on them unless they declare it.
dbs_to_backup: []

@@ -68,6 +84,7 @@ nrpe_check_postfix_queue_crit: 5

# env is staging or production, we default it to production here.
env: production
env_suffix:

# nfs mount options, override at the group/host level
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid"

@@ -28,8 +28,13 @@ fedmsg_certs:
- service: anitya
  owner: root
  group: fedmsg
  can_send:
  - anitya.project.version.update

fedmsg_prefix: org.release-monitoring
fedmsg_env: prod

# For the MOTD
csi_security_category: Low
csi_primary_contact: Fedora admins - admin@fedoraproject.org

@@ -30,7 +30,22 @@ fedmsg_certs:
- service: anitya
  owner: root
  group: apache
  can_send:
  - anitya.distro.add
  - anitya.distro.edit
  - anitya.distro.remove
  - anitya.project.add
  - anitya.project.add.tried
  - anitya.project.edit
  - anitya.project.map.new
  - anitya.project.map.remove
  - anitya.project.map.update
  - anitya.project.remove
  - anitya.project.version.remove
  - anitya.project.version.update

fedmsg_prefix: org.release-monitoring
fedmsg_env: prod

# For the MOTD
csi_security_category: Low

@@ -25,6 +25,12 @@ fedmsg_certs:
- service: askbot
  owner: root
  group: apache
  can_send:
  - askbot.post.delete
  - askbot.post.edit
  - askbot.post.flag_offensive.add
  - askbot.post.flag_offensive.delete
  - askbot.tag.update

# For the MOTD

@@ -25,7 +25,12 @@ fedmsg_certs:
- service: askbot
  owner: root
  group: apache

  can_send:
  - askbot.post.delete
  - askbot.post.edit
  - askbot.post.flag_offensive.add
  - askbot.post.flag_offensive.delete
  - askbot.tag.update

# For the MOTD
csi_security_category: Low

@@ -13,10 +13,10 @@ host_group: autosign
# For the MOTD
csi_security_category: High
csi_primary_contact: Release Engineering - rel-eng@lists.fedoraproject.org
csi_purpose: Provides frontend (reverse) proxy for most web applications
csi_purpose: Automatically sign Rawhide and Branched packages
csi_relationship: |
  This host runs the autosigner.py script which should automatically sign new
  rawhide and branched builds. It listens to koji over fedmsg for
  This host will run the autosigner.py script which should automatically sign
  new rawhide and branched builds. It listens to koji over fedmsg for
  notifications of new builds, and then asks sigul, the signing server, to
  sign the rpms and store the new rpm header back in Koji.

@@ -20,6 +20,9 @@ fedmsg_certs:
- service: fedbadges
  owner: root
  group: fedmsg
  can_send:
  - fedbadges.badge.award
  - fedbadges.person.rank.advance

# For the MOTD

@@ -20,6 +20,9 @@ fedmsg_certs:
- service: fedbadges
  owner: root
  group: fedmsg
  can_send:
  - fedbadges.badge.award
  - fedbadges.person.rank.advance

# For the MOTD

@@ -4,13 +4,15 @@ mem_size: 4096
num_cpus: 2
freezes: false

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Defining these vars has a number of effects
# 1) mod_wsgi is configured to use the vars for its own setup
# 2) iptables opens enough ports for all threads for fedmsg
# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads
wsgi_fedmsg_service: tahrir
wsgi_procs: 2
wsgi_threads: 2

tcp_ports: [ 80, 443,
    # These 16 ports are used by fedmsg. One for each wsgi thread.
    3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
    3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]

@@ -25,6 +27,10 @@ fedmsg_certs:
- service: tahrir
  owner: root
  group: tahrir
  can_send:
  - fedbadges.badge.award
  - fedbadges.person.rank.advance
  - fedbadges.person.login.first

# For the MOTD

@@ -4,13 +4,15 @@ lvm_size: 20000
mem_size: 1024
num_cpus: 2

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Defining these vars has a number of effects
# 1) mod_wsgi is configured to use the vars for its own setup
# 2) iptables opens enough ports for all threads for fedmsg
# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads
wsgi_fedmsg_service: tahrir
wsgi_procs: 2
wsgi_threads: 2

tcp_ports: [ 80, 443,
    # These 16 ports are used by fedmsg. One for each wsgi thread.
    3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
    3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]

@@ -25,6 +27,10 @@ fedmsg_certs:
- service: tahrir
  owner: root
  group: tahrir
  can_send:
  - fedbadges.badge.award
  - fedbadges.person.rank.advance
  - fedbadges.person.login.first

# For the MOTD

@@ -19,7 +19,7 @@ custom_rules: [
#
# allow a bunch of sysadmin groups here so they can access internal stuff
#
fas_client_groups: sysadmin-ask,sysadmin-web,sysadmin-main,sysadmin-cvs,sysadmin-build,sysadmin-noc,sysadmin-releng,sysadmin-dba,sysadmin-hosted,sysadmin-tools,sysadmin-spin,sysadmin-cloud,fi-apprentice,sysadmin-darkserver,sysadmin-badges,sysadmin-troubleshoot,sysadmin-qa,sysadmin-centos,sysadmin-ppc
fas_client_groups: sysadmin-ask,sysadmin-web,sysadmin-main,sysadmin-cvs,sysadmin-build,sysadmin-noc,sysadmin-releng,sysadmin-dba,sysadmin-hosted,sysadmin-tools,sysadmin-spin,sysadmin-cloud,fi-apprentice,sysadmin-darkserver,sysadmin-badges,sysadmin-troubleshoot,sysadmin-qa,sysadmin-centos,sysadmin-ppc,sysadmin-koschei

#
# This is a postfix gateway. This will pick up gateway postfix config in base

29 inventory/group_vars/beaker-stg Normal file

@@ -0,0 +1,29 @@
---
lvm_size: 50000
mem_size: 4096
num_cpus: 2

tcp_ports: [ 80, 443, 8000 ]
udp_ports: [ 69 ]
fas_client_groups: sysadmin-qa,sysadmin-main,fi-apprentice
nrpe_procs_warn: 250
nrpe_procs_crit: 300

freezes: false

# settings for the beaker db, server and lab controller
beaker_db_host: localhost
beaker_db_name: beaker
beaker_db_user: "{{ stg_beaker_db_user }}"
beaker_db_password: "{{ stg_beaker_db_password }}"
mariadb_root_password: "{{ stg_beaker_mariadb_root_password }}"

beaker_server_url: "https://beaker.stg.qa.fedoraproject.org"
beaker_server_cname: "beaker.stg.fedoraproject.org"
beaker_server_hostname: "beaker-stg01.qa.fedoraproject.org"
beaker_server_admin_user: "{{ stg_beaker_server_admin_user }}"
beaker_server_admin_pass: "{{ stg_beaker_server_admin_pass }}"
beaker_server_email: "sysadmin-qa-members@fedoraproject.org"

beaker_lab_controller_username: "host/beaker01.qa.fedoraproject.org"
beaker_lab_controller_password: "{{ stg_beaker_lab_controller_password }}"

10 inventory/group_vars/beaker-virthosts Normal file

@@ -0,0 +1,10 @@
---
virthost: true
nrpe_procs_warn: 900
nrpe_procs_crit: 1000

libvirt_remote_pubkey: 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsxg20+vmLTt/U23x6yBtxU6N2Ool8ddlC5TFwr3FktCM7hcxkQ/funJ3VD5v9iN7Qg09g2YsPaPTfvmOPOP4bzX+/Fk8vJJb5nVg++XbS80Uw62eofr8g68ZPf6IWLEBiZ8/hmumK3TxTmsj/jn17bZBFTcQL7sB7Q4y7TxODt+5W9/0mJTLXbKoCvV+BCpxEfokx+50vVcX5CxXLHdgrdhPzKHcBHKtX6d2W8xzFj2dCThgAXl5tULYI1xP0BYTOtG+RaTNQWme4JxNlQZB8xbCxN2U+e1NpZl1Hn7Y9MbRL+nLfMIuWNJjYzUTGP3o9m2Tl9RCc2nhuS652rjfcQ== tflink@imagebuilder.qa.fedoraproject.org'
libvirt_user: "{{ beaker_libvirt_user }}"

# beaker is not a production service, so the virthosts aren't frozen
freezes: false

@@ -1,2 +1,6 @@
---
host_group: kojibuilder

koji_server_url: "http://koji.fedoraproject.org/kojihub"
koji_weburl: "http://koji.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

@@ -7,8 +7,7 @@ lvm_size: 40000
mem_size: 4096
num_cpus: 2

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# for systems that do not match the above - specify the same parameter in # the host_vars/$hostname file

tcp_ports: [ 80, 443,
    # These 16 ports are used by fedmsg. One for each wsgi thread.

@@ -28,3 +27,30 @@ fedmsg_certs:
- service: bodhi
  owner: root
  group: bodhi
  can_send:
  - bodhi.buildroot_override.tag
  - bodhi.buildroot_override.untag
  - bodhi.stack.delete
  - bodhi.stack.save
  - bodhi.update.comment
  - bodhi.update.complete.testing
  - bodhi.update.edit
  - bodhi.update.karma.threshold
  - bodhi.update.request.obsolete
  - bodhi.update.request.revoke
  - bodhi.update.request.stable
  - bodhi.update.request.testing
  - bodhi.update.request.unpush

# Things that only the mash does - not the web UI
#- bodhi.mashtask.complete
#- bodhi.mashtask.mashing
#- bodhi.mashtask.start
#- bodhi.mashtask.sync.done
#- bodhi.mashtask.sync.wait
#- bodhi.errata.publish
#- bodhi.update.eject

# Rsync messages that get run from somewhere else entirely.
#- bodhi.updates.epel.sync
#- bodhi.updates.fedora.sync

49 inventory/group_vars/bodhi-backend Normal file

@@ -0,0 +1,49 @@
---
# common items for the releng-* boxes
lvm_size: 100000
mem_size: 16384
num_cpus: 16
nm: 255.255.255.0
gw: 10.5.125.254
dns: 10.5.126.21

ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/

virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }} -l {{ ks_repo }} -x
  "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none
  ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none"
  --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio
  --autostart --noautoconsole

# With 16 cpus, there's a bunch more kernel threads
nrpe_procs_warn: 900
nrpe_procs_crit: 1000

host_group: releng

# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
  owner: root
  group: root
- service: bodhi
  owner: root
  group: masher
  can_send:
  - bodhi.mashtask.complete
  - bodhi.mashtask.mashing
  - bodhi.mashtask.start
  - bodhi.mashtask.sync.done
  - bodhi.mashtask.sync.wait
  - bodhi.errata.publish
  - bodhi.update.eject
  # The ftp sync messages get run here too.
  - bodhi.updates.epel.sync
  - bodhi.updates.fedora.sync

nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"

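The ip= arguments in the virt_install_command above use the dracut/anaconda static-network syntax ip=&lt;client&gt;:&lt;peer&gt;:&lt;gw&gt;:&lt;netmask&gt;:&lt;hostname&gt;:&lt;iface&gt;:none, where empty fields simply stay empty (as in the eth1 line, which has no gateway). A small sketch of how the template fields combine — the example addresses and hostnames are invented:

```python
def dracut_ip_arg(ip, gw, nm, hostname, iface, peer=""):
    # dracut/anaconda static-network boot argument:
    # ip=<client>:<peer>:<gw>:<netmask>:<hostname>:<iface>:none
    return "ip=%s:%s:%s:%s:%s:%s:none" % (ip, peer, gw, nm, hostname, iface)

# eth0 with a gateway, eth1 without one (matching the two template lines)
print(dracut_ip_arg("10.5.125.41", "10.5.125.254", "255.255.255.0",
                    "bodhi-backend01.example", "eth0"))
print(dracut_ip_arg("10.5.127.41", "", "255.255.255.0",
                    "bodhi-backend01.example-nfs", "eth1"))
```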
46 inventory/group_vars/bodhi-backend-stg Normal file

@@ -0,0 +1,46 @@
---
# common items for the releng-* boxes
lvm_size: 100000
mem_size: 4096
num_cpus: 2
nm: 255.255.255.0
gw: 10.5.126.254
dns: 10.5.126.21

ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/

virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }} -l {{ ks_repo }} -x
  "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }} netmask={{ nm }} gateway={{ gw }} dns={{ dns }}"
  --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio
  --autostart --noautoconsole

# With 16 cpus, there's a bunch more kernel threads
nrpe_procs_warn: 900
nrpe_procs_crit: 1000

host_group: releng

# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
  owner: root
  group: root
#- service: bodhi
#  owner: root
#  group: masher
#  can_send:
#  - bodhi.mashtask.complete
#  - bodhi.mashtask.mashing
#  - bodhi.mashtask.start
#  - bodhi.mashtask.sync.done
#  - bodhi.mashtask.sync.wait
#  - bodhi.errata.publish
#  - bodhi.update.eject
#  # The ftp sync messages get run here too.
#  - bodhi.updates.epel.sync
#  - bodhi.updates.fedora.sync

@@ -28,3 +28,30 @@ fedmsg_certs:
- service: bodhi
  owner: root
  group: bodhi
  can_send:
  - bodhi.buildroot_override.tag
  - bodhi.buildroot_override.untag
  - bodhi.stack.delete
  - bodhi.stack.save
  - bodhi.update.comment
  - bodhi.update.complete.testing
  - bodhi.update.edit
  - bodhi.update.karma.threshold
  - bodhi.update.request.obsolete
  - bodhi.update.request.revoke
  - bodhi.update.request.stable
  - bodhi.update.request.testing
  - bodhi.update.request.unpush

# Things that only the mash does - not the web UI
#- bodhi.mashtask.complete
#- bodhi.mashtask.mashing
#- bodhi.mashtask.start
#- bodhi.mashtask.sync.done
#- bodhi.mashtask.sync.wait
#- bodhi.errata.publish
#- bodhi.update.eject

# Rsync messages that get run from somewhere else entirely.
#- bodhi.updates.epel.sync
#- bodhi.updates.fedora.sync

34 inventory/group_vars/bodhi2-stg Normal file

@@ -0,0 +1,34 @@
---
# Define resources for this group of hosts here.
jobrunner: false
epelmasher: false

lvm_size: 40000
mem_size: 4096
num_cpus: 2

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file

tcp_ports: [ 80, 443,
    # These 16 ports are used by fedmsg. One for each wsgi thread.
    3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
    3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]

fas_client_groups: sysadmin-noc

# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
  owner: root
  group: sysadmin
- service: bodhi
  owner: root
  group: bodhi

# Mount /mnt/fedora_koji as read-only in staging
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid"
datacenter: staging

@@ -19,6 +19,9 @@ fedmsg_certs:
- service: bugzilla2fedmsg
  owner: root
  group: fedmsg
  can_send:
  - bugzilla.bug.new
  - bugzilla.bug.update

# For the MOTD
csi_security_category: Low

@@ -19,6 +19,9 @@ fedmsg_certs:
- service: bugzilla2fedmsg
  owner: root
  group: fedmsg
  can_send:
  - bugzilla.bug.new
  - bugzilla.bug.update

# For the MOTD
csi_security_category: Low

@@ -1,4 +1,8 @@
---
host_group: kojibuilder
fas_client_groups: sysadmin-releng
fas_client_groups: sysadmin-releng,sysadmin-secondary
sudoers: "{{ private }}/files/sudo/buildaarch64-sudoers"

koji_server_url: "http://arm.koji.fedoraproject.org/kojihub"
koji_weburl: "http://arm.koji.fedoraproject.org/koji"
koji_topurl: "http://armpkgs.fedoraproject.org/"

@@ -1,3 +1,7 @@
host_group: kojibuilder
fas_client_groups: sysadmin-releng
sudoers: "{{ private }}/files/sudo/arm-releng-sudoers"

koji_server_url: "http://koji.fedoraproject.org/kojihub"
koji_weburl: "http://koji.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

@@ -3,3 +3,7 @@ host_group: kojibuilder
fas_client_groups: sysadmin-releng
sudoers: "{{ private }}/files/sudo/arm-releng-sudoers"
freezes: true

koji_server_url: "http://koji.fedoraproject.org/kojihub"
koji_weburl: "http://koji.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

7 inventory/group_vars/buildppc Normal file

@@ -0,0 +1,7 @@
host_group: kojibuilder
fas_client_groups: sysadmin-releng
#sudoers: "{{ private }}/files/sudo/ppc-releng-sudoers"

koji_server_url: "http://koji.fedoraproject.org/kojihub"
koji_weburl: "http://koji.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

8 inventory/group_vars/buildppc64 Normal file

@@ -0,0 +1,8 @@
---
host_group: kojibuilder
fas_client_groups: sysadmin-releng,sysadmin-secondary
#sudoers: "{{ private }}/files/sudo/buildppc64-sudoers"

koji_server_url: "http://ppc.koji.fedoraproject.org/kojihub"
koji_weburl: "http://ppc.koji.fedoraproject.org/koji"
koji_topurl: "http://ppcpkgs.fedoraproject.org/"

@@ -25,3 +25,7 @@ virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
host_group: kojibuilder
fas_client_groups: sysadmin-releng
sudoers: "{{ private }}/files/sudo/arm-releng-sudoers"

koji_server_url: "http://koji.fedoraproject.org/kojihub"
koji_weburl: "http://koji.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

@@ -25,3 +25,7 @@ fas_client_groups: sysadmin-releng
sudoers: "{{ private }}/files/sudo/arm-releng-sudoers"
datacenter: staging
nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid"

koji_server_url: "http://koji.stg.fedoraproject.org/kojihub"
koji_weburl: "http://koji.stg.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.stg.fedoraproject.org/"

58 inventory/group_vars/composers Normal file

@@ -0,0 +1,58 @@
---
# common items for the releng-* boxes
lvm_size: 100000
mem_size: 16384
num_cpus: 16
nm: 255.255.255.0
gw: 10.5.125.254
dns: 10.5.126.21

ks_url: http://10.5.126.23/repo/rhel/ks/kvm-rhel-7
ks_repo: http://10.5.126.23/repo/rhel/RHEL7-x86_64/

virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }} -l {{ ks_repo }} -x
  "ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none
  ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none"
  --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio
  --autostart --noautoconsole

# With 16 cpus, there's a bunch more kernel threads
nrpe_procs_warn: 900
nrpe_procs_crit: 1000

host_group: releng

# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
  owner: root
  group: root
- service: bodhi
  owner: root
  group: masher
  can_send:
  - compose.branched.complete
  - compose.branched.mash.complete
  - compose.branched.mash.start
  - compose.branched.pungify.complete
  - compose.branched.pungify.start
  - compose.branched.rsync.complete
  - compose.branched.rsync.start
  - compose.branched.start
  - compose.epelbeta.complete
  - compose.rawhide.complete
  - compose.rawhide.mash.complete
  - compose.rawhide.mash.start
  - compose.rawhide.rsync.complete
  - compose.rawhide.rsync.start
  - compose.rawhide.start

nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"

koji_server_url: "http://koji.fedoraproject.org/kojihub"
koji_weburl: "http://koji.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

4 inventory/group_vars/composers-stg Normal file

@@ -0,0 +1,4 @@
---
koji_server_url: "http://koji.stg.fedoraproject.org/kojihub"
koji_weburl: "http://koji.stg.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"

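The can_send entries above are short topic suffixes; on the bus they are combined with fedmsg_prefix and fedmsg_env into the full message topic, following the standard fedmsg convention of &lt;prefix&gt;.&lt;env&gt;.&lt;topic&gt;. A quick sketch:

```python
def full_topic(prefix, env, topic):
    """Standard fedmsg topic layout: <prefix>.<env>.<topic>."""
    return "%s.%s.%s" % (prefix, env, topic)

# With the group vars above (fedmsg_prefix: org.fedoraproject, fedmsg_env: prod)
print(full_topic("org.fedoraproject", "prod", "compose.rawhide.start"))
# org.fedoraproject.prod.compose.rawhide.start
```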
@@ -3,9 +3,14 @@ devel: false
_forward_src: "forward"

# don't forget to update ip in ./copr-keygen, due to custom firewall rules
copr_backend_ips: "172.16.5.5 209.132.184.142"
keygen_host: "172.16.5.25"

copr_backend_ips: ["172.25.32.4", "209.132.184.48"]
keygen_host: "172.25.32.5"

resolvconf: "resolv.conf/cloud"

backend_base_url: "https://copr-be.cloud.fedoraproject.org"
postfix_maincf: "postfix/main.cf/main.cf.copr"

frontend_base_url: "https://copr.fedoraproject.org"
dist_git_base_url: "copr-dist-git.fedorainfracloud.org"

@@ -1,15 +1,17 @@
---
_lighttpd_conf_src: "lighttpd/lighttpd.conf"

copr_nova_auth_url: "https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0"
copr_nova_auth_url: "https://fedorainfracloud.org:5000/v2.0"
copr_nova_tenant_id: "undefined_tenant_id"
copr_nova_tenant_name: "copr"
copr_nova_username: "copr"

copr_builder_image_name: "builder_base_image_2015_04_01"
copr_builder_flavor_name: "m1.builder"
# copr_builder_image_name: "Fedora-Cloud-Base-20141203-21"
copr_builder_image_name: "builder-2015-05-27"
copr_builder_flavor_name: "ms2.builder"
copr_builder_network_name: "copr-net"
copr_builder_key_name: "buildsys"
copr_builder_security_groups: "ssh-anywhere-copr,default,ssh-from-persistent-copr"

fedmsg_enabled: "true"

@@ -1,19 +1,20 @@
---
_lighttpd_conf_src: "lighttpd/lighttpd_dev.conf"

copr_nova_auth_url: "https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0"
copr_nova_auth_url: "https://fedorainfracloud.org:5000/v2.0"
copr_nova_tenant_id: "566a072fb1694950998ad191fee3833b"
copr_nova_tenant_name: "coprdev"
copr_nova_username: "copr"

copr_builder_image_name: "builder_base_image_2015_04_01"
copr_builder_flavor_name: "m1.builder"
copr_builder_image_name: "builder-2015-05-27"
copr_builder_flavor_name: "ms2.builder"
copr_builder_network_name: "coprdev-net"
copr_builder_key_name: "buildsys"
copr_builder_security_groups: "ssh-anywhere-coprdev,default,ssh-from-persistent-coprdev"

fedmsg_enabled: "false"

do_sign: "false"
do_sign: "true"

spawn_in_advance: "true"
frontend_base_url: "http://copr-fe-dev.cloud.fedoraproject.org"

5 inventory/group_vars/copr-dist-git Normal file

@@ -0,0 +1,5 @@
---
tcp_ports: [22, 80]
datacenter: cloud
freezes: false

4 inventory/group_vars/copr-dist-git-stg Normal file

@@ -0,0 +1,4 @@
---
tcp_ports: [22, 80]
datacenter: cloud
freezes: false

@@ -2,10 +2,10 @@
tcp_ports: [22]

# http + signd dest ports
custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.16.5.5 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.142 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 172.16.5.5 --dport 5167 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.142 --dport 5167 -j ACCEPT']
custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.25.32.4 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.48 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 172.25.32.4 --dport 5167 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.48 --dport 5167 -j ACCEPT']

datacenter: cloud

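Both keygen groups open the same two ports (80 for http, 5167 for signd) to a pair of backend addresses, so each custom_rules list is a simple port-by-source product. A hypothetical helper showing that structure — the playbooks simply list the rules literally, this is only to make the pattern explicit:

```python
def accept_rules(src_ips, ports):
    """iptables ACCEPT rules, grouped by port as in the custom_rules lists above."""
    return ["-A INPUT -p tcp -m tcp -s %s --dport %d -j ACCEPT" % (ip, port)
            for port in ports
            for ip in src_ips]

# Reproduces the four copr-keygen rules for the new backend addresses
rules = accept_rules(["172.25.32.4", "209.132.184.48"], [80, 5167])
for rule in rules:
    print(rule)
```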
@@ -3,10 +3,10 @@ copr_hostbase: copr-keygen-dev
tcp_ports: []

# http + signd dest ports
custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.16.5.24 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.179 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 172.16.5.24 --dport 5167 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.179 --dport 5167 -j ACCEPT']
custom_rules: [ '-A INPUT -p tcp -m tcp -s 172.25.32.13 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.53 --dport 80 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 172.25.32.13 --dport 5167 -j ACCEPT',
    '-A INPUT -p tcp -m tcp -s 209.132.184.53 --dport 5167 -j ACCEPT']

datacenter: cloud

@@ -4,9 +4,14 @@ devel: true
_forward_src: "forward_dev"

# don't forget to update ip in ./copr-keygen-stg, due to custom firewall rules
copr_backend_ips: "172.16.5.24 209.132.184.179"
keygen_host: "172.16.1.6"

copr_backend_ips: ["172.25.32.13", "209.132.184.53"]
keygen_host: "172.25.32.11"

resolvconf: "resolv.conf/cloud"

backend_base_url: "http://copr-be-dev.cloud.fedoraproject.org"
postfix_maincf: "postfix/main.cf/main.cf.copr"

frontend_base_url: "http://copr-fe-dev.cloud.fedoraproject.org"
dist_git_base_url: "copr-dist-git-dev.fedorainfracloud.org"

@@ -14,3 +14,5 @@ fas_client_groups: sysadmin-main,sysadmin-dns

nrpe_procs_warn: 300
nrpe_procs_crit: 500

sudoers: "{{ private }}/files/sudo/sysadmin-dns"

@@ -6,4 +6,4 @@ nrpe_procs_warn: 900
nrpe_procs_crit: 1000

# nfs mount options, overrides the all/default
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600"
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600,nfsvers=3"

@@ -6,4 +6,4 @@ nrpe_procs_warn: 900
nrpe_procs_crit: 1000

# nfs mount options, overrides the all/default
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600"
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,actimeo=600,nfsvers=3"

@@ -4,11 +4,11 @@ lvm_size: 20000
mem_size: 2048
num_cpus: 2

tcp_ports: [ 80,
    # These 16 ports are used by fedmsg. One for each wsgi thread.
    3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
    3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
wsgi_fedmsg_service: fedora_elections
wsgi_procs: 2
wsgi_threads: 2

tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]

@@ -25,4 +25,9 @@ fedmsg_certs:
- service: fedora_elections
  owner: root
  group: apache

  can_send:
  - fedora_elections.candidate.delete
  - fedora_elections.candidate.edit
  - fedora_elections.candidate.new
  - fedora_elections.election.edit
  - fedora_elections.election.new

@@ -4,10 +4,11 @@ lvm_size: 20000
mem_size: 1024
num_cpus: 2

tcp_ports: [ 80,
    # These 16 ports are used by fedmsg. One for each wsgi thread.
    3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
    3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
wsgi_fedmsg_service: fedora_elections
wsgi_procs: 2
wsgi_threads: 2

tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]

@@ -24,4 +25,9 @@ fedmsg_certs:
- service: fedora_elections
  owner: root
  group: apache

  can_send:
  - fedora_elections.candidate.delete
  - fedora_elections.candidate.edit
  - fedora_elections.candidate.new
  - fedora_elections.election.edit
  - fedora_elections.election.new

@@ -7,15 +7,11 @@ num_cpus: 4
# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file

tcp_ports: [ 80, 873, 8443, 8444,
             # fas has 40 wsgi processes, each of which needs its own port
             # open for outbound fedmsg messages.
             3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
             3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015,
             3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023,
             3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031,
             3032, 3033, 3034, 3035, 3036, 3037, 3038, 3039,
             ]
wsgi_fedmsg_service: fas
wsgi_procs: 40
wsgi_threads: 1

tcp_ports: [ 80, 873, 8443, 8444 ]

fas_client_groups: sysadmin-main,sysadmin-accounts
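The fas comment spells out the sizing rule: one outbound fedmsg port per wsgi worker, allocated upward from 3000. Expressed as a tiny sketch (fedmsg_ports is a hypothetical helper; 40 procs × 1 thread reproduces the 3000-3039 range listed above):

```python
def fedmsg_ports(wsgi_procs, wsgi_threads, base=3000):
    """One outbound fedmsg port per wsgi worker thread, starting at `base`.

    Hypothetical helper illustrating the sizing rule from the group_vars."""
    return list(range(base, base + wsgi_procs * wsgi_threads))

ports = fedmsg_ports(40, 1)  # fas: wsgi_procs: 40, wsgi_threads: 1
print(ports[0], ports[-1])  # → 3000 3039
```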
@@ -36,3 +32,12 @@ fedmsg_certs:
  - service: fas
    owner: root
    group: fas
    can_send:
      - fas.group.create
      - fas.group.member.apply
      - fas.group.member.remove
      - fas.group.member.sponsor
      - fas.group.update
      - fas.role.update
      - fas.user.create
      - fas.user.update
@@ -7,15 +7,11 @@ num_cpus: 2
# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file

tcp_ports: [ 80, 873, 8443, 8444,
             # fas has 40 wsgi processes, each of which needs its own port
             # open for outbound fedmsg messages.
             3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
             3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015,
             3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023,
             3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031,
             3032, 3033, 3034, 3035, 3036, 3037, 3038, 3039,
             ]
wsgi_fedmsg_service: fas
wsgi_procs: 40
wsgi_threads: 1

tcp_ports: [ 80, 873, 8443, 8444 ]

fas_client_groups: sysadmin-main,sysadmin-accounts
@@ -36,3 +32,12 @@ fedmsg_certs:
  - service: fas
    owner: root
    group: fas
    can_send:
      - fas.group.create
      - fas.group.member.apply
      - fas.group.member.remove
      - fas.group.member.sponsor
      - fas.group.update
      - fas.role.update
      - fas.user.create
      - fas.user.update
@@ -6,7 +6,11 @@ num_cpus: 2
# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file

tcp_ports: [ 3000 ]
tcp_ports: [
             # These are all for outgoing fedmsg.
             3000, 3001, 3002, 3003, 3004, 3005, 3006,
             3007, 3008, 3009, 3010, 3011, 3012, 3013,
           ]

# TODO, restrict this down to just sysadmin-releng
fas_client_groups: sysadmin-datanommer,sysadmin-releng,sysadmin-fedimg
@@ -19,3 +23,6 @@ fedmsg_certs:
  - service: fedimg
    owner: root
    group: fedmsg
    can_send:
      - fedimg.image.test
      - fedimg.image.upload
@@ -6,7 +6,11 @@ num_cpus: 2
# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file

tcp_ports: [ 3000 ]
tcp_ports: [
             # These are all for outgoing fedmsg.
             3000, 3001, 3002, 3003, 3004, 3005, 3006,
             3007, 3008, 3009, 3010, 3011, 3012, 3013,
           ]

# TODO, restrict this down to just sysadmin-releng
fas_client_groups: sysadmin-datanommer,sysadmin-releng,sysadmin-fedimg
@@ -19,3 +23,6 @@ fedmsg_certs:
  - service: fedimg
    owner: root
    group: fedmsg
    can_send:
      - fedimg.image.test
      - fedimg.image.upload
@@ -1,15 +0,0 @@
---
# Define resources for this group of hosts here.
lvm_size: 20000
mem_size: 1024
num_cpus: 2

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file

tcp_ports: [ 80, 443 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]

fas_client_groups: sysadmin-main,sysadmin-accounts
@@ -27,3 +27,13 @@ fedmsg_certs:
  - service: fedocal
    owner: root
    group: apache
    can_send:
      - fedocal.calendar.clear
      - fedocal.calendar.delete
      - fedocal.calendar.new
      - fedocal.calendar.update
      - fedocal.calendar.upload
      - fedocal.meeting.delete
      - fedocal.meeting.new
      - fedocal.meeting.reminder
      - fedocal.meeting.update
@@ -27,3 +27,13 @@ fedmsg_certs:
  - service: fedocal
    owner: root
    group: apache
    can_send:
      - fedocal.calendar.clear
      - fedocal.calendar.delete
      - fedocal.calendar.new
      - fedocal.calendar.update
      - fedocal.calendar.upload
      - fedocal.meeting.delete
      - fedocal.meeting.new
      - fedocal.meeting.reminder
      - fedocal.meeting.update
@@ -4,13 +4,15 @@ lvm_size: 20000
mem_size: 2048
num_cpus: 2

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Defining these vars has a number of effects
# 1) mod_wsgi is configured to use the vars for its own setup
# 2) iptables opens enough ports for all threads for fedmsg
# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads
wsgi_fedmsg_service: github2fedmsg
wsgi_procs: 2
wsgi_threads: 2

tcp_ports: [ 80, 443,
             # These 16 ports are used by fedmsg. One for each wsgi thread.
             3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
             3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
@@ -25,3 +27,21 @@ fedmsg_certs:
  - service: github2fedmsg
    owner: root
    group: apache
    can_send:
      - github.commit_comment
      - github.create
      - github.delete
      - github.fork
      - github.issue.comment
      - github.issue.reopened
      - github.member
      - github.page_build
      - github.pull_request.closed
      - github.pull_request_review_comment
      - github.push
      - github.release
      - github.star
      - github.status
      - github.team_add
      - github.webhook
      - github.gollum
@@ -4,13 +4,15 @@ lvm_size: 20000
mem_size: 1024
num_cpus: 1

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Defining these vars has a number of effects
# 1) mod_wsgi is configured to use the vars for its own setup
# 2) iptables opens enough ports for all threads for fedmsg
# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads
wsgi_fedmsg_service: github2fedmsg
wsgi_procs: 2
wsgi_threads: 2

tcp_ports: [ 80, 443,
             # These 16 ports are used by fedmsg. One for each wsgi thread.
             3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
             3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
@@ -25,3 +27,21 @@ fedmsg_certs:
  - service: github2fedmsg
    owner: root
    group: apache
    can_send:
      - github.commit_comment
      - github.create
      - github.delete
      - github.fork
      - github.issue.comment
      - github.issue.reopened
      - github.member
      - github.page_build
      - github.pull_request.closed
      - github.pull_request_review_comment
      - github.push
      - github.release
      - github.star
      - github.status
      - github.team_add
      - github.webhook
      - github.gollum
inventory/group_vars/hosted (new file, 27 lines)
@@ -0,0 +1,27 @@
# Even though the hosted nodes are still deployed with puppet, we have this
# definition here so that the fedmsg authz policy can be generated correctly.
# ... when we eventually fully ansibilize these hosts, just fill out the rest of
# this file with the other vars we need. --threebean
fedmsg_certs:
  - service: shell
    owner: root
    group: sysadmin
  - service: trac
    owner: root
    group: apache
    can_send:
      - trac.ticket.delete
      - trac.ticket.new
      - trac.ticket.update
      - trac.wiki.page.delete
      - trac.wiki.page.new
      - trac.wiki.page.rename
      - trac.wiki.page.update
      - trac.wiki.page.version.delete
  - service: git
    owner: root
    group: cla_done
    can_send:
      - trac.git.receive
@ -19,3 +19,8 @@ fedmsg_certs:
|
|||
- service: hotness
|
||||
owner: root
|
||||
group: fedmsg
|
||||
can_send:
|
||||
- hotness.project.map
|
||||
- hotness.update.bug.file
|
||||
- hotness.update.bug.followup
|
||||
- hotness.update.drop
|
||||
|
|
|
@ -19,3 +19,8 @@ fedmsg_certs:
|
|||
- service: hotness
|
||||
owner: root
|
||||
group: fedmsg
|
||||
can_send:
|
||||
- hotness.project.map
|
||||
- hotness.update.bug.file
|
||||
- hotness.update.bug.followup
|
||||
- hotness.update.drop
|
||||
|
|
|
@ -1,4 +1,5 @@
|
|||
postfix_group: jenkins-cloud
|
||||
freezes: false
|
||||
|
||||
tcp_ports: [22, 80, 443]
|
||||
|
||||
|
@@ -10,3 +11,10 @@ fedmsg_certs:
  - service: jenkins
    owner: root
    group: jenkins
    can_send:
      - jenkins.build.aborted
      - jenkins.build.failed
      - jenkins.build.notbuilt
      - jenkins.build.passed
      - jenkins.build.start
      - jenkins.build.unstable
inventory/group_vars/jenkins-dev (new file, 184 lines)
@@ -0,0 +1,184 @@
---
datacenter: fedorainfracloud
freezes: false

slaves:
  - name: EL6
    host: jenkins-slave-el6.fedorainfracloud.org
    description: CentOS 6.6
    labels: el EL el6 EL6 centos CentOS centos6 CentOS6
  - name: EL7
    host: jenkins-slave-el7.fedorainfracloud.org
    description: Red Hat Enterprise Linux Server 7.1
    labels: el EL el7 EL7 rhel RHEL rhel7 RHEL7
  - name: F22
    host: jenkins-slave-f22.fedorainfracloud.org
    description: Fedora 22
    labels: fedora Fedora fedora22 Fedora22

# Packages installed on all Jenkins slaves (Fedora, CentOS)
slave_packages_common:
  - java-1.8.0-openjdk-devel
  - vim
  - subversion
  - bzr
  - git
  - rpmlint
  - rpmdevtools
  - mercurial
  - mock
  - gcc
  - gcc-c++
  - libjpeg-turbo-devel
  - python-bugzilla
  - python-pip
  - python-virtualenv
  - python-coverage
  - pylint
  - python-argparse
  - python-nose
  - python-BeautifulSoup
  - python-fedora
  - python-unittest2
  - python-pep8
  - python-psycopg2
  - postgresql-devel # Required to install python-psycopg2 w/in a venv
  - docbook-style-xsl # Required by gimp-help-2
  - make # Required by gimp-help-2
  - automake # Required by gimp-help-2
  - libcurl-devel # Required by blockerbugs
  - python-formencode # Required by javapackages-tools
  - asciidoc # Required by javapackages-tools
  - xmlto # Required by javapackages-tools
  - pycairo-devel # Required by dogtail
  - packagedb-cli # Required by FedoraReview
  - xorg-x11-server-Xvfb # Required by fedora-rube
  - libffi-devel # Required by bodhi/cffi/cryptography
  - openssl-devel # Required by bodhi/cffi/cryptography
  - redis # Required by copr
  - createrepo_c # Required by bodhi2
  - python-createrepo_c # Required by bodhi2
  - python-straight-plugin
  - pyflakes # Requested by user rholy (ticket #4175)
  - koji # Required by koschei (ticket #4852)
  - python-hawkey # Required by koschei (ticket #4852)
  - python-librepo # Required by koschei (ticket #4852)
  - rpm-python # Required by koschei (ticket #4852)

# Packages installed only on Fedora Jenkins slaves
slave_packages_fedora:
  - python3
  - python-nose-cover3
  - python3-nose-cover3
  - glibc.i686
  - glibc-devel.i686
  - libstdc++.i686
  - zlib-devel.i686
  - ncurses-devel.i686
  - libX11-devel.i686
  - libXrender.i686
  - libXrandr.i686
  - nspr-devel ## Requested by 389-ds-base
  - nss-devel
  - svrcore-devel
  - openldap-devel
  - libdb-devel
  - cyrus-sasl-devel
  - icu
  - libicu-devel
  - gcc-c++
  - net-snmp-devel
  - lm_sensors-devel
  - bzip2-devel
  - zlib-devel
  - openssl-devel
  - tcp_wrappers
  - pam-devel
  - systemd-units
  - policycoreutils-python
  - openldap-clients
  - perl-Mozilla-LDAP
  - nss-tools
  - cyrus-sasl-gssapi
  - cyrus-sasl-md5
  - libdb-utils
  - systemd-units
  - perl-Socket
  - perl-NetAddr-IP
  - pcre-devel ## End of request list for 389-ds-base
  - maven # Required by xmvn https://fedorahosted.org/fedora-infrastructure/ticket/4054
  - gtk3-devel # Required by dogtail
  - glib2-devel # Required by Cockpit
  - libgudev1-devel
  - json-glib-devel
  - gobject-introspection-devel
  - libudisks2-devel
  - NetworkManager-glib-devel
  - systemd-devel
  - accountsservice-devel
  - pam-devel
  - autoconf
  - libtool
  - intltool
  - jsl
  - python-scss
  - gtk-doc
  - krb5-devel
  - sshpass
  - perl-Locale-PO
  - perl-JSON
  - glib-networking
  - realmd
  - udisks2
  - mdadm
  - lvm2
  - sshpass # End requires for Cockpit
  - tito # Requested by msrb for javapackages-tools and xmvn (ticket#4113)
  - pyflakes # Requested by user rholy (ticket #4175)
  - devscripts-minimal # Required by FedoraReview
  - firefox # Required for rube
  - python-devel # Required for mpi4py
  - python3-devel # Required for mpi4py
  - pwgen # Required for mpi4py
  - openmpi-devel # Required for mpi4py
  - mpich2-devel # Required for mpi4py
  - pylint # Required by Ipsilon
  - python-pep8
  - nodejs-less
  - python-openid
  - python-openid-teams
  - python-openid-cla
  - python-cherrypy
  - m2crypto
  - lasso-python
  - python-sqlalchemy
  - python-ldap
  - python-pam
  - python-fedora
  - freeipa-python
  - httpd
  - mod_auth_mellon
  - postgresql-server
  - openssl
  - mod_wsgi
  - python-jinja2
  - python-psycopg2
  - sssd
  - libsss_simpleifp
  - openldap-servers
  - mod_auth_gssapi
  - krb5-server
  - socket_wrapper
  - nss_wrapper
  - python-requests-kerberos
  - python-lesscpy # End requires for Ipsilon
  - libxml2-python # Required by gimp-docs
  - createrepo # Required by dnf
  - dia # Required by javapackages-tools ticket #4279

# Packages installed only on CentOS Jenkins slaves
slave_packages_centos:
  # "setup" is just a placeholder value
  - setup
  # el7-only
  # - python-webob1.4 # Required by bodhi2
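Each slave's labels field is a space-separated list that Jenkins uses to route jobs to matching nodes. A small sketch of selecting slaves by label from this structure; with_label is a hypothetical helper, and the data is abbreviated from the slaves list above:

```python
# Abbreviated from the jenkins-dev slaves list above.
slaves = [
    {"name": "EL6", "labels": "el EL el6 EL6 centos CentOS centos6 CentOS6"},
    {"name": "EL7", "labels": "el EL el7 EL7 rhel RHEL rhel7 RHEL7"},
    {"name": "F22", "labels": "fedora Fedora fedora22 Fedora22"},
]

def with_label(slaves, label):
    """Names of slaves whose space-separated label list contains `label`.

    Hypothetical helper; Jenkins does its own label matching server-side."""
    return [s["name"] for s in slaves if label in s["labels"].split()]

print(with_label(slaves, "el"))  # → ['EL6', 'EL7']
```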
@@ -4,13 +4,15 @@ lvm_size: 20000
mem_size: 1024
num_cpus: 1

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Defining these vars has a number of effects
# 1) mod_wsgi is configured to use the vars for its own setup
# 2) iptables opens enough ports for all threads for fedmsg
# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads
wsgi_fedmsg_service: kerneltest
wsgi_procs: 2
wsgi_threads: 1

tcp_ports: [ 80, 443,
             # These 16 ports are used by fedmsg. One for each wsgi thread.
             3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
             3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
@@ -25,3 +27,7 @@ fedmsg_certs:
  - service: kerneltest
    owner: root
    group: apache
    can_send:
      - kerneltest.release.edit
      - kerneltest.release.new
      - kerneltest.upload.new
@@ -4,13 +4,15 @@ lvm_size: 20000
mem_size: 1024
num_cpus: 1

# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Defining these vars has a number of effects
# 1) mod_wsgi is configured to use the vars for its own setup
# 2) iptables opens enough ports for all threads for fedmsg
# 3) roles/fedmsg/base/ declares enough fedmsg endpoints for all threads
wsgi_fedmsg_service: kerneltest
wsgi_procs: 2
wsgi_threads: 1

tcp_ports: [ 80, 443,
             # These 16 ports are used by fedmsg. One for each wsgi thread.
             3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007,
             3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015]
tcp_ports: [ 80 ]

# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
@@ -25,3 +27,7 @@ fedmsg_certs:
  - service: kerneltest
    owner: root
    group: apache
    can_send:
      - kerneltest.release.edit
      - kerneltest.release.new
      - kerneltest.upload.new
@@ -26,8 +26,17 @@ fedmsg_certs:
  - service: koji
    owner: root
    group: apache
    can_send:
      - buildsys.build.state.change
      - buildsys.package.list.change
      - buildsys.repo.done
      - buildsys.repo.init
      - buildsys.rpm.sign
      - buildsys.tag
      - buildsys.task.state.change
      - buildsys.untag

nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid"
nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"

virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
                      --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
@@ -38,3 +47,5 @@ virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
                      ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none"
                      --network=bridge=br0,model=virtio --network=bridge=br1,model=virtio
                      --autostart --noautoconsole

sudoers: "{{ private }}/files/sudo/arm-releng-sudoers"
inventory/group_vars/koji-not-yet-ansibilized (new file, 17 lines)
@@ -0,0 +1,17 @@
# See the comment with the explanation of this group in ``inventory/inventory``
fedmsg_certs:
  - service: shell
    owner: root
    group: sysadmin
  - service: koji
    owner: root
    group: apache
    can_send:
      - buildsys.build.state.change
      - buildsys.package.list.change
      - buildsys.repo.done
      - buildsys.repo.init
      - buildsys.rpm.sign
      - buildsys.tag
      - buildsys.task.state.change
      - buildsys.untag
@@ -1,7 +1,7 @@
---
# Define resources for this group of hosts here.
lvm_size: 30000
mem_size: 2048
mem_size: 4096
num_cpus: 2

# for systems that do not match the above - specify the same parameter in
@@ -22,5 +22,20 @@ fedmsg_certs:
  - service: koji
    owner: root
    group: apache
    can_send:
      - buildsys.build.state.change
      - buildsys.package.list.change
      - buildsys.repo.done
      - buildsys.repo.init
      - buildsys.rpm.sign
      - buildsys.tag
      - buildsys.task.state.change
      - buildsys.untag

nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid"
# NOTE -- staging mounts read-only
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid"
sudoers: "{{ private }}/files/sudo/arm-releng-sudoers"

koji_server_url: "http://koji.stg.fedoraproject.org/kojihub"
koji_weburl: "http://koji.stg.fedoraproject.org/koji"
koji_topurl: "http://kojipkgs.fedoraproject.org/"
|
@ -29,6 +29,7 @@ csi_relationship: |
|
|||
|
||||
- Things that rely on this host:
|
||||
- all koji builders/buildsystem
|
||||
- koschei
|
||||
- external users downloading packages from koji.
|
||||
|
||||
# Need a eth0/eth1 install here.
|
||||
|
|