openstack: remove more lingering files and playbooks and libraries.

Signed-off-by: Kevin Fenzi <kevin@scrye.com>
This commit is contained in:
Kevin Fenzi 2020-02-28 20:53:11 +00:00 committed by Pierre-Yves Chibon
parent 00af04a024
commit aa580f72c5
17 changed files with 0 additions and 2230 deletions


@@ -1,186 +0,0 @@
== Cloud information ==
The dashboard for the production cloud instance is:
https://fedorainfracloud.org/dashboard/
You can download credentials via the dashboard (under security and access)
=== Transient instances ===
Transient instances are short-term instances for Fedora
contributors. They can be terminated at any time and shouldn't be
relied on for any production use. If you have an application
or longer-term item that should always be around,
please create a persistent instance playbook instead (see below).
To start up a new transient cloud instance and configure it for basic
server use, run (as root):
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=somename'
The -i is important: ansible's tools need access to root's ssh agent as well
as the cloud credentials to run the above playbook successfully.
This will set up a new instance, provision it, and email sysadmin-main that
the instance was created, along with its IP address.
You will then be able to log in as root if you are in the sysadmin-main group.
(If you are making the instance for another user, see below.)
You MUST pass a name to it, e.g.: -e 'name=somethingdescriptive'
You can optionally override defaults by passing any of the following:
image=imagename (default is centos70_x86_64)
instance_type=some instance type (default is m1.small)
root_auth_users='user1 user2 user3 @group1' (default always includes sysadmin-main group)
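For example, a hypothetical invocation that overrides the image and size
(the name and values here are purely illustrative) might look like:
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=mytestbox image=centos70_x86_64 instance_type=m1.medium'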
Note: if you run this playbook multiple times with the same name=,
openstack is smart enough to just return the current IP of that instance
and go on. This way you can re-run it if you want to reconfigure the
instance without reprovisioning it.
Size options
------------
Name        Memory_MB  Disk  VCPUs
m1.tiny           512     0      1
m1.small         2048    20      1
m1.medium        4096    40      2
m1.large         8192    80      4
m1.xlarge       16384   160      8
m1.builder       5120    50      3
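The table above can drift from reality; to list the flavors the cloud
actually defines (a sketch, assuming the same novarc credentials used in
the persistent section below):
source /srv/private/ansible/files/openstack/novarc
nova flavor-list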
=== Persistent cloud instances ===
Persistent cloud instances are ones that we want to always have up and
configured. These are things like dev instances for various applications,
proof-of-concept servers for evaluating something, etc. They will be
reprovisioned after a reboot/maintenance window for the cloud.
Setting up a new persistent cloud host:
1) Select an available floating IP
source /srv/private/ansible/files/openstack/novarc
nova floating-ip-list
Note that an "available floating IP" is one that has only a "-" in the Fixed IP
column of the above `nova` command. Ignore the fact that the "Server Id" column
is completely blank for all instances. If there are no IPs with "-", use:
nova floating-ip-create
and retry the list.
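As an illustration (the addresses here are made up, and the exact columns
vary by novaclient version), an available IP is one like the first row below:
nova floating-ip-list
+----------------+-----------+----------+----------+
| Ip             | Server Id | Fixed Ip | Pool     |
+----------------+-----------+----------+----------+
| 209.132.184.35 |           | -        | external |
| 209.132.184.36 |           | 10.0.0.4 | external |
+----------------+-----------+----------+----------+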
2) Add that IP addr to dns (typically as foo.fedorainfracloud.org)
3) Create a persistent storage disk for the instance (if necessary; you might
not need this).
nova volume-create --display-name SOME_NAME SIZE_IN_GB
4) Add to ansible inventory in the persistent-cloud group.
You should use the FQDN for this and not the IP. Names are good.
5) Set up the host_vars file. It should look something like this::
instance_type: m1.medium
image:
keypair: fedora-admin-20130801
security_group: default # NOTE: security_group MUST contain default.
zone: nova
tcp_ports: [22, 80, 443]
inventory_tenant: persistent
inventory_instance_name: taiga
hostbase: taiga
public_ip: 209.132.184.50
root_auth_users: ralph maxamillion
description: taiga frontend server
volumes:
- volume_id: VOLUME_UUID_GOES_HERE
device: /dev/vdc
cloud_networks:
# persistent-net
- net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
6) Set up the host playbook.
7) Run the playbook:
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
You should be able to run that playbook over and over again safely; it will
only set up/create a new instance if the IP is not up/responding.
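Putting steps 1-7 together, a hypothetical first-time run for a host named
foo.fedorainfracloud.org (the hostname, volume name and size are illustrative)
might look like:
source /srv/private/ansible/files/openstack/novarc
nova floating-ip-list   # pick an available IP and add it to DNS
nova volume-create --display-name foo-data 50
# add the host to the persistent-cloud inventory group, write its
# host_vars file and host playbook, then:
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/foo.fedorainfracloud.org.yml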
=== SECURITY GROUPS ===
FIXME: needs work for new cloud.
- To edit security groups you must either have your own cloud account or
be a member of sysadmin-main.
This gives you the credentials to change things in the persistent tenant:
- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
This lists all security groups in that tenant:
- euca-describe-groups | grep GROUP
The output will look like this:
euca-describe-groups | grep GROUP
GROUP d4e664a10e2c4210839150be09c46e5e default default
GROUP d4e664a10e2c4210839150be09c46e5e logstash logstash security group
GROUP d4e664a10e2c4210839150be09c46e5e smtpserver list server group. needs web and smtp
GROUP d4e664a10e2c4210839150be09c46e5e webserver webserver security group
GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
This lets you list the rules in a specific group:
- euca-describe-group groupname
The output will look like this:
euca-describe-group wideopen
GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS tcp 1 65535 FROM CIDR 0.0.0.0/0
PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS icmp -1 -1 FROM CIDR 0.0.0.0/0
To create a new group:
euca-create-group -d "group description here" groupname
To add a rule to a group:
euca-authorize -P tcp -p 22 groupname
euca-authorize -P icmp -t -1:-1 groupname
To delete a rule from a group:
euca-revoke -P tcp -p 22 groupname
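As a worked example (the group name is illustrative), a basic web server
group allowing ssh, http, https and ping could be built like this:
euca-create-group -d "basic web server" mywebserver
euca-authorize -P tcp -p 22 mywebserver
euca-authorize -P tcp -p 80 mywebserver
euca-authorize -P tcp -p 443 mywebserver
euca-authorize -P icmp -t -1:-1 mywebserver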
Notes:
- Be careful removing or adding rules to existing groups because you could be
impacting other instances using that security group.
- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
and traceroute and friends).
=== TERMINATING INSTANCES ===
For transient:
1. source /srv/private/ansible/files/openstack/novarc
2. export OS_TENANT_NAME=transient
3. nova list | grep <ip of your instance or name of your instance>
4. nova delete <name of instance or ID of instance>
- OR -
For persistent:
1. source /srv/private/ansible/files/openstack/novarc
2. nova list | grep <ip of your instance or name of your instance>
3. nova delete <name of instance or ID of instance>


@@ -1,2 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCv8WqXOuL78Rd7ZvDqoi84M7uRV3uueXTXtvlPdyNQBzIBmxh+spw9IhtoR+FlzgQQ1MN4B7YVLTGki6QDxWDM5jgTVfzxTh/HTg7kJ31HbM1/jDuBK7HMfay2BGx/HCqS2oxIBgIBwIMQAU93jBZUxNyYWvO+5TiU35IHEkYOtHyGYtTtuGCopYRQoAAOIVIIzzDbPvopojCBF5cMYglR/G02YgWM7hMpQ9IqEttLctLmpg6ckcp/sDTHV/8CbXbrSN6pOYxn1YutOgC9MHNmxC1joMH18qkwvSnzXaeVNh4PBWnm1f3KVTSZXKuewPThc3fk2sozgM9BH6KmZoKl


@@ -1 +0,0 @@
{{fed_cloud09_nova_public_key}}


@@ -1 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1sBKROSJ3rzI0IlBkM926Dvpiw3a4wYSys0ZeKRohWZg369ilZkUkRhsy0g4JU85lt6rxf5JLwURF+fWBEohauF1Uvklc25LdZpRS3IBQPaXvWeM8lygQQomFc0Df6iUbCYFWnEWMjKd7FGYX3DgOZLnG8tV2vX7jFjqitsh5LRAbmghUBRarw/ix4CFx7+VIeKCBkAybviQIW828N1IqJC6/e7v6/QStpblYpCFPqMflXhQ/KS2D043Yy/uUjmOjMWwOMFS6Qk+py1C0mDU0TUptFYwDP5o9IK/c5HaccmOl2IyUPB1/RCtTfOn6wXPRTMUU+5w+TcPH6MPvvuiSQ== root@lockbox01.phx2.fedoraproject.org


@@ -1,135 +0,0 @@
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
tune.ssl.default-dh-param 1024
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#frontend keystone_public *:5000
# default_backend keystone_public
#frontend keystone_admin *:35357
# default_backend keystone_admin
frontend neutron
bind 0.0.0.0:9696 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend neutron
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
frontend cinder
bind 0.0.0.0:8776 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend cinder
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
frontend swift
bind 0.0.0.0:8080 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend swift
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
frontend nova
bind 0.0.0.0:8774 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend nova
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
frontend ceilometer
bind 0.0.0.0:8777 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend ceilometer
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
frontend ec2
bind 0.0.0.0:8773 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend ec2
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
frontend glance
bind 0.0.0.0:9292 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
default_backend glance
# HSTS (31536000 seconds = 365 days)
rspadd Strict-Transport-Security:\ max-age=31536000
backend neutron
server neutron 127.0.0.1:8696 check
backend cinder
server cinder 127.0.0.1:6776 check
backend swift
server swift 127.0.0.1:7080 check
backend nova
server nova 127.0.0.1:6774 check
backend ceilometer
server ceilometer 127.0.0.1:6777 check
backend ec2
server ec2 127.0.0.1:6773 check
backend glance
server glance 127.0.0.1:7292 check
backend keystone_public
server keystone_public 127.0.0.1:5000 check
backend keystone_admin
server keystone_admin 127.0.0.1:35357 check


@@ -1,9 +0,0 @@
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR={{ network_public_ip }}
NETMASK={{ public_netmask }} # your netmask
GATEWAY={{ public_gateway_ip }} # your gateway
DNS1={{ public_dns }} # your nameserver
ONBOOT=yes


@@ -1,8 +0,0 @@
DEVICE="eth0"
NAME="eth0"
ONBOOT=yes
BOOTPROTO=none
HWADDR="f0:1f:af:e3:5f:0c"
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex


@@ -1,5 +0,0 @@
export OS_USERNAME=msuchy
export OS_TENANT_NAME=copr
export OS_PASSWORD=TBD
export OS_AUTH_URL=http://209.132.184.9:5000/v2.0/
export PS1='[\u@\h \W(keystone_msuchy)]\$ '


@@ -1,4 +0,0 @@
[client]
host=localhost
user=root
password={{ DBPASSWORD }}


@@ -1 +0,0 @@
StrictHostKeyChecking no


@@ -1,2 +0,0 @@
# You may specify other parameters to the nova-novncproxy here
OPTIONS="--novncproxy_host 209.132.184.9 --ssl_only"


@@ -1,512 +0,0 @@
[general]
# Path to a Public key to install on servers. If a usable key has not
# been installed on the remote servers the user will be prompted for a
# password and this key will be installed so the password will not be
# required again
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
# Set to 'y' if you would like Packstack to install MySQL
CONFIG_MARIADB_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack Image
# Service (Glance)
CONFIG_GLANCE_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack Block
# Storage (Cinder)
CONFIG_CINDER_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack Compute
# (Nova)
CONFIG_NOVA_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack
# Networking (Neutron)
CONFIG_NEUTRON_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack Object
# Storage (Swift)
CONFIG_SWIFT_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack
# Metering (Ceilometer)
CONFIG_CEILOMETER_INSTALL=y
# Set to 'y' if you would like Packstack to install OpenStack
# Orchestration (Heat)
CONFIG_HEAT_INSTALL=n
# Set to 'y' if you would like Packstack to install the OpenStack
# Client packages. An admin "rc" file will also be installed
CONFIG_CLIENT_INSTALL=y
# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=
# Set to 'y' if you would like Packstack to install Nagios to monitor
# OpenStack hosts
CONFIG_NAGIOS_INSTALL=n
# Comma separated list of servers to be excluded from installation in
# case you are running Packstack the second time with the same answer
# file and don't want Packstack to touch these servers. Leave plain if
# you don't need to exclude any server.
EXCLUDE_SERVERS=
# Set to 'y' if you want to run OpenStack services in debug mode.
# Otherwise set to 'n'.
CONFIG_DEBUG_MODE=n
# Set to 'y' if you want to use VMware vCenter as hypervisor and
# storage. Otherwise set to 'n'.
CONFIG_VMWARE_BACKEND=n
# The IP address of the server on which to install MySQL
CONFIG_MARIADB_HOST={{ controller_public_ip }}
# Username for the MySQL admin user
CONFIG_MARIADB_USER=root
# Password for the MySQL admin user
CONFIG_MARIADB_PW={{ DBPASSWORD }}
# Set the server for the AMQP service
CONFIG_AMQP_BACKEND=rabbitmq
# The IP address of the server on which to install the AMQP service
CONFIG_AMQP_HOST={{ controller_public_ip }}
# Enable SSL for the AMQP service
CONFIG_AMQP_ENABLE_SSL=n
# Enable Authentication for the AMQP service
CONFIG_AMQP_ENABLE_AUTH=y
# The password for the NSS certificate database of the AMQP service
CONFIG_AMQP_NSS_CERTDB_PW={{ CONFIG_AMQP_NSS_CERTDB_PW }}
# The port in which the AMQP service listens to SSL connections
CONFIG_AMQP_SSL_PORT=5671
# The filename of the certificate that the AMQP service is going to
# use
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/fedorainfracloud.org.pem
# The filename of the private key that the AMQP service is going to
# use
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/fedorainfracloud.org.key
# Auto Generates self signed SSL certificate and key
CONFIG_AMQP_SSL_SELF_SIGNED=n
# User for amqp authentication
CONFIG_AMQP_AUTH_USER=amqp_user
# Password for user authentication
CONFIG_AMQP_AUTH_PASSWORD={{ CONFIG_AMQP_AUTH_PASSWORD }}
# The password to use for the Keystone to access DB
CONFIG_KEYSTONE_DB_PW={{ KEYSTONE_DBPASS }}
# The token to use for the Keystone service api
CONFIG_KEYSTONE_ADMIN_TOKEN={{ ADMIN_TOKEN }}
# The password to use for the Keystone admin user
CONFIG_KEYSTONE_ADMIN_PW={{ ADMIN_PASS }}
# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW={{ DEMO_PASS }}
# Keystone token format. Use either UUID or PKI
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
# The password to use for the Glance to access DB
CONFIG_GLANCE_DB_PW={{ GLANCE_DBPASS }}
# The password to use for the Glance to authenticate with Keystone
CONFIG_GLANCE_KS_PW={{ GLANCE_PASS }}
# The password to use for the Cinder to access DB
CONFIG_CINDER_DB_PW={{ CINDER_DBPASS }}
# The password to use for the Cinder to authenticate with Keystone
CONFIG_CINDER_KS_PW={{ CINDER_PASS }}
# The Cinder backend to use, valid options are: lvm, gluster, nfs,
# vmdk
CONFIG_CINDER_BACKEND=lvm
# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=n
# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=5G
# A single or comma separated list of gluster volume shares to mount,
# eg: ip-address:/vol-name, domain:/vol-name
CONFIG_CINDER_GLUSTER_MOUNTS=
# A single or comma separated list of NFS exports to mount, eg: ip-
# address:/export-name
CONFIG_CINDER_NFS_MOUNTS=
# The IP address of the VMware vCenter datastore
CONFIG_VCENTER_HOST=
# The username to authenticate to VMware vCenter datastore
CONFIG_VCENTER_USER=
# The password to authenticate to VMware vCenter datastore
CONFIG_VCENTER_PASSWORD=
# A comma separated list of IP addresses on which to install the Nova
# Compute services
CONFIG_COMPUTE_HOSTS={{ controller_public_ip }}
# The IP address of the server on which to install the Nova Conductor
# service
CONFIG_NOVA_CONDUCTOR_HOST={{ controller_public_ip }}
# The password to use for the Nova to access DB
CONFIG_NOVA_DB_PW={{ NOVA_DBPASS }}
# The password to use for the Nova to authenticate with Keystone
CONFIG_NOVA_KS_PW={{ NOVA_PASS }}
# The overcommitment ratio for virtual to physical CPUs. Set to 1.0
# to disable CPU overcommitment
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
# The overcommitment ratio for virtual to physical RAM. Set to 1.0 to
# disable RAM overcommitment
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=lo
# The list of IP addresses of the server on which to install the Nova
# Nova network manager
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth0
# Private interface for network manager on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth1
# IP Range for network manager
CONFIG_NOVA_NETWORK_FIXEDRANGE={{ internal_interface_cidr }}
# IP Range for Floating IP's
CONFIG_NOVA_NETWORK_FLOATRANGE={{ public_interface_cidr }}
# Name of the default floating pool to which the specified floating
# ranges are added to
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=external
# Automatically assign a floating IP to new instances
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
# First VLAN for private networks
CONFIG_NOVA_NETWORK_VLAN_START=100
# Number of networks to support
CONFIG_NOVA_NETWORK_NUMBER=1
# Number of addresses in each private subnet
CONFIG_NOVA_NETWORK_SIZE=255
# The IP address of the VMware vCenter server
CONFIG_VCENTER_HOST=
# The username to authenticate to VMware vCenter server
CONFIG_VCENTER_USER=
# The password to authenticate to VMware vCenter server
CONFIG_VCENTER_PASSWORD=
# The name of the vCenter cluster
CONFIG_VCENTER_CLUSTER_NAME=
# The password to use for Neutron to authenticate with Keystone
CONFIG_NEUTRON_KS_PW={{ NEUTRON_PASS }}
# The password to use for Neutron to access DB
CONFIG_NEUTRON_DB_PW={{ NEUTRON_DBPASS }}
# A comma separated list of IP addresses on which to install Neutron
CONFIG_NETWORK_HOSTS={{ controller_public_ip }}
# The name of the bridge that the Neutron L3 agent will use for
# external traffic, or 'provider' if using provider networks
CONFIG_NEUTRON_L3_EXT_BRIDGE=provider
# The name of the L2 plugin to be used with Neutron
CONFIG_NEUTRON_L2_PLUGIN=ml2
# A comma separated list of IP addresses on which to install Neutron
# metadata agent
CONFIG_NEUTRON_METADATA_PW={{ NEUTRON_PASS }}
# Set to 'y' if you would like Packstack to install Neutron LBaaS
CONFIG_LBAAS_INSTALL=y
# Set to 'y' if you would like Packstack to install Neutron L3
# Metering agent
CONFIG_NEUTRON_METERING_AGENT_INSTALL=y
# Whether to configure neutron Firewall as a Service
CONFIG_NEUTRON_FWAAS=y
# A comma separated list of network type driver entrypoints to be
# loaded from the neutron.ml2.type_drivers namespace.
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local,flat,gre
# A comma separated ordered list of network_types to allocate as
# tenant networks. The value 'local' is only useful for single-box
# testing but provides no connectivity between hosts.
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=gre
# A comma separated ordered list of networking mechanism driver
# entrypoints to be loaded from the neutron.ml2.mechanism_drivers
# namespace.
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
# A comma separated list of physical_network names with which flat
# networks can be created. Use * to allow flat networks with arbitrary
# physical_network names.
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
# A comma separated list of <physical_network>:<vlan_min>:<vlan_max>
# or <physical_network> specifying physical_network names usable for
# VLAN provider and tenant networks, as well as ranges of VLAN tags on
# each available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=
# A comma separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network
# allocation. Should be an array with tun_max +1 - tun_min > 1000000
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1:1000
# Multicast group for VXLAN. When configured, broadcast traffic is sent
# to this multicast group; when left unconfigured, multicast VXLAN mode
# is disabled. Should be a multicast IP (v4 or v6) address.
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
# A comma separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network
# allocation. Min value is 0 and Max value is 16777215.
CONFIG_NEUTRON_ML2_VNI_RANGES=
# The name of the L2 agent to be used with Neutron
CONFIG_NEUTRON_L2_AGENT=openvswitch
# The type of network to allocate for tenant networks (eg. vlan,
# local)
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=gre
# A comma separated list of VLAN ranges for the Neutron linuxbridge
# plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999)
CONFIG_NEUTRON_LB_VLAN_RANGES=
# A comma separated list of interface mappings for the Neutron
# linuxbridge plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3
# :br-eth3)
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
# Type of network to allocate for tenant networks (eg. vlan, local,
# gre, vxlan)
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
# A comma separated list of VLAN ranges for the Neutron openvswitch
# plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999)
CONFIG_NEUTRON_OVS_VLAN_RANGES=floatnet
# A comma separated list of bridge mappings for the Neutron
# openvswitch plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3
# :br-eth3)
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=floatnet:br-ex
# A comma separated list of colon-separated OVS bridge:interface
# pairs. The interface will be added to the associated bridge.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
# A comma separated list of tunnel ranges for the Neutron openvswitch
# plugin (eg. 1:1000)
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000
# The interface for the OVS tunnel. Packstack will override the IP
# address used for tunnels on this hypervisor to the IP found on the
# specified interface. (eg. eth1)
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
# VXLAN UDP port
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
# To set up Horizon communication over https set this to "y"
CONFIG_HORIZON_SSL=y
# PEM encoded certificate to be used for ssl on the https server,
# leave blank if one should be generated, this certificate should not
# require a passphrase
CONFIG_SSL_CERT=/etc/pki/tls/certs/fedorainfracloud.org.pem
# PEM encoded CA certificates from which the certificate chain of the
# server certificate can be assembled.
CONFIG_SSL_CACHAIN=/etc/pki/tls/certs/fedorainfracloud.org.digicert.pem
# Keyfile corresponding to the certificate if one was entered
CONFIG_SSL_KEY=/etc/pki/tls/private/fedorainfracloud.key
# The password to use for the Swift to authenticate with Keystone
CONFIG_SWIFT_KS_PW={{ SWIFT_PASS }}
# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# <ipaddress>[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device(packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGES={{ swift_storages }}
# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1
# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1
# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
# Shared secret for Swift
CONFIG_SWIFT_HASH={{ SWIFT_HASH }}
# Size of the swift loopback file storage device
CONFIG_SWIFT_STORAGE_SIZE=2G
# Whether to provision for demo usage and testing. Note that
# provisioning is only supported for all-in-one installations.
CONFIG_PROVISION_DEMO=n
# Whether to configure tempest for testing. Note that provisioning is
# only supported for all-in-one installations.
CONFIG_PROVISION_TEMPEST=n
# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=
# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW={{ HEAT_DBPASS }}
# The encryption key to use for authentication info in database
CONFIG_HEAT_AUTH_ENC_KEY={{ HEAT_AUTH_ENC_KEY }}
# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW={{ HEAT_PASS }}
# Set to 'y' if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
# Set to 'y' if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=n
# The IP address of the server on which to install Heat CloudWatch
# API service
CONFIG_HEAT_CLOUDWATCH_HOST={{ controller_public_ip }}
# The IP address of the server on which to install Heat
# CloudFormation API service
CONFIG_HEAT_CFN_HOST={{ controller_public_ip }}
# The IP address of the management node
CONFIG_CONTROLLER_HOST={{ controller_public_ip }}
# Secret key for signing metering messages.
CONFIG_CEILOMETER_SECRET={{ CEILOMETER_SECRET }}
# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW={{ CEILOMETER_PASS }}
# The IP address of the server on which to install mongodb
CONFIG_MONGODB_HOST=127.0.0.1
# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=
# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=y
# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=
# To subscribe each server with Red Hat subscription manager, include
# this with CONFIG_RH_PW
CONFIG_RH_USER=
# To subscribe each server with Red Hat subscription manager, include
# this with CONFIG_RH_USER
CONFIG_RH_PW=
# To subscribe each server to Red Hat Enterprise Linux 6 Server Beta
# channel (only needed for Preview versions of RHOS) enter "y"
CONFIG_RH_BETA_REPO=n
# To subscribe each server with RHN Satellite, fill in Satellite's URL
# here. Note that either satellite's username/password or activation
# key has to be provided
CONFIG_SATELLITE_URL=
# Username to access RHN Satellite
CONFIG_SATELLITE_USER=
# Password to access RHN Satellite
CONFIG_SATELLITE_PW=
# Activation key for subscription to RHN Satellite
CONFIG_SATELLITE_AKEY=
# Specify a path or URL to a SSL CA certificate to use
CONFIG_SATELLITE_CACERT=
# If required specify the profile name that should be used as an
# identifier for the system in RHN Satellite
CONFIG_SATELLITE_PROFILE=
# Comma separated list of flags passed to rhnreg_ks. Valid flags are:
# novirtinfo, norhnsd, nopackages
CONFIG_SATELLITE_FLAGS=
# Specify a HTTP proxy to use with RHN Satellite
CONFIG_SATELLITE_PROXY=
# Specify a username to use with an authenticated HTTP proxy
CONFIG_SATELLITE_PROXY_USER=
# Specify a password to use with an authenticated HTTP proxy.
CONFIG_SATELLITE_PROXY_PW=


@@ -1,32 +0,0 @@
# Warning! Dangerous step! Destroys VMs
# If you do know what you are doing, feel free to remove the line below to proceed.
exit 1
# Also, if you really insist on removing the VMs, uncomment the vgremove near the bottom.
for x in $(virsh list --all | grep instance- | awk '{print $2}') ; do
virsh destroy $x ;
virsh undefine $x ;
done ;
# Warning! Dangerous step! Removes lots of packages, including many
# which may be unrelated to RDO.
yum remove -y nrpe "*openstack*" \
"*nova*" "*keystone*" "*glance*" "*cinder*" "*swift*" \
mysql mysql-server httpd "*memcache*" ;
ps -ef | grep -i repli | grep swift | awk '{print $2}' | xargs kill ;
# Warning! Dangerous step! Deletes local application data
rm -rf /etc/nagios /etc/yum.repos.d/packstack_* /root/.my.cnf \
/var/lib/mysql/* /var/lib/glance /var/lib/nova /etc/nova /etc/swift \
/srv/node/device*/* /var/lib/cinder/ /etc/rsync.d/frag* \
/var/cache/swift /var/log/keystone ;
umount /srv/node/device* ;
killall -9 dnsmasq tgtd httpd ;
#vgremove -f cinder-volumes ;
losetup -a | sed -e 's/:.*//g' | xargs losetup -d ;
find /etc/pki/tls -name "ssl_ps*" | xargs rm -rf ;
for x in $(df | grep "/lib/" | sed -e 's/.* //g') ; do
umount $x ;
done


@@ -1,604 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013, Benno Joy <benno@ansible.com>
# (c) 2013, John Dewey <john@dewey.ws>
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['deprecated'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: nova_compute
version_added: "1.2"
author: "Benno Joy (@bennojoy)"
deprecated: Deprecated in 2.0. Use M(os_server) instead.
short_description: Create/Delete VMs from OpenStack
description:
- Create or Remove virtual machines from Openstack.
options:
login_username:
description:
- login username to authenticate to keystone
required: true
default: admin
login_password:
description:
- Password of login user
required: true
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: 'yes'
auth_url:
description:
- The keystone url for authentication
required: false
default: http://127.0.0.1:35357/v2.0/
region_name:
description:
- Name of the region
required: false
default: None
state:
description:
- Indicate desired state of the resource
choices: ['present', 'absent']
default: present
name:
description:
- Name that has to be given to the instance
required: true
default: None
image_id:
description:
- The id of the base image to boot. Mutually exclusive with image_name
required: true
default: None
image_name:
description:
- The name of the base image to boot. Mutually exclusive with image_id
required: true
default: None
version_added: "1.8"
image_exclude:
description:
- Text to use to filter image names, for the case, such as HP, where there are multiple image names matching the common identifying portions. image_exclude is a negative match filter - it is text that may not exist in the image name. Defaults to "(deprecated)"
version_added: "1.8"
flavor_id:
description:
- The id of the flavor in which the new VM has to be created. Mutually exclusive with flavor_ram
required: false
default: 1
flavor_ram:
description:
- The minimum amount of ram in MB that the flavor in which the new VM has to be created must have. Mutually exclusive with flavor_id
required: false
default: 1
version_added: "1.8"
flavor_include:
description:
- Text to use to filter flavor names, for the case, such as Rackspace, where there are multiple flavors that have the same ram count. flavor_include is a positive match filter - it must exist in the flavor name.
version_added: "1.8"
key_name:
description:
- The key pair name to be used when creating a VM
required: false
default: None
security_groups:
description:
- The name of the security group to which the VM should be added
required: false
default: None
nics:
description:
- A list of network id's to which the VM's interface should be attached
required: false
default: None
auto_floating_ip:
description:
- Should a floating ip be auto created and assigned
required: false
default: 'no'
version_added: "1.8"
floating_ips:
description:
- list of valid floating IPs that pre-exist to assign to this node
required: false
default: None
version_added: "1.8"
floating_ip_pools:
description:
- list of floating IP pools from which to choose a floating IP
required: false
default: None
version_added: "1.8"
availability_zone:
description:
- Name of the availability zone
required: false
default: None
version_added: "1.8"
meta:
description:
- A list of key value pairs that should be provided as a metadata to the new VM
required: false
default: None
wait:
description:
- If the module should wait for the VM to be created.
required: false
default: 'yes'
wait_for:
description:
- The amount of time the module should wait for the VM to get into active state
required: false
default: 180
config_drive:
description:
- Whether to boot the server with config drive enabled
required: false
default: 'no'
version_added: "1.8"
user_data:
description:
- Opaque blob of data which is made available to the instance
required: false
default: None
version_added: "1.6"
scheduler_hints:
description:
- Arbitrary key/value pairs to the scheduler for custom use
required: false
default: None
version_added: "1.9"
requirements:
- "python >= 2.6"
- "python-novaclient"
'''
EXAMPLES = '''
# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
state: present
login_username: admin
login_password: admin
login_tenant_name: admin
name: vm1
image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
key_name: ansible_key
wait_for: 200
flavor_id: 4
nics:
- net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
meta:
hostname: test1
group: uge_master
# Creates a new VM in HP Cloud AE1 region availability zone az2 and automatically assigns a floating IP
- name: launch a nova instance
hosts: localhost
tasks:
- name: launch an instance
nova_compute:
state: present
login_username: username
login_password: Equality7-2521
login_tenant_name: username-project1
name: vm1
auth_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/
region_name: region-b.geo-1
availability_zone: az2
image_id: 9302692b-b787-4b52-a3a6-daebb79cb498
key_name: test
wait_for: 200
flavor_id: 101
security_groups: default
auto_floating_ip: yes
# Creates a new VM in HP Cloud AE1 region availability zone az2 and assigns a pre-known floating IP
- name: launch a nova instance
hosts: localhost
tasks:
- name: launch an instance
nova_compute:
state: present
login_username: username
login_password: Equality7-2521
login_tenant_name: username-project1
name: vm1
auth_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/
region_name: region-b.geo-1
availability_zone: az2
image_id: 9302692b-b787-4b52-a3a6-daebb79cb498
key_name: test
wait_for: 200
flavor_id: 101
floating_ips:
- 12.34.56.79
# Creates a new VM with 4G of RAM on Ubuntu Trusty, ignoring deprecated images
- name: launch a nova instance
hosts: localhost
tasks:
- name: launch an instance
nova_compute:
name: vm1
state: present
login_username: username
login_password: Equality7-2521
login_tenant_name: username-project1
auth_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/
region_name: region-b.geo-1
image_name: Ubuntu Server 14.04
image_exclude: deprecated
flavor_ram: 4096
# Creates a new VM with 4G of RAM on Ubuntu Trusty on a Rackspace Performance node in DFW
- name: launch a nova instance
hosts: localhost
tasks:
- name: launch an instance
nova_compute:
name: vm1
state: present
login_username: username
login_password: Equality7-2521
login_tenant_name: username-project1
auth_url: https://identity.api.rackspacecloud.com/v2.0/
region_name: DFW
image_name: Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
flavor_ram: 4096
flavor_include: Performance
'''
import operator
import os
import time
try:
from novaclient.v1_1 import client as nova_client
from novaclient.v1_1 import floating_ips
from novaclient import exceptions
from novaclient import utils
HAS_NOVACLIENT = True
except ImportError:
HAS_NOVACLIENT = False
def _delete_server(module, nova):
name = None
server_list = None
try:
server_list = nova.servers.list(True, {'name': module.params['name']})
if server_list:
server = [x for x in server_list if x.name == module.params['name']]
nova.servers.delete(server.pop())
except Exception as e:
module.fail_json( msg = "Error in deleting vm: %s" % e.message)
if module.params['wait'] == 'no':
module.exit_json(changed = True, result = "deleted")
expire = time.time() + int(module.params['wait_for'])
while time.time() < expire:
name = nova.servers.list(True, {'name': module.params['name']})
if not name:
module.exit_json(changed = True, result = "deleted")
time.sleep(5)
module.fail_json(msg = "Timed out waiting for server to get deleted, please check manually")
def _add_floating_ip_from_pool(module, nova, server):
# instantiate FloatingIPManager object
floating_ip_obj = floating_ips.FloatingIPManager(nova)
# empty dict and list
usable_floating_ips = {}
pools = []
# user specified
pools = module.params['floating_ip_pools']
# get the list of all floating IPs. Mileage may
# vary according to Nova Compute configuration
# per cloud provider
all_floating_ips = floating_ip_obj.list()
# iterate through all pools of IP address. Empty
# string means all and is the default value
for pool in pools:
# temporary list per pool
pool_ips = []
# loop through all floating IPs
for f_ip in all_floating_ips:
# if not reserved and the correct pool, add
if f_ip.fixed_ip is None and (f_ip.pool == pool):
pool_ips.append(f_ip.ip)
# only need one
break
# if the list is empty, add for this pool
if not pool_ips:
try:
new_ip = nova.floating_ips.create(pool)
except Exception as e:
module.fail_json(msg = "Unable to create floating ip: %s" % (e.message))
pool_ips.append(new_ip.ip)
# Add to the main list
usable_floating_ips[pool] = pool_ips
# finally, add ip(s) to instance for each pool
for pool in usable_floating_ips:
for ip in usable_floating_ips[pool]:
try:
server.add_floating_ip(ip)
# We only need to assign one ip - but there is an inherent
# race condition and some other cloud operation may have
# stolen an available floating ip
break
except Exception as e:
module.fail_json(msg = "Error attaching IP %s to instance %s: %s " % (ip, server.id, e.message))
def _add_floating_ip_list(module, server, ips):
# add ip(s) to instance
for ip in ips:
try:
server.add_floating_ip(ip)
except Exception as e:
module.fail_json(msg = "Error attaching IP %s to instance %s: %s " % (ip, server.id, e.message))
def _add_auto_floating_ip(module, nova, server):
try:
new_ip = nova.floating_ips.create()
except Exception as e:
module.fail_json(msg = "Unable to create floating ip: %s" % (e))
try:
server.add_floating_ip(new_ip)
except Exception as e:
# Clean up - we auto-created this ip, and it's not attached
# to the server, so the cloud will not know what to do with it
nova.floating_ips.delete(new_ip)
module.fail_json(msg = "Error attaching IP %s to instance %s: %s " % (new_ip.ip, server.id, e.message))
def _add_floating_ip(module, nova, server):
if module.params['floating_ip_pools']:
_add_floating_ip_from_pool(module, nova, server)
elif module.params['floating_ips']:
_add_floating_ip_list(module, server, module.params['floating_ips'])
elif module.params['auto_floating_ip']:
_add_auto_floating_ip(module, nova, server)
else:
return server
# this may look redundant, but if there is now a
# floating IP, then it needs to be obtained from
# a recent server object if the above code path exec'd
try:
server = nova.servers.get(server.id)
except Exception as e:
module.fail_json(msg = "Error in getting info from instance: %s " % e.message)
return server
def _get_image_id(module, nova):
if module.params['image_name']:
for image in nova.images.list():
if (module.params['image_name'] in image.name and (
not module.params['image_exclude']
or module.params['image_exclude'] not in image.name)):
return image.id
module.fail_json(msg = "Error finding image id from name(%s)" % module.params['image_name'])
return module.params['image_id']
def _get_flavor_id(module, nova):
if module.params['flavor_ram']:
for flavor in sorted(nova.flavors.list(), key=operator.attrgetter('ram')):
if (flavor.ram >= module.params['flavor_ram'] and
(not module.params['flavor_include'] or module.params['flavor_include'] in flavor.name)):
return flavor.id
module.fail_json(msg = "Error finding flavor with %sMB of RAM" % module.params['flavor_ram'])
return module.params['flavor_id']
def _create_server(module, nova):
image_id = _get_image_id(module, nova)
flavor_id = _get_flavor_id(module, nova)
bootargs = [module.params['name'], image_id, flavor_id]
bootkwargs = {
'nics' : module.params['nics'],
'meta' : module.params['meta'],
'security_groups': module.params['security_groups'].split(','),
#userdata is unhyphenated in novaclient, but hyphenated here for consistency with the ec2 module:
'userdata': module.params['user_data'],
'config_drive': module.params['config_drive'],
}
for optional_param in ('region_name', 'key_name', 'availability_zone', 'scheduler_hints'):
if module.params[optional_param]:
bootkwargs[optional_param] = module.params[optional_param]
try:
server = nova.servers.create(*bootargs, **bootkwargs)
server = nova.servers.get(server.id)
except Exception as e:
module.fail_json( msg = "Error in creating instance: %s " % e.message)
if module.params['wait'] == 'yes':
expire = time.time() + int(module.params['wait_for'])
while time.time() < expire:
try:
server = nova.servers.get(server.id)
except Exception as e:
module.fail_json( msg = "Error in getting info from instance: %s" % e.message)
if server.status == 'ACTIVE':
server = _add_floating_ip(module, nova, server)
private = openstack_find_nova_addresses(getattr(server, 'addresses'), 'fixed', 'private')
public = openstack_find_nova_addresses(getattr(server, 'addresses'), 'floating', 'public')
# now exit with info
module.exit_json(changed = True, id = server.id, private_ip=''.join(private), public_ip=''.join(public), status = server.status, info = server._info)
if server.status == 'ERROR':
module.fail_json(msg = "Error in creating the server, please check logs")
time.sleep(2)
module.fail_json(msg = "Timeout waiting for the server to come up.. Please check manually")
if server.status == 'ERROR':
module.fail_json(msg = "Error in creating the server.. Please check manually")
private = openstack_find_nova_addresses(getattr(server, 'addresses'), 'fixed', 'private')
public = openstack_find_nova_addresses(getattr(server, 'addresses'), 'floating', 'public')
module.exit_json(changed = True, id = server.id, private_ip=''.join(private), public_ip=''.join(public), status = server.status, info = server._info)
def _delete_floating_ip_list(module, nova, server, extra_ips):
for ip in extra_ips:
nova.servers.remove_floating_ip(server=server.id, address=ip)
def _check_floating_ips(module, nova, server):
changed = False
if module.params['floating_ip_pools'] or module.params['floating_ips'] or module.params['auto_floating_ip']:
ips = openstack_find_nova_addresses(server.addresses, 'floating')
if not ips:
# If we're configured to have a floating but we don't have one,
# let's add one
server = _add_floating_ip(module, nova, server)
changed = True
elif module.params['floating_ips']:
# we were configured to have specific ips, let's make sure we have
# those
missing_ips = []
for ip in module.params['floating_ips']:
if ip not in ips:
missing_ips.append(ip)
if missing_ips:
server = _add_floating_ip_list(module, server, missing_ips)
changed = True
extra_ips = []
for ip in ips:
if ip not in module.params['floating_ips']:
extra_ips.append(ip)
if extra_ips:
_delete_floating_ip_list(module, nova, server, extra_ips)
changed = True
return (changed, server)
def _get_server_state(module, nova):
server = None
try:
servers = nova.servers.list(True, {'name': module.params['name']})
if servers:
# the {'name': module.params['name']} will also return servers
# with names that partially match the server name, so we have to
# strictly filter here
servers = [x for x in servers if x.name == module.params['name']]
if servers:
server = servers[0]
except Exception as e:
module.fail_json(msg = "Error in getting the server list: %s" % e.message)
if server and module.params['state'] == 'present':
if server.status != 'ACTIVE':
module.fail_json( msg="The VM is available but not Active. state:" + server.status)
(ip_changed, server) = _check_floating_ips(module, nova, server)
private = openstack_find_nova_addresses(getattr(server, 'addresses'), 'fixed', 'private')
public = openstack_find_nova_addresses(getattr(server, 'addresses'), 'floating', 'public')
module.exit_json(changed = ip_changed, id = server.id, public_ip = public, private_ip = private, info = server._info)
if server and module.params['state'] == 'absent':
return True
if module.params['state'] == 'absent':
module.exit_json(changed = False, result = "not present")
return True
def main():
argument_spec = openstack_argument_spec()
argument_spec.update(dict(
name = dict(required=True),
image_id = dict(default=None),
image_name = dict(default=None),
image_exclude = dict(default='(deprecated)'),
flavor_id = dict(default=1),
flavor_ram = dict(default=None, type='int'),
flavor_include = dict(default=None),
key_name = dict(default=None),
security_groups = dict(default='default'),
nics = dict(default=None, type='list'),
meta = dict(default=None, type='dict'),
wait = dict(default='yes', choices=['yes', 'no']),
wait_for = dict(default=180),
state = dict(default='present', choices=['absent', 'present']),
user_data = dict(default=None),
config_drive = dict(default=False, type='bool'),
auto_floating_ip = dict(default=False, type='bool'),
floating_ips = dict(default=None, type='list'),
floating_ip_pools = dict(default=None, type='list'),
scheduler_hints = dict(default=None, type='dict'),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['auto_floating_ip','floating_ips'],
['auto_floating_ip','floating_ip_pools'],
['floating_ips','floating_ip_pools'],
['image_id','image_name'],
['flavor_id','flavor_ram'],
],
)
if not HAS_NOVACLIENT:
module.fail_json(msg='python-novaclient is required for this module')
nova = nova_client.Client(module.params['login_username'],
module.params['login_password'],
module.params['login_tenant_name'],
module.params['auth_url'],
region_name=module.params['region_name'],
service_type='compute')
try:
nova.authenticate()
except exceptions.Unauthorized as e:
module.fail_json(msg = "Invalid OpenStack Nova credentials.: %s" % e.message)
except exceptions.AuthorizationFailure as e:
module.fail_json(msg = "Unable to authorize user: %s" % e.message)
if module.params['state'] == 'present':
if not module.params['image_id'] and not module.params['image_name']:
module.fail_json( msg = "Parameter 'image_id' or `image_name` is required if state == 'present'")
else:
_get_server_state(module, nova)
_create_server(module, nova)
if module.params['state'] == 'absent':
_get_server_state(module, nova)
_delete_server(module, nova)
# this is magic, see lib/ansible/module_common.py
from ansible.module_utils.basic import *
from ansible.module_utils.openstack import *
if __name__ == '__main__':
main()


@@ -1,567 +0,0 @@
- name: configure overcloud from undercloud
hosts: newcloud_undercloud
user: root
gather_facts: True
vars_files:
- /srv/web/infra/ansible/vars/global.yml
- "/srv/private/ansible/vars.yml"
- /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
- /srv/web/infra/ansible/vars/newcloud.yml
- /srv/private/ansible/files/openstack/overcloudrc.yml
tasks:
- name: setup auth/connection vars
set_fact:
os_cloud:
auth:
auth_url: http://192.168.20.51:5000/v3
username: admin
password: "{{ OS_PASSWORD }}"
project_name: admin
project_domain_name: default
user_domain_name: default
auth_type: v3password
region_name: regionOne
auth_version: 3
identity_api_version: 3
- name: create non-standard flavor
os_nova_flavor:
cloud: "{{ os_cloud }}"
name: "{{item.name}}"
ram: "{{item.ram}}"
disk: "{{item.disk}}"
vcpus: "{{item.vcpus}}"
swap: "{{item.swap}}"
ephemeral: 0
with_items:
- { name: m1.builder, ram: 5120, disk: 50, vcpus: 2, swap: 5120 }
- { name: ms2.builder, ram: 5120, disk: 20, vcpus: 2, swap: 100000 }
- { name: m2.prepare_builder, ram: 5000, disk: 16, vcpus: 2, swap: 0 }
# same as m1.* but with swap
- { name: ms1.tiny, ram: 512, disk: 1, vcpus: 1, swap: 512 }
- { name: ms1.small, ram: 2048, disk: 20, vcpus: 1, swap: 2048 }
- { name: ms1.medium, ram: 4096, disk: 40, vcpus: 2, swap: 4096 }
- { name: ms1.medium.bigswap, ram: 4096, disk: 40, vcpus: 2, swap: 40000 }
- { name: ms1.large, ram: 8192, disk: 50, vcpus: 4, swap: 4096 }
- { name: ms1.xlarge, ram: 16384, disk: 160, vcpus: 8, swap: 16384 }
# inspired by http://aws.amazon.com/ec2/instance-types/
- { name: c4.large, ram: 3072, disk: 0, vcpus: 2, swap: 0 }
- { name: c4.xlarge, ram: 7168, disk: 0, vcpus: 4, swap: 0 }
- { name: c4.2xlarge, ram: 14336, disk: 0, vcpus: 8, swap: 0 }
- { name: r3.large, ram: 16384, disk: 32, vcpus: 2, swap: 16384 }
- name: download images
get_url:
dest: "/var/tmp/{{ item.imagename }}"
url: "{{ item.url }}"
with_items:
- { imagename: Fedora-Cloud-Base-28-1.1.ppc64le.qcow2,
url: "https://dl.fedoraproject.org/pub/fedora-secondary/releases/28/Cloud/ppc64le/images/Fedora-Cloud-Base-28-1.1.ppc64le.qcow2" }
- { imagename: Fedora-Cloud-Base-28-1.1.x86_64.qcow2,
url: "https://dl.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2" }
- { imagename: Fedora-Cloud-Base-29-1.2.x86_64.qcow2,
url: "https://dl.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2" }
- name: Add the images
os_image:
cloud: "{{ os_cloud }}"
name: "{{ item.name }}"
disk_format: qcow2
is_public: True
filename: "{{ item.filename }}"
with_items:
- { name: Fedora-Cloud-Base-28-1.1.ppc64le, filename: /var/tmp/Fedora-Cloud-Base-28-1.1.ppc64le.qcow2 }
- { name: Fedora-Cloud-Base-28-1.1.x86_64, filename: /var/tmp/Fedora-Cloud-Base-28-1.1.x86_64.qcow2 }
- { name: Fedora-Cloud-Base-29-1.2.x86_64, filename: /var/tmp/Fedora-Cloud-Base-29-1.2.x86_64.qcow2 }
- name: Create tenants
os_project:
cloud: "{{ os_cloud }}"
name: "{{ item.name }}"
description: "{{ item.desc }}"
state: present
enabled: True
domain_id: default
with_items:
- { name: persistent, desc: "persistent instances" }
- { name: qa, desc: "development and test-day applications of QA" }
- { name: transient, desc: 'transient instances' }
- { name: infrastructure, desc: "one off instances for infrastructure folks to test or check something (proof-of-concept)" }
- { name: copr, desc: 'Space for Copr builders' }
- { name: coprdev, desc: 'Development version of Copr' }
- { name: pythonbots, desc: 'project for python build bot users - twisted, etc' }
- { name: openshift, desc: 'Tenant for openshift deployment' }
- { name: maintainertest, desc: 'Tenant for maintainer test machines' }
- { name: aos-ci-cd, desc: 'Tenant for aos-ci-cd' }
##### NETWORK ####
# http://docs.openstack.org/havana/install-guide/install/apt/content/install-neutron.configure-networks.html
#
# NEW:
# network is 38.145.48.0/23
# gateway is 38.145.49.254
# leave 38.145.49.250-253 unused for dcops
# leave 38.145.49.231-249 unused for future testing
#
# OLD:
# external network is a class C: 209.132.184.0/24
# 209.132.184.1 to .25 - reserved for hardware.
# 209.132.184.26 to .30 - reserved for test cloud external ips
# 209.132.184.31 to .69 - icehouse cloud
# 209.132.184.70 to .89 - reserved for arm03 SOCs
# 209.132.184.90 to .251 - folsom cloud
#
- name: Create an external network
os_network:
cloud: "{{ os_cloud }}"
name: external
provider_network_type: flat
provider_physical_network: datacentre
external: true
shared: true
register: EXTERNAL_ID
- name: Create an external subnet
os_subnet:
cloud: "{{ os_cloud }}"
name: external-subnet
network_name: external
cidr: 38.145.48.0/23
allocation_pool_start: 38.145.48.1
allocation_pool_end: 38.145.49.230
gateway_ip: 38.145.49.254
enable_dhcp: false
register: EXTERNAL_SUBNET_ID
#- shell: source /root/keystonerc_admin && nova floating-ip-create external
# when: packstack_sucessfully_finished.stat.exists == False
# 172.16.0.1/16 -- 172.22.0.1/16 - free (can be split to /20)
# 172.23.0.1/16 - free (but used by old cloud)
# 172.24.0.1/24 - RESERVED it is used internally for OS
# 172.24.1.0/24 -- 172.24.255.0/24 - likely free (?)
# 172.25.0.1/20 - Cloudintern (172.25.0.1 - 172.25.15.254)
# 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
# 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
# 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
# 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
# 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
# 172.25.96.1/20 - cloudsig (172.25.96.1 - 172.25.111.254)
# 172.25.112.1/20 - qa (172.25.112.1 - 172.25.127.254)
# 172.25.128.1/20 - pythonbots (172.25.128.1 - 172.25.143.254)
# 172.25.144.1/20 - coprdev (172.25.144.1 - 172.25.159.254)
# 172.25.160.1/20 -- 172.25.240.1/20 - free
# 172.26.0.1/16 -- 172.31.0.1/16 - free (can be split to /20)
- name: Create a router for all tenants
os_router:
cloud: "{{ os_cloud }}"
project: "{{ item }}"
name: "ext-to-{{ item }}"
network: "external"
with_items: "{{all_projects}}"
- name: Create a private network for all tenants
os_network:
cloud: "{{ os_cloud }}"
project: "{{ item.name }}"
name: "{{ item.name }}-net"
shared: "{{ item.shared }}"
with_items:
- { name: copr, shared: true }
- { name: coprdev, shared: true }
- { name: infrastructure, shared: false }
- { name: persistent, shared: false }
- { name: pythonbots, shared: false }
- { name: transient, shared: false }
- { name: openshift, shared: false }
- { name: maintainertest, shared: false }
- { name: aos-ci-cd, shared: false }
- name: Create a subnet for all tenants
os_subnet:
cloud: "{{ os_cloud }}"
project: "{{ item.name }}"
network_name: "{{ item.name }}-net"
name: "{{ item.name }}-subnet"
cidr: "{{ item.cidr }}"
gateway_ip: "{{ item.gateway }}"
dns_nameservers: "66.35.62.163,140.211.169.201"
with_items:
- { name: copr, cidr: '172.25.80.1/20', gateway: '172.25.80.1' }
- { name: coprdev, cidr: '172.25.144.1/20', gateway: '172.25.144.1' }
- { name: infrastructure, cidr: '172.25.16.1/20', gateway: '172.25.16.1' }
- { name: persistent, cidr: '172.25.32.1/20', gateway: '172.25.32.1' }
- { name: pythonbots, cidr: '172.25.128.1/20', gateway: '172.25.128.1' }
- { name: transient, cidr: '172.25.48.1/20', gateway: '172.25.48.1' }
- { name: openshift, cidr: '172.25.160.1/20', gateway: '172.25.160.1' }
- { name: maintainertest, cidr: '172.25.176.1/20', gateway: '172.25.176.1' }
- { name: aos-ci-cd, cidr: '172.25.180.1/20', gateway: '172.25.180.1' }
- name: "Connect routers interface to the TENANT-subnet"
os_router:
cloud: "{{ os_cloud }}"
project: "{{ item }}"
name: "ext-to-{{ item }}"
interfaces: ["{{ item }}-subnet"]
with_items: "{{all_projects}}"
#################
# Security Groups
################
- name: "Change the quota of quotas"
os_quota:
cloud: "{{os_cloud}}"
name: "{{item}}"
security_group: 100
security_group_rule: 100
with_items: "{{all_projects}}"
- name: "Create 'ssh-anywhere' security group"
os_security_group:
cloud: "{{ os_cloud }}"
name: 'ssh-anywhere-{{item}}'
description: "allow ssh from anywhere"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Add rules to security group ( ssh-anywhere )"
os_security_group_rule:
security_group: 'ssh-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "22"
port_range_max: "22"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Allow nagios checks"
os_security_group:
cloud: "{{ os_cloud }}"
state: "present"
name: 'allow-nagios-{{item}}'
description: "allow nagios checks"
project: "{{item}}"
with_items:
- persistent
- name: Add rule to new security group (nagios)
os_security_group_rule:
security_group: 'allow-nagios-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "5666"
port_range_max: "5666"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "209.132.181.35/32"
project: "{{item}}"
with_items:
- persistent
- name: "Create 'ssh-from-persistent' security group"
os_security_group:
cloud: "{{ os_cloud }}"
state: "present"
name: 'ssh-from-persistent-{{item}}'
description: "allow ssh from persistent"
project: "{{item}}"
with_items:
- copr
- coprdev
- name: add rule to new security group (ssh-from-persistent)
os_security_group_rule:
security_group: 'ssh-from-persistent-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "22"
port_range_max: "22"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "172.25.32.1/20"
project: "{{item}}"
with_items:
- copr
- coprdev
- name: "Create 'ssh-internal' security group"
os_security_group:
state: "present"
cloud: "{{ os_cloud }}"
name: 'ssh-internal-{{item.name}}'
description: "allow ssh from {{item.name}}-network"
project: "{{ item.name }}"
with_items:
- { name: copr, prefix: '172.25.80.1/20' }
    - { name: coprdev, prefix: '172.25.144.1/20' }
- { name: infrastructure, prefix: "172.25.16.1/20" }
- { name: persistent, prefix: "172.25.32.1/20" }
- { name: pythonbots, prefix: '172.25.128.1/20' }
- { name: transient, prefix: '172.25.48.1/20' }
- { name: openshift, prefix: '172.25.160.1/20' }
- { name: maintainertest, prefix: '172.25.180.1/20' }
    - { name: aos-ci-cd, prefix: '172.25.180.1/20' }
- name: add rule to new security group (ssh-internal)
os_security_group_rule:
security_group: 'ssh-internal-{{item.name}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "22"
port_range_max: "22"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "{{ item.prefix }}"
project: "{{item.name}}"
with_items:
- { name: copr, prefix: '172.25.80.1/20' }
    - { name: coprdev, prefix: '172.25.144.1/20' }
- { name: infrastructure, prefix: "172.25.16.1/20" }
- { name: persistent, prefix: "172.25.32.1/20" }
- { name: pythonbots, prefix: '172.25.128.1/20' }
- { name: transient, prefix: '172.25.48.1/20' }
- { name: openshift, prefix: '172.25.160.1/20' }
- { name: maintainertest, prefix: '172.25.180.1/20' }
    - { name: aos-ci-cd, prefix: '172.25.180.1/20' }
- name: "Create 'web-80-anywhere' security group"
os_security_group:
state: "present"
name: 'web-80-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow web-80 from anywhere"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (web-80-anywhere)
os_security_group_rule:
security_group: 'web-80-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "80"
port_range_max: "80"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Create 'web-443-anywhere' security group"
os_security_group:
state: "present"
name: 'web-443-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow web-443 from anywhere"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (web-443-anywhere)
os_security_group_rule:
security_group: 'web-443-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "443"
port_range_max: "443"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Create 'oci-registry-5000-anywhere' security group"
os_security_group:
state: "present"
name: 'oci-registry-5000-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow oci-registry-5000 from anywhere"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (oci-registry-5000-anywhere)
os_security_group_rule:
security_group: 'oci-registry-5000-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "5000"
port_range_max: "5000"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Create 'wide-open' security group"
os_security_group:
state: "present"
name: 'wide-open-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow anything from anywhere"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (wide-open/tcp)
os_security_group_rule:
security_group: 'wide-open-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "1"
port_range_max: "65535"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (wide-open/udp)
os_security_group_rule:
security_group: 'wide-open-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "1"
port_range_max: "65535"
ethertype: "IPv4"
protocol: "udp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Create 'ALL ICMP' security group"
os_security_group:
state: "present"
name: 'all-icmp-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow all ICMP traffic"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (all-icmp)
os_security_group_rule:
security_group: 'all-icmp-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
ethertype: "IPv4"
protocol: "icmp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Create 'keygen-persistent' security group"
os_security_group:
state: "present"
name: 'keygen-persistent-{{item}}'
cloud: "{{ os_cloud }}"
description: "rules for copr-keygen"
project: "{{item}}"
with_items:
- copr
- coprdev
- name: add rule to new security group (keygen-persistent/5167)
os_security_group_rule:
security_group: 'keygen-persistent-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "5167"
port_range_max: "5167"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "172.25.32.1/20"
project: "{{item}}"
with_items:
- copr
- coprdev
- name: add rule to new security group (keygen-persistent/80)
os_security_group_rule:
security_group: 'keygen-persistent-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "80"
port_range_max: "80"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "172.25.32.1/20"
project: "{{item}}"
with_items:
- copr
- coprdev
- name: "Create 'pg-5432-anywhere' security group"
os_security_group:
state: "present"
name: 'pg-5432-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow postgresql-5432 from anywhere"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (pg-5432-anywhere)
os_security_group_rule:
security_group: 'pg-5432-anywhere-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "5432"
port_range_max: "5432"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "0.0.0.0/0"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: "Create 'fedmsg-relay-persistent' security group"
os_security_group:
state: "present"
name: 'fedmsg-relay-persistent-{{item}}'
cloud: "{{ os_cloud }}"
description: "allow incoming 2003 and 4001 from internal network"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (fedmsg-relay-persistent/2003)
os_security_group_rule:
security_group: 'fedmsg-relay-persistent-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "2003"
port_range_max: "2003"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "172.25.80.1/16"
project: "{{item}}"
with_items: "{{all_projects}}"
- name: add rule to new security group (fedmsg-relay-persistent/4001)
os_security_group_rule:
security_group: 'fedmsg-relay-persistent-{{item}}'
cloud: "{{ os_cloud }}"
direction: "ingress"
port_range_min: "4001"
port_range_max: "4001"
ethertype: "IPv4"
protocol: "tcp"
remote_ip_prefix: "172.25.80.1/16"
project: "{{item}}"
with_items: "{{all_projects}}"
#########
# quotas
#########
- name: set quotas for the copr, coprdev, persistent and transient projects
  os_quota:
    cloud: "{{ os_cloud }}"
    cores: "{{ item.cores }}"
    floating_ips: "{{ item.floating_ips }}"
    instances: "{{ item.instances }}"
    name: "{{ item.name }}"
    ram: "{{ item.ram }}"
    security_group: "{{ item.security_group }}"
with_items:
- { name: copr, cores: 100, floating_ips: 10, instances: 50, ram: 350000, security_group: 15 }
- { name: coprdev, cores: 80, floating_ips: 10, instances: 40, ram: 300000, security_group: 15 }
- { name: persistent, cores: 175, floating_ips: 50, instances: 60, ram: 300000, security_group: 15 }
- { name: transient, cores: 70, floating_ips: 10, instances: 30, ram: 150000, security_group: 15 }
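# The resulting limits can be inspected per project (a sketch, using the
# unified client):
#   openstack quota show copr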

View file

@ -1,77 +0,0 @@
#
# setup a transient instance in the Fedora infrastructure private cloud
#
# This playbook is used to spin up a transient instance for someone to test something.
# In particular, transient instances will all be terminated by the next
# maintenance window for the cloud at the latest, but ideally people will
# terminate instances they are done using.
#
# If you have an application or longer term item that should always be around
# please use the persistent playbook instead.
#
# You MUST pass a name to it, ie: -e 'name=somethingdescriptive'
# You can optionally override defaults by passing any of the following:
# image=imagename (default is centos70_x86_64)
# instance_type=some instance type (default is m1.small)
# root_auth_users='user1 user2 user3' (default is sysadmin-main group)
#
# Note: if you run this playbook with the same name= multiple times
# openstack is smart enough to just return the current ip of that instance
# and go on. This way you can re-run if you want to reconfigure it without
# reprovisioning it.
#
# Example command:
# transient_cloud_instance.yml --extra-vars="name='foo' image='Fedora-Cloud-Base-20141203-21.x86_64'"
#
- name: check/create instance
hosts: batcave01.phx2.fedoraproject.org
user: root
gather_facts: False
vars_files:
- /srv/web/infra/ansible/vars/global.yml
- /srv/private/ansible/vars.yml
- /srv/web/infra/ansible/vars/fedora-cloud.yml
- /srv/private/ansible/files/openstack/passwords.yml
vars:
image: "{{ centos70_x86_64 }}"
instance_type: m1.small
tasks:
- name: fail when name is not provided
fail: msg="Please specify the name of the instance"
when: name is not defined
- import_tasks: "{{ tasks_path }}/transient_cloud.yml"
# Run the python bootstrap against the new instance itself, in its own play.
- name: Install Pythonic stuff.
  hosts: tmp_just_created
  gather_facts: False
  environment:
    ANSIBLE_HOST_KEY_CHECKING: False
  tasks:
  - name: gather facts
    setup:
    check_mode: no
    ignore_errors: True
    register: facts
  - name: install python2 and dnf stuff
    raw: dnf -y install python-dnf libselinux-python
    when: facts is failed
- name: provision instance
hosts: tmp_just_created
gather_facts: True
environment:
ANSIBLE_HOST_KEY_CHECKING: False
vars_files:
- /srv/web/infra/ansible/vars/global.yml
- "/srv/private/ansible/vars.yml"
- /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
tasks:
- name: install cloud-utils
package: name=cloud-utils state=present
when: ansible_cmdline.ostree is not defined
- import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
handlers:
- import_tasks: "{{ handlers_path }}/restart_services.yml"

View file

@ -1,84 +0,0 @@
#
# setup a transient instance in the Fedora infrastructure private cloud
#
# This playbook is used to spin up a transient instance for someone to test something.
# In particular, transient instances will all be terminated by the next
# maintenance window for the cloud at the latest, but ideally people will
# terminate instances they are done using.
#
# If you have an application or longer term item that should always be around
# please use the persistent playbook instead.
#
# You MUST pass a name to it, ie: -e 'name=somethingdescriptive'
# You can optionally override defaults by passing any of the following:
# image=imagename (default is centos70_x86_64)
# instance_type=some instance type (default is m1.small)
# root_auth_users='user1 user2 user3' (default is sysadmin-main group)
#
# Note: if you run this playbook with the same name= multiple times
# openstack is smart enough to just return the current ip of that instance
# and go on. This way you can re-run if you want to reconfigure it without
# reprovisioning it.
#
# Example command:
# transient_newcloud_instance.yml --extra-vars="name='foo' image='Fedora-Cloud-Base-20141203-21.x86_64'"
#
- name: check/create instance
hosts: batcave01.phx2.fedoraproject.org
user: root
gather_facts: False
vars_files:
- /srv/web/infra/ansible/vars/global.yml
- /srv/private/ansible/vars.yml
- /srv/web/infra/ansible/vars/fedora-cloud.yml
- /srv/private/ansible/files/openstack/passwords.yml
vars:
image: "{{ centos70_x86_64 }}"
instance_type: m1.small
tasks:
- name: fail when name is not provided
fail: msg="Please specify the name of the instance"
when: name is not defined
- import_tasks: "{{ tasks_path }}/transient_newcloud.yml"
- name: Install Pythonic stuff.
hosts: tmp_just_created
gather_facts: False
environment:
ANSIBLE_HOST_KEY_CHECKING: False
tasks:
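  # Fact gathering may fail on a minimal image without Python; the raw dnf
  # task below bootstraps it so regular modules can run afterwards.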
- name: gather facts
setup:
check_mode: no
ignore_errors: True
register: facts
- name: install python2 and dnf stuff
raw: dnf -y install python-dnf libselinux-python
when: facts is failed
- name: provision instance
hosts: tmp_just_created
gather_facts: True
environment:
ANSIBLE_HOST_KEY_CHECKING: False
vars_files:
- /srv/web/infra/ansible/vars/global.yml
- "/srv/private/ansible/vars.yml"
- /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml
tasks:
- name: install cloud-utils
package: name=cloud-utils state=present
when: ansible_cmdline.ostree is not defined
- import_tasks: "{{ tasks_path }}/cloud_setup_basic.yml"
handlers:
- import_tasks: "{{ handlers_path }}/restart_services.yml"