DC move: iad => rdu3, 10.3. => 10.16.

And remove some obsolete things.

Signed-off-by: Nils Philippsen <nils@redhat.com>

parent f3756ceb83, commit b4afb2f945
83 changed files with 386 additions and 429 deletions
@@ -44,7 +44,7 @@ as they don't have access to much of anything (yet).
To clear a token, admin should:

-* login to ipa01.iad2.fedoraproject.org
+* login to ipa01.rdu3.fedoraproject.org
* kinit admin@FEDORAPROJECT.ORG (enter the admin password)
* ipa otptoken-find --owner <username>
* ipa otptoken-del <token uuid from previous step>

@@ -44,7 +44,7 @@ ticket (who looked, what they saw, etc). Then the account can be
disabled.:

....
-Login to ipa01.iad2.fedoraproject.org
+Login to ipa01.rdu3.fedoraproject.org
kinit admin@FEDORAPROJECT.ORG
ipa user-disable LOGIN
....
@@ -19,10 +19,10 @@ Contact::
Persons::
zlopez
Location::
-iad2.fedoraproject.org
+rdu3.fedoraproject.org
Servers::
-* *Production* - os-master01.iad2.fedoraproject.org
-* *Staging* - os-master01.stg.iad2.fedoraproject.org
+* *Production* - os-master01.rdu3.fedoraproject.org
+* *Staging* - os-master01.stg.rdu3.fedoraproject.org
Purpose::
Map upstream releases to Fedora packages.

@@ -63,7 +63,7 @@ documentation].
=== Deploying

Staging deployment of Anitya is deployed in OpenShift on
-os-master01.stg.iad2.fedoraproject.org.
+os-master01.stg.rdu3.fedoraproject.org.

To deploy staging instance of Anitya you need to push changes to staging
branch on https://github.com/fedora-infra/anitya[Anitya GitHub]. GitHub

@@ -71,7 +71,7 @@ webhook will then automatically deploy a new version of Anitya on
staging.

Production deployment of Anitya is deployed in OpenShift on
-os-master01.iad2.fedoraproject.org.
+os-master01.rdu3.fedoraproject.org.

To deploy production instance of Anitya you need to push changes to
production branch on https://github.com/fedora-infra/anitya[Anitya

@@ -82,7 +82,7 @@ Anitya on production.
To deploy the new configuration, you need
xref:sshaccess.adoc[ssh
-access] to batcave01.iad2.fedoraproject.org and
+access] to batcave01.rdu3.fedoraproject.org and
xref:ansible.adoc[permissions
to run the Ansible playbook].
@@ -7,7 +7,7 @@ Owner::
Contact::
admin@fedoraproject.org
Location::
-iad2
+rdu3
Servers::
bastion01, bastion02
Purpose::

@@ -15,7 +15,7 @@ Purpose::
== Description

-There are 2 primary bastion hosts in the _iad2_ datacenter. One will be
+There are 2 primary bastion hosts in the _rdu3_ datacenter. One will be
active at any given time and the second will be a hot spare, ready to
take over. Switching between bastion hosts is currently a manual process
that requires changes in ansible.

@@ -29,10 +29,10 @@ The active bastion host performs the following functions:
* Outgoing smtp from fedora servers. This includes email aliases,
mailing list posts, build and commit notices, mailing list posts, etc.

-* Incoming smtp from servers in _iad2_ or on the fedora vpn. Incoming mail
+* Incoming smtp from servers in _rdu3_ or on the fedora vpn. Incoming mail
directly from the outside is NOT accepted or forwarded.

-* ssh access to all _iad2/vpn_ connected servers.
+* ssh access to all _rdu3/vpn_ connected servers.

* openvpn hub. This is the hub that all vpn clients connect to and talk
to each other via. Taking down or stopping this service will be a major
@@ -172,7 +172,7 @@ It can be useful to verify correct version is available on the backend,
i.e. for staging run

....
-ssh bodhi-backend01.stg.iad2.fedoraproject.org
+ssh bodhi-backend01.stg.rdu3.fedoraproject.org
....

....

@@ -26,12 +26,12 @@ Owner::
Contact::
#fedora-admin
Location::
-iad2
+rdu3
Servers::
-* bodhi-backend01.iad2.fedoraproject.org (composer)
+* bodhi-backend01.rdu3.fedoraproject.org (composer)
* bodhi.fedoraproject.org (web front end and backend task workers for
non-compose tasks)
-* bodhi-backend01.stg.iad2.fedoraproject.org (staging composer)
+* bodhi-backend01.stg.rdu3.fedoraproject.org (staging composer)
* bodhi.fedoraproject.org (staging web front end and backend task
workers for non-compose tasks)
Purpose::
@@ -60,7 +60,7 @@ a new container image:
* update the `fcos_cincinnati_git_sha` playbook variable in
`roles/openshift-apps/coreos-cincinnati/vars/production.yml`
* commit and push the update to the `fedora-infra/ansible` repository
-* SSH to `batcave01.iad2.fedoraproject.org`
+* SSH to `batcave01.rdu3.fedoraproject.org`
* run `sudo rbac-playbook openshift-apps/coreos-cincinnati.yml` using
your FAS password and your second-factor OTP
* schedule a new build by running
@@ -26,7 +26,7 @@ Owner::
Contact::
#fedora-admin, sysadmin-main, sysadmin-dba group
Location::
-iad2
+rdu3
Servers::
sb01, db03, db-fas01, db-datanommer02, db-koji01, db-s390-koji01,
db-arm-koji01, db-ppc-koji01, db-qa01, dbqastg01
@@ -3,7 +3,7 @@
Debuginfod is the software that lies behind the service at
https://debuginfod.fedoraproject.org/ and
https://debuginfod.stg.fedoraproject.org/ . These services run on 1 VM
-each in the stg and prod infrastructure at IAD2.
+each in the stg and prod infrastructure at RDU3.

== Contact Information

@@ -114,7 +114,7 @@ time in the last columns. These can be useful in tracking down possible
abuse. :

....
-Jun 28 22:36:43 debuginfod01 debuginfod[381551]: [Mon 28 Jun 2021 10:36:43 PM GMT] (381551/2413727): 10.3.163.75:43776 UA:elfutils/0.185,Linux/x86_64,fedora/35 XFF:*elided* GET /buildid/90910c1963bbcf700c0c0c06ee3bf4c5cc831d3a/debuginfo 200 335440 0+0ms
+Jun 28 22:36:43 debuginfod01 debuginfod[381551]: [Mon 28 Jun 2021 10:36:43 PM GMT] (381551/2413727): 10.16.163.75:43776 UA:elfutils/0.185,Linux/x86_64,fedora/35 XFF:*elided* GET /buildid/90910c1963bbcf700c0c0c06ee3bf4c5cc831d3a/debuginfo 200 335440 0+0ms
....

The lines related to prometheus /metrics are usually no big deal.
@@ -302,7 +302,7 @@ prefix of `logging.stats`.
== How is it deployed?

-All of this runs on `log01.iad2.fedoraproject.org` and is deployed through the
+All of this runs on `log01.rdu3.fedoraproject.org` and is deployed through the
`web-data-analysis` role and the `groups/logserver.yml` playbook,
respectively.
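The hunk above names the role and playbook but not the invocation. As a hedged sketch, assuming the batcave01 `rbac-playbook` wrapper used elsewhere in these SOPs, a redeploy would look like:

....
# run from batcave01; the group playbook applies the web-data-analysis role to log01
sudo rbac-playbook groups/logserver.yml
....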
@@ -17,10 +17,10 @@ ns05.fedoraproject.org::
hosted at internetx (ipv6 enabled)
ns13.rdu2.fedoraproject.org::
in rdu2, internal to rdu2.
-ns01.iad2.fedoraproject.org::
-in iad2, internal to iad2.
-ns02.iad2.fedoraproject.org::
-in iad2, internal to iad2.
+ns01.rdu3.fedoraproject.org::
+in rdu3, internal to rdu3.
+ns02.rdu3.fedoraproject.org::
+in rdu3, internal to rdu3.

== Contents

@@ -45,7 +45,7 @@ Contact:::
Location:::
ServerBeach and ibiblio and internetx and phx2.
Servers:::
-ns02, ns05, ns13.rdu2, ns01.iad2, ns02.iad2
+ns02, ns05, ns13.rdu2, ns01.rdu3, ns02.rdu3
Purpose:::
Provides DNS to our users

@@ -314,15 +314,15 @@ Any machine that is not on our vpn or has not yet joined the vpn should
*NOT* have the vpn.fedoraproject.org search until after it has
been added to the vpn (if it ever does)
====
-iad2::
+rdu3::
....
-search iad2.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
+search rdu3.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
....
-iad2 in the QA network:::
+rdu3 in the QA network:::
....
-search qa.fedoraproject.org vpn.fedoraproject.org iad2.fedoraproject.org fedoraproject.org
+search qa.fedoraproject.org vpn.fedoraproject.org rdu3.fedoraproject.org fedoraproject.org
....
-Non-iad2::
+Non-rdu3::
....
search vpn.fedoraproject.org fedoraproject.org
....

@@ -330,5 +330,5 @@ search vpn.fedoraproject.org fedoraproject.org
The idea here is that we can, when need be, setup local domains to
contact instead of having to go over the VPN directly but still have
sane configs. For example if we tell the proxy server to hit "app1" and
-that box is in _iad2_, it will go directly to app1, if its not, it will go
+that box is in _rdu3_, it will go directly to app1, if its not, it will go
over the vpn to app1.
@@ -29,14 +29,14 @@ To perform this procedure, you may need to have sysadmin-main access. In the fut
.Firstly, access the management console:
. Ensure you are connected to the official Red Hat VPN.
-. Identify the server in question. For this SOP, we will use `bvmhost-x86-01.stg.iad2.fedoraproject.org` as an example.
-. To access the management console, append `.mgmt` to the hostname: `bvmhost-x86-01-stg.mgmt.iad2.fedoraproject.org`.
+. Identify the server in question. For this SOP, we will use `bvmhost-x86-01.stg.rdu3.fedoraproject.org` as an example.
+. To access the management console, append `.mgmt` to the hostname: `bvmhost-x86-01-stg.mgmt.rdu3.fedoraproject.org`.
. Obtain the IP address by pinging the server from `batcave01`:
+
[source,bash]
----
-ssh batcave01.iad2.fedoraproject.org
-ping bvmhost-x86-01-stg.mgmt.iad2.fedoraproject.org
+ssh batcave01.rdu3.fedoraproject.org
+ping bvmhost-x86-01-stg.mgmt.rdu3.fedoraproject.org
----

. Visit the IP address in a web browser. The management console uses HTTPS, so accept the self-signed certificate:
@@ -1,42 +0,0 @@
-= FAS-OpenID
-
-FAS-OpenID is the OpenID server of Fedora infrastructure.
-
-Live instance is at https://id.fedoraproject.org/ Staging instance is at
-https://id.stg.fedoraproject.org/
-
-== Contact Information
-
-Owner::
-Patrick Uiterwijk (puiterwijk)
-Contact::
-#fedora-admin, #fedora-apps, #fedora-noc
-Location::
-openid0\{1,2}.iad2.fedoraproject.org openid01.stg.fedoraproject.org
-Purpose::
-Authentication & Authorization
-
-== Trusted roots
-
-FAS-OpenID has a set of "trusted roots", which contains websites which
-are always trusted, and thus FAS-OpenID will not show the Approve/Reject
-form to the user when they login to any such site.
-
-As a policy, we will only add websites to this list which Fedora
-Infrastructure controls. If anyone ever ask to add a website to this
-list, just answer with this default message:
-
-....
-We only add websites we (Fedora Infrastructure) maintain to this list.
-
-This feature was put in because it wouldn't make sense to ask for permission
-to send data to the same set of servers that it already came from.
-
-Also, if we were to add external websites, we would need to judge their
-privacy policy etc.
-
-Also, people might start complaining that we added site X but not their site,
-maybe causing us "political" issues later down the road.
-
-As a result, we do NOT add external websites.
-....
@@ -31,11 +31,11 @@ Every instance of each service on each host has its own cert and private
key, signed by the CA. By convention, we name the certs
`<service>-<fqdn>.\{crt,key}` For instance, bodhi has the following certs:

-* bodhi-app01.iad2.fedoraproject.org
-* bodhi-app02.iad2.fedoraproject.org
-* bodhi-app03.iad2.fedoraproject.org
-* bodhi-app01.stg.iad2.fedoraproject.org
-* bodhi-app02.stg.iad2.fedoraproject.org
+* bodhi-app01.rdu3.fedoraproject.org
+* bodhi-app02.rdu3.fedoraproject.org
+* bodhi-app03.rdu3.fedoraproject.org
+* bodhi-app01.stg.rdu3.fedoraproject.org
+* bodhi-app02.stg.rdu3.fedoraproject.org
* more

Scripts to generate new keys, sign them, and revoke them live in the

@@ -60,7 +60,7 @@ The attempt here is to minimize the number of potential attack vectors.
Each private key should be readable only by the service that needs it.
bodhi runs under mod_wsgi in apache and should run as its own unique
bodhi user (not as apache). The permissions for
-its _iad2.fedoraproject.org_ private_key, when deployed by ansible, should
+its _rdu3.fedoraproject.org_ private_key, when deployed by ansible, should
be read-only for that local bodhi user.

For more information on how fedmsg uses these certs see

@@ -105,15 +105,15 @@ $ ./build-and-sign-key <service>-<fqdn>
....

For instance, if we bring up a new app host,
-_app10.iad2.fedoraproject.org_, we'll need to generate a new cert/key pair
+_app10.rdu3.fedoraproject.org_, we'll need to generate a new cert/key pair
for each fedmsg-enabled service that will be running on it, so you'd
run:

....
$ source ./vars
-$ ./build-and-sign-key shell-app10.iad2.fedoraproject.org
-$ ./build-and-sign-key bodhi-app10.iad2.fedoraproject.org
-$ ./build-and-sign-key mediawiki-app10.iad2.fedoraproject.org
+$ ./build-and-sign-key shell-app10.rdu3.fedoraproject.org
+$ ./build-and-sign-key bodhi-app10.rdu3.fedoraproject.org
+$ ./build-and-sign-key mediawiki-app10.rdu3.fedoraproject.org
....

Just creating the keys isn't quite enough, there are four more things

@@ -131,9 +131,9 @@ to be blown away and recreated, the new service-hosts will be included.
For the examples above, you would need to add to the list:

....
-shell-app10.iad2.fedoraproject.org
-bodhi-app10.iad2.fedoraproject.org
-mediawiki-app10.iad2.fedoraproject.org
+shell-app10.rdu3.fedoraproject.org
+bodhi-app10.rdu3.fedoraproject.org
+mediawiki-app10.rdu3.fedoraproject.org
....

You need to ensure that the keys are distributed to the host with the
@@ -15,7 +15,7 @@ Contact::
Persons::
nirik
Servers::
-batcave01.iad2.fedoraproject.org Various application servers, which
+batcave01.rdu3.fedoraproject.org Various application servers, which
will run scripts to delete data.
Purpose::
Respond to Delete requests.

@@ -106,5 +106,5 @@ You also need to add the host that the script should run on to the
....
[gdpr_delete]
-fedocal01.iad2.fedoraproject.org
+fedocal01.rdu3.fedoraproject.org
....

@@ -15,7 +15,7 @@ Contact::
Persons::
bowlofeggs
Servers::
-batcave01.iad2.fedoraproject.org Various application servers, which
+batcave01.rdu3.fedoraproject.org Various application servers, which
will run scripts to collect SAR data.
Purpose::
Respond to SARs.

@@ -122,7 +122,7 @@ You also need to add the host that the script should run on to the
....
[sar]
-bodhi-backend02.iad2.fedoraproject.org
+bodhi-backend02.rdu3.fedoraproject.org
....

=== Variables for OpenShift apps
@@ -45,7 +45,7 @@ We expect to grow these over time to new use cases (rawhide compose gating, etc.
== Observing Greenwave Behavior

-Login to `os-master01.iad2.fedoraproject.org` as `root` (or,
+Login to `os-master01.rdu3.fedoraproject.org` as `root` (or,
authenticate remotely with openshift using
`oc login https://os.fedoraproject.org`), and run:
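The command this hunk leads into is cut off at the hunk boundary. A minimal illustrative sketch of inspecting the deployment with the OpenShift client (the `greenwave` project name is an assumption, not taken from this diff):

....
oc login https://os.fedoraproject.org
oc -n greenwave get pods         # list the running Greenwave pods
oc -n greenwave logs <pod-name>  # inspect one pod's log output
....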
@@ -9,7 +9,7 @@ Owner::
Contact::
#fedora-admin, sysadmin-main
Location::
-IAD2, Tummy, ibiblio, Telia, OSUOSL
+RDU3, Tummy, ibiblio, Telia, OSUOSL
Servers::
All xen servers, kvm/libvirt servers.
Purpose::
@@ -62,9 +62,9 @@ subtraction of specific nodes when we need them.:
....
listen fpo-wiki 0.0.0.0:10001
balance roundrobin
-server app1 app1.fedora.iad2.redhat.com:80 check inter 2s rise 2 fall 5
-server app2 app2.fedora.iad2.redhat.com:80 check inter 2s rise 2 fall 5
-server app4 app4.fedora.iad2.redhat.com:80 backup check inter 2s rise 2 fall 5
+server app1 app1.fedora.rdu3.redhat.com:80 check inter 2s rise 2 fall 5
+server app2 app2.fedora.rdu3.redhat.com:80 check inter 2s rise 2 fall 5
+server app4 app4.fedora.rdu3.redhat.com:80 backup check inter 2s rise 2 fall 5
option httpchk GET /wiki/Infrastructure
....

@@ -77,13 +77,13 @@ one. Just check the config file for the lowest open port above 10001.
* The next line _balance roundrobin_ says to use round robin balancing.
* The server lines each add a new node to the balancer farm. In this
case the wiki is being served from app1, app2 and app4. If the wiki is
-available at http://app1.fedora.iad2.redhat.com/wiki/ Then this
+available at http://app1.fedora.rdu3.redhat.com/wiki/ Then this
config would be used in conjunction with "RewriteRule ^/wiki/(.*)
http://localhost:10001/wiki/$1 [P,L]".
* _server_ means we're adding a new node to the farm
* _app1_ is the worker name, it is analagous to fpo-wiki but should::
match shorthostname of the node to make it easy to follow.
-* _app1.fedora.iad2.redhat.com:80_ is the hostname and port to be
+* _app1.fedora.rdu3.redhat.com:80_ is the hostname and port to be
contacted.
* _check_ means to check via bottom line "option httpchk GET
/wiki/Infrastructure" which will use /wiki/Infrastructure to verify the
@@ -6,7 +6,7 @@ This SOP shows some of the steps required to troubleshoot and diagnose a power i
Symptoms:
- This server is not responding at all, and will not power on.
-- To get to mgmt of RDU2-CC devices it’s a bit trickier than IAD2. We have a private management vlan there, but it’s only reachable via cloud-noc-os01.rdu-cc.fedoraproject.org. I usually use the ‘sshuttle’ package/command/app to transparently forward my traffic to devices on that network. That looks something like: `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`
+- To get to mgmt of RDU2-CC devices it’s a bit trickier than RDU3. We have a private management vlan there, but it’s only reachable via cloud-noc-os01.rdu-cc.fedoraproject.org. I usually use the ‘sshuttle’ package/command/app to transparently forward my traffic to devices on that network. That looks something like: `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`
- The devices are all in the 172.23.1 network. There’s a list of them in `ansible-private/docs/rdu-networks.txt` but this host is: `172.23.1.105`.
- In the Bitwarden Vault, the management password can be obtained.
- Logs show issues with voltages not being in the correct range.

@@ -33,7 +33,7 @@ Purpose::
=== Troubleshooting Steps

.Connect to the management VLAN for the RDU2-CC network:
-This is only required because this server is not in IAD2 datacenter. Use sshuttle to make a connection to the 172.23.1.0/24 (from your laptop directly, not from the batcave01 to the management network). `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`
+This is only required because this server is not in RDU3 datacenter. Use sshuttle to make a connection to the 172.23.1.0/24 (from your laptop directly, not from the batcave01 to the management network). `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`

.SSH to the batcave01 and retrieve the ip address for this machine
Ssh to the batcave01, access the ansible-private repo and read the IP address for this machine from the `docs/rdu-networks.txt`
@@ -67,7 +67,7 @@ the-new-hotness on production.
To deploy the new configuration, you need
xref:sshaccess.adoc[ssh
-access] to _batcave01.iad2.fedoraproject.org_ and
+access] to _batcave01.rdu3.fedoraproject.org_ and
xref:ansible.adoc[permissions
to run the Ansible playbook].
@@ -96,7 +96,6 @@ xref:developer_guide:sops.adoc[Developing Standard Operating Procedures].
* xref:docs.fedoraproject.org.adoc[Docs]
* xref:externally-hosted-services.adoc[Externally Hosted Services]
* xref:failedharddrive.adoc[Replacing Failed Hard Drives]
-* xref:fas-openid.adoc[FAS-OpenID]
* xref:fedmsg-certs.adoc[fedmsg (Fedora Messaging) Certs, Keys, and CA]
* xref:fedocal.adoc[Fedocal]
* xref:fedora-releases.adoc[Fedora Release Infrastructure]
@@ -16,7 +16,7 @@ Contact::
Location::
Phoenix
Servers::
-batcave01.iad2.fedoraproject.org, batcave-comm01.qa.fedoraproject.org
+batcave01.rdu3.fedoraproject.org, batcave-comm01.qa.fedoraproject.org

== Steps
@@ -12,12 +12,12 @@ Primary upstream contact::
Alexander Bokovoy - FAS: abbra

Servers::
-* ipa01.iad2.fedoraproject.org
-* ipa02.iad2.fedoraproject.org
-* ipa03.iad2.fedoraproject.org
-* ipa01.stg.iad2.fedoraproject.org
-* ipa02.stg.iad2.fedoraproject.org
-* ipa03.stg.iad2.fedoraproject.org
+* ipa01.rdu3.fedoraproject.org
+* ipa02.rdu3.fedoraproject.org
+* ipa03.rdu3.fedoraproject.org
+* ipa01.stg.rdu3.fedoraproject.org
+* ipa02.stg.rdu3.fedoraproject.org
+* ipa03.stg.rdu3.fedoraproject.org

URL::
* link:https://id.fedoraproject.org/ipa/ui[]

@@ -25,9 +25,9 @@ Backup upstream contact::
Simo Sorce - FAS: simo (irc: simo) Howard Johnson - FAS: merlinthp
(irc: MerlinTHP) Rob Crittenden - FAS: rcritten (irc: rcrit)
Servers::
-* ipsilon01.iad2.fedoraproject.org
-* ipsilon02.iad2.fedoraproject.org
-* ipsilion01.stg.iad2.fedoraproject.org
+* ipsilon01.rdu3.fedoraproject.org
+* ipsilon02.rdu3.fedoraproject.org
+* ipsilion01.stg.rdu3.fedoraproject.org
Purpose::
Ipsilon is our central authentication service that is used to
authenticate users agains FAS. It is seperate from FAS.
@@ -68,7 +68,7 @@ wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/images/pxeb
wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/images/pxeboot/initrd.img -O /boot/initrd-install.img
....

-For iad2 hosts:
+For rdu3 hosts:

....
grubby --add-kernel=/boot/vmlinuz-install \

@@ -81,7 +81,7 @@ grubby --add-kernel=/boot/vmlinuz-install \
(You will need to setup the br1 device if any after install)

-For non iad2 hosts:
+For non rdu3 hosts:

....
grubby --add-kernel=/boot/vmlinuz-install \
@@ -27,7 +27,7 @@ $ koji tag-build do-not-archive-yet build1 build2 ...
Then update the archive policy which is available in releng repo
(https://pagure.io/releng/blob/main/f/koji-archive-policy)

-Run the following from _compose-x86-01.iad2.fedoraproject.org_
+Run the following from _compose-x86-01.rdu3.fedoraproject.org_

....
$ cd $ wget https://pagure.io/releng/raw/master/f/koji-archive-policy
@@ -125,7 +125,7 @@ If the openshift-ansible playbook fails it can be easier to run it
directly from osbs-control01 and use the verbose mode.

....
-$ ssh osbs-control01.iad2.fedoraproject.org
+$ ssh osbs-control01.rdu3.fedoraproject.org
$ sudo -i
# cd /root/openshift-ansible
# ansible-playbook -i cluster-inventory playbooks/prerequisites.yml

@@ -143,7 +143,7 @@ When this is done we need to get the new koji service token and update
its value in the private repository

....
-$ ssh osbs-master01.iad2.fedoraproject.org
+$ ssh osbs-master01.rdu3.fedoraproject.org
$ sudo -i
# oc -n osbs-fedora sa get-token koji
dsjflksfkgjgkjfdl ....
@@ -14,7 +14,7 @@ Purpose::
== Description

Mailing list services for Fedora projects are located on the
-mailman01.iad2.fedoraproject.org server.
+mailman01.rdu3.fedoraproject.org server.

== Common Tasks

@@ -14,7 +14,7 @@ Owner:::
Contact:::
#fedora-admin, Red Hat ticket
Servers:::
-server[1-5].download.iad2.redhat.com
+server[1-5].download.rdu3.redhat.com
Purpose:::
Provides the master mirrors for Fedora distribution
@@ -46,11 +46,11 @@ The load balancers then balance between the below Fedora IPs on the
rsync servers:

....
-10.8.24.21 (fedora1.download.iad2.redhat.com) - server1.download.iad2.redhat.com
-10.8.24.22 (fedora2.download.iad2.redhat.com) - server2.download.iad2.redhat.com
-10.8.24.23 (fedora3.download.iad2.redhat.com) - server3.download.iad2.redhat.com
-10.8.24.24 (fedora4.download.iad2.redhat.com) - server4.download.iad2.redhat.com
-10.8.24.25 (fedora5.download.iad2.redhat.com) - server5.download.iad2.redhat.com
+10.8.24.21 (fedora1.download.rdu3.redhat.com) - server1.download.rdu3.redhat.com
+10.8.24.22 (fedora2.download.rdu3.redhat.com) - server2.download.rdu3.redhat.com
+10.8.24.23 (fedora3.download.rdu3.redhat.com) - server3.download.rdu3.redhat.com
+10.8.24.24 (fedora4.download.rdu3.redhat.com) - server4.download.rdu3.redhat.com
+10.8.24.25 (fedora5.download.rdu3.redhat.com) - server5.download.rdu3.redhat.com
....

== RDU I2 Master Mirror Setup
@@ -1,6 +1,6 @@
= Netapp Infrastructure SOP

-Provides primary mirrors and additional storage in IAD2
+Provides primary mirrors and additional storage in RDU3

== Contents

@@ -8,7 +8,7 @@ Provides primary mirrors and additional storage in IAD2
* <<_description>>
* <<_public_mirrors>>
** <<_snapshots>>
-* <<_iad2_nfs_storage>>
+* <<_rdu3_nfs_storage>>
** <<_access>>
** <<_snapshots>>
* <<_iscsi>>
@@ -24,25 +24,25 @@ Contact::
Servers::
batcave01, virt servers, application servers, builders, releng boxes
Purpose::
-Provides primary mirrors and additional storage in IAD2
+Provides primary mirrors and additional storage in RDU3

== Description

At present we have three netapps in our infrastructure. One in TPA, RDU
-and IAD2. For purposes of visualization its easiest to think of us as
-having 4 netapps, 1 TPA, 1 RDU and 1 IAD2 for public mirrors. And an
-additional 1 in IAD2 used for additional storage not related to the
+and RDU3. For purposes of visualization its easiest to think of us as
+having 4 netapps, 1 TPA, 1 RDU and 1 RDU3 for public mirrors. And an
+additional 1 in RDU3 used for additional storage not related to the
public mirrors.

== Public Mirrors

The netapps are our primary public mirrors. The canonical location for
-the mirrors is currently in IAD2. From there it gets synced to RDU and
+the mirrors is currently in RDU3. From there it gets synced to RDU and
TPA.

=== Snapshots

-Snapshots on the IAD2 netapp are taken hourly. Unfortunately the way it
+Snapshots on the RDU3 netapp are taken hourly. Unfortunately the way it
is setup only Red Hat employees can access this mirror (this is
scheduled to change when PHX becomes the canonical location but that
will take time to setup and deploy). The snapshots are available, for
@@ -52,12 +52,12 @@ example, on wallace in:
/var/ftp/download.fedora.redhat.com/.snapshot/hourly.0
....

-== IAD2 NFS Storage
+== RDU3 NFS Storage

-There is a great deal of storage in IAD2 over NFS from the netapp there.
+There is a great deal of storage in RDU3 over NFS from the netapp there.
This storage includes the public mirror. The majority of this storage is
koji however there are a few gig worth of storage that goes to wiki
-attachments and other storage needs we have in IAD2.
+attachments and other storage needs we have in RDU3.

You can access all of the nfs share shares at:
@@ -68,7 +68,7 @@ batcave01:/mnt/fedora
or:

....
-ntap-fedora-a.storage.iad2.redhat.com:/vol/fedora/
+ntap-fedora-a.storage.rdu3.redhat.com:/vol/fedora/
....

=== Access

@@ -120,7 +120,7 @@ On reboots sometimes the iscsi share is not remounted. This should be
automated in the future but for now run:

....
-iscsiadm -m discovery -tst -p ntap-fedora-b.storage.iad2.redhat.com:3260
+iscsiadm -m discovery -tst -p ntap-fedora-b.storage.rdu3.redhat.com:3260
sleep 1
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.118047036 -p 10.5.88.21:3260 -l
sleep 1
@@ -17,16 +17,16 @@ been recently added to the data center/network that you want:
....
git grep badges-web01
-built/126.5.10.in-addr.arpa:69 IN PTR badges-web01.stg.iad2.fedoraproject.org.
+built/126.5.10.in-addr.arpa:69 IN PTR badges-web01.stg.rdu3.fedoraproject.org.
[...lots of other stuff in built/ ignore these as they'll be generated later...]
-master/126.5.10.in-addr.arpa:69 IN PTR badges-web01.stg.iad2.fedoraproject.org.
-master/126.5.10.in-addr.arpa:101 IN PTR badges-web01.iad2.fedoraproject.org.
-master/126.5.10.in-addr.arpa:102 IN PTR badges-web02.iad2.fedoraproject.org.
+master/126.5.10.in-addr.arpa:69 IN PTR badges-web01.stg.rdu3.fedoraproject.org.
+master/126.5.10.in-addr.arpa:101 IN PTR badges-web01.rdu3.fedoraproject.org.
+master/126.5.10.in-addr.arpa:102 IN PTR badges-web02.rdu3.fedoraproject.org.
master/168.192.in-addr.arpa:109.1 IN PTR badges-web01.vpn.fedoraproject.org
master/168.192.in-addr.arpa:110.1 IN PTR badges-web02.vpn.fedoraproject.org
-master/iad2.fedoraproject.org:badges-web01.stg IN A 10.5.126.69
-master/iad2.fedoraproject.org:badges-web01 IN A 10.5.126.101
-master/iad2.fedoraproject.org:badges-web02 IN A 10.5.126.102
+master/rdu3.fedoraproject.org:badges-web01.stg IN A 10.5.126.69
+master/rdu3.fedoraproject.org:badges-web01 IN A 10.5.126.101
+master/rdu3.fedoraproject.org:badges-web02 IN A 10.5.126.102
master/vpn.fedoraproject.org:badges-web01 IN A 192.168.1.109
master/vpn.fedoraproject.org:badges-web02 IN A 192.168.1.110
....
@@ -36,9 +36,9 @@ those files are for the host on the IAD network. The other two are for
the host to be able to talk over the VPN. Although the VPN is not always
needed, the common case is that the host will need it. (If any clients
_need to connect to it via the proxy servers_ or it is not hosted in
-IAD2 it will need a VPN connection). An common exception is here the
+RDU3 it will need a VPN connection). An common exception is here the
staging environment: since we only have one proxy server in staging and
-it is in IAD2, a VPN connection is not typically needed for staging
+it is in RDU3, a VPN connection is not typically needed for staging
hosts.

Edit the zone file for the reverse lookup first (the *in-addr.arpa file)
@@ -55,13 +55,13 @@ in stg into production:
-106 IN PTR unused.
-107 IN PTR unused.
-108 IN PTR unused.
-+105 IN PTR elections01.stg.iad2.fedoraproject.org.
-+106 IN PTR elections02.stg.iad2.fedoraproject.org.
-+107 IN PTR elections01.iad2.fedoraproject.org.
-+108 IN PTR elections02.iad2.fedoraproject.org.
++105 IN PTR elections01.stg.rdu3.fedoraproject.org.
++106 IN PTR elections02.stg.rdu3.fedoraproject.org.
++107 IN PTR elections01.rdu3.fedoraproject.org.
++108 IN PTR elections02.rdu3.fedoraproject.org.
....

-Edit the forward domain (iad2.fedoraproject.org in our example) next:
+Edit the forward domain (rdu3.fedoraproject.org in our example) next:

....
elections01.stg IN A 10.5.126.105
@@ -71,8 +71,8 @@ elections02 IN A 10.5.126.108
....

Repeat these two steps if you need to make them available on the VPN.
-Note: if your stg hosts are in IAD2, you don't need to configure VPN for
-them as all our stg proxy servers are in IAD2.
+Note: if your stg hosts are in RDU3, you don't need to configure VPN for
+them as all our stg proxy servers are in RDU3.

Also remember to update the Serial at the top of all zone files.
@@ -115,11 +115,11 @@ to have valid SSL Certs. These are currently stored in the private repo:
git clone /srv/git/ansible-private && chmod 0700 ansible-private
cd ansible-private/files/2fa-certs
. ./vars
-./build-and-sign-key $FQDN # ex: elections01.stg.iad2.fedoraproject.org
+./build-and-sign-key $FQDN # ex: elections01.stg.rdu3.fedoraproject.org
....

-The `$FQDN` should be the iad2 domain name if it's in iad2, vpn if not in
-iad2, and if it has no vpn and is not in iad2 we should add it to the
+The `$FQDN` should be the rdu3 domain name if it's in rdu3, vpn if not in
+rdu3, and if it has no vpn and is not in rdu3 we should add it to the
vpn.:

....

@@ -141,11 +141,11 @@ stored in the private repo:
....
cd ansible-private/files/vpn/
-./addhost.sh $FQDN # ex: zabbix01.iad2.fedoraproject.org
+./addhost.sh $FQDN # ex: zabbix01.rdu3.fedoraproject.org
....

-The `$FQDN` should be the iad2 domain name if it's in iad2, and just
-fedoraproject.org if it's not in IAD2 (note that there is never .vpn in
+The `$FQDN` should be the rdu3 domain name if it's in rdu3, and just
+fedoraproject.org if it's not in RDU3 (note that there is never .vpn in
the FQDN in the openvpn keys). Now commit and push.:

....
@@ -178,26 +178,26 @@ create things like this:
....
[elections]
-elections01.iad2.fedoraproject.org
-elections02.iad2.fedoraproject.org
+elections01.rdu3.fedoraproject.org
+elections02.rdu3.fedoraproject.org

[elections-stg]
-elections01.stg.iad2.fedoraproject.org
-elections02.stg.iad2.fedoraproject.org
+elections01.stg.rdu3.fedoraproject.org
+elections02.stg.rdu3.fedoraproject.org

[... find the staging group and add there: ...]

[staging]
-db-fas01.stg.iad2.fedoraproject.org
-elections01.stg.iad2.fedoraproject.org
-electionst02.stg.iad2.fedoraproject.org
+db-fas01.stg.rdu3.fedoraproject.org
+elections01.stg.rdu3.fedoraproject.org
+electionst02.stg.rdu3.fedoraproject.org
....

The hosts should use their fully qualified domain names here. The rules
-are slightly different than for 2fa certs. If the host is in IAD2, use
-the .iad2.fedoraproject.org domain name. If they aren't in IAD2, then
+are slightly different than for 2fa certs. If the host is in RDU3, use
+the .rdu3.fedoraproject.org domain name. If they aren't in RDU3, then
they usually just have .fedoraproject.org as their domain name. (If in
-doubt about a not-in-IAD2 host, just ask).
+doubt about a not-in-RDU3 host, just ask).

=== VPN config

@@ -209,7 +209,7 @@ ifconfig-push 192.168.1.X 192.168.0.X
....

Where X is the last octet of the DNS IP address assigned to the host, so
-for example for _elections01.iad2.fedoraproject.org_ that would be:
+for example for _elections01.rdu3.fedoraproject.org_ that would be:

....
ifconfig-push 192.168.1.44 192.168.0.44
@@ -248,7 +248,7 @@ claimed in the dns repo:
....
cd ~/ansible/inventory/host_vars
-cp badges-web01.stg.iad2.fedoraproject.org elections01.stg.iad2.fedoraproject.org
+cp badges-web01.stg.rdu3.fedoraproject.org elections01.stg.rdu3.fedoraproject.org
<edit appropriately>
....
@@ -17,14 +17,14 @@ Contact::
Persons::
.oncall
Servers::
-* os-master01.iad2.fedoraproject.org
-* os-master02.iad2.fedoraproject.org
-* os-master03.iad2.fedoraproject.org
-* os-node01.iad2.fedoraproject.org
-* os-node02.iad2.fedoraproject.org
-* os-node03.iad2.fedoraproject.org
-* os-node04.iad2.fedoraproject.org
-* os-node05.iad2.fedoraproject.org
+* os-master01.rdu3.fedoraproject.org
+* os-master02.rdu3.fedoraproject.org
+* os-master03.rdu3.fedoraproject.org
+* os-node01.rdu3.fedoraproject.org
+* os-node02.rdu3.fedoraproject.org
+* os-node03.rdu3.fedoraproject.org
+* os-node04.rdu3.fedoraproject.org
+* os-node05.rdu3.fedoraproject.org
Purpose::
Run Fedora Infrastructure applications
@@ -89,10 +89,10 @@ search vpn.fedoraproject.org fedoraproject.org
for external hosts and:

....
-search iad2.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
+search rdu3.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
....

-for IAD2 hosts.
+for RDU3 hosts.

== Remove a host

@@ -116,6 +116,6 @@ git push
== TODO

-Deploy an additional VPN server outside of IAD2. OpenVPN does support
+Deploy an additional VPN server outside of RDU3. OpenVPN does support
failover automatically so if configured properly, when the primary VPN
server goes down all hosts should connect to the next host in the list.
@@ -12,7 +12,7 @@ Owner::
Contact::
#fedora-admin, #fedora-noc or admin@fedoraproject.org
Server(s)::
-sundries01.iad2.fedoraproject.org
+sundries01.rdu3.fedoraproject.org
Purpose::
To explain the overall function of this page, where, and how it gets
its information.
@@ -28,7 +28,7 @@ signing key.
. Remove builder from koji:
+
....
-koji disable-host bkernel01.iad2.fedoraproject.org
+koji disable-host bkernel01.rdu3.fedoraproject.org
....
. Make sure all builds have completed.
. Stop existing processes:

@@ -54,8 +54,8 @@ pesign-client -t "OpenSC Card (Fedora Signer)" -u
remove other builder:
+
....
-koji enable-host bkernel01.iad2.fedoraproject.org
-koji disable-host bkernel02.iad2.fedoraproject.org
+koji enable-host bkernel01.rdu3.fedoraproject.org
+koji disable-host bkernel02.rdu3.fedoraproject.org
....
. Have a commiter send a build of pesign-test-app and make sure it's
signed correctly.
@@ -15,8 +15,8 @@ Fedora Infrastructure Team
=== Servers

-* rabbitmq0[1-3].iad2.fedoraproject.org
-* rabbitmq0[1-3].stg.iad2.fedoraproject.org
+* rabbitmq0[1-3].rdu3.fedoraproject.org
+* rabbitmq0[1-3].stg.rdu3.fedoraproject.org

=== Purpose

@@ -101,8 +101,8 @@ It should not return the empty array (`[]`) but something like:
{upstream,<<"pubsub-to-public_pubsub">>},
{id,<<"b40208be0a999cc93a78eb9e41531618f96d4cb2">>},
{status,running},
-{local_connection,<<"<rabbit@rabbitmq01.iad2.fedoraproject.org.2.8709.481>">>},
-{uri,<<"amqps://rabbitmq01.iad2.fedoraproject.org/%2Fpubsub">>},
+{local_connection,<<"<rabbit@rabbitmq01.rdu3.fedoraproject.org.2.8709.481>">>},
+{uri,<<"amqps://rabbitmq01.rdu3.fedoraproject.org/%2Fpubsub">>},
{timestamp,{{2020,3,11},{16,45,18}}}],
[{exchange,<<"zmq.topic">>},
{upstream_exchange,<<"zmq.topic">>},
@@ -111,8 +111,8 @@ It should not return the empty array (`[]`) but something like:
{upstream,<<"pubsub-to-public_pubsub">>},
{id,<<"c1e7747425938349520c60dda5671b2758e210b8">>},
{status,running},
-{local_connection,<<"<rabbit@rabbitmq01.iad2.fedoraproject.org.2.8718.481>">>},
-{uri,<<"amqps://rabbitmq01.iad2.fedoraproject.org/%2Fpubsub">>},
+{local_connection,<<"<rabbit@rabbitmq01.rdu3.fedoraproject.org.2.8718.481>">>},
+{uri,<<"amqps://rabbitmq01.rdu3.fedoraproject.org/%2Fpubsub">>},
{timestamp,{{2020,3,11},{16,45,17}}}]]
....
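For context, the Erlang terms above are federation link status entries. A hedged sketch of how such output can be produced on one of the rabbitmq nodes, assuming the federation plugin's standard status call is what the SOP uses:

....
# as root on rabbitmq01; rabbit_federation_status ships with the federation plugin
rabbitmqctl eval 'rabbit_federation_status:status().'
....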
@@ -16,11 +16,11 @@ Contact::
Persons::
bowlofeggs cverna puiterwijk
Servers::
-* oci-candidate-registry01.iad2.fedoraproject.org
-* oci-candidate-registry01.stg.iad2.fedoraproject.org
-* oci-registry01.iad2.fedoraproject.org
-* oci-registry01.stg.iad2.fedoraproject.org
-* oci-registry02.iad2.fedoraproject.org
+* oci-candidate-registry01.rdu3.fedoraproject.org
+* oci-candidate-registry01.stg.rdu3.fedoraproject.org
+* oci-registry01.rdu3.fedoraproject.org
+* oci-registry01.stg.rdu3.fedoraproject.org
+* oci-registry02.rdu3.fedoraproject.org
Purpose::
Serve Fedora's container images
@@ -17,27 +17,27 @@ eg:
+
----
[ocp_workers]
-worker01.ocp.iad2.fedoraproject.org
-worker02.ocp.iad2.fedoraproject.org
-worker03.ocp.iad2.fedoraproject.org
+worker01.ocp.rdu3.fedoraproject.org
+worker02.ocp.rdu3.fedoraproject.org
+worker03.ocp.rdu3.fedoraproject.org

[ocp_workers_stg]
-worker01.ocp.stg.iad2.fedoraproject.org
-worker02.ocp.stg.iad2.fedoraproject.org
-worker03.ocp.stg.iad2.fedoraproject.org
-worker04.ocp.stg.iad2.fedoraproject.org
-worker05.ocp.stg.iad2.fedoraproject.org
+worker01.ocp.stg.rdu3.fedoraproject.org
+worker02.ocp.stg.rdu3.fedoraproject.org
+worker03.ocp.stg.rdu3.fedoraproject.org
+worker04.ocp.stg.rdu3.fedoraproject.org
+worker05.ocp.stg.rdu3.fedoraproject.org
----

2. Add the new hostvars for each new host being added, see the following examples for `VM` vs `baremetal` hosts.
+
----
# control plane VM
-inventory/host_vars/ocp01.ocp.iad2.fedoraproject.org
+inventory/host_vars/ocp01.ocp.rdu3.fedoraproject.org

# compute baremetal
-inventory/host_vars/worker01.ocp.iad2.fedoraproject.org
+inventory/host_vars/worker01.ocp.rdu3.fedoraproject.org
----

3. If the nodes are `compute` or `worker` nodes, they must be also added to the following group_vars `proxies` for prod, `proxies_stg` for staging
@@ -47,15 +47,15 @@ inventory/group_vars/proxies:ocp_nodes:
inventory/group_vars/proxies_stg:ocp_nodes_stg:
----

-4. Changes must be made to the `roles/dhcp_server/files/dhcpd.conf.noc01.iad2.fedoraproject.org` file for DHCP to ensure that the node will receive an IP address based on its MAC address, and tells the node to reach out to the `next-server` where it can find the UEFI boot configuration.
+4. Changes must be made to the `roles/dhcp_server/files/dhcpd.conf.noc01.rdu3.fedoraproject.org` file for DHCP to ensure that the node will receive an IP address based on its MAC address, and tells the node to reach out to the `next-server` where it can find the UEFI boot configuration.
+
----
host worker01-ocp { # UPDATE THIS
hardware ethernet 68:05:CA:CE:A3:C9; # UPDATE THIS
-fixed-address 10.3.163.123; # UPDATE THIS
+fixed-address 10.16.163.123; # UPDATE THIS
filename "uefi/grubx64.efi";
-next-server 10.3.163.10;
-option routers 10.3.163.254;
+next-server 10.16.163.10;
+option routers 10.16.163.254;
option subnet-mask 255.255.255.0;
}
----
@@ -65,10 +65,10 @@ host worker01-ocp { # UPDATE THIS
See the following examples for the `worker01.ocp` nodes for production and staging.
+
----
-master/163.3.10.in-addr.arpa:123 IN PTR worker01.ocp.iad2.fedoraproject.org.
-master/166.3.10.in-addr.arpa:118 IN PTR worker01.ocp.stg.iad2.fedoraproject.org.
-master/iad2.fedoraproject.org:worker01.ocp IN A 10.3.163.123
-master/stg.iad2.fedoraproject.org:worker01.ocp IN A 10.3.166.118
+master/163.3.10.in-addr.arpa:123 IN PTR worker01.ocp.rdu3.fedoraproject.org.
+master/166.3.10.in-addr.arpa:118 IN PTR worker01.ocp.stg.rdu3.fedoraproject.org.
+master/rdu3.fedoraproject.org:worker01.ocp IN A 10.16.163.123
+master/stg.rdu3.fedoraproject.org:worker01.ocp IN A 10.16.166.118
----

6. Run the playbook to update the haproxy config to monitor the new nodes, and add it to the load balancer.
@@ -82,13 +82,13 @@ sudo rbac-playbook groups/proxies.yml -t 'haproxy,httpd'
+
----
menuentry 'RHCOS 4.8 worker staging' {
-linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.3.163.33 coreos.inst.install_dev=/dev/sda
-coreos.live.rootfs_url=http://10.3.166.50/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.3.166.50/rhcos/worker.ign
+linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.16.163.33 coreos.inst.install_dev=/dev/sda
+coreos.live.rootfs_url=http://10.16.166.50/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.16.166.50/rhcos/worker.ign
initrdefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-initramfs.x86_64.img
}
menuentry 'RHCOS 4.8 worker production' {
-linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.3.163.33 coreos.inst.install_dev=/dev/sda
-coreos.live.rootfs_url=http://10.3.163.65/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.3.163.65/rhcos/worker.ign
+linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.16.163.33 coreos.inst.install_dev=/dev/sda
+coreos.live.rootfs_url=http://10.16.163.65/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.16.163.65/rhcos/worker.ign
initrdefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-initramfs.x86_64.img
}
----
@@ -19,7 +19,7 @@ This SOP should be used in the following scenario:
2. From within the Openshift webconsole, or via cli search for all "LocalVolumeDiscovery" objects.
+
----
-[root@os-control01 ~][PROD-IAD2]# oc get localvolumediscovery --all-namespaces
+[root@os-control01 ~][PROD-RDU3]# oc get localvolumediscovery --all-namespaces
NAMESPACE NAME AGE
openshift-local-storage auto-discover-devices 167d
----
@@ -43,11 +43,11 @@ spec:
- key: kubernetes.io/hostname
operator: In
values:
-- worker01.ocp.iad2.fedoraproject.org
-- worker02.ocp.iad2.fedoraproject.org
-- worker03.ocp.iad2.fedoraproject.org
-- worker04.ocp.iad2.fedoraproject.org
-- worker05.ocp.iad2.fedoraproject.org
+- worker01.ocp.rdu3.fedoraproject.org
+- worker02.ocp.rdu3.fedoraproject.org
+- worker03.ocp.rdu3.fedoraproject.org
+- worker04.ocp.rdu3.fedoraproject.org
+- worker05.ocp.rdu3.fedoraproject.org
...
----
+

@@ -75,11 +75,11 @@ spec:
- key: kubernetes.io/hostname
operator: In
values:
-- worker01.ocp.iad2.fedoraproject.org
-- worker02.ocp.iad2.fedoraproject.org
-- worker03.ocp.iad2.fedoraproject.org
-- worker04.ocp.iad2.fedoraproject.org
-- worker05.ocp.iad2.fedoraproject.org
+- worker01.ocp.rdu3.fedoraproject.org
+- worker02.ocp.rdu3.fedoraproject.org
+- worker03.ocp.rdu3.fedoraproject.org
+- worker04.ocp.rdu3.fedoraproject.org
+- worker05.ocp.rdu3.fedoraproject.org
...
----
+
@@ -88,7 +88,7 @@ Write and save the change.
4. Add the `cluster.ocs.openshift.io/openshift-storage` label to the new node:
+
----
-oc label no worker05.ocp.iad2.fedoraproject.org cluster.ocs.openshift.io/openshift-storage=''
+oc label no worker05.ocp.rdu3.fedoraproject.org cluster.ocs.openshift.io/openshift-storage=''
----

5. From the Openshift Web console visit `Storage, OpenShift Data Foundation`, then in the `Storage Systems` sub menu, click the 3 dot menu on the right beside the `ocs-storagecluster-storage` object. Choose `Add Capacity` option. From the popup menu that appears, ensure that the storage class `local-block` is selected in the list. Finally confirm with add.
@@ -30,13 +30,13 @@ The following machines are those which are relevant to Releng.
----
machines:
[releng_compose]
-compose-x86-01.iad2.fedoraproject.org
-compose-branched01.iad2.fedoraproject.org
-compose-rawhide01.iad2.fedoraproject.org
-compose-iot01.iad2.fedoraproject.org
+compose-x86-01.rdu3.fedoraproject.org
+compose-branched01.rdu3.fedoraproject.org
+compose-rawhide01.rdu3.fedoraproject.org
+compose-iot01.rdu3.fedoraproject.org

[releng_compose_stg]
-compose-x86-01.stg.iad2.fedoraproject.org
+compose-x86-01.stg.rdu3.fedoraproject.org
----

First install the Zabbix agent on these `releng_compose:releng_compose_stg` hosts via the `zabbix/zabbix_agent` ansible role [11]. We targetted the `groups/releng-compose.yml` playbook as this is responsible for targetting these hosts.
@@ -68,13 +68,13 @@ Cronjobs are installed on the releng hosts via the following ansible task[13]. T
- 1: ftbfs weekly cron job `"ftbfs.cron" /etc/cron.weekly/ on compose-x86-01`
-- 2: branched compose cron `"branched" /etc/cron.d/branched on compose-branched01.iad2`
-- 3: rawhide compose cron `"rawhide" etc/cron.d/rawhide on compose-rawhide01.iad2`
-- 4: cloud-updates compose cron `"cloud-updates" /etc/cron.d/cloud-updates on compose-x86-01.iad2`
-- 5: container-updates compose cron `"container-updates" /etc/cron.d/container-updates on compose-x86-01.iad2`
-- 6: clean-amis cron `"clean-amis.j2" /etc/cron.d/clean-amis on compose-x86-01.iad2`
-- 7: rawhide-iot compose cron `"rawhide-iot" /etc/cron.d/rawhide-iot on compose-iot-01.iad2`
-- 8: sig_policy cron `"sig_policy.j2" /etc/cron.d/sig_policy on compose-x86-01.iad2'`
+- 2: branched compose cron `"branched" /etc/cron.d/branched on compose-branched01.rdu3`
+- 3: rawhide compose cron `"rawhide" etc/cron.d/rawhide on compose-rawhide01.rdu3`
+- 4: cloud-updates compose cron `"cloud-updates" /etc/cron.d/cloud-updates on compose-x86-01.rdu3`
+- 5: container-updates compose cron `"container-updates" /etc/cron.d/container-updates on compose-x86-01.rdu3`
+- 6: clean-amis cron `"clean-amis.j2" /etc/cron.d/clean-amis on compose-x86-01.rdu3`
+- 7: rawhide-iot compose cron `"rawhide-iot" /etc/cron.d/rawhide-iot on compose-iot-01.rdu3`
+- 8: sig_policy cron `"sig_policy.j2" /etc/cron.d/sig_policy on compose-x86-01.rdu3'`

Need at least one Zabbix check per cronjob. The Zabbix check should do the following.
@@ -21,7 +21,7 @@ The following is a sample configuration to install a baremetal OCP4 worker in th
----
menuentry 'RHCOS 4.8 worker staging' {
-linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.3.163.33 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://10.3.166.50/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.3.166.50/rhcos/worker.ign
+linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.16.163.33 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://10.16.166.50/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.16.166.50/rhcos/worker.ign
initrdefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-initramfs.x86_64.img
}
----
@@ -44,7 +44,7 @@ spec:
capacity:
storage: 100Gi
nfs:
-server: 10.3.162.11
+server: 10.16.162.11
path: /ocp_prod_registry
accessModes:
- ReadWriteMany
@@ -21,12 +21,12 @@ To connect to `idrac`, you must be connected to the Red Hat VPN. Next find the m
On the `batcave01` instance, in the dns configuration, the following bare metal machines make up the production/staging OCP4 worker nodes.

----
-oshift-dell01 IN A 10.3.160.180 # worker01 prod
-oshift-dell02 IN A 10.3.160.181 # worker02 prod
-oshift-dell03 IN A 10.3.160.182 # worker03 prod
-oshift-dell04 IN A 10.3.160.183 # worker01 staging
-oshift-dell05 IN A 10.3.160.184 # worker02 staging
-oshift-dell06 IN A 10.3.160.185 # worker03 staging
+oshift-dell01 IN A 10.16.160.180 # worker01 prod
+oshift-dell02 IN A 10.16.160.181 # worker02 prod
+oshift-dell03 IN A 10.16.160.182 # worker03 prod
+oshift-dell04 IN A 10.16.160.183 # worker01 staging
+oshift-dell05 IN A 10.16.160.184 # worker02 staging
+oshift-dell06 IN A 10.16.160.185 # worker03 staging
----

Login to the `idrac` interface that corresponds with each worker, one at a time. Ensure the node is booting via harddrive, then power it on.
@@ -138,7 +138,7 @@ If there are VMs used for some of the roles, make sure to leave it in.
==== Baremetal
-At this point we can switch on the baremetal nodes and begin the PXE/UEFI boot process. The baremetal nodes should via DHCP/DNS have the configuration necessary to reach out to the `noc01.iad2.fedoraproject.org` server and retrieve the UEFI boot configuration via PXE.
+At this point we can switch on the baremetal nodes and begin the PXE/UEFI boot process. The baremetal nodes should via DHCP/DNS have the configuration necessary to reach out to the `noc01.rdu3.fedoraproject.org` server and retrieve the UEFI boot configuration via PXE.

Once booted up, you should visit the management console for this node, and manually choose the UEFI configuration appropriate for its role.
@@ -187,14 +187,14 @@ This should look something like this once completed:
----
[root@os-control01 ocp4][STG]= oc get nodes
NAME STATUS ROLES AGE VERSION
-ocp01.ocp.stg.iad2.fedoraproject.org Ready master 34d v1.21.1+9807387
-ocp02.ocp.stg.iad2.fedoraproject.org Ready master 34d v1.21.1+9807387
-ocp03.ocp.stg.iad2.fedoraproject.org Ready master 34d v1.21.1+9807387
-worker01.ocp.stg.iad2.fedoraproject.org Ready worker 21d v1.21.1+9807387
-worker02.ocp.stg.iad2.fedoraproject.org Ready worker 20d v1.21.1+9807387
-worker03.ocp.stg.iad2.fedoraproject.org Ready worker 20d v1.21.1+9807387
-worker04.ocp.stg.iad2.fedoraproject.org Ready worker 34d v1.21.1+9807387
-worker05.ocp.stg.iad2.fedoraproject.org Ready worker 34d v1.21.1+9807387
+ocp01.ocp.stg.rdu3.fedoraproject.org Ready master 34d v1.21.1+9807387
+ocp02.ocp.stg.rdu3.fedoraproject.org Ready master 34d v1.21.1+9807387
+ocp03.ocp.stg.rdu3.fedoraproject.org Ready master 34d v1.21.1+9807387
+worker01.ocp.stg.rdu3.fedoraproject.org Ready worker 21d v1.21.1+9807387
+worker02.ocp.stg.rdu3.fedoraproject.org Ready worker 20d v1.21.1+9807387
+worker03.ocp.stg.rdu3.fedoraproject.org Ready worker 20d v1.21.1+9807387
+worker04.ocp.stg.rdu3.fedoraproject.org Ready worker 34d v1.21.1+9807387
+worker05.ocp.stg.rdu3.fedoraproject.org Ready worker 34d v1.21.1+9807387
----

At this point the cluster is basically up and running.
@@ -13,7 +13,7 @@ This can be retrieved once the cluster control plane has been installed, from th
oc get configmap kube-root-ca.crt -o yaml -n openshift-ingress
----

-Extract this CACERT in full, and commit it to ansible at: `https://pagure.io/fedora-infra/ansible/blob/main/f/roles/haproxy/files/ocp.<ENV>-iad2.pem`
+Extract this CACERT in full, and commit it to ansible at: `https://pagure.io/fedora-infra/ansible/blob/main/f/roles/haproxy/files/ocp.<ENV>-rdu3.pem`

To deploy this cert, one must be apart of the `sysadmin-noc` group. Run the following playbook:
@@ -18,7 +18,7 @@ Contact::
Location::
All
Servers::
-All IAD2 and VPN Fedora machines
+All RDU3 and VPN Fedora machines
Purpose::
Access via ssh to Fedora project machines.
@@ -67,10 +67,10 @@ Host bastion.fedoraproject.org
User FAS_USERNAME (all lowercase)
ProxyCommand none
ForwardAgent no
-Host *.iad2.fedoraproject.org *.qa.fedoraproject.org 10.3.160.* 10.3.161.* 10.3.163.* 10.3.165.* 10.3.167.* 10.3.171.* *.vpn.fedoraproject.org
+Host *.rdu3.fedoraproject.org *.qa.fedoraproject.org 10.16.160.* 10.16.161.* 10.16.163.* 10.16.165.* 10.16.167.* 10.16.171.* *.vpn.fedoraproject.org
ProxyJump bastion.fedoraproject.org
Host batcave01
-HostName %h.iad2.fedoraproject.org
+HostName %h.rdu3.fedoraproject.org
....
+
Note that there are 2 bastion servers: bastion01.fedoraproject.org
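A brief usage note on the client configuration in the hunk above: with that stanza in `~/.ssh/config`, datacenter hosts are reached through the bastion transparently, for example (`log01` is just an illustrative target):

....
ssh log01.rdu3.fedoraproject.org   # matched by the Host *.rdu3.fedoraproject.org stanza, proxied via bastion
ssh batcave01                      # HostName expands to batcave01.rdu3.fedoraproject.org
....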
@@ -106,7 +106,7 @@ To have SSH access to the servers, you’ll first need to add your public key to
You can configure Putty the same way by doing this:

[arabic, start=0]
-. In the session section type _batcave01.iad2.fedoraproject.org_ port 22
+. In the session section type _batcave01.rdu3.fedoraproject.org_ port 22
. In Connection:Data enter your FAS_USERNAME
. In Connection:Proxy add the proxy settings

@@ -125,7 +125,7 @@ authentication you have used on FAS profile
You can use openssh from any terminal to access machines you are granted access to:

-'ssh batcave01.iad2.fedoraproject.org'
+'ssh batcave01.rdu3.fedoraproject.org'

It's important to use the fully qualified domain name of the host you are trying
to access so that the certificate matches correctly. Otherwise you may get a
@@ -107,7 +107,7 @@ pushes the changes live to https://status.fedoraproject.org
[arabic]
. Run certbot to generate certificate and have it signed by LetsEncrypt
(you can run this command anywhere certbot is installed, you can use
-your laptop or _certgetter01.iad2.fedoraproject.org_):
+your laptop or _certgetter01.rdu3.fedoraproject.org_):
+
....
rm -rf ~/certbot
@@ -14,7 +14,7 @@ Owner:::
Contact:::
#fedora-admin, sysadmin-main
Servers:::
-log01.iad2.fedoraproject.org
+log01.rdu3.fedoraproject.org
Purpose:::
Provides our central logs and reporting

@@ -40,7 +40,7 @@ outputted to `/srv/web/epylog/merged`
+
This path requires a username and a password to access. To add your
username and password you must first join the sysadmin-logs group then
-login to `log01.iad2.fedoraproject.org` and run this command:
+login to `log01.rdu3.fedoraproject.org` and run this command:
+
....
htpasswd -m /srv/web/epylog/.htpasswd $your_username
@@ -23,7 +23,7 @@ follow their instructions completely.
+
____
[upperalpha]
-.. Log into _batcave01.iad2.fedoraproject.org_
+.. Log into _batcave01.rdu3.fedoraproject.org_
.. search for the hostname in the file `/var/log/virthost-lists.out`:
+
....
@@ -114,6 +114,6 @@ values are correct:
+
....
volgroup: /dev/vg_guests
-vmhost: virthost??.iad2.fedoraproject.org
+vmhost: virthost??.rdu3.fedoraproject.org
....
. Run the `noc.yml` ansible playbook to update nagios.
@@ -48,7 +48,7 @@ https://pagure.io/waiverdb/issue/77
== Observing WaiverDB Behavior

-Login to _os-master01.iad2.fedoraproject.org_ as
+Login to _os-master01.rdu3.fedoraproject.org_ as
_root_ (or authenticate remotely with openshift using
`oc login https://os.fedoraproject.org`, and run:
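As with the Greenwave hunk earlier, the command being introduced here is truncated by the hunk boundary; a hedged sketch using the same `oc` pattern shown in the asknot example below (the `waiverdb` project name is an assumption):

....
oc -n waiverdb get pods
oc -n waiverdb logs <pod-name>
....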
@@ -80,10 +80,10 @@ https://os.fedoraproject.org[Openshift webconsole] or by using the
openshift command line:

....
-$ oc login os-master01.iad2.fedoraproject.org
+$ oc login os-master01.rdu3.fedoraproject.org
You must obtain an API token by visiting https://os.fedoraproject.org/oauth/token/request

-$ oc login os-master01.iad2.fedoraproject.org --token=<Your token here>
+$ oc login os-master01.rdu3.fedoraproject.org --token=<Your token here>
$ oc -n asknot get pods
asknot-28-bfj52 1/1 Running 522 28d
$ oc logs asknot-28-bfj52