DC move: iad => rdu3, 10.3. => 10.16.

And remove some obsolete things.

Signed-off-by: Nils Philippsen <nils@redhat.com>
Nils Philippsen 2025-07-04 11:55:02 +02:00
parent f3756ceb83
commit b4afb2f945
83 changed files with 386 additions and 429 deletions


@ -48,7 +48,7 @@ for ansible.
* Our docs aren't all using the same template. See the link:https://pagure.io/infra-docs-fpo[] repo and propose patches
to update documents to use the same templates as the rest.
* Look over our logs on log01.iad2.fedoraproject.org in /var/log/merged/ and track down issues and propose solutions to
* Look over our logs on log01.rdu3.fedoraproject.org in /var/log/merged/ and track down issues and propose solutions to
them. Be sure to discuss in a meeting or in an issue whatever you find.
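A minimal sketch of what digging through the merged logs could look like (the grep pattern here is only an example):

----
$ ssh log01.rdu3.fedoraproject.org
$ grep -i 'error' /var/log/merged/messages.log | less
----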
== easyfix tickets
@ -68,7 +68,7 @@ there and close it or rework it as needed. You can add to the next meeting agend
* Make patch for fix from git ansible repo
Most any task will require changes to ansible. You can check this out on batcave01.iad2.fedoraproject.org
Most any task will require changes to ansible. You can check this out on batcave01.rdu3.fedoraproject.org
(just "git clone /git/ansible" there) and make edits to your local copy. Then you can create a PR with your changes into link:https://pagure.io/fedora-infra/ansible[]
== Matrix Tips


@ -33,7 +33,7 @@ and will stop to work when the mdapi is not available.
in Fedora infrastructure. It alerts members of Fedora Infra about any potential problem happening
in the infrastructure.
|Nagios is essential for monitoring infrastructure and without it the Fedora infra team will be in the dark.
|`noc01.iad2.fedoraproject.org` - internal `noc02.fedoraproject.org` - external
|`noc01.rdu3.fedoraproject.org` - internal `noc02.fedoraproject.org` - external
|https://pagure.io/fedora-ci/monitor-gating[monitor-gating]
|Monitor gating is a service that runs a dummy package through the whole gating process. It files an issue
@ -58,7 +58,7 @@ and trigger tasks based on those messages.
|Ansible repository containing all deployment playbooks for Fedora Infrastructure needs to be hosted on
both Pagure and directly on batcave (the entry machine for Fedora infra) in case Pagure is unavailable.
mirror_from_pagure takes care of that.
|`batcave01.iad2.fedoraproject.org`
|`batcave01.rdu3.fedoraproject.org`
|===
== Authentication
@ -75,13 +75,13 @@ mirror_from_pagure is taking care of that.
|https://www.freeipa.org/page/Main_Page[IPA]
|FreeIPA is an identity management service which handles authentication of Fedora users in the Fedora ecosystem.
|Without FreeIPA nobody would be able to authenticate with any Fedora service.
|`ipa01.iad2.fedoraproject.org` `ipa02.iad2.fedoraproject.org` `ipa03.iad2.fedoraproject.org`
|`ipa01.rdu3.fedoraproject.org` `ipa02.rdu3.fedoraproject.org` `ipa03.rdu3.fedoraproject.org`
|https://ipsilon-project.org/[Ipsilon]
|Ipsilon handles Single Sign-On (SSO) in the Fedora ecosystem.
|Without Ipsilon, SSO in Fedora wouldn't work. Plenty of web apps in Fedora use SSO as their main authentication
system.
|`ipsilon01.iad2.fedoraproject.org` `ipsilon02.iad2.fedoraproject.org`
|`ipsilon01.rdu3.fedoraproject.org` `ipsilon02.rdu3.fedoraproject.org`
|https://github.com/fedora-infra/fasjson[fasjson]
|FASJson is a gateway that allows querying data from FreeIPA.
@ -126,7 +126,7 @@ wouldn't be possible. This would render test pipelines unusable.
|https://pagure.io/koji/[Koji builders]
|Koji builders are machines of various architectures used by Koji to build the artifacts.
|Without koji builders no artifact could be built.
|`buildvm-{x86,a64,ppc64le,a32}-\{01-XX}.iad2.fedoraproject.org` `buildvm-s390x-\{01-XX}.s390.fedoraproject.org`
|`buildvm-{x86,a64,ppc64le,a32}-\{01-XX}.rdu3.fedoraproject.org` `buildvm-s390x-\{01-XX}.s390.fedoraproject.org`
|https://pagure.io/greenwave[greenwave]
|Greenwave is a component that decides whether the package can pass gating or not.
@ -136,27 +136,27 @@ wouldn't be possible. This would render test pipelines unusable.
|https://pagure.io/koji/[Koji]
|Koji is a build system handling artifact building.
|Without Koji we wouldn't be able to build any artifact.
|`koji0\{1-2}.iad2.fedoraproject.org`
|`koji0\{1-2}.rdu3.fedoraproject.org`
|https://github.com/fedora-infra/bodhi[Bodhi]
|Bodhi is a system that manages package updates for the Fedora distribution.
|Without Bodhi packagers couldn't submit new updates for existing packages.
|`bodhi-backend01.iad2.fedoraproject.org`
|`bodhi-backend01.rdu3.fedoraproject.org`
|https://pagure.io/robosignatory[robosignatory]
|Fedora messaging consumer that automatically signs artifacts.
|Without Robosignatory no artifact would be automatically signed.
|`sign-bridge01.iad2.fedoraproject.org`
|`sign-bridge01.rdu3.fedoraproject.org`
|https://pagure.io/releng/tag2distrepo[tag2distrepo]
|Koji plugin that automatically generates dist repositories on tag operations.
|Without tag2distrepo packagers wouldn't be able to create repositories on a specific tag.
|`koji0\{1-2}.iad2.fedoraproject.org`
|`koji0\{1-2}.rdu3.fedoraproject.org`
|https://pagure.io/sigul[sigul]
|Component that does signing of the artifacts. Called by robosignatory.
|Without sigul nothing in Fedora could be signed.
|`sign-bridge01.iad2.fedoraproject.org`
|`sign-bridge01.rdu3.fedoraproject.org`
|https://github.com/fedora-infra/koschei[Koschei]
|Koschei is software for running a service that scratch-rebuilds RPM packages in a Koji instance
@ -168,7 +168,7 @@ when their build-dependencies change or after some time elapse.
|Pagure-dist-git is a plugin for Pagure which forms the base for the web interface of Fedora
https://src.fedoraproject.org/[dist-git].
|Without pagure-dist-git there wouldn't be any web interface for dist-git for Fedora.
|`pkgs01.iad2.fedoraproject.org`
|`pkgs01.rdu3.fedoraproject.org`
|https://github.com/release-engineering/dist-git[dist-git]
|Dist-git is used to initialize the distribution git repository for Fedora.
@ -185,7 +185,7 @@ https://src.fedoraproject.org/[dist-git].
|RabbitMQ is a message broker used by fedora messaging. It assures that the message
is delivered from publisher to consumer.
|Without it the messages will not be delivered and most of the infra will stop working.
|`rabbitmq0\{1-3}.iad2.fedoraproject.org`
|`rabbitmq0\{1-3}.rdu3.fedoraproject.org`
|https://github.com/fedora-infra/fedora-messaging[fedora messaging]
|Python library for working with fedora messaging system. It helps you create fedora
@ -203,7 +203,7 @@ with fedora messages and affect whole Fedora infrastructure.
|https://wiki.list.org/Mailman3[Mailman3]
|GNU Mailman 3 is a set of apps used by Fedora to manage all their mailing lists.
|Without Mailman3 mailing lists and archives wouldn't work.
|`mailman01.iad2.fedoraproject.org`
|`mailman01.rdu3.fedoraproject.org`
|https://pagure.io/pagure[Pagure]
|Pagure is a git forge used by the Fedora Project. It is a main component of Fedora dist-git as well.
@ -214,13 +214,13 @@ with fedora messages and affect whole Fedora infrastructure.
|MediaWiki is the wiki software used by Fedora. It powers the
https://fedoraproject.org/wiki[Fedora wiki pages].
|Without it, the Fedora wiki pages wouldn't run.
|`wiki0\{1-2}.iad2.fedoraproject.org`
|`wiki0\{1-2}.rdu3.fedoraproject.org`
|https://github.com/fedora-infra/fmn[FMN]
|FMN (FedMSG Notifications) is an application that listens for messages in Fedora infra and, based on the
message, sends notifications to users in Fedora projects.
|Without FMN no notifications would be sent in Fedora Infra.
|`notifs-web01.iad2.fedoraproject.org` `notifs-backend01.iad2.fedoraproject.org`
|`notifs-web01.rdu3.fedoraproject.org` `notifs-backend01.rdu3.fedoraproject.org`
|===
== Fedora Release
@ -232,7 +232,7 @@ message sends notifications to users in Fedora projects.
|Pungi is a tool that creates composes of Fedora. It makes sure that all required
packages are included in the compose and the compose is available after finishing.
|Without pungi it would be much harder to create composes of Fedora.
|`compose-x86-01.iad2.fedoraproject.org` `compose-branched01.iad2.fedoraproject.org` `compose-rawhide01.iad2.fedoraproject.org` `compose-iot01.iad2.fedoraproject.org`
|`compose-x86-01.rdu3.fedoraproject.org` `compose-branched01.rdu3.fedoraproject.org` `compose-rawhide01.rdu3.fedoraproject.org` `compose-iot01.rdu3.fedoraproject.org`
|https://github.com/fedora-infra/mirrormanager2[mirrormanager]
|Mirrormanager is used to manage all the mirrors that provide Fedora packages.


@ -115,7 +115,7 @@ Outside of Fedora Infrastructure to fix.
* Libera.chat IRC https://libera.chat/
* Matrix https://chat.fedoraproject.org/
* Message tagging service
* Network connectivity to IAD2/RDU2
* Network connectivity to RDU3, RDU2-CC
* OpenQA
* Paste https://paste.fedoraproject.org/
* Retrace https://retrace.fedoraproject.org

View file

@ -41,7 +41,7 @@ view logs and request debugging containers from os-control01 or your local machi
example, to view the logs of a deployment in staging:
....
$ ssh os-control01.iad2.fedoraproject.org
$ ssh os-control01.rdu3.fedoraproject.org
$ oc login api.ocp.fedoraproject.org:6443
You must obtain an API token by visiting https://oauth-openshift.apps.ocp.fedoraproject.org/oauth/token/request
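# A sketch of the next steps once logged in; "my-app" is a placeholder
# project/deployment name, not a real one
$ oc project my-app
$ oc logs --follow deployment/my-app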


@ -23,8 +23,8 @@ NOTE: Should these best practices be something maintained by the kube-sig? If so
=== Fedora Infra Clusters
Fedora Infra currently manages the following three Openshift clusters:
- Staging (Self Hosted in IAD2, deploy apps via ansible): https://console-openshift-console.apps.ocp.stg.fedoraproject.org/
- Production (Self Hosted in IAD2, deploy apps via ansible): https://console-openshift-console.apps.ocp.fedoraproject.org/
- Staging (Self Hosted in RDU3, deploy apps via ansible): https://console-openshift-console.apps.ocp.stg.fedoraproject.org/
- Production (Self Hosted in RDU3, deploy apps via ansible): https://console-openshift-console.apps.ocp.fedoraproject.org/
- Communishift (RH Openshift Dedicated deployed in AWS, apps deployed by individual app maintainers in various ways): https://console-openshift-console.apps.fedora.cj14.p1.openshiftapps.com/
Access to the clusters is managed via the Fedora account system (FAS). All Fedora users may authenticate, but access to each project is managed on an app per app basis. Open a ticket at [7] requesting access to a particular app, but ensure you first get approval from the existing app owners.

View file

@ -5,7 +5,7 @@ using ssh.
This can be done using:
----
ssh -L 15672:localhost:15672 rabbitmq01.iad2.fedoraproject.org
ssh -L 15672:localhost:15672 rabbitmq01.rdu3.fedoraproject.org
----
You can then visit: http://localhost:15672/ to see the web UI for rabbitmq.


@ -8,11 +8,11 @@ archives section (`/pub/archive/fedora/linux`)
== Steps Involved
[arabic]
. log into batcave01.iad2.fedoraproject.org and ssh to bodhi-backend01
. log into batcave01.rdu3.fedoraproject.org and ssh to bodhi-backend01
+
[source]
----
$ sudo -i ssh root@bodhi-backend01.iad2.fedoraproject.org
$ sudo -i ssh root@bodhi-backend01.rdu3.fedoraproject.org
# su - ftpsync
----
@ -120,7 +120,7 @@ to get a DBA to update the backend to fix items.
+
[source]
----
$ sudo -i ssh bodhi-backend01.iad2.fedoraproject.org
$ sudo -i ssh bodhi-backend01.rdu3.fedoraproject.org
$ cd /pub/fedora/linux
$ cd releases/21
$ ls # make sure you have stuff here


@ -2,9 +2,9 @@
Robosignatory in production does not have ssh enabled so we cannot connect to the box to
check the logs.
However we can use log1.iad2.fedoraproject.org to check the logs of the service.
However, we can use log01.rdu3.fedoraproject.org to check the logs of the service.
----
$ ssh log01.iad2.fedoraproject.org
$ grep autosign01.iad2.fedoraproject.org /var/log/merged/messages.log
$ ssh log01.rdu3.fedoraproject.org
$ grep autosign01.rdu3.fedoraproject.org /var/log/merged/messages.log
----


@ -1,6 +1,6 @@
= Creating a new mailing list
. Log into mailman01.iad2.fedoraproject.org
. Log into mailman01.rdu3.fedoraproject.org
. Run the following command:
+
----


@ -9,7 +9,7 @@ different types of groups in Fedora.
Dist-git does not allow regular users to create groups; only admins can do it via the
`pagure-admin` CLI tool.
To create a group, you can simply run the command on pkgs01.iad2.fedoraproject.org:
To create a group, you can simply run the command on pkgs01.rdu3.fedoraproject.org:
+
----
pagure-admin new-group <group_name> <username of requester> \


@ -3,7 +3,7 @@
A playbook is available to destroy a virtual instance.
----
sudo -i ansible-playbook -vvv /srv/web/infra/ansible/playbooks/destroy_virt_inst.yml -e target=osbs-master01.stg.iad2.fedoraproject.org
sudo -i ansible-playbook -vvv /srv/web/infra/ansible/playbooks/destroy_virt_inst.yml -e target=osbs-master01.stg.rdu3.fedoraproject.org
----
In some cases the instance that you are trying to delete was not in a clean state. You could then try to run the following:
@ -11,18 +11,18 @@ In some cases the instance that you are trying to delete was not in a clean stat
. Undefine the instance
+
----
sudo -i ssh virthost04.stg.iad2.fedoraproject.org 'virsh undefine osbs-node02.stg.phx2.fedoraproject.org'
sudo -i ssh virthost04.stg.rdu3.fedoraproject.org 'virsh undefine osbs-node02.stg.phx2.fedoraproject.org'
----
. Remove the logical volume
+
----
sudo -i ssh virthost04.stg.iad2.fedoraproject.org 'lvremove /dev/vg_guests/bodhi-backend01.phx2.fedoraproject.org'
sudo -i ssh virthost04.stg.rdu3.fedoraproject.org 'lvremove /dev/vg_guests/bodhi-backend01.phx2.fedoraproject.org'
----
To connect to a virtual instance console you need to first ssh to the virthost box. For example
----
sudo -i ssh virthost04.stg.iad2.fedoraproject.org
(virthost04.stg.iad2.fedoraproject.org) virsh console osbs-node02.stg.iad2.fedoraproject.org
sudo -i ssh virthost04.stg.rdu3.fedoraproject.org
(virthost04.stg.rdu3.fedoraproject.org) virsh console osbs-node02.stg.rdu3.fedoraproject.org
----


@ -14,5 +14,5 @@ following actions.
=== Command line
. ssh to `ipa01.iad2.fedoraproject.org`
. ssh to `ipa01.rdu3.fedoraproject.org`
. run `ipa user-disable <user>`
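Putting the steps together as a console session (a sketch; the kinit step mirrors the other IPA SOPs):

----
$ ssh ipa01.rdu3.fedoraproject.org
$ kinit admin@FEDORAPROJECT.ORG
$ ipa user-disable <user>
----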


@ -7,7 +7,7 @@ accessible to a very limited set of people.
====
. Check the status of robosignatory:
.. Log into `autosign01.stg.iad2.fedoraproject.org`
.. Log into `autosign01.stg.rdu3.fedoraproject.org`
.. Check the logs:
+
----
@ -21,7 +21,7 @@ systemctl restart fm-consumer@robosignatory.service
----
. Check the status of the signing-vault
.. Log into `sign-vault01.stg.iad2.fedoraproject.org`
.. Log into `sign-vault01.stg.rdu3.fedoraproject.org`
.. Check the status of sigul server:
+
----
@ -35,7 +35,7 @@ sigul_server -dvv
----
. Check the status of the signing-bridge
.. Log into `sign-bridge01.stg.iad2.fedoraproject.org`
.. Log into `sign-bridge01.stg.rdu3.fedoraproject.org`
.. Check the status of the sigul bridge:
+
----


@ -23,17 +23,17 @@ The output will look somthing like this:
root GuestVolGroup00 -wi-ao---- <58.59g
docker-pool vg-docker twi-a-t--- 58.32g 42.00 18.44
os-node02.stg.iad2.fedoraproject.org | CHANGED | rc=0 >>
os-node02.stg.rdu3.fedoraproject.org | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root GuestVolGroup00 -wi-ao---- <58.59g
docker-pool vg-docker twi-a-t--- <48.60g 32.37 14.81
os-node01.stg.iad2.fedoraproject.org | CHANGED | rc=0 >>
os-node01.stg.rdu3.fedoraproject.org | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root GuestVolGroup00 -wi-ao---- <58.59g
docker-pool vg-docker twi-a-t--- 58.32g 40.75 17.38
os-node04.stg.iad2.fedoraproject.org | CHANGED | rc=0 >>
os-node04.stg.rdu3.fedoraproject.org | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root GuestVolGroup00 -wi-ao---- <58.59g
docker-pool vg-docker twi-a-t--- 58.32g 90.32 28.35


@ -30,87 +30,87 @@ You need to have the following information handy to follow the process.
.IDRAC Management Controllers FQDN/IP Mapping (Recorded at 30 Sep 2024)
|==================================================================================================
| | FQDN | MGMT FQDN
| 1 | autosign02.iad2.fedoraproject.org | autosign02.mgmt.iad2.fedoraproject.org
| 2 | backup01.iad2.fedoraproject.org | backup01.mgmt.iad2.fedoraproject.org
| 3 | bkernel01.iad2.fedoraproject.org | bkernel01.mgmt.iad2.fedoraproject.org
| 4 | bkernel02.iad2.fedoraproject.org | bkernel02.mgmt.iad2.fedoraproject.org
| 5 | buildhw-x86-01.iad2.fedoraproject.org | buildhw-x86-01.mgmt.iad2.fedoraproject.org
| 6 | buildhw-x86-02.iad2.fedoraproject.org | buildhw-x86-02.mgmt.iad2.fedoraproject.org
| 7 | buildhw-x86-03.iad2.fedoraproject.org | buildhw-x86-03.mgmt.iad2.fedoraproject.org
| 8 | buildhw-x86-04.iad2.fedoraproject.org | buildhw-x86-04.mgmt.iad2.fedoraproject.org
| 9 | buildhw-x86-05.iad2.fedoraproject.org | buildhw-x86-05.mgmt.iad2.fedoraproject.org
| 10 | buildhw-x86-06.iad2.fedoraproject.org | buildhw-x86-06.mgmt.iad2.fedoraproject.org
| 11 | buildhw-x86-07.iad2.fedoraproject.org | buildhw-x86-07.mgmt.iad2.fedoraproject.org
| 12 | buildhw-x86-08.iad2.fedoraproject.org | buildhw-x86-08.mgmt.iad2.fedoraproject.org
| 13 | buildhw-x86-09.iad2.fedoraproject.org | buildhw-x86-09.mgmt.iad2.fedoraproject.org
| 14 | buildhw-x86-10.iad2.fedoraproject.org | buildhw-x86-10.mgmt.iad2.fedoraproject.org
| 15 | buildhw-x86-11.iad2.fedoraproject.org | buildhw-x86-11.mgmt.iad2.fedoraproject.org
| 16 | buildhw-x86-12.iad2.fedoraproject.org | buildhw-x86-12.mgmt.iad2.fedoraproject.org
| 17 | buildhw-x86-13.iad2.fedoraproject.org | buildhw-x86-13.mgmt.iad2.fedoraproject.org
| 18 | buildhw-x86-14.iad2.fedoraproject.org | buildhw-x86-14.mgmt.iad2.fedoraproject.org
| 19 | buildhw-x86-15.iad2.fedoraproject.org | buildhw-x86-15.mgmt.iad2.fedoraproject.org
| 20 | buildhw-x86-16.iad2.fedoraproject.org | buildhw-x86-16.mgmt.iad2.fedoraproject.org
| 21 | bvmhost-x86-01.iad2.fedoraproject.org | bvmhost-x86-01.mgmt.iad2.fedoraproject.org
| 22 | bvmhost-x86-01.stg.iad2.fedoraproject.org | bvmhost-x86-01.stg.mgmt.iad2.fedoraproject.org
| 23 | bvmhost-x86-02.iad2.fedoraproject.org | bvmhost-x86-02.mgmt.iad2.fedoraproject.org
| 24 | bvmhost-x86-02.stg.iad2.fedoraproject.org | bvmhost-x86-02.stg.mgmt.iad2.fedoraproject.org
| 25 | bvmhost-x86-03.iad2.fedoraproject.org | bvmhost-x86-03.mgmt.iad2.fedoraproject.org
| 26 | bvmhost-x86-03.stg.iad2.fedoraproject.org | bvmhost-x86-03.stg.mgmt.iad2.fedoraproject.org
| 27 | bvmhost-x86-04.iad2.fedoraproject.org | bvmhost-x86-04.mgmt.iad2.fedoraproject.org
| 28 | bvmhost-x86-05.iad2.fedoraproject.org | bvmhost-x86-05.mgmt.iad2.fedoraproject.org
| 29 | bvmhost-x86-05.stg.iad2.fedoraproject.org | bvmhost-x86-05.stg.mgmt.iad2.fedoraproject.org
| 30 | bvmhost-x86-06.iad2.fedoraproject.org | bvmhost-x86-06.mgmt.iad2.fedoraproject.org
| 31 | bvmhost-x86-07.iad2.fedoraproject.org | bvmhost-x86-07.mgmt.iad2.fedoraproject.org
| 32 | bvmhost-x86-08.iad2.fedoraproject.org | bvmhost-x86-08.mgmt.iad2.fedoraproject.org
| 1 | autosign02.rdu3.fedoraproject.org | autosign02.mgmt.rdu3.fedoraproject.org
| 2 | backup01.rdu3.fedoraproject.org | backup01.mgmt.rdu3.fedoraproject.org
| 3 | bkernel01.rdu3.fedoraproject.org | bkernel01.mgmt.rdu3.fedoraproject.org
| 4 | bkernel02.rdu3.fedoraproject.org | bkernel02.mgmt.rdu3.fedoraproject.org
| 5 | buildhw-x86-01.rdu3.fedoraproject.org | buildhw-x86-01.mgmt.rdu3.fedoraproject.org
| 6 | buildhw-x86-02.rdu3.fedoraproject.org | buildhw-x86-02.mgmt.rdu3.fedoraproject.org
| 7 | buildhw-x86-03.rdu3.fedoraproject.org | buildhw-x86-03.mgmt.rdu3.fedoraproject.org
| 8 | buildhw-x86-04.rdu3.fedoraproject.org | buildhw-x86-04.mgmt.rdu3.fedoraproject.org
| 9 | buildhw-x86-05.rdu3.fedoraproject.org | buildhw-x86-05.mgmt.rdu3.fedoraproject.org
| 10 | buildhw-x86-06.rdu3.fedoraproject.org | buildhw-x86-06.mgmt.rdu3.fedoraproject.org
| 11 | buildhw-x86-07.rdu3.fedoraproject.org | buildhw-x86-07.mgmt.rdu3.fedoraproject.org
| 12 | buildhw-x86-08.rdu3.fedoraproject.org | buildhw-x86-08.mgmt.rdu3.fedoraproject.org
| 13 | buildhw-x86-09.rdu3.fedoraproject.org | buildhw-x86-09.mgmt.rdu3.fedoraproject.org
| 14 | buildhw-x86-10.rdu3.fedoraproject.org | buildhw-x86-10.mgmt.rdu3.fedoraproject.org
| 15 | buildhw-x86-11.rdu3.fedoraproject.org | buildhw-x86-11.mgmt.rdu3.fedoraproject.org
| 16 | buildhw-x86-12.rdu3.fedoraproject.org | buildhw-x86-12.mgmt.rdu3.fedoraproject.org
| 17 | buildhw-x86-13.rdu3.fedoraproject.org | buildhw-x86-13.mgmt.rdu3.fedoraproject.org
| 18 | buildhw-x86-14.rdu3.fedoraproject.org | buildhw-x86-14.mgmt.rdu3.fedoraproject.org
| 19 | buildhw-x86-15.rdu3.fedoraproject.org | buildhw-x86-15.mgmt.rdu3.fedoraproject.org
| 20 | buildhw-x86-16.rdu3.fedoraproject.org | buildhw-x86-16.mgmt.rdu3.fedoraproject.org
| 21 | bvmhost-x86-01.rdu3.fedoraproject.org | bvmhost-x86-01.mgmt.rdu3.fedoraproject.org
| 22 | bvmhost-x86-01.stg.rdu3.fedoraproject.org | bvmhost-x86-01.stg.mgmt.rdu3.fedoraproject.org
| 23 | bvmhost-x86-02.rdu3.fedoraproject.org | bvmhost-x86-02.mgmt.rdu3.fedoraproject.org
| 24 | bvmhost-x86-02.stg.rdu3.fedoraproject.org | bvmhost-x86-02.stg.mgmt.rdu3.fedoraproject.org
| 25 | bvmhost-x86-03.rdu3.fedoraproject.org | bvmhost-x86-03.mgmt.rdu3.fedoraproject.org
| 26 | bvmhost-x86-03.stg.rdu3.fedoraproject.org | bvmhost-x86-03.stg.mgmt.rdu3.fedoraproject.org
| 27 | bvmhost-x86-04.rdu3.fedoraproject.org | bvmhost-x86-04.mgmt.rdu3.fedoraproject.org
| 28 | bvmhost-x86-05.rdu3.fedoraproject.org | bvmhost-x86-05.mgmt.rdu3.fedoraproject.org
| 29 | bvmhost-x86-05.stg.rdu3.fedoraproject.org | bvmhost-x86-05.stg.mgmt.rdu3.fedoraproject.org
| 30 | bvmhost-x86-06.rdu3.fedoraproject.org | bvmhost-x86-06.mgmt.rdu3.fedoraproject.org
| 31 | bvmhost-x86-07.rdu3.fedoraproject.org | bvmhost-x86-07.mgmt.rdu3.fedoraproject.org
| 32 | bvmhost-x86-08.rdu3.fedoraproject.org | bvmhost-x86-08.mgmt.rdu3.fedoraproject.org
| 33 | ibiblio02.fedoraproject.org | ibiblio02.fedoraproject.org
| 34 | ibiblio05.fedoraproject.org | ibiblio05.fedoraproject.org
| 35 | kernel01.iad2.fedoraproject.org | kernel01.mgmt.iad2.fedoraproject.org
| 36 | kernel02.iad2.fedoraproject.org | kernel02.mgmt.iad2.fedoraproject.org
| 37 | openqa-x86-worker01.iad2.fedoraproject.org | openqa-x86-worker01.mgmt.iad2.fedoraproject.org
| 38 | openqa-x86-worker02.iad2.fedoraproject.org | openqa-x86-worker02.mgmt.iad2.fedoraproject.org
| 39 | openqa-x86-worker03.iad2.fedoraproject.org | openqa-x86-worker03.mgmt.iad2.fedoraproject.org
| 40 | openqa-x86-worker04.iad2.fedoraproject.org | openqa-x86-worker04.mgmt.iad2.fedoraproject.org
| 41 | openqa-x86-worker05.iad2.fedoraproject.org | openqa-x86-worker05.mgmt.iad2.fedoraproject.org
| 42 | openqa-x86-worker06.iad2.fedoraproject.org | openqa-x86-worker06.mgmt.iad2.fedoraproject.org
| 35 | kernel01.rdu3.fedoraproject.org | kernel01.mgmt.rdu3.fedoraproject.org
| 36 | kernel02.rdu3.fedoraproject.org | kernel02.mgmt.rdu3.fedoraproject.org
| 37 | openqa-x86-worker01.rdu3.fedoraproject.org | openqa-x86-worker01.mgmt.rdu3.fedoraproject.org
| 38 | openqa-x86-worker02.rdu3.fedoraproject.org | openqa-x86-worker02.mgmt.rdu3.fedoraproject.org
| 39 | openqa-x86-worker03.rdu3.fedoraproject.org | openqa-x86-worker03.mgmt.rdu3.fedoraproject.org
| 40 | openqa-x86-worker04.rdu3.fedoraproject.org | openqa-x86-worker04.mgmt.rdu3.fedoraproject.org
| 41 | openqa-x86-worker05.rdu3.fedoraproject.org | openqa-x86-worker05.mgmt.rdu3.fedoraproject.org
| 42 | openqa-x86-worker06.rdu3.fedoraproject.org | openqa-x86-worker06.mgmt.rdu3.fedoraproject.org
| 43 | osuosl02.fedoraproject.org | osuosl02.fedoraproject.org
| 44 | qvmhost-x86-01.iad2.fedoraproject.org | qvmhost-x86-01.mgmt.iad2.fedoraproject.org
| 45 | qvmhost-x86-02.iad2.fedoraproject.org | qvmhost-x86-02.mgmt.iad2.fedoraproject.org
| 46 | sign-vault01.iad2.fedoraproject.org | sign-vault01.mgmt.iad2.fedoraproject.org
| 47 | sign-vault02.iad2.fedoraproject.org | sign-vault02.mgmt.iad2.fedoraproject.org
| 44 | qvmhost-x86-01.rdu3.fedoraproject.org | qvmhost-x86-01.mgmt.rdu3.fedoraproject.org
| 45 | qvmhost-x86-02.rdu3.fedoraproject.org | qvmhost-x86-02.mgmt.rdu3.fedoraproject.org
| 46 | sign-vault01.rdu3.fedoraproject.org | sign-vault01.mgmt.rdu3.fedoraproject.org
| 47 | sign-vault02.rdu3.fedoraproject.org | sign-vault02.mgmt.rdu3.fedoraproject.org
| 48 | virthost-cc-rdu02.fedoraproject.org | virthost-cc-rdu02.fedoraproject.org
| 49 | vmhost-x86-01.iad2.fedoraproject.org | vmhost-x86-01.mgmt.iad2.fedoraproject.org
| 50 | vmhost-x86-01.stg.iad2.fedoraproject.org | vmhost-x86-01.stg.mgmt.iad2.fedoraproject.org
| 51 | vmhost-x86-02.iad2.fedoraproject.org | vmhost-x86-02.mgmt.iad2.fedoraproject.org
| 52 | vmhost-x86-02.stg.iad2.fedoraproject.org | vmhost-x86-02.stg.mgmt.iad2.fedoraproject.org
| 53 | vmhost-x86-03.iad2.fedoraproject.org | vmhost-x86-03.mgmt.iad2.fedoraproject.org
| 54 | vmhost-x86-04.iad2.fedoraproject.org | vmhost-x86-04.mgmt.iad2.fedoraproject.org
| 55 | vmhost-x86-05.iad2.fedoraproject.org | vmhost-x86-05.mgmt.iad2.fedoraproject.org
| 56 | vmhost-x86-05.stg.iad2.fedoraproject.org | vmhost-x86-05.stg.mgmt.iad2.fedoraproject.org
| 57 | vmhost-x86-06.iad2.fedoraproject.org | vmhost-x86-06.mgmt.iad2.fedoraproject.org
| 58 | vmhost-x86-06.stg.iad2.fedoraproject.org | vmhost-x86-06.stg.mgmt.iad2.fedoraproject.org
| 59 | vmhost-x86-07.iad2.fedoraproject.org | vmhost-x86-07.mgmt.iad2.fedoraproject.org
| 60 | vmhost-x86-07.stg.iad2.fedoraproject.org | vmhost-x86-07.stg.mgmt.iad2.fedoraproject.org
| 61 | vmhost-x86-08.iad2.fedoraproject.org | vmhost-x86-08.mgmt.iad2.fedoraproject.org
| 62 | vmhost-x86-08.stg.iad2.fedoraproject.org | vmhost-x86-08.stg.mgmt.iad2.fedoraproject.org
| 63 | vmhost-x86-09.stg.iad2.fedoraproject.org | vmhost-x86-09.stg.mgmt.iad2.fedoraproject.org
| 64 | vmhost-x86-11.stg.iad2.fedoraproject.org | vmhost-x86-11.stg.mgmt.iad2.fedoraproject.org
| 65 | vmhost-x86-12.stg.iad2.fedoraproject.org | vmhost-x86-12.stg.mgmt.iad2.fedoraproject.org
| 49 | vmhost-x86-01.rdu3.fedoraproject.org | vmhost-x86-01.mgmt.rdu3.fedoraproject.org
| 50 | vmhost-x86-01.stg.rdu3.fedoraproject.org | vmhost-x86-01.stg.mgmt.rdu3.fedoraproject.org
| 51 | vmhost-x86-02.rdu3.fedoraproject.org | vmhost-x86-02.mgmt.rdu3.fedoraproject.org
| 52 | vmhost-x86-02.stg.rdu3.fedoraproject.org | vmhost-x86-02.stg.mgmt.rdu3.fedoraproject.org
| 53 | vmhost-x86-03.rdu3.fedoraproject.org | vmhost-x86-03.mgmt.rdu3.fedoraproject.org
| 54 | vmhost-x86-04.rdu3.fedoraproject.org | vmhost-x86-04.mgmt.rdu3.fedoraproject.org
| 55 | vmhost-x86-05.rdu3.fedoraproject.org | vmhost-x86-05.mgmt.rdu3.fedoraproject.org
| 56 | vmhost-x86-05.stg.rdu3.fedoraproject.org | vmhost-x86-05.stg.mgmt.rdu3.fedoraproject.org
| 57 | vmhost-x86-06.rdu3.fedoraproject.org | vmhost-x86-06.mgmt.rdu3.fedoraproject.org
| 58 | vmhost-x86-06.stg.rdu3.fedoraproject.org | vmhost-x86-06.stg.mgmt.rdu3.fedoraproject.org
| 59 | vmhost-x86-07.rdu3.fedoraproject.org | vmhost-x86-07.mgmt.rdu3.fedoraproject.org
| 60 | vmhost-x86-07.stg.rdu3.fedoraproject.org | vmhost-x86-07.stg.mgmt.rdu3.fedoraproject.org
| 61 | vmhost-x86-08.rdu3.fedoraproject.org | vmhost-x86-08.mgmt.rdu3.fedoraproject.org
| 62 | vmhost-x86-08.stg.rdu3.fedoraproject.org | vmhost-x86-08.stg.mgmt.rdu3.fedoraproject.org
| 63 | vmhost-x86-09.stg.rdu3.fedoraproject.org | vmhost-x86-09.stg.mgmt.rdu3.fedoraproject.org
| 64 | vmhost-x86-11.stg.rdu3.fedoraproject.org | vmhost-x86-11.stg.mgmt.rdu3.fedoraproject.org
| 65 | vmhost-x86-12.stg.rdu3.fedoraproject.org | vmhost-x86-12.stg.mgmt.rdu3.fedoraproject.org
| 66 | vmhost-x86-cc01.rdu-cc.fedoraproject.org | vmhost-x86-cc01.rdu-cc.fedoraproject.org
| 67 | vmhost-x86-cc02.rdu-cc.fedoraproject.org | vmhost-x86-cc02.rdu-cc.fedoraproject.org
| 68 | vmhost-x86-cc03.rdu-cc.fedoraproject.org | vmhost-x86-cc03.rdu-cc.fedoraproject.org
| 69 | vmhost-x86-cc05.rdu-cc.fedoraproject.org | vmhost-x86-cc05.rdu-cc.fedoraproject.org
| 70 | vmhost-x86-cc06.rdu-cc.fedoraproject.org | vmhost-x86-cc06.rdu-cc.fedoraproject.org
| 71 | worker02.ocp.iad2.fedoraproject.org | worker02.ocp.mgmt.iad2.fedoraproject.org
| 72 | worker04.iad2.fedoraproject.org | worker04.mgmt.iad2.fedoraproject.org
| 73 | worker04-stg.ocp.iad2.fedoraproject.org | worker04-stg.ocp.mgmt.iad2.fedoraproject.org
| 74 | worker04.ocp.iad2.fedoraproject.org | worker04.ocp.mgmt.iad2.fedoraproject.org
| 75 | worker05.iad2.fedoraproject.org | worker05.mgmt.iad2.fedoraproject.org
| 76 | worker05.ocp.iad2.fedoraproject.org | worker05.ocp.mgmt.iad2.fedoraproject.org
| 77 | worker06.ocp.iad2.fedoraproject.org | worker06.ocp.mgmt.iad2.fedoraproject.org
| 71 | worker02.ocp.rdu3.fedoraproject.org | worker02.ocp.mgmt.rdu3.fedoraproject.org
| 72 | worker04.rdu3.fedoraproject.org | worker04.mgmt.rdu3.fedoraproject.org
| 73 | worker04-stg.ocp.rdu3.fedoraproject.org | worker04-stg.ocp.mgmt.rdu3.fedoraproject.org
| 74 | worker04.ocp.rdu3.fedoraproject.org | worker04.ocp.mgmt.rdu3.fedoraproject.org
| 75 | worker05.rdu3.fedoraproject.org | worker05.mgmt.rdu3.fedoraproject.org
| 76 | worker05.ocp.rdu3.fedoraproject.org | worker05.ocp.mgmt.rdu3.fedoraproject.org
| 77 | worker06.ocp.rdu3.fedoraproject.org | worker06.ocp.mgmt.rdu3.fedoraproject.org
|==================================================================================================
4. For this instance, we would be performing a firmware upgrade on the iDRAC
management controller of the FQDN `autosign02.iad2.fedoraproject.org`.
management controller of the FQDN `autosign02.rdu3.fedoraproject.org`.
5. Ping the management FQDN from the `batcave01` session to obtain its internal
IP address `a.b.c.d` and open it in a web browser.
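For example, from batcave01 (a sketch using the host from step 4):

----
$ ssh batcave01.rdu3.fedoraproject.org
$ ping -c 3 autosign02.mgmt.rdu3.fedoraproject.org
----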


@ -57,7 +57,7 @@ NOTE: For now it's not easy to find all the projects the user is watching.
=== Command line
. ssh to ipa01.iad2.fedoraproject.org
. ssh to ipa01.rdu3.fedoraproject.org
. run `ipa user-disable <user>`


@ -14,7 +14,7 @@ scripts/generate-oidc-token osbs -e 365 -s https://id.fedoraproject.org/scope/gr
Follow the instructions given by the script and run the SQL command on the ipsilon database server:
----
ssh db-fas01.iad2.fedoraproject.org
ssh db-fas01.rdu3.fedoraproject.org
sudo -u postgres -i ipsilon
ipsilon=# BEGIN;
....


@ -51,7 +51,7 @@ python ../releng/scripts/distgit-branch-unused.py <branch>
.. Go to pkgs01 as root
+
----
ssh pkgs01.iad2.fedoraproject.org
ssh pkgs01.rdu3.fedoraproject.org
----
.. Go to the git repository:


@ -1,6 +1,6 @@
= How to restart a server in the datacenter
This guide is for restarting machines in IAD2 datacenter using `ipmitool`.
This guide is for restarting machines in the RDU3 datacenter using `ipmitool`.
This only applies to bare hardware instances
(see link:https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/hardware[ansible inventory]).
@ -13,13 +13,13 @@ passwords to be able to follow this guide.
. Login to noc01
+
----
$ ssh noc01.iad2.fedoraproject.org
$ ssh noc01.rdu3.fedoraproject.org
----
+
. Restart machine
+
----
$ ipmitool -U admin -H <host>.mgmt.iad2.fedoraproject.org -I lanplus chassis power reset
$ ipmitool -U admin -H <host>.mgmt.rdu3.fedoraproject.org -I lanplus chassis power reset
----
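To verify the result, the chassis power state can be queried the same way (a sketch):

----
$ ipmitool -U admin -H <host>.mgmt.rdu3.fedoraproject.org -I lanplus chassis power status
----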
[NOTE]
@ -33,7 +33,7 @@ example `vmhost-x86-01.stg` will be `vmhost-x86-01-stg`.
[NOTE]
====
For access to console use
`ipmitool -U admin -H <host>.mgmt.iad2.fedoraproject.org -I lanplus sol activate`.
`ipmitool -U admin -H <host>.mgmt.rdu3.fedoraproject.org -I lanplus sol activate`.
To exit the console you need to use `~.` This is an SSH disconnect sequence, so you need to add as
many `~` as there are ssh jump hosts in between.


@ -97,7 +97,7 @@ $ screen
+
The first time any user account executes a compose the pungi-fedora git
repository must be cloned. The compose candidate script that invokes
pungi should be run from `compose-x86-01.iad2.fedoraproject.org`.
pungi should be run from `compose-x86-01.rdu3.fedoraproject.org`.
+
....
$ git clone ssh://git@pagure.io/pungi-fedora.git


@ -14,7 +14,7 @@ These are to be running on `bodhi-backend01` machine.
[source, bash]
----
$ ssh bodhi-backend01.iad2.fedoraproject.org
$ ssh bodhi-backend01.rdu3.fedoraproject.org
----
[source, bash]
@ -475,7 +475,7 @@ You may wish to do this in a tempoary directory to make cleaning it up easy.
=== Koji
Log into koji02.iad2.fedoraproject.org by way of bastion.fedoraproject.org.
Log into koji02.rdu3.fedoraproject.org by way of bastion.fedoraproject.org.
Verify that ``/etc/koji-gc/koji-gc.conf`` has the new key in it.


@ -86,7 +86,7 @@ should increment each time a new compose is created.
. Log into the compose backend
+
....
$ ssh compose-x86-01.iad2.fedoraproject.org
$ ssh compose-x86-01.rdu3.fedoraproject.org
....
. Open a screen session
+
@ -97,7 +97,7 @@ $ screen
+
The first time any user account executes a compose the pungi-fedora git
repository must be cloned. The compose candidate script that invokes
pungi should be run from `compose-x86-01.iad2.fedoraproject.org`.
pungi should be run from `compose-x86-01.rdu3.fedoraproject.org`.
+
....
$ git clone ssh://git@pagure.io/pungi-fedora.git


@ -169,7 +169,7 @@ The `mass-rebuild.py` script takes care of:
+
____
....
$ ssh compose-branched01.iad2.fedoraproject.org
$ ssh compose-branched01.rdu3.fedoraproject.org
....
____
. Start a terminal multiplexer (this ensures that if the user gets interrupted for any reason, the script can continue in the tmux session)
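+
For example (tmux, as referenced elsewhere in this SOP):
+
....
$ tmux
....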
@ -203,7 +203,7 @@ publicly available URLs for stakeholders to monitor.
+
____
....
$ ssh compose-x86-02.iad2.fedoraproject.org / compose-branched01.iad2.fedoraproject.org:22
$ ssh compose-x86-02.rdu3.fedoraproject.org / compose-branched01.rdu3.fedoraproject.org:22
....
____
. Start a terminal multiplexer (this ensures that if the user gets interrupted for any reason, the script can continue in the tmux session)


@ -68,7 +68,7 @@ a sidetag will leave it in a weird state where it cannot be removed.
* Purge from disk the signed copies of rpms that are signed with the
EOL'd release key. To achieve this, add the release key to
*koji_cleanup_signed.py* script in https://pagure.io/releng[releng] repo
and run the script on compose-branched01.iad2.fedoraproject.org:
and run the script on compose-branched01.rdu3.fedoraproject.org:
....
tmux
@ -178,7 +178,7 @@ Otherwise, alert the quality team to do so.
+
____
....
ssh bodhi-backend01.iad2.fedoraproject.org
ssh bodhi-backend01.rdu3.fedoraproject.org
sudo su
su - ftpsync
....


@ -53,7 +53,7 @@ Remove release candidates and unnecessary composes that are no longer needed to
[discrete]
=== Steps
. SSH into `bodhi-backend01.iad2.fedoraproject.org` or any server where Koji is mounted.
. SSH into `bodhi-backend01.rdu3.fedoraproject.org` or any server where Koji is mounted.
. Perform the following:
** Remove all directories related to Beta and Final RCs in `/pub/alt/stage/`.
** Clean up old composes by removing all but the latest in `/mnt/koji/compose/branched/` and `/mnt/koji/compose/{branched}/`.


@ -81,8 +81,8 @@ The recommended method to achieve this is by adding firewall rules to both koji0
[source,bash,subs="attributes"]
----
iptables -I INPUT -m tcp -p tcp --dport 80 -s proxy01.iad2.fedoraproject.org -j REJECT
iptables -I INPUT -m tcp -p tcp --dport 80 -s proxy10.iad2.fedoraproject.org -j REJECT
iptables -I INPUT -m tcp -p tcp --dport 80 -s proxy01.rdu3.fedoraproject.org -j REJECT
iptables -I INPUT -m tcp -p tcp --dport 80 -s proxy10.rdu3.fedoraproject.org -j REJECT
----
These commands reject incoming traffic on port 80 from the specified proxies, preventing external submissions. Internal connections routed via proxy101 and proxy110 will continue to function as expected.
@ -91,8 +91,8 @@ To reverse the firewall changes and allow external submissions again, use:
[source,bash,subs="attributes"]
----
iptables -D INPUT -m tcp -p tcp --dport 80 -s proxy01.iad2.fedoraproject.org -j REJECT
iptables -D INPUT -m tcp -p tcp --dport 80 -s proxy10.iad2.fedoraproject.org -j REJECT
iptables -D INPUT -m tcp -p tcp --dport 80 -s proxy01.rdu3.fedoraproject.org -j REJECT
iptables -D INPUT -m tcp -p tcp --dport 80 -s proxy10.rdu3.fedoraproject.org -j REJECT
----
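To double-check which rules are currently in place on a koji host, something like this works (a sketch):

[source,bash,subs="attributes"]
----
iptables -L INPUT -n --line-numbers | grep REJECT
----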


@ -58,14 +58,14 @@ tags where [.title-ref]#xx# represents the fedora release)
`signingkeyid`: The short hash of the key for this Fedora branch.
The composes are stored under `/srv/odcs/private/` dir on
`odcs-backend-releng01.iad2.fedoraproject.org`
`odcs-backend-releng01.rdu3.fedoraproject.org`
==== Pull the compose to your local machine
We need to extract the rpms and tar them to send them to Cisco. In order
to do that, we first need to pull the compose to our local machine.
===== Move the compose to your home dir on odcs-backend-releng01.iad2.fedoraproject.org
===== Move the compose to your home dir on odcs-backend-releng01.rdu3.fedoraproject.org
Since the compose is owned by [.title-ref]#odcs-server# pull it into
your home dir
@ -83,7 +83,7 @@ local machine into a temp working dir
....
$ mkdir openh264-20200813
$ scp -rv odcs-backend-releng01.iad2.fedoraproject.org:/home/fedora/mohanboddu/32-openh264/ openh264-20200813/
$ scp -rv odcs-backend-releng01.rdu3.fedoraproject.org:/home/fedora/mohanboddu/32-openh264/ openh264-20200813/
....
===== Make the changes needed
@ -117,8 +117,8 @@ CDN, verify them by using curl. For example:
$ curl -I http://ciscobinary.openh264.org/openh264-2.1.1-1.fc32.x86_64.rpm
....
Now push these composes to *sundries01.iad2.fedoraproject.org* and
*mm-backend01.iad2.fedoraproject.org*
Now push these composes to *sundries01.rdu3.fedoraproject.org* and
*mm-backend01.rdu3.fedoraproject.org*
On sundries01 we need to sync to a directory that is owned by _apache_,
so first we sync to the home directory on sundries01. Same with
@ -127,14 +127,14 @@ mm-backend01 as the directory is owned by _root_.
Create a temp working directory on sundries01
....
$ ssh sundries01.iad2.fedoraproject.org
$ ssh sundries01.rdu3.fedoraproject.org
$ mkdir openh264-20200825
....
Create a temp working directory on mm-backend01
....
$ ssh mm-backend01.iad2.fedoraproject.org
$ ssh mm-backend01.rdu3.fedoraproject.org
$ mkdir openh264-20200825
....
@ -142,8 +142,8 @@ Then from your local machine, sync the compose
....
$ cd openh264-20200825
$ rsync -avhHP 32-openh264 sundries01.iad2.fedoraproject.org:/home/fedora/mohanboddu/openh264-20200825
$ rsync -avhHP 32-openh264 mm-backend01.iad2.fedoraproject.org:/home/fedora/mohanboddu/openh264-20200825
$ rsync -avhHP 32-openh264 sundries01.rdu3.fedoraproject.org:/home/fedora/mohanboddu/openh264-20200825
$ rsync -avhHP 32-openh264 mm-backend01.rdu3.fedoraproject.org:/home/fedora/mohanboddu/openh264-20200825
....
On sundries01
@ -166,7 +166,7 @@ Normally that should be it, but in some cases you may want to push
things out faster than normal. Here are a few things you can do:
On mm-backend01.iad2.fedoraproject.org you can run:
On mm-backend01.rdu3.fedoraproject.org you can run:
....
# sudo -u mirrormanager /usr/local/bin/umdl-required codecs /var/log/mirrormanager/umdl-required.log
@ -175,7 +175,7 @@ On mm-backend01.iad2.fedoraproject.org you can run:
This will have mirrormanager scan the codecs dir and update it if it's
changed.
On batcave01.iad2.fedoraproject.org you can use ansible to force all the
On batcave01.rdu3.fedoraproject.org you can use ansible to force all the
proxies to sync the codec content from sundries01:
....


@ -51,7 +51,7 @@ python ../releng/scripts/distgit-branch-unused.py <branch>
.. Go to pkgs01 as root
+
----
ssh pkgs01.iad2.fedoraproject.org
ssh pkgs01.rdu3.fedoraproject.org
----
.. Go to the git repository:


@ -9,14 +9,14 @@ This SOP provides instructions for sysadmin-main level users to remove unwanted
* `sysadmin-main`
== Machine
* `pkgs01.iad2.fedoraproject.org`
* `pkgs01.rdu3.fedoraproject.org`
== Steps to access the machine
. SSH into `batcave01.iad2.fedoraproject.org`:
. SSH into `batcave01.rdu3.fedoraproject.org`:
+
----
ssh batcave01.iad2.fedoraproject.org
ssh batcave01.rdu3.fedoraproject.org
----
. Switch to the root user:
@ -25,11 +25,11 @@ This SOP provides instructions for sysadmin-main level users to remove unwanted
sudo su
----
. SSH into `pkgs01.iad2.fedoraproject.org`:
. SSH into `pkgs01.rdu3.fedoraproject.org`:
+
----
ssh pkgs01.iad2.fedoraproject.org
ssh pkgs01.rdu3.fedoraproject.org
----
== Removing unwanted tarballs from dist-git


@ -24,7 +24,7 @@ unblocked as part of the unretirement request.
+
____
....
$ ssh compose-branched01.iad2.fedoraproject.org
$ ssh compose-branched01.rdu3.fedoraproject.org
....
____
. Clone the dist-git package using the proper release engineering


@ -44,7 +44,7 @@ as they don't have access to much of anything (yet).
To clear a token, an admin should:
* login to ipa01.iad2.fedoraproject.org
* login to ipa01.rdu3.fedoraproject.org
* kinit admin@FEDORAPROJECT.ORG (enter the admin password)
* ipa otptoken-find --owner <username>
* ipa otptoken-del <token uuid from previous step>
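As a console session, that sequence looks something like this (a sketch):

----
$ ssh ipa01.rdu3.fedoraproject.org
$ kinit admin@FEDORAPROJECT.ORG
$ ipa otptoken-find --owner <username>
$ ipa otptoken-del <token-uuid>
----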


@ -44,7 +44,7 @@ ticket (who looked, what they saw, etc). Then the account can be
disabled:
....
Login to ipa01.iad2.fedoraproject.org
Login to ipa01.rdu3.fedoraproject.org
kinit admin@FEDORAPROJECT.ORG
ipa user-disable LOGIN
....


@ -19,10 +19,10 @@ Contact::
Persons::
zlopez
Location::
iad2.fedoraproject.org
rdu3.fedoraproject.org
Servers::
* *Production* - os-master01.iad2.fedoraproject.org
* *Staging* - os-master01.stg.iad2.fedoraproject.org
* *Production* - os-master01.rdu3.fedoraproject.org
* *Staging* - os-master01.stg.rdu3.fedoraproject.org
Purpose::
Map upstream releases to Fedora packages.
@ -63,7 +63,7 @@ documentation].
=== Deploying
The staging deployment of Anitya runs in OpenShift on
os-master01.stg.iad2.fedoraproject.org.
os-master01.stg.rdu3.fedoraproject.org.
To deploy the staging instance of Anitya, you need to push changes to the staging
branch on https://github.com/fedora-infra/anitya[Anitya GitHub]. GitHub
@ -71,7 +71,7 @@ webhook will then automatically deploy a new version of Anitya on
staging.
The production deployment of Anitya runs in OpenShift on
os-master01.iad2.fedoraproject.org.
os-master01.rdu3.fedoraproject.org.
To deploy the production instance of Anitya, you need to push changes to the
production branch on https://github.com/fedora-infra/anitya[Anitya
@ -82,7 +82,7 @@ Anitya on production.
To deploy the new configuration, you need
xref:sshaccess.adoc[ssh
access] to batcave01.iad2.fedoraproject.org and
access] to batcave01.rdu3.fedoraproject.org and
xref:ansible.adoc[permissions
to run the Ansible playbook].
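On batcave01 that typically means a rbac-playbook invocation; the exact playbook path here is an assumption:

----
$ sudo rbac-playbook openshift-apps/anitya.yml
----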


@ -7,7 +7,7 @@ Owner::
Contact::
admin@fedoraproject.org
Location::
iad2
rdu3
Servers::
bastion01, bastion02
Purpose::
@ -15,7 +15,7 @@ Purpose::
== Description
There are 2 primary bastion hosts in the _iad2_ datacenter. One will be
There are 2 primary bastion hosts in the _rdu3_ datacenter. One will be
active at any given time and the second will be a hot spare, ready to
take over. Switching between bastion hosts is currently a manual process
that requires changes in ansible.
@ -29,10 +29,10 @@ The active bastion host performs the following functions:
* Outgoing smtp from fedora servers. This includes email aliases,
mailing list posts, build and commit notices, etc.
* Incoming smtp from servers in _iad2_ or on the fedora vpn. Incoming mail
* Incoming smtp from servers in _rdu3_ or on the fedora vpn. Incoming mail
directly from the outside is NOT accepted or forwarded.
* ssh access to all _iad2/vpn_ connected servers.
* ssh access to all _rdu3/vpn_ connected servers.
* openvpn hub. This is the hub that all vpn clients connect to and talk
to each other via. Taking down or stopping this service will be a major


@ -172,7 +172,7 @@ It can be useful to verify correct version is available on the backend,
e.g. for staging run
....
ssh bodhi-backend01.stg.iad2.fedoraproject.org
ssh bodhi-backend01.stg.rdu3.fedoraproject.org
....
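Once on the host, the installed version can be checked (a sketch; assumes the bodhi-server package name):

....
$ rpm -q bodhi-server
....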
....


@ -26,12 +26,12 @@ Owner::
Contact::
#fedora-admin
Location::
iad2
rdu3
Servers::
* bodhi-backend01.iad2.fedoraproject.org (composer)
* bodhi-backend01.rdu3.fedoraproject.org (composer)
* bodhi.fedoraproject.org (web front end and backend task workers for
non-compose tasks)
* bodhi-backend01.stg.iad2.fedoraproject.org (staging composer)
* bodhi-backend01.stg.rdu3.fedoraproject.org (staging composer)
* bodhi.stg.fedoraproject.org (staging web front end and backend task
workers for non-compose tasks)
Purpose::


@ -60,7 +60,7 @@ a new container image:
* update the `fcos_cincinnati_git_sha` playbook variable in
`roles/openshift-apps/coreos-cincinnati/vars/production.yml`
* commit and push the update to the `fedora-infra/ansible` repository
* SSH to `batcave01.iad2.fedoraproject.org`
* SSH to `batcave01.rdu3.fedoraproject.org`
* run `sudo rbac-playbook openshift-apps/coreos-cincinnati.yml` using
your FAS password and your second-factor OTP
* schedule a new build by running


@ -26,7 +26,7 @@ Owner::
Contact::
#fedora-admin, sysadmin-main, sysadmin-dba group
Location::
iad2
rdu3
Servers::
sb01, db03, db-fas01, db-datanommer02, db-koji01, db-s390-koji01,
db-arm-koji01, db-ppc-koji01, db-qa01, dbqastg01


@ -3,7 +3,7 @@
Debuginfod is the software that lies behind the service at
https://debuginfod.fedoraproject.org/ and
https://debuginfod.stg.fedoraproject.org/. These services run on one VM
each in the stg and prod infrastructure at IAD2.
each in the stg and prod infrastructure at RDU3.
== Contact Information
@ -114,7 +114,7 @@ time in the last columns. These can be useful in tracking down possible
abuse:
....
Jun 28 22:36:43 debuginfod01 debuginfod[381551]: [Mon 28 Jun 2021 10:36:43 PM GMT] (381551/2413727): 10.3.163.75:43776 UA:elfutils/0.185,Linux/x86_64,fedora/35 XFF:*elided* GET /buildid/90910c1963bbcf700c0c0c06ee3bf4c5cc831d3a/debuginfo 200 335440 0+0ms
Jun 28 22:36:43 debuginfod01 debuginfod[381551]: [Mon 28 Jun 2021 10:36:43 PM GMT] (381551/2413727): 10.16.163.75:43776 UA:elfutils/0.185,Linux/x86_64,fedora/35 XFF:*elided* GET /buildid/90910c1963bbcf700c0c0c06ee3bf4c5cc831d3a/debuginfo 200 335440 0+0ms
....
The lines related to prometheus /metrics are usually no big deal.


@ -302,7 +302,7 @@ prefix of `logging.stats`.
== How is it deployed?
All of this runs on `log01.iad2.fedoraproject.org` and is deployed through the
All of this runs on `log01.rdu3.fedoraproject.org` and is deployed through the
`web-data-analysis` role and the `groups/logserver.yml` playbook,
respectively.
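A redeploy from batcave01 would then be something like (a sketch using the playbook named above):

----
$ sudo rbac-playbook groups/logserver.yml
----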


@ -17,10 +17,10 @@ ns05.fedoraproject.org::
hosted at internetx (ipv6 enabled)
ns13.rdu2.fedoraproject.org::
in rdu2, internal to rdu2.
ns01.iad2.fedoraproject.org::
in iad2, internal to iad2.
ns02.iad2.fedoraproject.org::
in iad2, internal to iad2.
ns01.rdu3.fedoraproject.org::
in rdu3, internal to rdu3.
ns02.rdu3.fedoraproject.org::
in rdu3, internal to rdu3.
== Contents
@ -45,7 +45,7 @@ Contact:::
Location:::
ServerBeach, ibiblio, internetx, and phx2.
Servers:::
ns02, ns05, ns13.rdu2, ns01.iad2, ns02.iad2
ns02, ns05, ns13.rdu2, ns01.rdu3, ns02.rdu3
Purpose:::
Provides DNS to our users
@ -314,15 +314,15 @@ Any machine that is not on our vpn or has not yet joined the vpn should
*NOT* have the vpn.fedoraproject.org search until after it has
been added to the vpn (if it ever does)
====
iad2::
rdu3::
....
search iad2.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
search rdu3.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
....
iad2 in the QA network:::
rdu3 in the QA network:::
....
search qa.fedoraproject.org vpn.fedoraproject.org iad2.fedoraproject.org fedoraproject.org
search qa.fedoraproject.org vpn.fedoraproject.org rdu3.fedoraproject.org fedoraproject.org
....
Non-iad2::
Non-rdu3::
....
search vpn.fedoraproject.org fedoraproject.org
....
@ -330,5 +330,5 @@ search vpn.fedoraproject.org fedoraproject.org
The idea here is that we can, when need be, setup local domains to
contact instead of having to go over the VPN directly but still have
sane configs. For example if we tell the proxy server to hit "app1" and
that box is in _iad2_, it will go directly to app1, if its not, it will go
that box is in _rdu3_, it will go directly to app1; if it's not, it will go
over the vpn to app1.


@ -29,14 +29,14 @@ To perform this procedure, you may need to have sysadmin-main access. In the fut
.Firstly, access the management console:
. Ensure you are connected to the official Red Hat VPN.
. Identify the server in question. For this SOP, we will use `bvmhost-x86-01.stg.iad2.fedoraproject.org` as an example.
. To access the management console, append `.mgmt` to the hostname: `bvmhost-x86-01-stg.mgmt.iad2.fedoraproject.org`.
. Identify the server in question. For this SOP, we will use `bvmhost-x86-01.stg.rdu3.fedoraproject.org` as an example.
. To access the management console, append `.mgmt` to the hostname: `bvmhost-x86-01-stg.mgmt.rdu3.fedoraproject.org`.
. Obtain the IP address by pinging the server from `batcave01`:
+
[source,bash]
----
ssh batcave01.iad2.fedoraproject.org
ping bvmhost-x86-01-stg.mgmt.iad2.fedoraproject.org
ssh batcave01.rdu3.fedoraproject.org
ping bvmhost-x86-01-stg.mgmt.rdu3.fedoraproject.org
----
. Visit the IP address in a web browser. The management console uses HTTPS, so accept the self-signed certificate:


@ -1,42 +0,0 @@
= FAS-OpenID
FAS-OpenID is the OpenID server of Fedora infrastructure.
Live instance is at https://id.fedoraproject.org/ Staging instance is at
https://id.stg.fedoraproject.org/
== Contact Information
Owner::
Patrick Uiterwijk (puiterwijk)
Contact::
#fedora-admin, #fedora-apps, #fedora-noc
Location::
openid0\{1,2}.iad2.fedoraproject.org openid01.stg.fedoraproject.org
Purpose::
Authentication & Authorization
== Trusted roots
FAS-OpenID has a set of "trusted roots", which contains websites which
are always trusted, and thus FAS-OpenID will not show the Approve/Reject
form to the user when they login to any such site.
As a policy, we will only add websites to this list which Fedora
Infrastructure controls. If anyone ever ask to add a website to this
list, just answer with this default message:
....
We only add websites we (Fedora Infrastructure) maintain to this list.
This feature was put in because it wouldn't make sense to ask for permission
to send data to the same set of servers that it already came from.
Also, if we were to add external websites, we would need to judge their
privacy policy etc.
Also, people might start complaining that we added site X but not their site,
maybe causing us "political" issues later down the road.
As a result, we do NOT add external websites.
....


@ -31,11 +31,11 @@ Every instance of each service on each host has its own cert and private
key, signed by the CA. By convention, we name the certs
`<service>-<fqdn>.\{crt,key}`. For instance, bodhi has the following certs:
* bodhi-app01.iad2.fedoraproject.org
* bodhi-app02.iad2.fedoraproject.org
* bodhi-app03.iad2.fedoraproject.org
* bodhi-app01.stg.iad2.fedoraproject.org
* bodhi-app02.stg.iad2.fedoraproject.org
* bodhi-app01.rdu3.fedoraproject.org
* bodhi-app02.rdu3.fedoraproject.org
* bodhi-app03.rdu3.fedoraproject.org
* bodhi-app01.stg.rdu3.fedoraproject.org
* bodhi-app02.stg.rdu3.fedoraproject.org
* more
Scripts to generate new keys, sign them, and revoke them live in the
@ -60,7 +60,7 @@ The attempt here is to minimize the number of potential attack vectors.
Each private key should be readable only by the service that needs it.
bodhi runs under mod_wsgi in apache and should run as its own unique
bodhi user (not as apache). The permissions for
its _iad2.fedoraproject.org_ private_key, when deployed by ansible, should
its _rdu3.fedoraproject.org_ private key, when deployed by ansible, should
be read-only for that local bodhi user.
For more information on how fedmsg uses these certs see
@ -105,15 +105,15 @@ $ ./build-and-sign-key <service>-<fqdn>
....
For instance, if we bring up a new app host,
_app10.iad2.fedoraproject.org_, we'll need to generate a new cert/key pair
_app10.rdu3.fedoraproject.org_, we'll need to generate a new cert/key pair
for each fedmsg-enabled service that will be running on it, so you'd
run:
....
$ source ./vars
$ ./build-and-sign-key shell-app10.iad2.fedoraproject.org
$ ./build-and-sign-key bodhi-app10.iad2.fedoraproject.org
$ ./build-and-sign-key mediawiki-app10.iad2.fedoraproject.org
$ ./build-and-sign-key shell-app10.rdu3.fedoraproject.org
$ ./build-and-sign-key bodhi-app10.rdu3.fedoraproject.org
$ ./build-and-sign-key mediawiki-app10.rdu3.fedoraproject.org
....
Just creating the keys isn't quite enough; there are four more things
@ -131,9 +131,9 @@ to be blown away and recreated, the new service-hosts will be included.
For the examples above, you would need to add to the list:
....
shell-app10.iad2.fedoraproject.org
bodhi-app10.iad2.fedoraproject.org
mediawiki-app10.iad2.fedoraproject.org
shell-app10.rdu3.fedoraproject.org
bodhi-app10.rdu3.fedoraproject.org
mediawiki-app10.rdu3.fedoraproject.org
....
You need to ensure that the keys are distributed to the host with the


@ -15,7 +15,7 @@ Contact::
Persons::
nirik
Servers::
batcave01.iad2.fedoraproject.org Various application servers, which
batcave01.rdu3.fedoraproject.org; various application servers, which
will run scripts to delete data.
Purpose::
Respond to Delete requests.
@ -106,5 +106,5 @@ You also need to add the host that the script should run on to the
....
[gdpr_delete]
fedocal01.iad2.fedoraproject.org
fedocal01.rdu3.fedoraproject.org
....


@ -15,7 +15,7 @@ Contact::
Persons::
bowlofeggs
Servers::
batcave01.iad2.fedoraproject.org Various application servers, which
batcave01.rdu3.fedoraproject.org; various application servers, which
will run scripts to collect SAR data.
Purpose::
Respond to SARs.
@ -122,7 +122,7 @@ You also need to add the host that the script should run on to the
....
[sar]
bodhi-backend02.iad2.fedoraproject.org
bodhi-backend02.rdu3.fedoraproject.org
....
=== Variables for OpenShift apps


@ -45,7 +45,7 @@ We expect to grow these over time to new use cases (rawhide compose gating, etc.
== Observing Greenwave Behavior
Login to `os-master01.iad2.fedoraproject.org` as `root` (or,
Login to `os-master01.rdu3.fedoraproject.org` as `root` (or,
authenticate remotely with openshift using
`oc login https://os.fedoraproject.org`), and run:


@ -9,7 +9,7 @@ Owner::
Contact::
#fedora-admin, sysadmin-main
Location::
IAD2, Tummy, ibiblio, Telia, OSUOSL
RDU3, Tummy, ibiblio, Telia, OSUOSL
Servers::
All xen servers, kvm/libvirt servers.
Purpose::


@ -62,9 +62,9 @@ subtraction of specific nodes when we need them.:
....
listen fpo-wiki 0.0.0.0:10001
balance roundrobin
server app1 app1.fedora.iad2.redhat.com:80 check inter 2s rise 2 fall 5
server app2 app2.fedora.iad2.redhat.com:80 check inter 2s rise 2 fall 5
server app4 app4.fedora.iad2.redhat.com:80 backup check inter 2s rise 2 fall 5
server app1 app1.fedora.rdu3.redhat.com:80 check inter 2s rise 2 fall 5
server app2 app2.fedora.rdu3.redhat.com:80 check inter 2s rise 2 fall 5
server app4 app4.fedora.rdu3.redhat.com:80 backup check inter 2s rise 2 fall 5
option httpchk GET /wiki/Infrastructure
....
@ -77,13 +77,13 @@ one. Just check the config file for the lowest open port above 10001.
* The next line _balance roundrobin_ says to use round robin balancing.
* The server lines each add a new node to the balancer farm. In this
case the wiki is being served from app1, app2 and app4. If the wiki is
available at http://app1.fedora.iad2.redhat.com/wiki/ Then this
available at http://app1.fedora.rdu3.redhat.com/wiki/, then this
config would be used in conjunction with "RewriteRule ^/wiki/(.*)
http://localhost:10001/wiki/$1 [P,L]".
* _server_ means we're adding a new node to the farm
* _app1_ is the worker name; it is analogous to fpo-wiki but should::
match the short hostname of the node to make it easy to follow.
* _app1.fedora.iad2.redhat.com:80_ is the hostname and port to be
* _app1.fedora.rdu3.redhat.com:80_ is the hostname and port to be
contacted.
* _check_ means to check via bottom line "option httpchk GET
/wiki/Infrastructure" which will use /wiki/Infrastructure to verify the


@ -6,7 +6,7 @@ This SOP shows some of the steps required to troubleshoot and diagnose a power i
Symptoms:
- This server is not responding at all, and will not power on.
- To get to mgmt of RDU2-CC devices its a bit trickier than IAD2. We have a private management vlan there, but its only reachable via cloud-noc-os01.rdu-cc.fedoraproject.org. I usually use the sshuttle package/command/app to transparently forward my traffic to devices on that network. That looks something like: `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`
- To get to mgmt of RDU2-CC devices it's a bit trickier than RDU3. We have a private management vlan there, but it's only reachable via cloud-noc-os01.rdu-cc.fedoraproject.org. I usually use the sshuttle package/command/app to transparently forward my traffic to devices on that network. That looks something like: `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`
- The devices are all in the 172.23.1 network. Theres a list of them in `ansible-private/docs/rdu-networks.txt` but this host is: `172.23.1.105`.
- In the Bitwarden Vault, the management password can be obtained.
- Logs show issues with voltages not being in the correct range.
@ -33,7 +33,7 @@ Purpose::
=== Troubleshooting Steps
.Connect to the management VLAN for the RDU2-CC network:
This is only required because this server is not in the RDU3 datacenter. Use sshuttle to make a connection to the 172.23.1.0/24 network (from your laptop directly, not from the batcave01 to the management network). `sshuttle 172.23.1.0/24 -r cloud-noc-os01.rdu-cc.fedoraproject.org`
.SSH to the batcave01 and retrieve the ip address for this machine
Ssh to the batcave01, access the ansible-private repo and read the IP address for this machine from the `docs/rdu-networks.txt`

View file

@ -67,7 +67,7 @@ the-new-hotness on production.
To deploy the new configuration, you need
xref:sshaccess.adoc[ssh
access] to _batcave01.rdu3.fedoraproject.org_ and
xref:ansible.adoc[permissions
to run the Ansible playbook].
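With both in place, the deployment itself is a single playbook run from batcave01 (the playbook path below is an assumption, based on how OpenShift apps are usually laid out in the repo):

....
sudo rbac-playbook openshift-apps/the-new-hotness.yml
....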

View file

@ -96,7 +96,6 @@ xref:developer_guide:sops.adoc[Developing Standard Operating Procedures].
* xref:docs.fedoraproject.org.adoc[Docs]
* xref:externally-hosted-services.adoc[Externally Hosted Services]
* xref:failedharddrive.adoc[Replacing Failed Hard Drives]
* xref:fedmsg-certs.adoc[fedmsg (Fedora Messaging) Certs, Keys, and CA]
* xref:fedocal.adoc[Fedocal]
* xref:fedora-releases.adoc[Fedora Release Infrastructure]

View file

@ -16,7 +16,7 @@ Contact::
Location::
Phoenix
Servers::
batcave01.rdu3.fedoraproject.org, batcave-comm01.qa.fedoraproject.org
== Steps

View file

@ -12,12 +12,12 @@ Primary upstream contact::
Alexander Bokovoy - FAS: abbra
Servers::
* ipa01.rdu3.fedoraproject.org
* ipa02.rdu3.fedoraproject.org
* ipa03.rdu3.fedoraproject.org
* ipa01.stg.rdu3.fedoraproject.org
* ipa02.stg.rdu3.fedoraproject.org
* ipa03.stg.rdu3.fedoraproject.org
URL::
* link:https://id.fedoraproject.org/ipa/ui[]

View file

@ -25,9 +25,9 @@ Backup upstream contact::
Simo Sorce - FAS: simo (irc: simo) Howard Johnson - FAS: merlinthp
(irc: MerlinTHP) Rob Crittenden - FAS: rcritten (irc: rcrit)
Servers::
* ipsilon01.rdu3.fedoraproject.org
* ipsilon02.rdu3.fedoraproject.org
* ipsilon01.stg.rdu3.fedoraproject.org
Purpose::
Ipsilon is our central authentication service that is used to
authenticate users against FAS. It is separate from FAS.

View file

@ -68,7 +68,7 @@ wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/images/pxeb
wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/images/pxeboot/initrd.img -O /boot/initrd-install.img
....
For rdu3 hosts:
....
grubby --add-kernel=/boot/vmlinuz-install \
@ -81,7 +81,7 @@ grubby --add-kernel=/boot/vmlinuz-install \
(You will need to set up the br1 device, if any, after install)
For non-rdu3 hosts:
....
grubby --add-kernel=/boot/vmlinuz-install \

View file

@ -27,7 +27,7 @@ $ koji tag-build do-not-archive-yet build1 build2 ...
Then update the archive policy, which is available in the releng repo
(https://pagure.io/releng/blob/main/f/koji-archive-policy)
Run the following from _compose-x86-01.rdu3.fedoraproject.org_
....
$ cd
$ wget https://pagure.io/releng/raw/master/f/koji-archive-policy

View file

@ -125,7 +125,7 @@ If the openshift-ansible playbook fails it can be easier to run it
directly from osbs-control01 and use the verbose mode.
....
$ ssh osbs-control01.rdu3.fedoraproject.org
$ sudo -i
# cd /root/openshift-ansible
# ansible-playbook -i cluster-inventory playbooks/prerequisites.yml
@ -143,7 +143,7 @@ When this is done we need to get the new koji service token and update
its value in the private repository
....
$ ssh osbs-master01.rdu3.fedoraproject.org
$ sudo -i
# oc -n osbs-fedora sa get-token koji
dsjflksfkgjgkjfdl ....

View file

@ -14,7 +14,7 @@ Purpose::
== Description
Mailing list services for Fedora projects are located on the
mailman01.rdu3.fedoraproject.org server.
== Common Tasks

View file

@ -14,7 +14,7 @@ Owner:::
Contact:::
#fedora-admin, Red Hat ticket
Servers:::
server[1-5].download.rdu3.redhat.com
Purpose:::
Provides the master mirrors for Fedora distribution
@ -46,11 +46,11 @@ The load balancers then balance between the below Fedora IPs on the
rsync servers:
....
10.8.24.21 (fedora1.download.rdu3.redhat.com) - server1.download.rdu3.redhat.com
10.8.24.22 (fedora2.download.rdu3.redhat.com) - server2.download.rdu3.redhat.com
10.8.24.23 (fedora3.download.rdu3.redhat.com) - server3.download.rdu3.redhat.com
10.8.24.24 (fedora4.download.rdu3.redhat.com) - server4.download.rdu3.redhat.com
10.8.24.25 (fedora5.download.rdu3.redhat.com) - server5.download.rdu3.redhat.com
....
== RDU I2 Master Mirror Setup

View file

@ -1,6 +1,6 @@
= Netapp Infrastructure SOP
Provides primary mirrors and additional storage in RDU3
== Contents
@ -8,7 +8,7 @@ Provides primary mirrors and additional storage in IAD2
* <<_description>>
* <<_public_mirrors>>
** <<_snapshots>>
* <<_rdu3_nfs_storage>>
** <<_access>>
** <<_snapshots>>
* <<_iscsi>>
@ -24,25 +24,25 @@ Contact::
Servers::
batcave01, virt servers, application servers, builders, releng boxes
Purpose::
Provides primary mirrors and additional storage in RDU3
== Description
At present we have three netapps in our infrastructure. One in TPA, RDU
and RDU3. For purposes of visualization it's easiest to think of us as
having 4 netapps: 1 TPA, 1 RDU, and 1 RDU3 for public mirrors, and an
additional 1 in RDU3 used for storage not related to the
public mirrors.
== Public Mirrors
The netapps are our primary public mirrors. The canonical location for
the mirrors is currently in RDU3. From there it gets synced to RDU and
TPA.
=== Snapshots
Snapshots on the RDU3 netapp are taken hourly. Unfortunately, the way it
is set up, only Red Hat employees can access this mirror (this is
scheduled to change when PHX becomes the canonical location but that
will take time to setup and deploy). The snapshots are available, for
@ -52,12 +52,12 @@ example, on wallace in:
/var/ftp/download.fedora.redhat.com/.snapshot/hourly.0
....
== RDU3 NFS Storage
There is a great deal of storage in RDU3 over NFS from the netapp there.
This storage includes the public mirror. The majority of this storage is
koji, but there are a few gigabytes that go to wiki
attachments and other storage needs we have in RDU3.
You can access all of the nfs shares at:
@ -68,7 +68,7 @@ batcave01:/mnt/fedora
or:
....
ntap-fedora-a.storage.rdu3.redhat.com:/vol/fedora/
....
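If you need the share somewhere other than batcave01, a plain NFS mount of that export should work (the mount point is arbitrary):

....
mount -t nfs ntap-fedora-a.storage.rdu3.redhat.com:/vol/fedora /mnt/fedora
....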
=== Access
@ -120,7 +120,7 @@ On reboots sometimes the iscsi share is not remounted. This should be
automated in the future but for now run:
....
iscsiadm -m discovery -tst -p ntap-fedora-b.storage.rdu3.redhat.com:3260
sleep 1
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.118047036 -p 10.5.88.21:3260 -l
sleep 1

View file

@ -17,16 +17,16 @@ been recently added to the data center/network that you want:
....
git grep badges-web01
built/126.5.10.in-addr.arpa:69 IN PTR badges-web01.stg.rdu3.fedoraproject.org.
[...lots of other stuff in built/ ignore these as they'll be generated later...]
master/126.5.10.in-addr.arpa:69 IN PTR badges-web01.stg.rdu3.fedoraproject.org.
master/126.5.10.in-addr.arpa:101 IN PTR badges-web01.rdu3.fedoraproject.org.
master/126.5.10.in-addr.arpa:102 IN PTR badges-web02.rdu3.fedoraproject.org.
master/168.192.in-addr.arpa:109.1 IN PTR badges-web01.vpn.fedoraproject.org
master/168.192.in-addr.arpa:110.1 IN PTR badges-web02.vpn.fedoraproject.org
master/rdu3.fedoraproject.org:badges-web01.stg IN A 10.5.126.69
master/rdu3.fedoraproject.org:badges-web01 IN A 10.5.126.101
master/rdu3.fedoraproject.org:badges-web02 IN A 10.5.126.102
master/vpn.fedoraproject.org:badges-web01 IN A 192.168.1.109
master/vpn.fedoraproject.org:badges-web02 IN A 192.168.1.110
....
@ -36,9 +36,9 @@ those files are for the host on the IAD network. The other two are for
the host to be able to talk over the VPN. Although the VPN is not always
needed, the common case is that the host will need it. (If any clients
_need to connect to it via the proxy servers_ or it is not hosted in
RDU3 it will need a VPN connection). A common exception here is the
staging environment: since we only have one proxy server in staging and
it is in RDU3, a VPN connection is not typically needed for staging
hosts.
Edit the zone file for the reverse lookup first (the *in-addr.arpa file)
@ -55,13 +55,13 @@ in stg into production:
-106 IN PTR unused.
-107 IN PTR unused.
-108 IN PTR unused.
+105 IN PTR elections01.stg.rdu3.fedoraproject.org.
+106 IN PTR elections02.stg.rdu3.fedoraproject.org.
+107 IN PTR elections01.rdu3.fedoraproject.org.
+108 IN PTR elections02.rdu3.fedoraproject.org.
....
Edit the forward domain (rdu3.fedoraproject.org in our example) next:
....
elections01.stg IN A 10.5.126.105
@ -71,8 +71,8 @@ elections02 IN A 10.5.126.108
....
Repeat these two steps if you need to make them available on the VPN.
Note: if your stg hosts are in RDU3, you don't need to configure VPN for
them as all our stg proxy servers are in RDU3.
Also remember to update the Serial at the top of all zone files.
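The serial is conventionally date-based (YYYYMMDDnn), so a bump looks something like this (illustrative values; the exact SOA layout in our zone files may differ):

....
-2025070401 ; serial
+2025070402 ; serial
....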
@ -115,11 +115,11 @@ to have valid SSL Certs. These are currently stored in the private repo:
git clone /srv/git/ansible-private && chmod 0700 ansible-private
cd ansible-private/files/2fa-certs
. ./vars
./build-and-sign-key $FQDN # ex: elections01.stg.rdu3.fedoraproject.org
....
The `$FQDN` should be the rdu3 domain name if it's in rdu3, vpn if not in
rdu3, and if it has no vpn and is not in rdu3 we should add it to the
vpn.:
....
@ -141,11 +141,11 @@ stored in the private repo:
....
cd ansible-private/files/vpn/
./addhost.sh $FQDN # ex: zabbix01.rdu3.fedoraproject.org
....
The `$FQDN` should be the rdu3 domain name if it's in rdu3, and just
fedoraproject.org if it's not in RDU3 (note that there is never .vpn in
the FQDN in the openvpn keys). Now commit and push.:
....
@ -178,26 +178,26 @@ create things like this:
....
[elections]
elections01.rdu3.fedoraproject.org
elections02.rdu3.fedoraproject.org
[elections-stg]
elections01.stg.rdu3.fedoraproject.org
elections02.stg.rdu3.fedoraproject.org
[... find the staging group and add there: ...]
[staging]
db-fas01.stg.rdu3.fedoraproject.org
elections01.stg.rdu3.fedoraproject.org
elections02.stg.rdu3.fedoraproject.org
....
The hosts should use their fully qualified domain names here. The rules
are slightly different than for 2fa certs. If the host is in RDU3, use
the .rdu3.fedoraproject.org domain name. If they aren't in RDU3, then
they usually just have .fedoraproject.org as their domain name. (If in
doubt about a not-in-RDU3 host, just ask).
=== VPN config
@ -209,7 +209,7 @@ ifconfig-push 192.168.1.X 192.168.0.X
....
Where X is the last octet of the DNS IP address assigned to the host, so
for example for _elections01.rdu3.fedoraproject.org_ that would be:
....
ifconfig-push 192.168.1.44 192.168.0.44
@ -248,7 +248,7 @@ claimed in the dns repo:
....
cd ~/ansible/inventory/host_vars
cp badges-web01.stg.rdu3.fedoraproject.org elections01.stg.rdu3.fedoraproject.org
<edit appropriately>
....

View file

@ -17,14 +17,14 @@ Contact::
Persons::
.oncall
Servers::
* os-master01.rdu3.fedoraproject.org
* os-master02.rdu3.fedoraproject.org
* os-master03.rdu3.fedoraproject.org
* os-node01.rdu3.fedoraproject.org
* os-node02.rdu3.fedoraproject.org
* os-node03.rdu3.fedoraproject.org
* os-node04.rdu3.fedoraproject.org
* os-node05.rdu3.fedoraproject.org
Purpose::
Run Fedora Infrastructure applications

View file

@ -89,10 +89,10 @@ search vpn.fedoraproject.org fedoraproject.org
for external hosts and:
....
search rdu3.fedoraproject.org vpn.fedoraproject.org fedoraproject.org
....
for RDU3 hosts.
== Remove a host
@ -116,6 +116,6 @@ git push
== TODO
Deploy an additional VPN server outside of RDU3. OpenVPN does support
failover automatically so if configured properly, when the primary VPN
server goes down all hosts should connect to the next host in the list.
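For reference, client-side failover is simply multiple `remote` lines in the client config, tried in order (the pairing of hosts and ports here is illustrative):

....
remote bastion01.fedoraproject.org 1194 udp
remote bastion02.fedoraproject.org 1194 udp
....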

View file

@ -12,7 +12,7 @@ Owner::
Contact::
#fedora-admin, #fedora-noc or admin@fedoraproject.org
Server(s)::
sundries01.rdu3.fedoraproject.org
Purpose::
To explain the overall function of this page, where, and how it gets
its information.

View file

@ -28,7 +28,7 @@ signing key.
. Remove builder from koji:
+
....
koji disable-host bkernel01.rdu3.fedoraproject.org
....
. Make sure all builds have completed.
. Stop existing processes:
@ -54,8 +54,8 @@ pesign-client -t "OpenSC Card (Fedora Signer)" -u
remove other builder:
+
....
koji enable-host bkernel01.rdu3.fedoraproject.org
koji disable-host bkernel02.rdu3.fedoraproject.org
....
. Have a committer send a build of pesign-test-app and make sure it's
signed correctly.

View file

@ -15,8 +15,8 @@ Fedora Infrastructure Team
=== Servers
* rabbitmq0[1-3].rdu3.fedoraproject.org
* rabbitmq0[1-3].stg.rdu3.fedoraproject.org
=== Purpose
@ -101,8 +101,8 @@ It should not return the empty array (`[]`) but something like:
{upstream,<<"pubsub-to-public_pubsub">>},
{id,<<"b40208be0a999cc93a78eb9e41531618f96d4cb2">>},
{status,running},
{local_connection,<<"<rabbit@rabbitmq01.rdu3.fedoraproject.org.2.8709.481>">>},
{uri,<<"amqps://rabbitmq01.rdu3.fedoraproject.org/%2Fpubsub">>},
{timestamp,{{2020,3,11},{16,45,18}}}],
[{exchange,<<"zmq.topic">>},
{upstream_exchange,<<"zmq.topic">>},
@ -111,8 +111,8 @@ It should not return the empty array (`[]`) but something like:
{upstream,<<"pubsub-to-public_pubsub">>},
{id,<<"c1e7747425938349520c60dda5671b2758e210b8">>},
{status,running},
{local_connection,<<"<rabbit@rabbitmq01.rdu3.fedoraproject.org.2.8718.481>">>},
{uri,<<"amqps://rabbitmq01.rdu3.fedoraproject.org/%2Fpubsub">>},
{timestamp,{{2020,3,11},{16,45,17}}}]]
....
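Output like the above comes from querying the federation plugin's status on one of the cluster members, e.g. (the exact eval call is an assumption):

....
rabbitmqctl eval 'rabbit_federation_status:status().'
....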

View file

@ -16,11 +16,11 @@ Contact::
Persons::
bowlofeggs cverna puiterwijk
Servers::
* oci-candidate-registry01.rdu3.fedoraproject.org
* oci-candidate-registry01.stg.rdu3.fedoraproject.org
* oci-registry01.rdu3.fedoraproject.org
* oci-registry01.stg.rdu3.fedoraproject.org
* oci-registry02.rdu3.fedoraproject.org
Purpose::
Serve Fedora's container images

View file

@ -17,27 +17,27 @@ eg:
+
----
[ocp_workers]
worker01.ocp.rdu3.fedoraproject.org
worker02.ocp.rdu3.fedoraproject.org
worker03.ocp.rdu3.fedoraproject.org
[ocp_workers_stg]
worker01.ocp.stg.rdu3.fedoraproject.org
worker02.ocp.stg.rdu3.fedoraproject.org
worker03.ocp.stg.rdu3.fedoraproject.org
worker04.ocp.stg.rdu3.fedoraproject.org
worker05.ocp.stg.rdu3.fedoraproject.org
----
2. Add the new hostvars for each new host being added; see the following examples for `VM` vs `baremetal` hosts.
+
----
# control plane VM
inventory/host_vars/ocp01.ocp.rdu3.fedoraproject.org
# compute baremetal
inventory/host_vars/worker01.ocp.rdu3.fedoraproject.org
----
3. If the nodes are `compute` or `worker` nodes, they must also be added to the following group_vars: `proxies` for prod, `proxies_stg` for staging
@ -47,15 +47,15 @@ inventory/group_vars/proxies:ocp_nodes:
inventory/group_vars/proxies_stg:ocp_nodes_stg:
----
4. Changes must be made to the `roles/dhcp_server/files/dhcpd.conf.noc01.rdu3.fedoraproject.org` file for DHCP to ensure that the node will receive an IP address based on its MAC address, and to tell the node to reach out to the `next-server` where it can find the UEFI boot configuration.
+
----
host worker01-ocp { # UPDATE THIS
hardware ethernet 68:05:CA:CE:A3:C9; # UPDATE THIS
fixed-address 10.16.163.123; # UPDATE THIS
filename "uefi/grubx64.efi";
next-server 10.16.163.10;
option routers 10.16.163.254;
option subnet-mask 255.255.255.0;
}
----
@ -65,10 +65,10 @@ host worker01-ocp { # UPDATE THIS
See the following examples for the `worker01.ocp` nodes for production and staging.
+
----
master/163.3.10.in-addr.arpa:123 IN PTR worker01.ocp.rdu3.fedoraproject.org.
master/166.3.10.in-addr.arpa:118 IN PTR worker01.ocp.stg.rdu3.fedoraproject.org.
master/rdu3.fedoraproject.org:worker01.ocp IN A 10.16.163.123
master/stg.rdu3.fedoraproject.org:worker01.ocp IN A 10.16.166.118
----
6. Run the playbook to update the haproxy config to monitor the new nodes and add them to the load balancer.
@ -82,13 +82,13 @@ sudo rbac-playbook groups/proxies.yml -t 'haproxy,httpd'
+
----
menuentry 'RHCOS 4.8 worker staging' {
linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.16.163.33 coreos.inst.install_dev=/dev/sda
coreos.live.rootfs_url=http://10.16.166.50/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.16.166.50/rhcos/worker.ign
initrdefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-initramfs.x86_64.img
}
menuentry 'RHCOS 4.8 worker production' {
linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.16.163.33 coreos.inst.install_dev=/dev/sda
coreos.live.rootfs_url=http://10.16.163.65/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.16.163.65/rhcos/worker.ign
initrdefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-initramfs.x86_64.img
}
----

View file

@ -19,7 +19,7 @@ This SOP should be used in the following scenario:
2. From within the Openshift webconsole, or via the cli, search for all "LocalVolumeDiscovery" objects.
+
----
[root@os-control01 ~][PROD-RDU3]# oc get localvolumediscovery --all-namespaces
NAMESPACE NAME AGE
openshift-local-storage auto-discover-devices 167d
----
@ -43,11 +43,11 @@ spec:
- key: kubernetes.io/hostname
operator: In
values:
- worker01.ocp.rdu3.fedoraproject.org
- worker02.ocp.rdu3.fedoraproject.org
- worker03.ocp.rdu3.fedoraproject.org
- worker04.ocp.rdu3.fedoraproject.org
- worker05.ocp.rdu3.fedoraproject.org
...
----
+
@ -75,11 +75,11 @@ spec:
- key: kubernetes.io/hostname
operator: In
values:
- worker01.ocp.rdu3.fedoraproject.org
- worker02.ocp.rdu3.fedoraproject.org
- worker03.ocp.rdu3.fedoraproject.org
- worker04.ocp.rdu3.fedoraproject.org
- worker05.ocp.rdu3.fedoraproject.org
...
----
+
@ -88,7 +88,7 @@ Write and save the change.
4. Add the `cluster.ocs.openshift.io/openshift-storage` label to the new node:
+
----
oc label no worker05.ocp.rdu3.fedoraproject.org cluster.ocs.openshift.io/openshift-storage=''
----
5. From the Openshift Web console visit `Storage, OpenShift Data Foundation`, then in the `Storage Systems` sub menu, click the 3 dot menu on the right beside the `ocs-storagecluster-storage` object. Choose the `Add Capacity` option. From the popup menu that appears, ensure that the storage class `local-block` is selected in the list. Finally, confirm with Add.

View file

@ -30,13 +30,13 @@ The following machines are those which are relevant to Releng.
----
machines:
[releng_compose]
compose-x86-01.rdu3.fedoraproject.org
compose-branched01.rdu3.fedoraproject.org
compose-rawhide01.rdu3.fedoraproject.org
compose-iot01.rdu3.fedoraproject.org
[releng_compose_stg]
compose-x86-01.stg.rdu3.fedoraproject.org
----
First install the Zabbix agent on these `releng_compose:releng_compose_stg` hosts via the `zabbix/zabbix_agent` ansible role [11]. We targeted the `groups/releng-compose.yml` playbook as this is responsible for targeting these hosts.
@ -68,13 +68,13 @@ Cronjobs are installed on the releng hosts via the following ansible task[13]. T
- 1: ftbfs weekly cron job `"ftbfs.cron" /etc/cron.weekly/ on compose-x86-01`
- 2: branched compose cron `"branched" /etc/cron.d/branched on compose-branched01.rdu3`
- 3: rawhide compose cron `"rawhide" /etc/cron.d/rawhide on compose-rawhide01.rdu3`
- 4: cloud-updates compose cron `"cloud-updates" /etc/cron.d/cloud-updates on compose-x86-01.rdu3`
- 5: container-updates compose cron `"container-updates" /etc/cron.d/container-updates on compose-x86-01.rdu3`
- 6: clean-amis cron `"clean-amis.j2" /etc/cron.d/clean-amis on compose-x86-01.rdu3`
- 7: rawhide-iot compose cron `"rawhide-iot" /etc/cron.d/rawhide-iot on compose-iot-01.rdu3`
- 8: sig_policy cron `"sig_policy.j2" /etc/cron.d/sig_policy on compose-x86-01.rdu3`
Need at least one Zabbix check per cronjob. The Zabbix check should do the following.

View file

@ -21,7 +21,7 @@ The following is a sample configuration to install a baremetal OCP4 worker in th
----
menuentry 'RHCOS 4.8 worker staging' {
linuxefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-kernel-x86_64 ip=dhcp nameserver=10.16.163.33 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://10.16.166.50/rhcos/rhcos-4.8.2-x86_64-live-rootfs.x86_64.img coreos.inst.ignition_url=http://10.16.166.50/rhcos/worker.ign
initrdefi images/RHCOS/4.8/x86_64/rhcos-4.8.2-x86_64-live-initramfs.x86_64.img
}
----

View file

@ -44,7 +44,7 @@ spec:
capacity:
storage: 100Gi
nfs:
server: 10.16.162.11
path: /ocp_prod_registry
accessModes:
- ReadWriteMany

View file

@ -21,12 +21,12 @@ To connect to `idrac`, you must be connected to the Red Hat VPN. Next find the m
On the `batcave01` instance, in the dns configuration, the following bare metal machines make up the production/staging OCP4 worker nodes.
----
oshift-dell01 IN A 10.16.160.180 # worker01 prod
oshift-dell02 IN A 10.16.160.181 # worker02 prod
oshift-dell03 IN A 10.16.160.182 # worker03 prod
oshift-dell04 IN A 10.16.160.183 # worker01 staging
oshift-dell05 IN A 10.16.160.184 # worker02 staging
oshift-dell06 IN A 10.16.160.185 # worker03 staging
----
Login to the `idrac` interface that corresponds with each worker, one at a time. Ensure the node is booting via hard drive, then power it on.

View file

@ -138,7 +138,7 @@ If there are VMs used for some of the roles, make sure to leave it in.
==== Baremetal
At this point we can switch on the baremetal nodes and begin the PXE/UEFI boot process. The baremetal nodes should, via DHCP/DNS, have the configuration necessary to reach out to the `noc01.rdu3.fedoraproject.org` server and retrieve the UEFI boot configuration via PXE.
Once booted up, you should visit the management console for this node, and manually choose the UEFI configuration appropriate for its role.
@ -187,14 +187,14 @@ This should look something like this once completed:
----
[root@os-control01 ocp4][STG]= oc get nodes
NAME STATUS ROLES AGE VERSION
ocp01.ocp.stg.rdu3.fedoraproject.org Ready master 34d v1.21.1+9807387
ocp02.ocp.stg.rdu3.fedoraproject.org Ready master 34d v1.21.1+9807387
ocp03.ocp.stg.rdu3.fedoraproject.org Ready master 34d v1.21.1+9807387
worker01.ocp.stg.rdu3.fedoraproject.org Ready worker 21d v1.21.1+9807387
worker02.ocp.stg.rdu3.fedoraproject.org Ready worker 20d v1.21.1+9807387
worker03.ocp.stg.rdu3.fedoraproject.org Ready worker 20d v1.21.1+9807387
worker04.ocp.stg.rdu3.fedoraproject.org Ready worker 34d v1.21.1+9807387
worker05.ocp.stg.rdu3.fedoraproject.org Ready worker 34d v1.21.1+9807387
----
At this point the cluster is basically up and running.
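As a final sanity check it is worth confirming that the cluster operators have all settled (standard `oc` subcommand):

----
oc get clusteroperators
----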

View file

@ -13,7 +13,7 @@ This can be retrieved once the cluster control plane has been installed, from th
oc get configmap kube-root-ca.crt -o yaml -n openshift-ingress
----
Extract this CACERT in full, and commit it to ansible at: `https://pagure.io/fedora-infra/ansible/blob/main/f/roles/haproxy/files/ocp.<ENV>-rdu3.pem`
To deploy this cert, one must be a part of the `sysadmin-noc` group. Run the following playbook:
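A typical invocation (the exact playbook and tag are assumptions; the proxies playbook carries the haproxy role):

----
sudo rbac-playbook groups/proxies.yml -t haproxy
----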

View file

@ -18,7 +18,7 @@ Contact::
Location::
All
Servers::
All RDU3 and VPN Fedora machines
Purpose::
Access via ssh to Fedora project machines.
@ -67,10 +67,10 @@ Host bastion.fedoraproject.org
User FAS_USERNAME (all lowercase)
ProxyCommand none
ForwardAgent no
Host *.rdu3.fedoraproject.org *.qa.fedoraproject.org 10.16.160.* 10.16.161.* 10.16.163.* 10.16.165.* 10.16.167.* 10.16.171.* *.vpn.fedoraproject.org
ProxyJump bastion.fedoraproject.org
Host batcave01
HostName %h.rdu3.fedoraproject.org
....
+
Note that there are 2 bastion servers: bastion01.fedoraproject.org
@ -106,7 +106,7 @@ To have SSH access to the servers, youll first need to add your public key to
You can configure Putty the same way by doing this:
[arabic, start=0]
. In the session section type _batcave01.rdu3.fedoraproject.org_ port 22
. In Connection:Data enter your FAS_USERNAME
. In Connection:Proxy add the proxy settings
@ -125,7 +125,7 @@ authentication you have used on FAS profile
You can use openssh from any terminal to access machines you are granted access to:
'ssh batcave01.rdu3.fedoraproject.org'
It's important to use the fully qualified domain name of the host you are trying
to access so that the certificate matches correctly. Otherwise you may get a

View file

@ -107,7 +107,7 @@ pushes the changes live to https://status.fedoraproject.org
[arabic]
. Run certbot to generate a certificate and have it signed by Let's Encrypt
(you can run this command anywhere certbot is installed, you can use
your laptop or _certgetter01.rdu3.fedoraproject.org_):
+
....
rm -rf ~/certbot
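# A sketch of the rest of the flow (the flags below are assumptions, using
# certbot's manual DNS challenge and keeping all state under ~/certbot):
certbot certonly --manual --preferred-challenges dns \
    --config-dir ~/certbot --work-dir ~/certbot --logs-dir ~/certbot \
    -d status.fedoraproject.org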

View file

@ -14,7 +14,7 @@ Owner:::
Contact:::
#fedora-admin, sysadmin-main
Servers:::
log01.rdu3.fedoraproject.org
Purpose:::
Provides our central logs and reporting
@ -40,7 +40,7 @@ outputted to `/srv/web/epylog/merged`
+
This path requires a username and a password to access. To add your
username and password you must first join the sysadmin-logs group then
login to `log01.rdu3.fedoraproject.org` and run this command:
+
....
htpasswd -m /srv/web/epylog/.htpasswd $your_username

View file

@ -23,7 +23,7 @@ follow their instructions completely.
+
____
[upperalpha]
.. Log into _batcave01.rdu3.fedoraproject.org_
.. search for the hostname in the file `/var/log/virthost-lists.out`:
+
....

View file

@ -114,6 +114,6 @@ values are correct:
+
....
volgroup: /dev/vg_guests
vmhost: virthost??.rdu3.fedoraproject.org
....
. Run the `noc.yml` ansible playbook to update nagios.
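For example (the path under the playbooks tree is an assumption):

....
sudo rbac-playbook groups/noc.yml
....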

View file

@ -48,7 +48,7 @@ https://pagure.io/waiverdb/issue/77
== Observing WaiverDB Behavior
Login to _os-master01.rdu3.fedoraproject.org_ as
_root_ (or authenticate remotely with openshift using
`oc login https://os.fedoraproject.org`), and run:

View file

@ -80,10 +80,10 @@ https://os.fedoraproject.org[Openshift webconsole] or by using the
openshift command line:
....
$ oc login os-master01.rdu3.fedoraproject.org
You must obtain an API token by visiting https://os.fedoraproject.org/oauth/token/request
$ oc login os-master01.rdu3.fedoraproject.org --token=<Your token here>
$ oc -n asknot get pods
asknot-28-bfj52 1/1 Running 522 28d
$ oc logs asknot-28-bfj52