fix parsing errors and sphinx warnings

Signed-off-by: Ryan Lerch <rlerch@redhat.com>
This commit is contained in:
Ryan Lerch 2023-11-16 08:02:56 +10:00 committed by zlopez
parent 8fb9b2fdf0
commit ba720c3d77
98 changed files with 4799 additions and 4788 deletions

View file

@ -2,68 +2,68 @@ Authentication in Communishift
==============================
Resources
---------

- https://docs.fedoraproject.org/en-US/infra/ocp4/sop_configure_oauth_ipa/
- https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/ipsilon/#_create_openid_connect_secrets_

Discussion
----------

Would it be possible to have groups in Openshift linked to groups in FAS? Having a
central place to control group access would make authentication easier and more
transparent.

Identity provider
-----------------

The cluster was linked to the Fedora account system as a necessary precursor to the
investigation.

An openid secret was created in the private ansible repo using the `openid connect SOP
<https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/ipsilon/#_create_openid_connect_secrets_>`_.
The ipsilon playbook was then run to provision the secret.

To configure openshift, this `SOP
<https://docs.fedoraproject.org/en-US/infra/ocp4/sop_configure_oauth_ipa/>`_ was followed
to add an oauth client to allow access to the cluster.
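
The SOP covers the exact steps; purely for orientation, a minimal sketch of the resulting
OAuth configuration might look like the following (the identity provider name, client ID,
secret name and claim mappings are illustrative placeholders, not the values actually
configured):

.. code-block:: yaml

   # Illustrative sketch only: names and claim mappings are placeholders,
   # not the values used in the real cluster.
   apiVersion: config.openshift.io/v1
   kind: OAuth
   metadata:
     name: cluster
   spec:
     identityProviders:
       - name: fedora-idp
         mappingMethod: claim
         type: OpenID
         openID:
           clientID: communishift-oidc-client
           clientSecret:
             name: fedora-idp-client-secret
           issuer: https://id.fedoraproject.org
           claims:
             preferredUsername:
               - nickname
             name:
               - name
             email:
               - email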

Group access
------------

Ideally we would like to be able to map groups from IPA to Communishift as this would
make adding and removing users from projects easier to manage.

Openshift supports group integration with ldap servers, and ipa is an ldap based server,
but it was deemed not secure to allow an external application to have access to our
internal ipa server.

Openshift also supports group mapping from openid clients, which would be the preferred
course of action for us as we are using ipsilon anyway. However, this is not yet
supported in Openshift Dedicated. OSD support have said there is an RFE for this to be
added, but the ticket is private internally so we cannot track progress on it.

Conclusion
----------

As the supported solutions are not suitable, it would be necessary to create a custom
solution to carry out group mappings. This could be in the form of an openshift
operator, a toddler or an ansible script run on a cron job.

Namespaced groups would need to be created in IPA, such as communishift-<project>, and
the users added with a sponsor for each group. These would then need to be automatically
replicated in Communishift.

A possible skeleton solution would be to:

- Periodically call fasjson for any group that begins with communishift- (but not
  **communishift** itself, as this group already exists and is separate).
- Get the list of users for that group.
- Check if the group exists in openshift and create it if not (a sketch of such a group
  object is shown below).
- Check the list of users in fasjson against the list in Openshift and add/remove users
  if necessary.

Optional:

- Get the list of sponsors for the group in fasjson and use these to grant admin-level
  rbac permissions, while all other members of the group get basic user access.
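
Whichever form the automation takes (operator, toddler or cron-driven ansible), the
objects it would manage in the cluster are plain Openshift group and RBAC resources. A
minimal sketch, with placeholder group, user and namespace names, could look like:

.. code-block:: yaml

   # Placeholder names throughout; the real naming scheme would follow the
   # communishift-<project> convention described above.
   apiVersion: user.openshift.io/v1
   kind: Group
   metadata:
     name: communishift-someproject
   users:
     - alice
     - bob
   ---
   # Sponsors could be bound to the built-in "admin" cluster role in the
   # project namespace, while regular members keep basic access. The
   # separate sponsors group is an assumption, not a decided design.
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: communishift-someproject-sponsors
     namespace: communishift-someproject
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: admin
   subjects:
     - apiGroup: rbac.authorization.k8s.io
       kind: Group
       name: communishift-someproject-sponsors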

View file

@ -2,7 +2,7 @@ Communishift
============

Purpose
-------

Investigate what is needed in order to run a community-focused Openshift instance.
@ -10,13 +10,13 @@ Identify possible bottlenecks, issues and whatever the CPE team needs to develop
components/services for this new Openshift instance.

Resources
---------

- https://docs.openshift.com/dedicated/storage/persistent_storage/osd-persistent-storage-aws.html
- https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html

Investigation
-------------

The team discussed the following topics:
@ -27,66 +27,75 @@ The team discussed the following topics:
resource-quota
storage

Conclusions
-----------

- The cluster can leverage EFS to provision volumes (using the AWS EFS operator from the
  Operator Marketplace) and an extra ansible playbook to automate part of the process;
- Quotas can be enforced by creating an Openshift operator that watches all user
  namespaces;
- Authentication groups can be automatically synced between FasJSON and Openshift with
  a new Operator.

Proposed Roadmap
----------------

AWS EFS Ansible Playbook
~~~~~~~~~~~~~~~~~~~~~~~~

One needs to provide some AWS info when creating a volume using the EFS operator; a
sample resource is shown below:

.. code-block:: yaml

   apiVersion: aws-efs.managed.openshift.io/v1alpha1
   kind: SharedVolume
   metadata:
     name: sv1
     namespace: default
   spec:
     accessPointID: fsap-0123456789abcdef
     fileSystemID: fs-0123cdef

Both "accessPointID" and "fileSystemID" are generated by AWS, with an "accessPointID"
being generated for every PVC that gets provisioned in the cluster.

An ansible playbook comes into play to automate the process of creating an "accessPoint"
for a namespace, which should be requested in an infra ticket when requesting the
creation of a new namespace in the cluster.
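
The playbook itself has not been written yet; a minimal sketch of one of its tasks,
assuming the aws CLI is available on the host running the play and that the filesystem ID
and namespace name are supplied as variables, might look like:

.. code-block:: yaml

   # Hypothetical task: create an EFS access point for a new namespace.
   # The variables (efs_filesystem_id, namespace), UID/GID and tag values
   # are placeholders, not what Fedora Infrastructure actually uses.
   - name: Create EFS access point for the namespace
     command: >
       aws efs create-access-point
       --file-system-id {{ efs_filesystem_id }}
       --tags Key=Name,Value=communishift-{{ namespace }}
       --posix-user Uid=1000,Gid=1000
       --root-directory Path=/{{ namespace }},CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}
     register: access_point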

Fedora Cloud Quota Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~

An operator can be created to enforce a namespace's resource quota.

The operator would watch for namespaces with specific tags/annotations (TBD) and apply
the required quotas in those namespaces.

The quotas themselves are applied by creating a `ResourceQuota` object in the namespace
it is supposed to manage:
https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html.
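
The tag/annotation the operator keys on is still to be decided; purely as an
illustration, a namespace that opted in to quota management could look something like
this (the annotation name and value are placeholders):

.. code-block:: yaml

   # The annotation below is a placeholder; the real tagging scheme is TBD.
   apiVersion: v1
   kind: Namespace
   metadata:
     name: communishift-someproject
     annotations:
       communishift.fedoraproject.org/quota-profile: "default"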

Fedora FasJSON Sync Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An operator can be used to ensure FasJSON groups are synced with the cluster groups
used by Openshift roles.

This operator would retrieve group information every N seconds from FasJSON and
apply the changes in the cluster, ensuring synchronization between the two systems.

Food for thought: it would be interesting if FasJSON published group changes to
fedora-messaging.

Team and Skills
---------------

A team of three individuals should be able to deliver the proposed roadmap in ~6 weeks
(2-week sprints, one sprint per component), assuming the following technical skills:

- Kubernetes basic concepts/usage;
- API or previous operator knowledge is a plus;
- Ansible basic usage;
- AWS API knowledge is a plus.

It might be a good opportunity to learn about Kubernetes and Operator/Controller
development.

View file

@ -4,7 +4,7 @@ Resource Quota
Resources
---------

- https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html

Discussion
----------
@ -14,51 +14,56 @@ How to limit resource usage per namespace, such as memory, storage and so on.
What would be needed
--------------------

A ResourceQuota object needs to be created in the namespace it is supposed to
manage/control.

This object could be automatically managed by an operator for each new namespace that
gets created (properly tagged) for community users.

Limits can go from storage, memory and cpu usage to the number of objects (limiting the
namespace to a maximum of 5 secrets, for example).

Sample object definition:

.. code-block:: yaml

   apiVersion: v1
   kind: ResourceQuota
   metadata:
     name: app-quota
   spec:
     hard:
       # compute
       cpu: "1"                   # requests.cpu
       memory: "1Gi"              # requests.memory
       ephemeral-storage: "10Gi"  # requests.ephemeral-storage
       limits.cpu: "2"
       limits.memory: "2Gi"
       limits.ephemeral-storage: "10Gi"
       # storage
       requests.storage: "10Gi"
       persistentvolumeclaims: "1"
       # <storage-class-name>.storageclass.storage.k8s.io/requests.storage
       # <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims
       # object counts
       pods: "1"
       replicationcontrollers: 1
       # resourcequotas: 1
       # services: 1
       # services.loadbalancers: 1
       # services.nodeports: 1
       # secrets: 1
       # configmaps: 1
       # openshift.io/imagestreams: 1
     # scopes:
     # https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html#quotas-scopes_quotas-setting-per-project
     # - Terminating
     # - NotTerminating
     # - BestEffort
     # - NotBestEffort

Conclusion
----------

It can be easily achieved by creating a namespaced resource and can be automated with
an Openshift Operator.

View file

@ -4,28 +4,35 @@ Storage
Resources
---------

- https://docs.openshift.com/dedicated/storage/persistent_storage/osd-persistent-storage-aws.html

Discussion
----------

Find an Openshift storage backend solution so applications can use persistent
volumes/storage when needed.

What would be needed
--------------------

The AWS EFS operator can be installed from the Operator Marketplace to provision volumes
in Openshift.

There is a problem where each volume requires an access point to be created in a file
system in AWS; this is a manual process.

This process can be automated with an ansible playbook, as each PVC object will need its
own access point; storage for a namespace can be requested through an infra ticket.

AWS does not apply any limits to the created volume, so that control needs to be managed
in Openshift.

Conclusion
----------

The AWS EFS operator is the most straightforward path to support persistent storage in
Communishift.

There is a manual step which requires the creation of an access point in AWS, but that
can be automated with ansible.