move communishift docs to correct place

Signed-off-by: Mark O Brien <markobri@redhat.com>
Mark O Brien 2022-06-15 11:05:14 +01:00
parent d26f555290
commit 078026ee34
5 changed files with 3 additions and 2 deletions


@@ -0,0 +1,37 @@
Authentication in Communishift
==============================
Adding FAS
**********
This was set up as it was necessary for the investigation. An OIDC secret was created in the ansible private repo following this SOP: https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/ipsilon/#_create_openid_connect_secrets_for_apps
The ipsilon playbook was then run to provision the secret.
To configure Openshift, this SOP was followed to add an OAuth client to allow access to the cluster: https://docs.fedoraproject.org/en-US/infra/ocp4/sop_configure_oauth_ipa/
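
For reference, below is a minimal sketch of the kind of OpenID identity provider entry that SOP sets up in the cluster OAuth configuration. The provider name, client ID, secret name and claim mapping are illustrative assumptions, not the actual Communishift values:

```
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: fedora-ipsilon            # assumption: provider name
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: communishift        # assumption: the client registered in ipsilon
        clientSecret:
          name: fedora-oidc-secret    # assumption: secret provisioned from the private repo
        claims:
          preferredUsername:
            - nickname                # assumption: claim exposed by ipsilon
        issuer: https://id.fedoraproject.org
```
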
Group access
************
Ideally we would like to be able to map groups from IPA to Communishift, as this would make adding and removing users from projects easier to manage.
Openshift supports group integration with LDAP servers, and IPA is an LDAP-based server; however, it was deemed insecure to allow an external application to have access to our internal IPA server.
Openshift also supports group mapping from OpenID clients, which would be the preferred course of action for us as we are already using ipsilon. However, this is not yet supported in Openshift Dedicated. OSD support have said there is an RFE for this to be added, but the ticket is internal and private, so we cannot track progress on it.

Custom application
******************
As the supported solutions are not suitable, it would be necessary to create a custom solution to carry out group mappings. This could be in the form of an Openshift operator, a toddler or an ansible script run on a cron job.
Namespaced groups would need to be created in IPA, such as communishift-<project>, and the users added with a sponsor for each group. These would then need to be automatically replicated in Communishift.
A possible skeleton solution (see the playbook sketch after this list) would be to:
* Periodically call fasjson for any group that begins with communishift- (but not **communishift** as this already exists and is separate).
* Get the list of users for that group.
* Check if the group exists in Openshift and create it if not.
* Check the list of users in fasjson against the list in Openshift and add/remove users as necessary.
Optional:
* Get the list of sponsors for the group in fasjson and use these to set RBAC permission levels: sponsors get admin access, while all other members of the group get basic user access.
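
A hedged sketch of that skeleton as an ansible playbook (the cron-job variant): the fasjson endpoints below are its public v1 API, but authentication (fasjson expects Kerberos/GSSAPI), pagination and the exact response shapes are assumptions to verify:

```
- hosts: localhost
  gather_facts: false
  tasks:
    - name: List groups from fasjson (pagination omitted for brevity)
      ansible.builtin.uri:
        url: "https://fasjson.fedoraproject.org/v1/groups/"
        return_content: true
      register: fasjson_groups

    - name: Fetch the members of each communishift-<project> group
      ansible.builtin.uri:
        url: "https://fasjson.fedoraproject.org/v1/groups/{{ item.groupname }}/members/"
        return_content: true
      loop: "{{ fasjson_groups.json.result | selectattr('groupname', 'match', 'communishift-') | list }}"
      register: group_members

    - name: Create or update the matching Openshift group with exactly these members
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: user.openshift.io/v1
          kind: Group
          metadata:
            name: "{{ item.item.groupname }}"
          users: "{{ item.json.result | map(attribute='username') | list }}"
      loop: "{{ group_members.results }}"
```

Because the `match` test anchors at the start of the string, the plain **communishift** group is excluded while every communishift-<project> group is picked up.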


@@ -0,0 +1,94 @@
Communishift
============
Purpose
*******
Investigate what is needed in order to run a community-focused Openshift instance.
Identify possible bottlenecks and issues, and whatever new components/services the
CPE team needs to develop for this new Openshift instance.

Resources
*********
* https://docs.openshift.com/dedicated/storage/persistent_storage/osd-persistent-storage-aws.html
* https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html
Investigation
*************
The team discussed the following topics:

.. toctree::
   :maxdepth: 1

   authentication
   resource-quota
   storage

Conclusions
***********
* The cluster can leverage EFS to provision volumes (using the AWS EFS operator from the Operator Marketplace) and an extra
  ansible playbook to automate part of the process;
* Quotas can be enforced by creating an Openshift operator that watches all user namespaces;
* Authentication groups can be automatically synced between FasJSON and Openshift with a new operator.

Proposed Roadmap
****************
AWS EFS Ansible Playbook
------------------------
One needs to provide some AWS info when creating a volume using the EFS operator; a sample resource is shown below:
```
apiVersion: aws-efs.managed.openshift.io/v1alpha1
kind: SharedVolume
metadata:
  name: sv1
  namespace: default
spec:
  accessPointID: fsap-0123456789abcdef
  fileSystemID: fs-0123cdef
```

Both "accessPointID and fileSystemID" are generated by AWS with "accessPointID" being generated for every PVC that gets provisioned in the cluster.
An ansible playbook comes into play to automate the process of creating an "accessPoint" for a namespace whichs should be request in an
infra ticket when requesting the creation of a new namespace in the cluster.
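
A minimal sketch of what that playbook could look like, shelling out to the AWS CLI and then declaring the `SharedVolume` for the operator; the variable names, tagging and naming scheme are assumptions, and a real playbook would need an idempotency check before creating the access point:

```
- hosts: localhost
  gather_facts: false
  vars:
    namespace: my-project        # assumption: taken from the infra ticket
    file_system_id: fs-0123cdef  # assumption: the shared EFS file system
  tasks:
    - name: Create an EFS access point for the namespace
      ansible.builtin.command: >
        aws efs create-access-point
        --file-system-id {{ file_system_id }}
        --tags Key=Name,Value={{ namespace }}
      register: access_point

    - name: Declare the SharedVolume consumed by the AWS EFS operator
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: aws-efs.managed.openshift.io/v1alpha1
          kind: SharedVolume
          metadata:
            name: "{{ namespace }}-volume"
            namespace: "{{ namespace }}"
          spec:
            accessPointID: "{{ (access_point.stdout | from_json).AccessPointId }}"
            fileSystemID: "{{ file_system_id }}"
```
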
Fedora Cloud Quota Operator
---------------------------
An operator can be created to ensure a namespace's resource quota.
The operator would watch for namespaces with specific tags/annotations (TBD) and apply the required quotas in those namespaces.
The quotas themselves are applied by creating a `ResourceQuota` object
in the namespace it is supposed to manage: https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html.
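
Since the exact tags/annotations are TBD, the following is only a hypothetical example of the marker the operator could watch for on community namespaces:

```
apiVersion: v1
kind: Namespace
metadata:
  name: communishift-my-project
  labels:
    communishift.fedoraproject.org/quota: "default"  # assumption: label name and values are TBD
```
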
Fedora FasJSON Sync Operator
----------------------------
An operator can be used to ensure FasJSON groups are synched with the cluster groups used by Openshift roles.
This operator would retrieve group information every N seconds from FasJSON and apply the changes in the cluster,
ensuring synchronization between the two systems.
Food for thought: it would be interesting if FasJSON published group changes to fedora-messaging.
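
Concretely, the operator would maintain one standard Openshift `Group` object per FasJSON group; the name and members below are placeholders:

```
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: communishift-my-project   # assumption: mirrors the FasJSON group name
users:
  - alice
  - bob
```
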
Team and Skills
***************
A team of three individuals should be able to deliver the proposed roadmap in ~6 weeks (2-week sprints, one sprint per component),
assuming the following technical skills:
* Kubernetes basic concepts/usage;
* API or previous operator knowledge is a plus;
* Ansible basic usage;
* AWS API knowledge is a plus.
It might be a good opportunity to learn about Kubernetes and Operator/Controller development.


@@ -0,0 +1,66 @@
Resource Quota
==============
Resources
---------
* https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html
Discussion
----------
How to limit resource usage per namespace, such as memory, storage and so on.
What would be needed
--------------------
A ResourceQuota object needs to be created in the namespace it is supposed to manage/control.
This object could be automatically managed by an operator for each new namespace
that gets created (properly tagged) for community users.
Limits can range from storage, memory and CPU usage to the number of objects (limiting the namespace to a max. of 5 secrets, for example).
Sample object definition:
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
spec:
  hard:
    # compute
    cpu: "1" # requests.cpu
    memory: "1Gi" # requests.memory
    ephemeral-storage: "10Gi" # requests.ephemeral-storage
    limits.cpu: "2"
    limits.memory: "2Gi"
    limits.ephemeral-storage: "10Gi"
    # storage
    requests.storage: "10Gi"
    persistentvolumeclaims: "1"
    # <storage-class-name>.storageclass.storage.k8s.io/requests.storage
    # <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims
    # object counts
    pods: "1"
    replicationcontrollers: 1
    # resourcequotas: 1
    # services: 1
    # services.loadbalancers: 1
    # services.nodeports: 1
    # secrets: 1
    # configmaps: 1
    # openshift.io/imagestreams: 1
  # scopes:
  # https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html#quotas-scopes_quotas-setting-per-project
  # - Terminating
  # - NotTerminating
  # - BestEffort
  # - NotBestEffort
```

Conclusion
----------
It can easily be achieved by creating a namespaced resource and can be automated with an Openshift Operator.


@@ -0,0 +1,31 @@
Storage
=======
Resources
---------
* https://docs.openshift.com/dedicated/storage/persistent_storage/osd-persistent-storage-aws.html
Discussion
----------
Find an Openshift storage backend solution so applications can use persistent volumes/storage when needed.
What would be needed
--------------------
The AWS EFS operator can be installed from the Operator Marketplace to provision volumes in Openshift.
There is a problem where each volume requires an access point to be created in a file system in AWS; this is a manual process.
This process can be automated with an ansible playbook, as each PVC object will need its own access point; requesting
storage for a namespace can be done through an infra ticket.
AWS does not apply any limits to the created volume; that control needs to be managed in Openshift.
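
Once the operator reconciles a `SharedVolume`, workloads consume the resulting PVC as usual. A hypothetical pod is sketched below; the claim name is an assumption, as the operator generates the PVC and its actual name should be checked in the namespace:

```
apiVersion: v1
kind: Pod
metadata:
  name: efs-test
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: efs-data
  volumes:
    - name: efs-data
      persistentVolumeClaim:
        claimName: sv1   # assumption: the PVC generated for SharedVolume "sv1"
```
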
Conclusion
----------
The AWS EFS operator is the most straightforward path to supporting persistent storage in Communishift.
There is a manual step, the creation of an access point in AWS, but that can be automated with ansible.


@@ -16,16 +16,17 @@ Completed review
----------------

.. toctree::
   :maxdepth: 1

   mailman3/index
   pdc/index
   flask-oidc/index
   communishift/index

Implemented
-----------

.. toctree::
   :maxdepth: 1

   datanommer_datagrepper/index
   monitoring_metrics/index
   mirrors-countme/index