refactor communishift docs
Signed-off-by: Mark O Brien <markobri@redhat.com>
Authentication in Communishift
==============================

Resources
*********

* https://docs.fedoraproject.org/en-US/infra/ocp4/sop_configure_oauth_ipa/
* https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/ipsilon/#_create_openid_connect_secrets_

Discussion
**********

Would it be possible to have groups in Openshift linked to groups in FAS? Having
a central place to control group access would make authentication easier and
more transparent.

Identity provider
*****************

The cluster was linked to the Fedora account system as a necessary precursor to
the investigation.

An openid secret was created in the private ansible repo using the
`openid connect SOP <https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/ipsilon/#_create_openid_connect_secrets_>`_.
The ipsilon playbook was then run to provision the secret.

To configure Openshift, this
`SOP <https://docs.fedoraproject.org/en-US/infra/ocp4/sop_configure_oauth_ipa/>`_
was followed to add an oauth client to allow access to the cluster.
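For illustration, the oauth client added by that SOP ends up as an OpenID
identity provider entry in the cluster's OAuth configuration. A minimal sketch
of such an entry (the provider name, client id, and secret name below are
placeholders, not our actual values):

```
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: fedora-idp            # placeholder name
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: communishift    # placeholder client id
        clientSecret:
          # Secret created from the oidc secret provisioned by ipsilon
          name: fedora-idp-client-secret
        claims:
          preferredUsername:
            - nickname
          name:
            - name
          email:
            - email
        issuer: https://id.fedoraproject.org
```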

Group access
************

Ideally we would like to be able to map groups from IPA to Communishift, as this
would make adding and removing users from projects easier to manage.

Openshift supports group integration with LDAP servers, and IPA is an LDAP-based
server; however, it was deemed insecure to allow an external application to have
access to our internal IPA server.

Openshift also supports group mapping from openid clients, which would be the
preferred course of action for us, as we are using ipsilon anyway. However, this
is not yet supported in Openshift Dedicated. OSD support have said there is an
RFE for this to be added, but the ticket is private internally, so we cannot
track progress on it.

Conclusion
**********

As the supported solutions are not suitable, it would be necessary to create a
custom solution to carry out group mappings. This could be in the form of an
Openshift operator, a toddler, or an ansible script run on a cron job.

Namespaced groups would need to be created in IPA, such as
communishift-<project>, and the users added with a sponsor for each group. These
would then need to be automatically replicated in Communishift.

A possible skeleton solution would be to:

* Periodically call fasjson for any group that begins with communishift-
  (but not **communishift** as this already exists and is separate).
* Get the list of users for that group.
* Check if the group exists in Openshift and create it if not.
* Check the list of users in fasjson against the list in Openshift and
  add/remove users if necessary.

Optional:

* Get the list of sponsors for the group in fasjson and use these to set rbac
  permission levels: sponsors get admin access, and all other members of the
  group have basic user access.
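The steps above can be sketched as follows. This is a sketch only: the fasjson
and Openshift lookups are stubbed out as plain dicts of group name to set of
usernames, and ``plan_sync`` and the sample group names are hypothetical, not an
existing API.

```
# Sketch of the group-sync skeleton. Real fasjson and Openshift API calls
# are replaced by plain dicts: {group_name: {usernames}}.

def plan_sync(fasjson_groups, openshift_groups):
    """Work out, per communishift-* group, which Openshift groups to
    create and which users to add or remove."""
    plan = {}
    for group, fas_users in fasjson_groups.items():
        # Skip anything that is not communishift-<project>, including the
        # pre-existing, separate 'communishift' group itself.
        if not group.startswith("communishift-"):
            continue
        ocp_users = openshift_groups.get(group, set())
        plan[group] = {
            "create": group not in openshift_groups,  # create group if missing
            "add": fas_users - ocp_users,             # in fasjson, not in Openshift
            "remove": ocp_users - fas_users,          # in Openshift, not in fasjson
        }
    return plan
```

A toddler or cron job would then apply each planned change through the
Openshift API instead of just returning it.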
Sample object definition:
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
spec:
  hard:
    # compute
    cpu: "1"                          # requests.cpu
    memory: "1Gi"                     # requests.memory
    ephemeral-storage: "10Gi"         # requests.ephemeral-storage
    limits.cpu: "2"
    limits.memory: "2Gi"
    limits.ephemeral-storage: "10Gi"
    # storage
    requests.storage: "10Gi"
    persistentvolumeclaims: "1"
    # <storage-class-name>.storageclass.storage.k8s.io/requests.storage
    # <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims
    # object counts
    pods: "1"
    replicationcontrollers: "1"
    # resourcequotas: "1"
    # services: "1"
    # services.loadbalancers: "1"
    # services.nodeports: "1"
    # secrets: "1"
    # configmaps: "1"
    # openshift.io/imagestreams: "1"
  # scopes:
  # https://docs.openshift.com/container-platform/4.6/applications/quotas/quotas-setting-per-project.html#quotas-scopes_quotas-setting-per-project
  # - Terminating
  # - NotTerminating
  # - BestEffort
  # - NotBestEffort
```

Conclusion
----------