persistentvolume-controller, "fedora-ostree-content-volume-2" already bound to a different claim #12555
I'm trying to migrate a few CoreOS related projects from DeploymentConfig to Deployment. The one I will be focusing on in this example is the fedora-ostree-pruner.
Currently the number of replicas is set to 1. After the build is complete, the replica being created remains in the Pending state. Upon some investigation, it was observed that the volume is already bound to a different claim:
This is true for staging... and as I already found out, also for production 👽
OpenShift Workloads Dashboard returns:
Describe what you would like us to do: Due to permission issues I am unable to dig deeper into the matter; it would be nice if I could get some assistance in figuring this out.
When do you need this to be done by? (YYYY/MM/DD) : ASAP
cc @cverna
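For context, a minimal sketch of why a volume reports "already bound to a different claim", assuming standard Kubernetes PV/PVC binding semantics (all names, sizes, and NFS details below are illustrative placeholders, not values from this cluster): once a PV's claimRef points at one PVC, the persistentvolume-controller will not bind any other PVC to it until that reference is cleared or the original claim is deleted.

```yaml
# Illustrative only: a PV whose claimRef still points at an older PVC.
# A new PVC requesting this volume stays Pending until the reference is cleared.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fedora-ostree-content-volume-2
spec:
  capacity:
    storage: 500Gi                          # illustrative size
  accessModes:
    - ReadWriteMany
  claimRef:                                 # set when the PV was first bound
    namespace: fedora-ostree-pruner         # assumed namespace, for illustration
    name: old-fedora-ostree-content-claim   # hypothetical name of the previous claim
  nfs:
    server: netapp.example.org              # placeholder, not the real NetApp host
    path: /exports/ostree/content           # placeholder path
```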
On stg we tried deleting the PVC and re-running the playbook, but that didn't seem to help.
Metadata Update from @zlopez:
You need to specify the storage class name in your PVC definition, otherwise it uses ocs-storagecluster-ceph-rbd by default.

If you want to use a netapp volume (NFS), keep the volumeName and use an empty storageClassName so the default is not applied (note that there is no PV fedora-ostree-content-volume-2 on stg, so the PVC will stay in Pending until an admin creates that PV).

If you want an RBD volume, then use storageClassName: ocs-storagecluster-ceph-rbd and remove the volumeName. ODF will provision a new volume for you automatically.

Note that you cannot use ReadWriteMany in Filesystem mode with RBD. If you want to use that combination, use the ocs-storagecluster-cephfs storage class instead.
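As a reference for the NetApp/NFS option described above, here is a minimal PVC sketch, assuming the standard Kubernetes pattern for binding to a pre-created PV (the claim name and requested size are illustrative, not taken from the actual pvc.yml.j2):

```yaml
# Sketch of the NetApp/NFS variant: bind to the pre-created PV and use an
# empty storageClassName so the default (ocs-storagecluster-ceph-rbd) is not applied.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-ostree-content-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi                           # illustrative size
  storageClassName: ""                         # disable dynamic provisioning
  volumeName: fedora-ostree-content-volume-2   # pre-created PV (missing on stg)
```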
storage class instead.@c4rt0 It looks like in coreos-ostree-importer we have a pvc template --> https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/coreos-ostree-importer/templates/pvc.yml.j2
Maybe we should have the same for fedora-ostree-pruner
I can see we already do, and from my understanding @darknao is referring to it. It's identical to the one in coreos-ostree-importer:
https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/fedora-ostree-pruner/templates/pvc.yml.j2
Thank you both for your answers. I think we can use ocs-storagecluster-cephfs, as it is probably a good idea to keep the already existing ReadWriteMany. To be completely honest, I'm not sure if we need the RBD volume. I will post an update here as soon as I test the above.
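A hedged sketch of what that CephFS option could look like, assuming the existing ReadWriteMany access mode is kept and ODF is left to provision the volume (name and size are illustrative):

```yaml
# Sketch of the CephFS variant: dynamic provisioning, so no volumeName is set.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-ostree-content-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteMany                        # supported by CephFS in Filesystem mode
  resources:
    requests:
      storage: 100Gi                       # illustrative size
  storageClassName: ocs-storagecluster-cephfs
```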
If the volume is shared between apps in and out of OpenShift, then the NetApp (NFS) volume makes sense.
If it's shared with other pods in the same namespace (or more than 1 replica running), then CephFS.
For any other use (single pod, 1 replica), then RBD provides the best performance.
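To complete the three cases above, a short sketch of the single-pod RBD variant (assumed names and sizes again, and ReadWriteOnce since RBD cannot do ReadWriteMany in Filesystem mode):

```yaml
# Sketch of the RBD variant: block-backed, best performance, single writer.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-ostree-scratch-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce                        # RWX in Filesystem mode is not supported on RBD
  resources:
    requests:
      storage: 50Gi                        # illustrative size
  storageClassName: ocs-storagecluster-ceph-rbd
```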
The volumes for fedora-ostree-pruner and coreos-ostree-importer are special because they are essentially maps into a netapp volume where the main (compose & prod) ostree repos are stored.
I guess we need the volume created in staging for us?
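If it helps, here is a heavily hedged sketch of the kind of PV an admin would create in staging for this; the NFS server, export path, and size are placeholders, not the real NetApp details:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fedora-ostree-content-volume-2
spec:
  capacity:
    storage: 500Gi                         # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain    # keep the data if the claim is deleted
  storageClassName: ""                     # matches a PVC that also sets ""
  nfs:
    server: netapp.example.org             # placeholder, not the real NetApp host
    path: /exports/ostree/content          # placeholder path
```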
We spent some time with @cverna on it today. We have the volume specified, but the pod is still hanging both in staging and in production. As you can see, our updated pvc file contains both storageClassName and volumeName :/

After reverting the changes in pvc.yml.j2 back to its original state, and after a manual binding of the volume by @kevin, the fedora-ostree-pruner in production is working once again as expected, using a Deployment.

Staging is still an issue.
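One plausible reason the combined spec stayed Pending (an assumption on my part, not confirmed in this thread): a PVC that sets volumeName only binds if its storageClassName matches the one on that PV, so pairing the pre-created NFS PV with a Ceph storage class would block the bind. A sketch of the assumed mismatched combination:

```yaml
# Assumed problematic combination: the PVC names a specific PV but also requests
# a storage class the PV does not carry, so the controller never binds it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-ostree-content-claim             # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi                            # illustrative size
  storageClassName: ocs-storagecluster-cephfs   # assumed value; mismatches the PV
  volumeName: fedora-ostree-content-volume-2    # PV presumably has storageClassName ""
```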
I created the PV in stg (fedora-ostree-content-volume-2), but the PVC still seems to refer to fedora-ostree-content-volume-1?
Metadata Update from @kevin:
I think this is all sorted now?
If not, please re-open...
Metadata Update from @kevin: