Volume copr-fe-dev-db attached to None on /dev/vdc #7970

Closed
opened 2019-07-08 16:23:01 +00:00 by frostyx · 12 comments

I, unfortunately, happened to get the copr-fe-dev-db volume into a state where it shows as

Attached to None on /dev/vdc

I was trying to upgrade the copr-fe-dev instance from F28 to F30 and, instead of terminating the instance, I renamed it and shut it down. Then I provisioned a new one, and when the volume didn't mount into the new instance, I deleted the old one in order to fix the issue. Unfortunately, I made it much worse
and now I really don't know how to fix it.

ok, I poked the database to get this back to the available state.

Can you try and attach it now?
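
(For reference: when a Cinder volume gets stuck showing "attached to None", an admin can usually reset its state through the CLI rather than editing the database directly. A minimal sketch, assuming admin credentials and the volume name used in this ticket:

    # reset a stuck volume back to "available" (admin-only operation)
    openstack volume set --state available copr-fe-dev-db
    # on older clients the equivalent is:
    #   cinder reset-state --state available copr-fe-dev-db

The exact path depends on the client version deployed in the cloud, so treat this as a sketch, not the command that was actually run here.)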

Metadata Update from @kevin:

  • Issue assigned to kevin
  • Issue priority set to: Waiting on Reporter (was: Needs Review)
Author

Unfortunately, it still doesn't work. Well, it is not attached to None anymore, but I still get

TASK [copr/frontend-cloud : mount up disk of copr fe] ****************************************************************************************************************************************************************************************
Monday 08 July 2019  19:06:39 +0000 (0:00:00.271)       0:37:13.372 *********** 
fatal: [copr-fe-dev.cloud.fedoraproject.org]: FAILED! => {"changed": false, "msg": "Error mounting /srv/copr-fe: mount: /srv/copr-fe: can't find LABEL=copr-fe.\n"}

from the playbook.

Are you able to log in to the system at all? It might need some work from the console with fdisk to see if the drive still has partitions or labels.
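
(A hedged sketch of how that check could look from inside the instance; the device /dev/vdc and the label copr-fe come from the playbook error above, while the filesystem type is an assumption:

    # list block devices with their filesystem type and label
    lsblk -f
    # or query the attached device directly
    blkid /dev/vdc
    # if the filesystem is intact but lost its label, it could be re-set, e.g.:
    #   e2label /dev/vdc copr-fe        # ext2/3/4
    #   xfs_admin -L copr-fe /dev/vdc   # xfs

If blkid shows no filesystem at all, relabeling will not help and the data would need to be recovered another way.)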

Metadata Update from @smooge:

  • Issue priority set to: Needs Review (was: Waiting on Reporter)
Author

@smooge, yep, I am able to log in via SSH. What should I do?

Metadata Update from @smooge:

  • Issue assigned to smooge (was: kevin)

Metadata Update from @smooge:

  • Issue priority set to: Waiting on Assignee (was: Needs Review)
  • Issue tagged with: cloud
Author

I've done some magic based on suggestions from @msuchy and got it working with the following procedure. I have:

  1. Detached copr-fe-dev-db volume from copr-fe-dev instance
  2. Terminated copr-fe-dev instance because it was unable to boot
  3. Executed playbook for provisioning copr-fe-dev instance
  4. Created snapshot copr-fe-dev-db-snapshot from copr-fe-dev-db
  5. Created copr-fe-dev-db-2 volume from copr-fe-dev-db-snapshot snapshot
  6. Attached copr-fe-dev-db-2 volume to copr-fe-dev instance
    - As /dev/vdb instead of the desired /dev/vdc; I don't know how to change it (see the CLI sketch below)
  7. Executed playbook for provisioning copr-fe-dev instance again

Now I have the volume available and mounted, the original data is there and the database is working.

Do I need to follow this up with some other steps, or should I just update the volume descriptions and leave it be?
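
(For reference, the detach/snapshot/clone/attach procedure above maps roughly to these OpenStack CLI calls; the server, volume, and snapshot names are the ones used in this ticket, and the exact syntax depends on the client version, so treat this as a sketch rather than the commands actually run:

    openstack server remove volume copr-fe-dev copr-fe-dev-db                           # step 1
    openstack server delete copr-fe-dev                                                 # step 2
    openstack volume snapshot create --volume copr-fe-dev-db copr-fe-dev-db-snapshot    # step 4
    openstack volume create --snapshot copr-fe-dev-db-snapshot copr-fe-dev-db-2         # step 5
    openstack server add volume copr-fe-dev copr-fe-dev-db-2 --device /dev/vdc          # step 6

Note that --device is only a hint; with KVM guests the kernel may still enumerate the disk as /dev/vdb, which is likely why the volume showed up there instead of /dev/vdc.)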

Given how hopelessly fragile it is, I would say update the descriptions and call it a day. :)

I agree with kevin. Thanks to msuchy on that.

Drives are named in the order they are seen. For the old system to have had a /dev/vdc, there must have been a /dev/vdb at some point, so it probably got that name from a similar mount/move in the past when it was rebuilt.
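
(Since device names depend on enumeration order, mounting by label, as the playbook already does with LABEL=copr-fe, is what keeps the mount stable regardless of whether the disk comes up as /dev/vdb or /dev/vdc. An illustrative fstab-style entry, with the filesystem type assumed:

    LABEL=copr-fe  /srv/copr-fe  ext4  defaults  0  0

The actual entry and options are managed by the playbook.)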

Metadata Update from @kevin:

  • Issue close_status updated to: Fixed
  • Issue status updated to: Closed (was: Open)