
Destroy CEPH Volume

Create a tools pod and exec into it

This is needed when the OSD prepare job skips a disk because it still belongs to a previous cluster, for example:

  • skipping osd.0: "438c2569-ee98-4279-b87d-a72849d9a6b3" belonging to a different ceph cluster "84a66cf7-9560-4ba8-9993-f3687e7dddd6"

If you instead get something like "not a block device" when zapping, you are probably not on the right node.
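A quick way to spot that message is in the OSD prepare job logs; this sketch assumes the default Rook label app=rook-ceph-osd-prepare:

```sh
# Look for the "belonging to a different ceph cluster" line
kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare --tail=100
```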

Find the right tools image version to use

The version reported here maps to the image tag used in the next step (e.g. ceph version 19.2.3 → quay.io/ceph/ceph:v19.2.3).

```sh
kubectl -n rook-ceph exec deploy/rook-ceph-mon-a -- ceph --version
```

Create temporary pod

Replace cb-node-1 in the nodeSelector with the name of the node that hosts the disk you want to wipe.

```sh
kubectl -n rook-ceph run rook-ceph-tools \
  --image=quay.io/ceph/ceph:v19.2.3 \
  --restart=Never \
  --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "nodeSelector": {
      "kubernetes.io/hostname": "cb-node-1" # replace this by your node
    },
    "hostPID": true,
    "containers": [
      {
        "name": "rook-ceph-tools",
        "stdin": true,
        "tty": true,
        "securityContext": {
          "privileged": true
        },
        "volumeMounts": [
          {
            "name": "dev",
            "mountPath": "/dev"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "dev",
        "hostPath": {
          "path": "/dev"
        }
      }
    ]
  }
}'

kubectl -n rook-ceph exec -it rook-ceph-tools -- bash
```
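The pod may need a moment to pull the image and start; an optional wait before the exec avoids "container not running" errors:

```sh
# Block until the temporary tools pod is Ready (up to 2 minutes)
kubectl -n rook-ceph wait --for=condition=Ready pod/rook-ceph-tools --timeout=120s
```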

Destroy ceph data

```sh
# replace /dev/nvme0n1 with the disk that still belongs to the old cluster
ceph-volume lvm zap /dev/nvme0n1 --destroy
```
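To confirm the zap worked before moving on, a quick check from the same tools pod (still using the example device /dev/nvme0n1) should show the disk without any remaining Ceph LVM volumes:

```sh
# No ceph LVs should be listed and the disk should report as available
lsblk /dev/nvme0n1
ceph-volume inventory /dev/nvme0n1
```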

Force rescan drive

Delete the rook-ceph-operator pod and wait; when it restarts, the operator re-runs the OSD prepare jobs and picks up the cleaned disk.
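One way to do that, assuming the default app=rook-ceph-operator label on the operator pod:

```sh
# Restart the Rook operator so it reconciles and rescans the drives
kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
# Watch the OSD prepare jobs come back and consume the cleaned disk
kubectl -n rook-ceph get pods -w
```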