This note came from one of those storage tasks that are easy to get wrong if you try to reconstruct them from memory later.
The goal was not to build a new Ceph cluster from scratch. The goal was to connect a Kubernetes cluster to an existing Ceph deployment and expose it through Rook as external storage for workloads.
1. Collect the Ceph Cluster Details
Start on a Ceph node and capture the monitor map and the FSID:
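The exact commands were lost with the original note, but something along these lines, run with an admin keyring on any Ceph node, collects all three:

```shell
# Run on a Ceph node with admin access (a sketch, not the original commands)
ceph fsid          # the cluster FSID
ceph mon dump      # monitor names and their v1/v2 endpoints
ceph osd pool ls   # pool names, including the RBD pool
ceph fs ls         # CephFS filesystems and their data/metadata pools
```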
What I cared about here was:
- the Ceph FSID
- the monitor endpoints
- the pool names that Kubernetes would use later for RBD and CephFS
The original note contained real monitor names, IP addresses, and secrets, so those are omitted here.
2. Generate the External Cluster Resource Exports
Rook ships a helper script that can generate the environment values needed to import an external Ceph cluster.
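The script lives under deploy/examples/ in the Rook repository as create-external-cluster-resources.py. A sketch of the invocation, run on a Ceph node; the pool and filesystem names here are placeholders, not values from the original note:

```shell
# Needs the ceph CLI and an admin keyring on the node where it runs.
# <rbd-pool-name> and <cephfs-name> are placeholders for your real names.
python3 create-external-cluster-resources.py \
  --namespace rook-ceph-external \
  --rbd-data-pool-name <rbd-pool-name> \
  --cephfs-filesystem-name <cephfs-name> \
  --format bash
```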
That script prints a set of exported variables for:
- the namespace
- the Ceph FSID
- the monitor endpoints
- the RBD and CephFS pool names
- the provisioner and node secrets
Do not paste those secrets into a public post. Treat that output like credentials.
3. Import the External Ceph Configuration into Kubernetes
On a machine that has working kubectl access to the target cluster:
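My best reconstruction of this step, assuming the exported variables from step 2 are pasted into the shell first and Rook's import-external-cluster.sh helper (also in deploy/examples/) does the actual resource creation:

```shell
# 1. Paste the `export ...` lines printed by the generator script in step 2.
#    The namespace below is the script's default, shown as an example.
export NAMESPACE=rook-ceph-external

# 2. Run the import helper from the Rook repo.
bash import-external-cluster.sh
```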
If the import works, you should see the external namespace, the monitor ConfigMap, the secrets, and the storage classes created in the target cluster.
That part is important because it tells you the target cluster now has the credentials and metadata it needs to talk to the existing Ceph backend.
4. Install the Rook Operator and External Cluster Chart
Once the import step is done, install the operator and the external cluster chart:
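The Helm steps were probably close to the standard external-cluster install; a sketch, with namespace and value names I believe match the current charts but which should be checked against the Rook docs for your version:

```shell
helm repo add rook-release https://charts.rook.io/release
helm repo update

# The operator itself
helm install rook-ceph rook-release/rook-ceph \
  --namespace rook-ceph --create-namespace

# The cluster chart in external mode: connect to the existing Ceph
# deployment instead of provisioning mons and OSDs locally.
helm install rook-ceph-cluster rook-release/rook-ceph-cluster \
  --namespace rook-ceph \
  --set operatorNamespace=rook-ceph \
  --set cephClusterSpec.external.enable=true \
  --set cephClusterSpec.crashCollector.disable=true
```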
At this point I watched both the operator and the CephCluster resource:
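Something like this, assuming the chart namespace used above:

```shell
kubectl -n rook-ceph get pods -w          # operator coming up
kubectl -n rook-ceph get cephcluster -w   # connection progress
```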
5. Verify the External Cluster Connection
The state I wanted to reach was:
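For an external cluster, the CephCluster resource reports Connected rather than Ready once the link to the existing Ceph backend is up; roughly:

```shell
kubectl -n rook-ceph get cephcluster
# NAME        ...   PHASE       HEALTH
# rook-ceph   ...   Connected   HEALTH_OK
```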
And then:
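Then a listing of the storage classes the import created:

```shell
kubectl get storageclass
```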
In the original work log, the final result exposed both a ceph-rbd and a cephfs storage class.
6. Set the Default StorageClass if Needed
If this Ceph-backed RBD class should become the cluster default, patch it explicitly:
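A sketch of the patch, assuming the class is named ceph-rbd as in the work log; the annotation is the standard Kubernetes default-class marker:

```shell
kubectl patch storageclass ceph-rbd \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```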
I like making this explicit instead of assuming the right default will be chosen automatically.
What This Guide Is Really About
The hard part here is not the Helm command. It is keeping the flow straight:
- collect the external Ceph details
- generate the importable resources
- import them into Kubernetes
- install the Rook operator and external cluster chart
- verify the connection and the resulting storage classes
That is the part worth writing down.