This note is from the part of Ceph work that does not make for glamorous screenshots.
Before you get to the cluster itself, you still have to answer the basic questions:
- which nodes are storage nodes
- which disk is the root disk
- which disks are safe to wipe
- whether the network path is actually in place
1. Confirm the Node Inventory
The first step was simply making the target node list explicit.
That sounds obvious, but for storage work I like to see the hostnames written down in one place before I start touching disks.
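One low-tech way to make that list explicit, and sanity-check it at the same time, is a plain file of hostnames plus a resolution loop. The hostnames below are placeholders, not taken from the original note:

```shell
# storage-nodes.txt: one hostname per line (placeholder names).
cat > storage-nodes.txt <<'EOF'
storage-01
storage-02
storage-03
EOF

# Confirm every name actually resolves before touching any disks.
while read -r host; do
    if getent hosts "$host" >/dev/null; then
        echo "ok:      $host"
    else
        echo "MISSING: $host"
    fi
done < storage-nodes.txt
```

Anything flagged MISSING gets sorted out in DNS or /etc/hosts before the disk work starts.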
2. Inspect the Disk Layout
The note included a full lsblk sample from one of the storage nodes. That is the right instinct.
Before running any wipe commands, inspect the machine closely.
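The original lsblk capture is not reproduced here, but the inspection itself is easy to redo. A minimal pass, assuming a standard util-linux toolset, might look like:

```shell
# Show the full device tree: ROTA separates spinning disks (1)
# from SSDs/NVMe (0), and MOUNTPOINT flags anything in use.
lsblk -o NAME,SIZE,TYPE,ROTA,MODEL,MOUNTPOINT

# Independently confirm which device backs the root filesystem,
# so it never ends up on the wipe list.
findmnt -n -o SOURCE /
```

Cross-checking lsblk against findmnt is cheap insurance: the root device should appear in both before you trust your reading of the layout.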
The practical goal is not just “see disks.” It is:
- identify the root disk
- identify the large data disks
- identify any NVMe devices playing a different role (for example, BlueStore DB/WAL rather than OSD data)
On a dense storage node, that distinction matters.
3. Wipe Only the Intended Storage Devices
The original note used a full cleanup sequence for the data disks.
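That sequence is not preserved here. As a hedged sketch, a typical clean-slate wipe of a confirmed data disk clears partition tables and signatures, then zeroes the start of the device; /dev/sdX is a deliberate placeholder, and the guard clauses are illustrative additions:

```shell
DEV=/dev/sdX   # placeholder; set only after confirming this is a data disk

# Guard: bail out cleanly if the placeholder was never replaced.
[ -b "$DEV" ] || { echo "set DEV to a real block device first" >&2; exit 0; }

# Safety gate: refuse to touch a device with anything mounted on it.
if lsblk -rn -o MOUNTPOINT "$DEV" | grep -q .; then
    echo "refusing to wipe $DEV: mounted filesystems found" >&2
    exit 1
fi

sgdisk --zap-all "$DEV"   # destroy GPT and protective-MBR structures
wipefs --all "$DEV"       # clear filesystem, LVM, and RAID signatures
dd if=/dev/zero of="$DEV" bs=1M count=100 oflag=direct  # zero first 100 MiB
```

The mount check matters more than the wipe commands themselves: a data disk is wipeable, a mounted disk is somebody's running system.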
That is reasonable for a clean Ceph build, but only if you are absolutely certain those disks are storage targets rather than system devices.
This is exactly the kind of note where caution is more important than speed.
4. Fix the Routes Before Blaming Ceph
The note also captured a very practical network fix.
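The exact route from the note is not reproduced here. A sketch of the same class of fix, using an illustrative storage subnet of 10.0.40.0/24 with invented gateway and interface names (substitute your own values), looks like:

```shell
# Ask the kernel how it would reach the storage network right now.
ip route get 10.0.40.10 2>/dev/null || echo "no route to storage network yet"

# Add the missing route. Subnet, gateway, and interface are
# illustrative; this needs root and a real interface on the node.
ip route add 10.0.40.0/24 via 10.0.40.1 dev eth1 \
    || echo "route add failed: check gateway and interface names"

# Verify the path end to end before blaming Ceph.
ping -c 2 10.0.40.10 || echo "storage network still unreachable"
```

Making the route persistent (netplan, NetworkManager, or systemd-networkd, depending on the distro) is the follow-up step, since an `ip route add` alone does not survive a reboot.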
That is a useful reminder. Sometimes the thing blocking storage setup is not storage at all; it is just a node missing the route it needs.
Closing Thought
This is a preparation post, not a full Ceph build post. That is fine.
Good storage work usually starts with unglamorous discipline: know the nodes, know the disks, know the routes, and only then move on.