Configure Ceph in Proxmox 9.x the easy way, as you've never done it before

I spent a considerable amount of time researching and testing different scenarios to deploy Ceph in my Proxmox cluster. Although the process should be straightforward, it actually took me weeks due to outdated tutorials and articles based on older versions. The steps below are the result of multiple tests and references—most of which are not clearly documented, and not everyone would be able to follow or understand them. I’ve now validated the full procedure in my lab, and it has been running successfully for three days without any issues.

Hardware Setup

• 3× Minisforum MS-01 nodes

• Thunderbolt 4 cables

• 4TB NVMe drives (recommended: Samsung EVO Plus for high read/write performance)

• Unifi USW Pro Aggregation switch

Prerequisites

• Thunderbolt cables must be connected during the Proxmox installation. This is critical; otherwise, the interfaces will not be detected and you will have to apply manual workarounds to make the Thunderbolt ports visible afterward.

• Proxmox 9.x

• An isolated Layer 2 subnet that is not already in use on your network.

• Thunderbolt cabling must follow this connection layout: each node must be physically connected to both of the other nodes (a full mesh between the three). The exact port order and connection sequence do not matter; the key requirement is that every node has a direct link to the others.

Steps

After completing the Proxmox installation, verify that the Thunderbolt interfaces are visible under the Network section.

A. Prepare Network Interfaces

  1. Enable Autostart: edit each Thunderbolt interface and ensure Autostart is selected.
  2. Add descriptive comments: document each interface with a clear label (e.g., tb-node1, tb-node2, tb-backbone).
  3. Adjust MTU settings: increase the MTU on all Thunderbolt interfaces (excluding AMT/iKVM and management ports). Based on testing, MTU 9000 is recommended instead of the ~65k default, which tends to cause packet drops in Ceph traffic, especially under heavy replication and recovery load. See the example interface stanza below.
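For reference, a minimal /etc/network/interfaces stanza for one Thunderbolt interface might look like the following. The interface name en05 and the comment label are assumptions; your Thunderbolt NICs may show up under different names:

auto en05
iface en05 inet manual
        mtu 9000
#tb-node2

Apply the same MTU setting to the second Thunderbolt interface on each node.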

B. Now let’s create the cluster and join the nodes.

Node 1:
Navigate to Datacenter --> Cluster

Create Cluster --> provide a cluster name --> select the Cluster Network (using the management interface is recommended) --> Create

Copy the Join Information; you will need it on the other nodes.

Node 2:
Navigate to Datacenter --> Cluster --> Join Cluster --> paste the Join Information --> enter the root password --> select the Cluster Network (management interface) --> click Join.

Node 3:
Navigate to Datacenter --> Cluster --> Join Cluster --> paste the Join Information --> enter the root password --> select the Cluster Network (management interface) --> click Join.

All three nodes now appear in the cluster.
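If you prefer the shell, a roughly equivalent pvecm flow looks like this (a sketch; the cluster name and the node 1 management IP are placeholders for your own values):

On node 1:
pvecm create my-cluster

On node 2 and node 3:
pvecm add <node1-management-ip>

Then verify from any node:
pvecm status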

Next part: SDN

C. Create OpenFabric

Navigate to Datacenter on any node (for example, Node 1) --> SDN and confirm that all three nodes appear.

Click on Fabrics.

Add Fabric --> select OpenFabric.

Enter a valid name and an IPv4 prefix (subnet); in this example I'm using the same subnet as the reference --> click Create.

The OpenFabric fabric is now created; it's time to add the nodes.
Note: Do not assign any IP address to the physical interfaces themselves; doing so causes issues and can disrupt your connectivity.

Select the fabric and click Add Node.

Select the first node --> add an IPv4 address --> select both Thunderbolt interfaces --> click Create another.

Select the second node --> add an IPv4 address --> select both Thunderbolt interfaces --> click Create another.

Select the third node --> add an IPv4 address --> select both Thunderbolt interfaces --> click Create.

Now it's time to apply the configuration on all nodes:
Navigate back to SDN --> click Apply.

Click Yes.

Ensure that every node's status shows OK.

Now let's verify through vtysh, the integrated shell for FRRouting (FRR), an open-source routing protocol suite. It is often used with Proxmox for advanced networking such as SDN and Ceph backbones, and lets you manage routing protocols (such as OSPF and BGP) and view network status from a single interface on each Proxmox node.

Commands:

vtysh

show openfabric neighbor

show openfabric route
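You can also run these non-interactively and do a quick reachability test across the fabric. The 10.15.15.x addresses below are an assumption; use the OpenFabric IPs you assigned to your nodes:

vtysh -c "show openfabric neighbor"
vtysh -c "show openfabric route"
ping -c 3 10.15.15.2
ping -c 3 10.15.15.3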

Run the commands on Node 1, Node 2, and Node 3 and confirm that each node sees the other two as OpenFabric neighbors.

D. Configure Ceph:

Note: the sequence of the following steps is important.

Node 1:
Navigate to Datacenter --> Ceph --> Install Ceph.

Select the No-Subscription repository (since I don't have a subscription) --> Start Squid Installation.

Press Y.

Click Next.

Select the Public Network IP/CIDR; in my case I'm using 10.15.15.0/24 --> click Next.

Click Finish.

Confirm that the monitor and manager were created:
Navigate to the node --> Ceph --> Monitor.
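For reference, a sketch of the shell equivalent on node 1 (the repository and network values mirror the GUI choices above; pveceph installs the default Ceph release for your Proxmox version):

pveceph install --repository no-subscription
pveceph init --network 10.15.15.0/24
pveceph mon create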

Node 2:
Navigate to Datacenter --> Ceph --> Install Ceph.

Select the No-Subscription repository --> Start Squid Installation.

Press Y.

Click Next.

Click Next again (the Ceph network was already configured on the first node).

Click Finish.

Node 3:
Navigate to Datacenter --> Ceph --> Install Ceph.

Select the No-Subscription repository --> Start Squid Installation.

Press Y.

Click Next.

Click Finish.

Now it's time to add monitors and managers on the second and third nodes.
Navigate to Node 1 --> Ceph --> Monitor.

In the top pane (Monitor), click Create --> select Node 2 --> Create.

Repeat for Node 3: in the top pane (Monitor), click Create --> select Node 3 --> Create.

In the bottom pane (Manager), click Create --> select Node 2 --> Create.

Repeat for Node 3: in the bottom pane (Manager), click Create --> select Node 3 --> Create.

You should now see a monitor and a manager on each node; if not, refresh the page.
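The shell equivalent is a short sketch; run both commands on node 2 and then on node 3:

pveceph mon create
pveceph mgr create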

Now we will change the Migration Network to use the same Thunderbolt subnet.
Navigate to Datacenter --> Options.

Edit Migration Settings --> select the Thunderbolt network (important) --> click OK.

Confirm the change.
Create OSDs:

Node 1:
Navigate to the node --> Ceph --> OSD.
Click Create: OSD --> select the disk --> Create.

Node 2:
Navigate to the node --> Ceph --> OSD.
Click Create: OSD --> select the disk --> Create.

Node 3:
Navigate to the node --> Ceph --> OSD.
Click Create: OSD --> select the disk --> Create.
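From the shell, the equivalent per node would be something like the following sketch (/dev/nvme1n1 is an assumed device name; use the unused NVMe drive on each node):

pveceph osd create /dev/nvme1n1

Then check the OSD tree from any node:

ceph osd tree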

Now we will create a pool:
Navigate to Node 1 --> Ceph --> Pools.

Click Create --> keep everything at the defaults if you like --> click Create.
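The CLI equivalent would be roughly the following (a sketch; vm-pool is a hypothetical pool name, and the defaults give you a 3/2 replicated pool with a matching Proxmox storage entry):

pveceph pool create vm-pool --add_storages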

Now we will create CephFS (optional but recommended).
First, add Metadata Servers:

Click Create --> select Node 1 --> Create.

Click Create --> select Node 2 --> Create.

Click Create --> select Node 3 --> Create.

Now we will create the CephFS:

In the top pane, click Create CephFS --> Create.
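From the shell, the same steps would look roughly like this sketch (run the mds command on each node, then create the filesystem once from any node):

pveceph mds create
pveceph fs create --add-storage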

Let’s verify:

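A few standard Ceph commands you can run from any node's shell to confirm the cluster is healthy:

ceph -s
ceph osd tree
ceph df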

I hope you enjoyed this and everything is working as expected.

Follow me on X: https://x.com/aelhusseiniakl and check out my YouTube channel: https://www.youtube.com/@SysIntegration
