I spent a considerable amount of time researching and testing different scenarios to deploy Ceph in my Proxmox cluster. Although the process should be straightforward, it actually took me weeks because most tutorials and articles are outdated and based on older versions. The steps below are the result of multiple tests and references, most of which are poorly documented and hard to follow. I've now validated the full procedure in my lab, and it has been running without any issues for three days. Here's what I used:
• 3× Minisforum MS-01 nodes
• Thunderbolt 4 cables
• 4TB NVMe drives (recommended: Samsung EVO Plus for high read/write performance)
• Unifi USW Pro Aggregation switch
• Thunderbolt cables must be connected during the Proxmox installation. This is critical; otherwise, the interfaces will not be detected and you will have to apply manual workarounds to make the Thunderbolt ports visible afterward.
• Proxmox 9.x
• An isolated Layer 2 subnet that is not already in use on your network.
• Thunderbolt cabling must follow this specific connection layout: each node is cabled directly to both of the others, forming a ring (Node 1 <--> Node 2, Node 2 <--> Node 3, Node 3 <--> Node 1). It's not about port order or connection sequence; the key requirement is that every node is physically connected to the other two.
Steps:
After completing the Proxmox installation, verify that the Thunderbolt interfaces are visible under the Network section.
- Enable Autostart: edit each Thunderbolt interface and ensure Autostart is selected.
- Add Descriptive Comments: document each interface with a clear label (e.g., tb-node1, tb-node2, tb-backbone).
- Adjust MTU Settings: increase the MTU on all Thunderbolt interfaces (exclude AMT/iKVM or management ports). Based on my testing, MTU 9000 is recommended instead of the 65* default, which tends to cause packet drops during Ceph traffic, especially under heavy replication and recovery load. The resulting configuration is sketched right after this list.
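For reference, after applying these settings in the GUI, the Thunderbolt section of /etc/network/interfaces on each node should look roughly like the sketch below. The interface names en05 and en06 are only examples (yours may enumerate differently), and the trailing comments are the descriptive labels mentioned above:

auto en05
iface en05 inet manual
        mtu 9000
#tb-node2

auto en06
iface en06 inet manual
        mtu 9000
#tb-node3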
Node 1:
Navigate to Datacenter --> Cluster

Create Cluster --> provide a cluster name --> select the cluster network (recommended: use the management interface) --> Create. Then click Join Information and copy it; you'll need it when joining the other nodes.
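If you prefer the shell, a rough CLI equivalent of the cluster creation on Node 1 is a single command (the cluster name below is just an example):

pvecm create ms01-cluster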

Node 2:
Navigate to Datacenter --> Cluster --> click Join Cluster --> paste the join information --> enter the root password --> select the cluster network (management interface) --> click Join.

Node 3:
Navigate to Datacenter --> Cluster --> click Join Cluster --> paste the join information --> enter the root password --> select the cluster network (management interface) --> click Join.

Now all three nodes appear in the cluster.
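You can also confirm the membership from any node's shell; these are standard Proxmox commands:

pvecm status   # shows quorum state and member information
pvecm nodes    # lists the three cluster members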
Navigate to Datacenter on any Node (For example Node 1) --> SDN
Confirm that all three nodes appear.
Add fabric --> Select OpenFabric

Enter a valid name and IPv4 prefix (subnet); in this example I'm using the same subnet as in the reference, which is also used for Ceph later (10.15.15.0/24) --> click Create.

The OpenFabric fabric is created; now it's time to add the nodes.
Note: Don't assign any IP address to the physical interfaces; doing so will cause issues and can disrupt your connectivity.
Select the fabric and click Add Node.

Select the first node --> add its IPv4 address --> select both Thunderbolt interfaces --> click Create Another.

Select the second node --> add its IPv4 address --> select both Thunderbolt interfaces --> click Create Another.

Select the third node --> add its IPv4 address --> select both Thunderbolt interfaces --> click Create.

Now it's time to apply the configuration on all nodes:
Navigate back to SDN --> Click Apply

Ensure that everything shows OK.

Now let's verify through vtysh, the integrated shell for FRRouting (FRR), an open-source routing protocol suite often used with Proxmox for advanced networking such as Ceph storage traffic or SDN. It lets you manage routing protocols (like OpenFabric, OSPF, and BGP) and view network status from a single interface on each Proxmox node.
Commands:
vtysh
show openfabric neighbor
show openfabric route
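As an extra sanity check, assuming the example 10.15.15.0/24 fabric subnet with node addresses ending in .1, .2 and .3, you can also ping the other nodes across the fabric from Node 1:

ping -c 3 10.15.15.2
ping -c 3 10.15.15.3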
Note: The sequence of the following steps is important; complete Node 1 before the others.
Node 1:
Navigate to Datacenter --> Ceph --> Install Ceph

Select the No-Subscription repository (since I don't have a subscription) --> Start Squid Installation.

Select the Ceph network; in my case I'm using 10.15.15.0/24 (the OpenFabric subnet configured earlier) --> click Next.

Confirm that the monitor and manager were created:
Navigate to Node --> Ceph --> Monitor
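From the Node 1 shell you can also check Ceph health directly; these are standard commands:

ceph -s          # overall cluster status; the monitor and manager should be listed
pveceph status   # Proxmox wrapper that prints similar information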

Node 2:
Navigate to Datacenter --> Ceph --> Install Ceph

Select the No-Subscription repository (since I don't have a subscription) --> Start Squid Installation.

Click Next, since the Ceph network was already configured on the first node.

Node 3:
Navigate to Datacenter --> Ceph --> Install Ceph

Select the No-Subscription repository (since I don't have a subscription) --> Start Squid Installation.

Now it's time to add monitors and managers on the second and third nodes.
Navigate to Node 1 --> Ceph --> Monitor

In the top panel (Monitor), click Create --> select Node 2 --> Create.

Again for Node 3: in the top panel (Monitor), click Create --> select Node 3 --> Create.

In the bottom panel (Manager), click Create --> select Node 2 --> Create.

Again for Node 3: in the bottom panel (Manager), click Create --> select Node 3 --> Create.

Now you should have something like this; if not, refresh the page.
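If you prefer the CLI, these standard checks (run on any node) should now report three monitors and one active manager with two standbys:

ceph mon stat
ceph mgr stat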

Now we will change the migration network to use the same Thunderbolt subnet.
Navigate to Datacenter --> Options

Edit Migration Settings --> Select Thunderbolt Network (Important) --> Click OK
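For reference, this setting is stored in /etc/pve/datacenter.cfg; with the example fabric subnet it should end up looking roughly like this (the exact key order may differ):

migration: network=10.15.15.0/24,type=secure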

Create OSD:
Node 1:
Navigate to the Node --> Ceph --> OSD
Click Create: OSD
Select the disk --> Create.
Node 2:
Navigate to the Node --> Ceph --> OSD
Click Create: OSD
Select the disk --> Create.
Node 3:
Navigate to the Node --> Ceph --> OSD
Click Create: OSD --> select the disk --> Create.
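If you'd rather do this from the shell, each OSD can be created with a single pveceph command; the device path below is only an example, so identify your NVMe disk first:

lsblk                             # identify the NVMe device to use
pveceph osd create /dev/nvme1n1   # example device path; adjust to your disk
ceph osd tree                     # verify the new OSD appears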
Now we will create a pool:
Navigate to Node 1 --> Ceph --> Pools

Click Create --> give the pool a name, keep everything at the defaults if you like, and click Create.
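If you keep the defaults, the pool is replicated with size 3 and min_size 2, which fits this three-node setup. A rough CLI equivalent (the pool name is just an example):

pveceph pool create vm-pool
ceph osd pool ls detail   # confirm size/min_size and placement groups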

Now we will create CephFS (optional but recommended).
First, add the Metadata Servers:

Click Create --> Select Node 1 --> Create

Click Create --> Select Node 2 --> Create

Click Create --> Select Node 3 --> Create

Now we will create the CephFS:
In the top panel, click Create CephFS --> Create.
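A rough CLI equivalent of the MDS and CephFS steps, in case you want to script them (default settings are assumed):

pveceph mds create   # run on each node that should host a metadata server
pveceph fs create    # creates the CephFS with default settings
ceph fs status       # verify the filesystem and the active MDS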
Let’s verify:
I hope you enjoyed this and that everything is working as expected.
Follow me on X: https://x.com/aelhusseiniakl. Check out my YouTube channel: https://www.youtube.com/@SysIntegration