this has not been validated end-to-end by anyone including me, so be warned :-)
- try using shairport instead of squeezelite as the client
- try using sendspin instead of squeezelite (it currently only supports one output format, so not an option yet)
I have other devices on my LAN that need to access the ceph mesh. This gist is only needed if you want LAN clients to access the ceph mesh.
A routed setup is needed; you can't just bridge en05 and en06 and have VMs work. Bridging seems not to work on thunderbolt interfaces - at least I could never get the interfaces working when bridged, and it broke the ceph mesh completely. (See the sketch below.)
tl;dr can't bridge thunderbolt interfaces
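For illustration, the shape of the routed alternative - a sketch only, assuming the mesh lives in fc00::/64 and that `<node-lan-ipv6>` stands for the node's LAN address (both placeholders, untested here):

```bash
# on the Proxmox node: allow forwarding between the LAN and the thunderbolt mesh
sysctl -w net.ipv6.conf.all.forwarding=1

# on a LAN client: reach the mesh ULA prefix via the node's LAN address
ip -6 route add fc00::/64 via <node-lan-ipv6>
```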
2025.04.27 - currently untested e2e; this was made from my raw notes by chatgpt, so errors and hallucinations may have crept in :-)
This document describes the clean, final method to mount a CephFS filesystem for Docker VMs across your cluster.
Assumptions:
This document describes the original setup to establish an FRR (Free Range Routing) OpenFabric IS-IS based IPv6 routed mesh over Thunderbolt networking between Proxmox nodes, using static /128 loopback addresses in the fc00::/8 ULA space.
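For context, a minimal frr.conf sketch of that kind of OpenFabric setup (assuming fabricd is enabled in /etc/frr/daemons; the interface names, NET, and numbers below are placeholders, not my exact config):

```
# /etc/frr/frr.conf (per node - the NET must be unique on each node)
ipv6 forwarding
!
interface lo
 ipv6 router openfabric 1
 openfabric passive
exit
!
interface en05
 ipv6 router openfabric 1
exit
!
interface en06
 ipv6 router openfabric 1
exit
!
router openfabric 1
 net 49.0000.0000.0001.00
exit
```

The /128 loopback address is what gets advertised into the fabric; the thunderbolt links themselves only carry link-local traffic for adjacency.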
This provided:
I learned an important lesson today: never, ever remove `ms_bind_ipv4 = false` from ceph.conf or the CephFS will be fucked. Note also that recreating the mgrs and mds seems advisable too.
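In other words, ceph.conf should keep something like this in its `[global]` section (only the relevant lines shown; the public_network value is an illustrative assumption for an IPv6-only mesh):

```
[global]
    # do not remove on an IPv6-only cluster - see the warning above
    ms_bind_ipv4 = false
    ms_bind_ipv6 = true
    public_network = fc00::/64
```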
Only ever reboot one node at a time; if that doesn't work, or you see a libceph error storm when it reboots, solve that first (make sure no wrong mons are defined in storage.cfg or ceph.conf).
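A quick sanity check before any reboot (standard commands; the paths are the Proxmox defaults):

```bash
# mons referenced by the Proxmox storage config and by ceph.conf...
grep -i mon /etc/pve/storage.cfg /etc/ceph/ceph.conf
# ...should match the mons the cluster actually has
ceph mon dump
```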
this gist is part of this series
This assumes you are running Proxmox 8.4 and that the line `source /etc/network/interfaces.d/*` is at the end of the interfaces file (this is automatically added to both new and upgraded installations of Proxmox 8.2).
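i.e. the interfaces file should end like this:

```
# tail of /etc/network/interfaces
source /etc/network/interfaces.d/*
```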
This changes the previous file design, thanks to @NRGNet and @tisayama, to make the system much more reliable in general and more maintainable, especially for folks using IPv4 on the private cluster network (I still recommend the IPv6 fc00:: network you will see in these docs).
This is currently work-in-progress documentation - rough notes for me, maybe missing a lot or wrong.
The idea is to replace GlusterFS running inside the VM with storage on my CephFS cluster. This is my Proxmox cluster: it runs the storage and is also the hypervisor for my Docker VMs.
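As a rough sketch of where this lands, an fstab line for a kernel CephFS mount inside the VM; the mon addresses, client name, and secret path are placeholders, not a validated config:

```
# /etc/fstab inside the docker VM (all values are placeholders)
[fc00::81]:6789,[fc00::82]:6789,[fc00::83]:6789:/ /mnt/cephfs ceph name=docker,secretfile=/etc/ceph/docker.secret,noatime,_netdev 0 0
```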
Other possible approaches:
Assumptions: