⏺ User approved Claude's plan
⎿  Plan saved to: ~/.claude/plans/abstract-percolating-yeti.md · /plan to edit
NixOS NAS Setup Plan — vetinari.lab.unix.ie
Hardware Summary
┌───────────┬──────────────────────────────────────────┐
│ Component │ Detail │
├───────────┼──────────────────────────────────────────┤
│ CPU │ AMD Ryzen 9 5900X (12c/24t) │
├───────────┼──────────────────────────────────────────┤
│ RAM │ 64GB DDR4-3600 │
├───────────┼──────────────────────────────────────────┤
│ GPU │ NVIDIA RTX 3090 (24GB) │
├───────────┼──────────────────────────────────────────┤
│ NVMe │ 1x Crucial T500 1TB (nvme0n1) │
├───────────┼──────────────────────────────────────────┤
│ HDDs │ 7x Seagate ST24000NM000C 24TB (sda-sdg) │
├───────────┼──────────────────────────────────────────┤
│ Network │ enp5s0 at 10.0.1.58/24 │
├───────────┼──────────────────────────────────────────┤
│ Board │ ASUS ROG Crosshair VIII Hero X570 (UEFI) │
└───────────┴──────────────────────────────────────────┘
Architecture
NVMe (1TB Crucial T500)
├── p1: 512MB — EFI System Partition (FAT32)
├── p2: 32GB — Swap
└── p3: ~967GB — ZFS L2ARC cache
Each HDD (7x 24TB):
├── p1: 1GB — EFI System Partition (FAT32, GRUB mirror)
└── p2: ~24TB — ZFS RAIDZ2 member
Bootloader: GRUB (EFI) with mirroredBoots
→ ESP synced to NVMe + all 7 HDDs (8 boot targets)
→ System boots from any surviving drive
ZFS Pool: tank (RAIDZ2, 7x HDD p2)
├── tank/root → / (NixOS root)
├── tank/nix → /nix (Nix store)
├── tank/home → /home
├── tank/var → /var (incl. /var/log, /var/lib)
└── tank/data → /data (media, container volumes)
L2ARC: NVMe p3 (~967GB read cache for tank)
Failure scenarios:
- NVMe dies → boot from any HDD ESP, root still on RAIDZ2, lose swap + L2ARC (graceful degrade)
- 1-2 HDDs die → boot from NVMe ESP, root on RAIDZ2 (degraded but intact)
- 3+ HDDs die → pool lost (expected: RAIDZ2 tolerates at most two failures)
Step-by-step Implementation
Step 1: Partition all drives from live USB
Wipe the existing partition tables on the NVMe and all 7 HDDs, then repartition everything.
NVMe (nvme0n1):
sgdisk --zap-all /dev/nvme0n1
sgdisk -n1:0:+512M -t1:ef00 -c1:"EFI" /dev/nvme0n1
sgdisk -n2:0:+32G -t2:8200 -c2:"swap" /dev/nvme0n1
sgdisk -n3:0:0 -t3:bf01 -c3:"l2arc" /dev/nvme0n1
Each HDD (sda-sdg):
# Stop any stale md array claiming the disks before repartitioning
mdadm --stop /dev/md127 2>/dev/null || true
for disk in sda sdb sdc sdd sde sdf sdg; do
  sgdisk --zap-all /dev/$disk
  sgdisk -n1:0:+1G -t1:ef00 -c1:"EFI" /dev/$disk
  sgdisk -n2:0:0 -t2:bf00 -c2:"zfs" /dev/$disk
done
Format all ESPs:
mkfs.fat -F32 -n BOOT /dev/nvme0n1p1
for disk in sda sdb sdc sdd sde sdf sdg; do
  mkfs.fat -F32 -n BOOT /dev/${disk}1
done
Enable swap:
mkswap -L swap /dev/nvme0n1p2
swapon /dev/nvme0n1p2
Step 2: Create ZFS pool and datasets
# Create RAIDZ2 pool using disk IDs (stable names)
zpool create -f \
  -o ashift=12 \
  -o autotrim=on \
  -O acltype=posixacl \
  -O atime=off \
  -O compression=zstd \
  -O dnodesize=auto \
  -O normalization=formD \
  -O xattr=sa \
  -O mountpoint=none \
  tank raidz2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0PGAB-part2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0LWFA-part2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0MZHR-part2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0QCLG-part2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0NCKF-part2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0PTPV-part2 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0R2W7-part2
# Create datasets (legacy mountpoints for NixOS)
zfs create -o mountpoint=legacy tank/root
zfs create -o mountpoint=legacy tank/nix
zfs create -o mountpoint=legacy tank/home
zfs create -o mountpoint=legacy tank/var
zfs create -o mountpoint=legacy -o recordsize=1M tank/data
Step 3: Mount filesystems for install
# Mount root
mount -t zfs tank/root /mnt
# Create mount points and mount
mkdir -p /mnt/{nix,home,var,data,boot}
mount -t zfs tank/nix /mnt/nix
mount -t zfs tank/home /mnt/home
mount -t zfs tank/var /mnt/var
mount -t zfs tank/data /mnt/data
# Mount primary ESP
mount /dev/nvme0n1p1 /mnt/boot
# Mount mirror ESPs
for disk in sda sdb sdc sdd sde sdf sdg; do
  mkdir -p /mnt/boot-${disk}
  mount /dev/${disk}1 /mnt/boot-${disk}
done
Step 4: Generate and write NixOS configuration
Run nixos-generate-config --root /mnt to get hardware detection, then replace with our flake-based config.
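The generated hardware-configuration.nix will need its fileSystems reconciled with the layout above. A minimal sketch of the target state, using device paths from this plan (prefer the /dev/disk/by-uuid paths the generator emits for the vfat mounts; only sda's mirror ESP is shown):
fileSystems."/"     = { device = "tank/root"; fsType = "zfs"; };
fileSystems."/nix"  = { device = "tank/nix";  fsType = "zfs"; };
fileSystems."/home" = { device = "tank/home"; fsType = "zfs"; };
fileSystems."/var"  = { device = "tank/var";  fsType = "zfs"; };
fileSystems."/data" = { device = "tank/data"; fsType = "zfs"; };
# Primary ESP plus one mirror ESP per HDD (sda shown; repeat for sdb..sdg)
fileSystems."/boot"     = { device = "/dev/nvme0n1p1"; fsType = "vfat"; };
fileSystems."/boot-sda" = { device = "/dev/sda1";      fsType = "vfat"; };
swapDevices = [ { device = "/dev/nvme0n1p2"; } ];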
Files to create in the homelab repo (nixos/vetinari/):
- flake.nix — Top-level flake (minimal sketch after this list)
- configuration.nix — Main system config
- hardware-configuration.nix — Generated + customized hardware config
- zfs.nix — ZFS pool, datasets, scrub, snapshots
- boot.nix — GRUB mirroredBoots config
- podman.nix — Podman + Docker Compose compat
- network.nix — Static IP, hostname, firewall
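A minimal flake.nix sketch (pinning nixpkgs to nixos-unstable is an assumption; track whatever channel the homelab repo uses):
{
  description = "vetinari NAS (NixOS + ZFS + Podman)";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.vetinari = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      # configuration.nix imports the other modules listed above
      modules = [ ./configuration.nix ];
    };
  };
}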
Key configuration decisions:
Bootloader (boot.nix):
boot.loader.grub = {
  enable = true;
  efiSupport = true;
  efiInstallAsRemovable = true;
  copyKernels = true;
  mirroredBoots = [
    { devices = [ "nodev" ]; path = "/boot"; }
    { devices = [ "nodev" ]; path = "/boot-sda"; }
    { devices = [ "nodev" ]; path = "/boot-sdb"; }
    { devices = [ "nodev" ]; path = "/boot-sdc"; }
    { devices = [ "nodev" ]; path = "/boot-sdd"; }
    { devices = [ "nodev" ]; path = "/boot-sde"; }
    { devices = [ "nodev" ]; path = "/boot-sdf"; }
    { devices = [ "nodev" ]; path = "/boot-sdg"; }
  ];
};
boot.supportedFilesystems = [ "zfs" ];
boot.zfs.extraPools = [ "tank" ];
ZFS (zfs.nix):
services.zfs = {
  autoScrub.enable = true;
  autoScrub.interval = "monthly";
  trim.enable = true;
};
# L2ARC added post-install: zpool add tank cache /dev/nvme0n1p3
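zfs.nix is also meant to own snapshots; a sketch using NixOS's built-in auto-snapshot service (retention counts below are assumptions). Datasets opt in via the com.sun:auto-snapshot property:
services.zfs.autoSnapshot = {
  enable = true;   # snapshots datasets with com.sun:auto-snapshot=true
  frequent = 4;    # keep 4 fifteen-minute snapshots
  hourly = 24;
  daily = 7;
  weekly = 4;
  monthly = 12;
};
# Post-install, per dataset: zfs set com.sun:auto-snapshot=true tank/home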
Podman (podman.nix):
virtualisation = {
  containers.enable = true;
  podman = {
    enable = true;
    dockerCompat = true;                          # docker → podman alias
    dockerSocket.enable = true;                   # /run/podman/podman.sock
    defaultNetwork.settings.dns_enabled = true;   # inter-container DNS
    autoPrune = {
      enable = true;
      flags = [ "--all" ];
    };
  };
};
environment.systemPackages = [ pkgs.podman-compose ];
Network (network.nix):
networking = {
  hostName = "vetinari";
  domain = "lab.unix.ie";
  hostId = "<8 random hex chars>"; # required for ZFS
  interfaces.enp5s0.ipv4.addresses = [{
    address = "10.0.1.58";
    prefixLength = 24;
  }];
  defaultGateway = "10.0.1.1";
  nameservers = [ "10.0.0.1" ];
};
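network.nix is also slated to carry the firewall; a minimal sketch assuming SSH is the only exposed service at install time, plus a note on filling the hostId placeholder:
# hostId: any stable 8-hex-char value, e.g. `head -c 8 /etc/machine-id`
networking.firewall = {
  enable = true;
  allowedTCPPorts = [ 22 ];  # SSH only for now; widen as services arrive
};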
Step 5: Install NixOS
# Copy flake config to /mnt/etc/nixos/
nixos-install --flake /mnt/etc/nixos#vetinari --no-root-passwd
Set the root password (e.g. via nixos-enter, since --no-root-passwd skips the prompt) and reboot.
Step 6: Post-install — add L2ARC and verify
# Add L2ARC cache device
zpool add tank cache /dev/disk/by-id/nvme-CT1000T500SSD8_25064DFA9BE7-part3
# Verify
zpool status tank
zfs list
Step 7: Copy SSH key and disable password auth
# From local machine:
ssh-copy-id root@10.0.1.58
# Then disable password auth in configuration.nix
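A sketch of the matching sshd options in configuration.nix (the PermitRootLogin value is a judgment call; key-only root access fits the plan):
services.openssh = {
  enable = true;
  settings = {
    PasswordAuthentication = false;
    KbdInteractiveAuthentication = false;
    PermitRootLogin = "prohibit-password";  # root by key only
  };
};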
Files to Create
nixos/vetinari/
├── flake.nix
├── flake.lock (generated)
├── configuration.nix (main config, imports modules)
├── hardware-configuration.nix (generated + edited)
├── boot.nix (GRUB mirroredBoots)
├── zfs.nix (pool config, scrub, snapshots)
├── podman.nix (Podman + docker-compose compat)
└── network.nix (static IP, hostname, firewall)
Verification
1. Boot redundancy: Reboot, enter UEFI, verify multiple EFI boot entries exist
2. ZFS pool: zpool status tank — all 7 drives ONLINE, RAIDZ2
3. ZFS datasets: zfs list — root, nix, home, var, data mounted
4. L2ARC: zpool iostat -v tank — cache device present
5. Swap: swapon --show — 32GB on NVMe
6. Podman: podman run hello-world and podman compose version
7. Network: ping -c1 10.0.1.1 and hostnamectl shows vetinari.lab.unix.ie
8. SSH: Key-based login works, password auth disabled