# 🏡 Home Server Guide: Proxmox + HAOS + Frigate + Coral TPU
Please read my associated Reddit post before proceeding:
https://www.reddit.com/r/homeassistant/comments/1orftfe/after_60_hours_of_trial_and_error_i_finally_got/
(Verified Working Build — Minimal First → Add Complexity Later)
This guide walks through building a reliable, low-maintenance **home-automation** and **NVR setup** using:
* **Proxmox VE** (host virtualization platform)
* **Home Assistant OS (HAOS)** in a VM
* **Frigate NVR** in a Debian 12 LXC (Docker-based)
* *Optional:* **Coral TPU** (object-detection acceleration) -- Recommended
* *Optional:* **GPU decoding** (Intel iGPU) -- NOT RECOMMENDED. This actually didn't work for me.
* *Optional:* **rclone** Google Drive continuous backup and storage cap
* *Addenda:* Fixed Lenovo crash associated with Energy Efficient Ethernet; added MQTT; addressed CPU load issue
The first half builds a **minimal working setup** (guaranteed to work). The second half adds **advanced features** once the base is running cleanly.
---
## PART 1 – MINIMAL WORKING SETUP
### 1️⃣ Install Proxmox VE (PVE)
**Purpose:** Base virtualization platform that hosts HAOS (VM) and Frigate (LXC).
#### Steps
1. **Download & Install**
* Get the ISO from the official Proxmox website.
* Write to a USB drive (using Rufus or Balena Etcher), boot, and install with the default settings.
2. **Fix Enterprise Repository**
* *PVE is free, but we disable the paid enterprise repo to avoid update errors.*
```bash
# Disable the enterprise repository
sed -i 's|^deb .*pve-enterprise|# &|' /etc/apt/sources.list.d/pve-enterprise.list
# Add the no-subscription repository
echo "deb [http://download.proxmox.com/debian/pve](http://download.proxmox.com/debian/pve) bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Update and upgrade
apt update && apt full-upgrade -y
```
3. **Install Utilities**
```bash
apt install -y htop iotop curl jq usbutils unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```
4. **Verify Hardware**
```bash
uname -a
lscpu
lsusb
lspci | grep -E "VGA|3D|Display"
```
---
### 2️⃣ Install Home Assistant OS (HAOS) VM
**Purpose:** Runs automations and smart-home integrations.
#### Steps
1. **Download & Decompress the QCOW2 Image**
* Run these commands from your Proxmox shell.
```bash
wget https://github.com/home-assistant/operating-system/releases/latest/download/haos_ova.qcow2.xz
xz -d haos_ova.qcow2.xz
mv haos_ova.qcow2 haos.qcow2
```
2. **Create VM and Import Disk**
* `100` is the VM ID. Adjust resources as needed.
```bash
qm create 100 --name haos --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 100 haos.qcow2 local-lvm
qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0
qm set 100 --bios ovmf
qm set 100 --efidisk0 local-lvm:vm-100-efi,size=4M,pre-enrolled-keys=1
qm start 100
```
3. **Access HAOS**
* Access via browser: `http://homeassistant.local:8123` or `http://<VM-IP>:8123`
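* If `homeassistant.local` doesn't resolve, a quick way to find the VM's IP from the Proxmox shell is the QEMU guest agent (HAOS ships with one). A minimal sketch, assuming VM ID `100` and that the agent has come up:
```bash
# Ask the guest agent for the VM's interfaces and print their IP addresses
# (jq was installed in step 1; ignore loopback/link-local entries)
qm guest cmd 100 network-get-interfaces | jq -r '.[] | .["ip-addresses"][]? | .["ip-address"]'
```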
---
### 3️⃣ Install Frigate NVR (Minimal Config)
**Purpose:** Provide NVR with both continuous and event-based recording. Frigate runs inside a Debian 12 **LXC** container using **Docker**.
#### Steps
1. **Create LXC Container**
* `101` is the LXC ID. Adjust resources as needed. If the template isn't present yet, download it first with `pveam update && pveam download local debian-12-standard_12.0-1_amd64.tar.zst`.
```bash
pct create 101 local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst \
--hostname frigate --memory 4096 --cores 4 \
--net0 name=eth0,bridge=vmbr0,ip=dhcp \
--rootfs local-lvm:16 \
--features nesting=1,fuse=1
pct start 101
pct exec 101 -- bash
```
2. **Install Docker (Inside the LXC)**
```bash
apt update
apt install -y ca-certificates curl gnupg lsb-release
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] [https://download.docker.com/linux/debian](https://download.docker.com/linux/debian) $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
```
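A quick sanity check that the Docker engine is functional before moving on:
```bash
# Verify the daemon responds and can pull and run a container
docker --version
docker run --rm hello-world
```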
3. **Create Frigate Folders and Files**
```bash
mkdir -p /opt/frigate/{config,storage}
cd /opt/frigate
```
4. **Create `docker-compose.yml`**
```yaml
version: "3.9"
services:
frigate:
container_name: frigate
image: ghcr.io/blakeblackshear/frigate:stable
restart: unless-stopped
privileged: true
shm_size: "256m"
environment:
TZ: "America/Chicago" # <-- Set your Timezone
volumes:
- /opt/frigate/config:/config
- /opt/frigate/storage:/media/frigate
ports:
- "5000:5000"
- "1984:1984"
- "8554:8554"
- "8555:8555/tcp"
- "8555:8555/udp"
```
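Before starting anything, it's worth letting Compose validate the file; `docker compose config` parses and prints the resolved configuration, so indentation mistakes surface here instead of at runtime:
```bash
cd /opt/frigate
docker compose config   # prints the resolved config, or an error if the YAML is malformed
```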
5. **Create Minimal `config.yml`**
* ***Important:*** *Replace `<USER>`, `<PASS>`, and `<CAM-IP>` with your camera's details.*
```yaml
mqtt:
  enabled: false

detectors:
  cpu1:
    type: cpu

record:
  enabled: true
  retain:
    days: 1
  events:
    retain:
      default: 1

snapshots:
  enabled: true
  timestamp: true
  bounding_box: true

go2rtc:
  streams: {}

cameras:
  example_camera: # <-- Name your camera
    ffmpeg:
      inputs:
        - path: rtsp://<USER>:<PASS>@<CAM-IP>:554/stream1 # <-- High-res stream for record
          roles: [record]
        - path: rtsp://<USER>:<PASS>@<CAM-IP>:554/stream2 # <-- Low-res stream for detect
          roles: [detect]
    detect:
      width: 640
      height: 360
      fps: 5
    record:
      enabled: true
      retain:
        days: 1
    snapshots:
      enabled: true
```
6. **Start Frigate**
```bash
cd /opt/frigate
docker compose up -d
docker ps
```
7. **Access Frigate UI**
* Access via browser: `http://<LXC-IP>:5000`
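If the UI doesn't load, two quick checks from inside the LXC usually pinpoint the problem; Frigate prints config errors to its container log, and its REST API answers on the same port as the UI:
```bash
docker logs -f frigate                    # startup and config errors appear here
curl http://localhost:5000/api/version    # the REST API reports the running version
```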
---
## PART 2 – ADD COMPLEXITY LATER
### 4️⃣ Add Coral TPU (Optional)
#### Proxmox Configuration
1. Verify the Coral is connected to the PVE host.
```bash
lsusb | grep -i "coral"
```
2. Edit the LXC configuration file on the **Proxmox host** (`/etc/pve/lxc/101.conf`).
```conf
# Add this line to the bottom
lxc.mount.entry: /dev/bus/usb /dev/bus/usb none bind,optional,create=dir
```
#### Frigate Configuration
1. Add the `devices` section to your `docker-compose.yml` (inside the LXC).
```yaml
services:
  frigate:
    # ... (shm_size, environment, volumes, etc.)
    devices:
      - /dev/bus/usb:/dev/bus/usb
    ports:
      # ...
```
2. Update the `detectors` block in your `config.yml` (inside the LXC).
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
  cpu1:
    type: cpu
```
3. Restart Frigate.
```bash
cd /opt/frigate
docker compose restart frigate
```
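To confirm Frigate actually picked up the TPU, check the container log; on success it reports that the TPU was found (exact wording can vary slightly between Frigate versions):
```bash
docker logs frigate 2>&1 | grep -iE "edgetpu|tpu"
# Expect a line indicating the TPU was found; if detection still falls back
# to CPU, re-check the lsusb output and the LXC mount entry above.
```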
---
### 5️⃣ Add iGPU Decoding (Optional but NOT recommended, as this broke my setup)
NOT RECOMMENDED. I did not get this working on my hardware.
This step reduces CPU usage by offloading video decoding to an Intel iGPU.
#### Proxmox Configuration
1. Edit the LXC configuration file on the **Proxmox host** (`/etc/pve/lxc/101.conf`).
```conf
# Add these two lines to the bottom
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri /dev/dri none bind,optional,create=dir
```
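If you do attempt this anyway, first confirm the render device exists on the host and is visible inside the LXC after a restart; a missing `/dev/dri` is the most common failure mode:
```bash
ls -l /dev/dri                    # on the Proxmox host: expect card0 / renderD128
pct stop 101 && pct start 101     # restart so the new mount entry takes effect
pct exec 101 -- ls -l /dev/dri    # the same nodes should now appear inside the LXC
```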
#### Frigate Configuration
1. Add the `hwaccel_args` to the `ffmpeg` block for each camera in your `config.yml` (inside the LXC).
```yaml
cameras:
  example_camera:
    ffmpeg:
      hwaccel_args: preset-vaapi # <-- Add this line. AGAIN, NOT RECOMMENDED
      inputs:
        # ...
```
2. Restart Frigate.
```bash
cd /opt/frigate
docker compose restart frigate
```
---
### 6️⃣ Add Google Drive Backup (rclone)
This is set up inside the Frigate LXC (`101`).
1. **Install rclone and inotify-tools**
```bash
apt install -y rclone inotify-tools
rclone config # <-- Follow prompts to set up your Google Drive remote (e.g., named 'gdrive')
```
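Before wiring up the watcher, confirm the remote actually works; `rclone lsd` lists top-level folders and `rclone about` shows the Drive quota (assuming the remote was named `gdrive`):
```bash
rclone lsd gdrive:      # errors here mean the remote is misconfigured
rclone about gdrive:    # shows total/used quota on the Drive account
```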
2. **Real-time Uploader Script**
* Create the script `/usr/local/bin/frigate_upload_watch.sh` and make it executable (`chmod +x /usr/local/bin/frigate_upload_watch.sh`).
```bash
#!/usr/bin/env bash
SRC="/opt/frigate/storage"
DEST="gdrive:Frigate_Backups" # <-- Update 'gdrive' if your remote name is different
LOG="/var/log/frigate_rclone.log"

# Watch the recording tree and copy each finished file to Google Drive,
# preserving the directory layout relative to $SRC.
inotifywait -m -r -e close_write,moved_to,create "$SRC" | \
while read -r dir action file; do
  rclone copyto "$dir/$file" "$DEST/${dir#${SRC}/}/$file" \
    --log-file="$LOG" --log-level INFO --update
done
```
3. **Create Systemd Service**
* Create the file `/etc/systemd/system/frigate-upload-watch.service`.
```systemd
[Unit]
Description=Frigate -> Google Drive uploader
After=network-online.target
[Service]
Type=simple
ExecStart=/usr/local/bin/frigate_upload_watch.sh
Restart=always
RestartSec=3
User=root
[Install]
WantedBy=multi-user.target
```
4. **Start and Enable Service**
```bash
systemctl daemon-reload
systemctl enable --now frigate-upload-watch.service
```
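A simple end-to-end test, assuming the paths above: drop a throwaway file into the watched tree and watch the rclone log for the upload (the filename is hypothetical; delete it from Drive afterwards):
```bash
touch /opt/frigate/storage/upload_test.txt   # hypothetical test file
tail -f /var/log/frigate_rclone.log          # an INFO line for the copy should appear
```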
---
### 7️⃣ Add Drive Storage Cap (Optional)
This is a script to automatically delete the oldest files from the Google Drive remote when the storage limit is reached.
1. **Config File**
* Create `/etc/gdrive_cap.conf`.
```conf
REMOTE="gdrive:Frigate_Backups" # <-- Match your rclone remote
CAP_BYTES="500G" # <-- Set your storage cap
```
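Despite the name, `CAP_BYTES` holds an IEC-style size string; the enforcer script converts it to a byte count with `numfmt`. For example:
```bash
numfmt --from=iec 500G
# -> 536870912000 (500 * 1024^3 bytes)
```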
2. **Enforcer Script**
* Create `/usr/local/bin/gdrive_cap_enforcer.sh` and make it executable (`chmod +x /usr/local/bin/gdrive_cap_enforcer.sh`).
```bash
#!/usr/bin/env bash
set -euo pipefail
source /etc/gdrive_cap.conf

# Convert the IEC size string (e.g. "500G") to a byte count
CAP=$(numfmt --from=iec "$CAP_BYTES")

# Sum the sizes of every file currently on the remote
current=$(rclone lsf "$REMOTE" --files-only --format s --recursive | awk '{s+=$1} END{print s+0}')
if [ "$current" -le "$CAP" ]; then exit 0; fi

# Over the cap: list files as path|size|mtime, oldest first, and delete until under the cap
rclone lsf "$REMOTE" --files-only --format pst --separator '|' --recursive | sort -t'|' -k3,3 | \
while IFS='|' read -r path size mtime; do
  [ "$current" -le "$CAP" ] && break
  rclone deletefile "$REMOTE/$path"
  current=$(( current - size ))
done
```
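To spot-check the enforcer by hand before trusting the timer (assuming the `gdrive` remote from section 6️⃣):
```bash
rclone size gdrive:Frigate_Backups           # current usage, to compare against the cap
bash /usr/local/bin/gdrive_cap_enforcer.sh   # exits silently if under the cap
```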
3. **Create Systemd Timer**
* Create `/etc/systemd/system/gdrive-cap-enforcer.timer`.
```systemd
[Timer]
OnBootSec=2min
# How often to run the check (systemd does not allow inline comments after values)
OnUnitActiveSec=10min
AccuracySec=1min
Unit=gdrive-cap-enforcer.service
[Install]
WantedBy=timers.target
```
4. **Create Systemd Service for the Enforcer (for the timer to call)**
* Create `/etc/systemd/system/gdrive-cap-enforcer.service`.
```systemd
[Unit]
Description=Frigate Google Drive Capacity Enforcer
Requires=gdrive-cap-enforcer.timer
[Service]
Type=oneshot
ExecStart=/usr/local/bin/gdrive_cap_enforcer.sh
```
5. **Start and Enable Timer**
```bash
systemctl daemon-reload
systemctl enable --now gdrive-cap-enforcer.timer
```
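To confirm the timer is scheduled and see the result of the most recent run:
```bash
systemctl list-timers gdrive-cap-enforcer.timer   # shows NEXT / LAST trigger times
systemctl status gdrive-cap-enforcer.service      # exit status of the latest run
```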
---
## 🚀 Workflow Summary
| Step | Component | Purpose |
| :--- | :--- | :--- |
| **1** | Proxmox | Base OS / Virtualization Host |
| **2** | HAOS (VM) | Runs Home Automation / Integrations |
| **3** | Frigate (LXC + Docker) | NVR for Recording and Detection (CPU-only) |
| **4** | **Optional** Coral TPU | Accelerate Object Detection |
| **5** | **Optional** iGPU Decoding | Lower CPU usage for video decoding, BUT NOT RECOMMENDED |
| **6** | **Optional** rclone | Real-time backup to Google Drive |
| **7** | **Optional** Drive Cap | Auto-delete oldest cloud files to enforce a limit |
**✅ End Result:** Frigate continuously records and shows events; HAOS runs separately; the system is **modular, stable, expandable, and easy to maintain**.
---
## ⚙️ Addendum — Intel NIC Stability Fix [REVISED]
**Date:** 2025-11-10
#### Issue
System logs showed repeated network-adapter errors, causing the entire Proxmox host to hang and become unresponsive:
```
e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang
```
**Cause:** Intel i219 adapter (driver `e1000e`) instability, a known bug triggered by sustained network load (like Frigate) and Energy Efficient Ethernet (EEE).
#### Fix Summary
Actions implemented to stabilize the NIC and prevent hangs.
1. **Disable ASPM and Adjust Driver Parameters (No Change)**
* This was already correct in `/etc/default/grub`:
```bash
# /etc/default/grub (already correct):
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt quiet pcie_aspm=off e1000e.IntTxDelay=125 e1000e.IntRxDelay=0 e1000e.eee_disable=1"
# Apply and reboot:
update-grub
reboot
```
2. **Disable Energy Efficient Ethernet (EEE) [Revised Fix]**
* The original `disable-eee.service` was found to be inactive (dead) and not working. It was removed and replaced with a new, persistent service.
* Remove the old, broken service (if it exists):
```bash
systemctl stop disable-eee.service
systemctl disable disable-eee.service
rm /etc/systemd/system/disable-eee.service
```
* Create the new service file (`nano /etc/systemd/system/fix-nic-hang.service`) and paste this content:
```systemd
[Unit]
Description=Fix e1000e NIC Hang by Disabling EEE
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool --set-eee eno1 eee off
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
* Enable and start the new service:
```bash
systemctl daemon-reload
systemctl enable --now fix-nic-hang.service
```
#### Verification Result (Pending)
The new `fix-nic-hang.service` is now active (exited) and enabled. This fix has been implemented, but its long-term stability under full 15-camera load is pending verification.
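One quick check that the service actually took effect (assuming the NIC is `eno1`, as above):
```bash
ethtool --show-eee eno1   # "EEE status: disabled" confirms the setting stuck
```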
---
## ⚙️ Addendum — Frigate High CPU Load & MQTT Integration Fix [REVISED]
**Date:** 2025-11-08
#### Issue
A health check of the running system revealed a major performance issue: despite being stable, Frigate was exhibiting extremely high CPU load. The Home Assistant (MQTT) integration was also not yet enabled.
* **Symptom 1 (High CPU):** The `top` command (via `ha_frigate_health.sh`) showed the `frigate.detector.cpu1` process consuming 70-95% of the CPU.
* **Symptom 2 (Idle TPU):** The `frigate.detector.coral` process was idle (5-10% CPU), even though logs confirmed "TPU found".
* **Symptom 3 (MQTT):** The Home Assistant integration was not yet configured, as `mqtt: enabled: false` was set in the config.

**Cause:** The `config.yml` defined both a `coral` and a `cpu1` detector. Frigate was defaulting to (or falling back to) the `cpu1` detector for all detection tasks, which overloaded the CPU and left the Coral TPU unused.
#### Fix Summary
Actions implemented to force Frigate to use the Coral TPU and to enable the Home Assistant integration.
1. **Force Coral TPU Usage**
* The `config.yml` was modified to remove the `cpu1` detector entirely. This forces Frigate to use the only available detector, `coral`, for all detection tasks.
* Edit `/opt/frigate/config/config.yml` (inside LXC `101`):
```yaml
# ======== DETECTORS ========
detectors:
  coral: # <--- This is now the ONLY detector
    type: edgetpu
    device: usb
  # cpu1: <--- This block was REMOVED
  #   type: cpu
```
2. **Enable MQTT Integration**
* The `config.yml` was updated to enable MQTT and point to the Home Assistant VM (`192.168.1.137`), which runs the Mosquitto broker.
* Edit `/opt/frigate/config/config.yml` (inside LXC `101`):
```yaml
# ======== GLOBAL ========
mqtt:
  enabled: true
  host: 192.168.1.137 # Your HA IP address
  user: frigate # User created in Mosquitto add-on
  password: <MQTT-PASSWORD> # <-- The password you created in the Mosquitto add-on
```
3. **Restart Frigate**
* The container was restarted to apply the new configuration.
```bash
# Run from the Proxmox host
pct exec 101 -- docker compose -f /opt/frigate/docker-compose.yml restart frigate
```
4. **Re-Enable Home Assistant Integration**
* After the restart, the Frigate integration in Home Assistant was "disabled" (a carry-over from previous troubleshooting).
* **Fix:** In Home Assistant, navigating to Settings > Devices & Services, clicking the three-dot menu (⋮) on the Frigate integration, and selecting "Reload" (or restarting Home Assistant) forced it to reconnect and find all devices and entities.
#### Verification Result (Stable)
This fix is confirmed stable and working. Health checks show:
* The `frigate.detector.cpu1` process is gone.
* `frigate.detector.coral` is handling all detection at a low, healthy CPU load (~10-15%).
* Mosquitto broker logs confirm Frigate is connected and sending data.
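For an end-to-end check of the MQTT link, you can subscribe to Frigate's topic tree from any machine with the Mosquitto clients installed (`apt install mosquitto-clients`); the host and user below come from the config above, and the password placeholder stands in for your own:
```bash
# Frigate publishes stats and events under the frigate/ prefix;
# seeing messages scroll by confirms the broker link is live.
mosquitto_sub -h 192.168.1.137 -u frigate -P '<MQTT-PASSWORD>' -t 'frigate/#' -v
```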
---
## ⚙️ Addendum — System RAM Upgrade & Re-allocation [ADDED]
**Date:** 2025-11-10
#### Issue
While troubleshooting the NIC hang, a critical memory issue was discovered. The Proxmox host (8GB total) was at 81.5% RAM usage, causing instability and Frigate to log "corrupt segment" errors.
* **Symptom 1 (Host OOM):** Host memory usage was ~7.3 GiB out of 7.6 GiB.
* **Symptom 2 (LXC Over-usage):** The Frigate LXC (ID `101`), allocated 2GB, was actively using ~2.9 GiB.
* **Symptom 3 (VM Allocation):** The HAOS VM (ID `100`), allocated 4GB, was only using ~1.8 GiB.

**Cause:** The 2GB of RAM allocated to Frigate was insufficient for 7 active cameras. The container was "stealing" memory from the host, starving it and leading to instability.
#### Fix Summary
Actions implemented to stabilize host memory and correctly allocate resources.
1. **Hardware Upgrade**
* Physical RAM on the Lenovo host was upgraded from 8GB to 16GB.
2. **Re-allocate Virtualization RAM**
* After shutting down both guests, the allocations were corrected to give Frigate enough RAM and prevent host starvation.
* HAOS VM (ID `100`):
```bash
qm set 100 --memory 4096
```
* Frigate LXC (ID `101`) (first, unmount if locked: `pct unmount 101`):
```bash
pct set 101 --memory 8192
```
#### Verification Result (Stable)
* Host RAM usage is now stable at ~36% (5.5GiB used out of 15.6GiB total).
* Frigate (`101`) has 8GB allocated, well above its projected load.
* HAOS (`100`) has 4GB allocated, providing a large safety buffer.
* The system is now stable with all 9 cameras enabled.
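The allocations and live usage can be re-checked at any time from the Proxmox shell:
```bash
qm config 100 | grep memory    # HAOS VM allocation
pct config 101 | grep memory   # Frigate LXC allocation
free -h                        # live host usage
```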
---
## ⚙️ Addendum — Remote Access Setup (Tailscale) [ADDED]
**Date:** 2025-11-10
#### Issue
Needed secure remote access to all three web UIs (Proxmox, HA, Frigate) from a phone and computer without opening public firewall ports.
#### Fix Summary
Actions implemented to create a zero-config VPN tunnel.
1. **Install Tailscale on the Proxmox Host**
* The client was installed directly on the Lenovo host (not in a container).
```bash
# Add Tailscale repo and key
curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.gpg | gpg --dearmor | tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.list | tee /etc/apt/sources.list.d/tailscale.list
# Install
apt update
apt install tailscale -y
```
2. **Authenticate Host**
* The host was connected to the Tailscale account.
```bash
tailscale up
```
* (Follow the on-screen URL to log in and authorize the new machine.)
3. **Enable Subnet Routing**
* This allows the host to act as a secure gateway to the entire local network.
```bash
# 1. Enable IP forwarding (note the -a on the second line: it must APPEND,
#    or the IPv6 setting would overwrite the IPv4 one)
echo 'net.ipv4.ip_forward = 1' | tee /etc/sysctl.d/99-ip-forward.conf
echo 'net.ipv6.conf.all.forwarding = 1' | tee -a /etc/sysctl.d/99-ip-forward.conf
sysctl -p /etc/sysctl.d/99-ip-forward.conf
# 2. Advertise the route
tailscale up --advertise-routes=192.168.1.0/24 --reset
```
4. **Approve Route in the Admin Console**
* Logged into the Tailscale web admin panel and approved the new `192.168.1.0/24` subnet route advertised by the Lenovo host.
#### Verification Result (Stable)
* Confirmed that with the Tailscale VPN enabled on a mobile phone (off the home WiFi), all local IP addresses (`192.168.1.114:8006`, `192.168.1.137:8123`, `192.168.1.40:5000`) are fully accessible.
* No ports were forwarded on the router.
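Two quick checks from the Proxmox host confirm the tunnel is up:
```bash
tailscale status    # lists peers; the host should show as connected
tailscale ip -4     # the host's Tailscale (100.x.y.z) address
```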