# IPv6 Connectivity Fix for K3s Clusters
This document explains how to resolve IPv6 internet connectivity issues in K3s clusters running on Rocky Linux with firewall-cmd.
## Problem Description
When K3s is configured with IPv6 support (dual-stack), pods may have IPv6 addresses but cannot reach external IPv6 services on the internet. This happens because:
1. **Missing IPv6 NAT/Masquerading**: Unlike IPv4, IPv6 requires explicit masquerading rules for container networks
2. **Firewall Configuration**: firewall-cmd needs masquerading enabled for NAT to work
3. **Network Policies**: Kube-router enforces strict egress policies that can block unmarked traffic
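Before changing anything, it can help to confirm the symptom. The sketch below is a minimal check, assuming you have a dual-stack pod to test from; `<pod-name>` and `<namespace>` are placeholders, and the container image is assumed to ship `ping6`:

```bash
# Show the pod's addresses; a dual-stack pod should list both an IPv4 and an IPv6 address
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.podIPs[*].ip}'; echo

# Try to reach an external IPv6 host (2001:4860:4860::8888 is Google Public DNS);
# this fails until masquerading is in place
kubectl exec -n <namespace> <pod-name> -- ping6 -c 2 2001:4860:4860::8888

# Check for IPv6 masquerading rules on the host; typically empty before the fix
sudo ip6tables -t nat -S POSTROUTING | grep -i masquerade
```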
## Understanding IPv6 Masquerading
In IPv4, NAT (Network Address Translation) is commonly used to allow private networks to access the internet. IPv6 was designed to eliminate NAT, but in containerized environments like Kubernetes, we still need masquerading to:
- **Translate source addresses**: Convert pod IPv6 addresses to the host's routable IPv6 address
- **Maintain connectivity**: Ensure return traffic reaches the correct pod
- **Preserve security**: Keep internal cluster addressing separate from external routing
### Why Masquerading is Required
```bash
# Without masquerading:
# Pod (2001:cafe:42::9) -> Internet (2001:db8::1)
# Internet sees: 2001:cafe:42::9 (unreachable from internet)
# Response: 2001:db8::1 -> 2001:cafe:42::9 (fails, no route)
# With masquerading:
# Pod (2001:cafe:42::9) -> MASQ -> Host (2001:cafe:42::1) -> Internet
# Internet sees: 2001:cafe:42::1 (routable)
# Response: Internet -> Host -> MASQ -> Pod (success)
```

## Solution Implementation

### Step 1: Configure firewall-cmd

Masquerading must be enabled in firewall-cmd for NAT rules to function:

```bash
# Enable masquerading (required for IPv6 NAT)
sudo firewall-cmd --add-masquerade --permanent
# Add K3s essential ports
sudo firewall-cmd --add-port=6443/tcp --permanent # Kubernetes API
sudo firewall-cmd --add-port=10250/tcp --permanent # kubelet
sudo firewall-cmd --add-port=2379-2380/tcp --permanent # etcd
sudo firewall-cmd --add-port=30000-32767/tcp --permanent # NodePort range
sudo firewall-cmd --add-port=30000-32767/udp --permanent # NodePort range UDP
# Apply changes
sudo firewall-cmd --reload
# Verify configuration
sudo firewall-cmd --list-all
```

Expected output should include:

```
masquerade: yes
```

### Step 2: Add the IPv6 Masquerading Rule

Add the IPv6 masquerading rule for the K3s cluster CIDR:

```bash
# Add IPv6 masquerading for K3s pods
# Source: K3s cluster CIDR (2001:cafe:42::/56)
# Destination: NOT the cluster CIDR (external traffic only)
sudo ip6tables -t nat -A POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE
# Verify the rule was added
sudo ip6tables -L -n -t nat
```

Rule explanation:

- `-t nat`: Use the NAT table
- `-A POSTROUTING`: Add to the POSTROUTING chain (after the routing decision)
- `-s 2001:cafe:42::/56`: Source is any pod in the cluster
- `! -d 2001:cafe:42::/56`: Destination is NOT another pod (external traffic only)
- `-j MASQUERADE`: Replace the source IP with the host's IP
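Note that `-A` appends the rule unconditionally, so running the command a second time creates a duplicate. A minimal sketch of an idempotent variant, using `-C` to check for the rule first (optional, not part of the documented procedure):

```bash
# Append the MASQUERADE rule only if an identical rule is not already present
sudo ip6tables -t nat -C POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE 2>/dev/null \
  || sudo ip6tables -t nat -A POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE
```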
### Step 3: Persist the Rule with a systemd Service

Create a systemd service to ensure the IPv6 NAT rule persists across reboots:

```bash
sudo tee /etc/systemd/system/k3s-ipv6-nat.service << 'EOF'
[Unit]
Description=K3s IPv6 NAT Rules
Documentation=https://github.com/k3s-io/k3s/issues/1126
After=network.target k3s.service
Before=firewalld.service
[Service]
Type=oneshot
RemainAfterExit=yes
# Add IPv6 masquerading rule for K3s cluster CIDR
ExecStart=/sbin/ip6tables -t nat -A POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE
# Remove the rule on service stop
ExecStop=/sbin/ip6tables -t nat -D POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE
[Install]
WantedBy=multi-user.target
EOF
```

Service configuration details:
- After=network.target k3s.service: Runs after network and K3s are ready
- Before=firewalld.service: Runs before firewalld to avoid conflicts
- Type=oneshot: Service runs once and exits
- RemainAfterExit=yes: Systemd considers service active after successful execution
- ExecStart: Command to add the NAT rule
- ExecStop: Command to remove the NAT rule (cleanup)
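Before enabling the unit, it can be checked for obvious mistakes. A quick sketch, assuming the file was written to the path used above:

```bash
# Make systemd aware of the new unit file
sudo systemctl daemon-reload

# Lint the unit file; systemd prints warnings for unknown directives or ordering problems
sudo systemd-analyze verify /etc/systemd/system/k3s-ipv6-nat.service

# Show the ordering dependencies systemd derived from After= and Before=
systemctl show -p After -p Before k3s-ipv6-nat.service
```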
### Step 4: Enable and Start the Service

```bash
# Enable service to start automatically on boot
sudo systemctl enable k3s-ipv6-nat.service
# Start the service immediately
sudo systemctl start k3s-ipv6-nat.service
# Check service status
sudo systemctl status k3s-ipv6-nat.service
# Verify the rule is active
sudo ip6tables -L -n -t nat | grep MASQUERADE
```

### Step 5: Configure the Traefik Route

Create a Traefik dynamic route to expose Navidrome through your domain:

```bash
# Create the route file
sudo tee /path/to/traefik/dynamic/navidrome.yml << 'EOF'
http:
  routers:
    navidrome-router-http:
      rule: "Host(`navidrome.drewdomi.space`)"
      service: navidrome
      entryPoints:
        - anubis

  services:
    navidrome:
      loadBalancer:
        servers:
          - url: "http://10.43.113.92:4533"
EOF
# Restart Traefik to pick up the new route
cd /path/to/traefik
docker compose restart traefik
```

Route configuration details:
- **Host rule**: Matches requests to `navidrome.drewdomi.space`
- **Entry point**: Uses the `anubis` entry point (port 3923) for bot protection
- **Load balancer**: Points to the Navidrome service's internal IP and port
- **Dynamic loading**: Traefik automatically picks up changes in the dynamic directory
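If the Traefik API/dashboard is enabled (an assumption; it is not shown in this setup), the routers Traefik has loaded can be listed to confirm that the dynamic file was picked up. A sketch, assuming the API is reachable on port 8080 and the container is named `traefik`:

```bash
# List the HTTP routers Traefik currently knows about (requires the Traefik API to be enabled)
curl -s http://localhost:8080/api/http/routers | jq '.[] | {name, rule, status}'

# Alternatively, watch the logs while saving the file; the file provider reports
# a configuration reload when it detects the change
docker logs -f traefik
```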
## Verification

```bash
# Get a pod name
kubectl get pods -A
# Test IPv6 connectivity
kubectl exec -it -n <namespace> <pod-name> -- ping6 google.com
# Test IPv4 connectivity (should also work)
kubectl exec -it -n <namespace> <pod-name> -- ping 8.8.8.8
# Test DNS resolution
kubectl exec -it -n <namespace> <pod-name> -- nslookup google.com

# Test internal access
curl -I http://10.43.113.92:4533
# Test external access via domain
curl -I https://navidrome.drewdomi.space
```

**Before fix:**

```
PING google.com (2800:3f0:4001:839::200e): 56 data bytes
--- google.com ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
```

**After fix:**

```
PING google.com (2800:3f0:4001:839::200e): 56 data bytes
64 bytes from 2800:3f0:4001:839::200e: seq=0 ttl=58 time=20.1 ms
64 bytes from 2800:3f0:4001:839::200e: seq=1 ttl=58 time=19.8 ms
```

## Troubleshooting

### Check Current Configuration

```bash
# Verify K3s IPv6 configuration
kubectl get nodes -o wide
# Check firewall status
sudo firewall-cmd --list-all
# Verify IPv6 NAT rules
sudo ip6tables -L -n -t nat
# Check service status
sudo systemctl status k3s-ipv6-nat.service
# Test Traefik route
curl -H "Host: navidrome.drewdomi.space" http://localhost:3923-
Masquerading not enabled:
sudo firewall-cmd --add-masquerade --permanent sudo firewall-cmd --reload
**IPv6 forwarding disabled:**

```bash
sudo sysctl net.ipv6.conf.all.forwarding=1
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
```
**Service fails to start:**

```bash
# Check service logs
sudo journalctl -u k3s-ipv6-nat.service -f

# Manually test the rule
sudo ip6tables -t nat -A POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE
```
**Traefik route not working:**

```bash
# Check Traefik logs
docker logs Traefik

# Verify dynamic file syntax
docker exec Traefik cat /etc/traefik/dynamic/navidrome.yml

# Check if file is being watched
docker exec Traefik ls -la /etc/traefik/dynamic/
```

### Advanced Debugging

```bash
# Check if packets are being masqueraded
sudo ip6tables -L -n -t nat -v
# Monitor IPv6 traffic
sudo tcpdump -i any -n ip6 and host google.com
# Monitor Traefik access logs
tail -f /AppData/traefik/logs/access.log | jq '.'
```

## K3s Configuration Reference

Your K3s server should be configured with IPv6 (dual-stack) support:

```bash
# Example K3s server configuration
/usr/local/bin/k3s server \
--node-ip=192.168.1.64,2804:4f60:40c8:7400:d6ae:52ff:feff:67f6 \
--cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 \
--service-cidr=10.43.0.0/16,2001:cafe:43::/112
```

## Security Considerations

- **Principle of Least Privilege**: The NAT rule only applies to external traffic (`! -d 2001:cafe:42::/56`)
- **Network Policies**: Kube-router network policies still apply for pod-to-pod communication
- **Firewall**: Host firewall rules remain in effect for incoming traffic
- **Bot Protection**: Anubis provides an additional security layer for web services
- **Monitoring**: Consider monitoring IPv6 traffic for unusual patterns
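If you want to tighten the NAT rule further, the masquerade can be restricted to traffic that actually leaves through the host's external interface. This is a sketch of an optional variant, not part of the procedure above; `eth0` is a placeholder for the host's uplink interface:

```bash
# Masquerade cluster traffic only when it leaves via the external interface
sudo ip6tables -t nat -A POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -o eth0 -j MASQUERADE
```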
## References

- K3s IPv6 Configuration
- Kubernetes IPv6 Support
- Rocky Linux Firewall Configuration
- IPv6 Masquerading in Linux
- Traefik Dynamic Configuration
## Maintenance

### Regular Verification

```bash
#!/bin/bash
# Monthly verification script
echo "Checking K3s IPv6 connectivity..."
# Check service status
systemctl is-active k3s-ipv6-nat.service
# Verify NAT rule exists
ip6tables -L -n -t nat | grep -q "2001:cafe:42::/56" && echo "NAT rule: OK" || echo "NAT rule: MISSING"
# Test connectivity from a pod
kubectl run ipv6-test --image=busybox --rm -i --restart=Never -- ping6 -c 2 google.com
# Test Navidrome service
curl -I https://navidrome.drewdomi.space
```

### Updating the Cluster CIDR

If you change your K3s cluster CIDR, update the service:

```bash
# Edit the service file
sudo systemctl edit k3s-ipv6-nat.service
# Update the CIDR in ExecStart and ExecStop
# Reload and restart
sudo systemctl daemon-reload
sudo systemctl restart k3s-ipv6-nat.service
```

### Log Monitoring

```bash
# Monitor system logs for NAT-related issues
sudo journalctl -u k3s-ipv6-nat.service -f
# Check firewall logs
sudo journalctl -u firewalld -f
# Monitor K3s logs for network issues
sudo journalctl -u k3s -f | grep -i ipv6
```