| description | lang | tags |
|---|---|---|
| Set up a Debian-based Incus host with cloud-init and use Open vSwitch for container networking. | en | linux, netplan.io, openvswitch, incus, cloud-init |
[toc]
Copyright (c) 2025 Philippe Latu. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
C-3PO, protocol droid by trade, network engineer by necessity, currently experiencing a fatal ‘missing coffee’ exception.
In this lab, you will build a small Open vSwitch‑backed network on a Debian Trixie host and attach a set of unprivileged Incus containers to it. This environment demonstrates how to separate host connectivity, virtual switching, and container networking while keeping full control over IPv4/IPv6 routing, firewalling, and address management.
The main stages of the scenario are as follows:
- Install the necessary networking and container tools, including Netplan, Open vSwitch, nftables, dnsmasq and Incus, on a Debian Testing host or virtual machine. The latest version of Incus is installed from the Zabbly repository.
- Create the Open vSwitch‑based topology by defining the c-3po switch, the vlan10 Switched Virtual Interface, enabling IPv4/IPv6 forwarding, and configuring dnsmasq plus an nftables masquerading ruleset for container egress.
- Initialise Incus to use C-3PO instead of a local bridge and adapt the default profile so that the container NICs are bridged into VLAN 10. Then, launch the first set of Debian Trixie containers attached to this virtual network.
- Validate the setup by checking addressing and routing from within the containers, testing East‑West connectivity, automating maintenance commands across running containers, and inspecting Open vSwitch state (ports, neighbours, and CAM table) on the host.
After completing this lab, you will be able to:
- Configure an Open vSwitch‑based Layer 2/3 fabric using Netplan, including a VLAN‑backed Switched Virtual Interface for container traffic.
- Deploy a basic routing, firewall, and DHCP/DNS stack with nftables and dnsmasq to provide IPv4/IPv6 connectivity for containers.
- Install and initialise Incus to consume the existing Open vSwitch bridge, and attach unprivileged containers to a tagged VLAN.
- Validate and troubleshoot the resulting setup by inspecting container connectivity, host routing, and Open vSwitch state (ports, neighbours, CAM entries).
The installation process begins with the base installation of Debian Testing on either a host system or a virtual machine.
:::info
Netplan is now included by default in the images provided at cloud.debian.org. Therefore, there is no need to install the netplan.io package as shown below.
:::
:::info
Having supported students who were struggling to incorporate Open vSwitch into cloud-init stages, I decided to pre-install the openvswitch-switch package in the qcow2 image files for the master virtual machines. This means that there is also no need to install the package manually after booting the virtual machine.
:::
sudo apt -y install netplan.io openvswitch-switch nftables dnsmasq
We can verify that the packages have been installed and display the versions of the tools used in this lab.
apt search ^netplan.io$
netplan.io/testing,now 1.1.2-8 amd64 [installed]
  Declarative network configuration for various backends at runtime
apt search ^openvswitch-switch$
openvswitch-switch/testing,now 3.6.0-6+b1 amd64 [installed]
  Open vSwitch switch implementations
apt search ^nftables$
nftables/testing,now 1.1.6-1 amd64 [installed]
  program for controlling Netfilter project packet filtering rules
apt search ^dnsmasq$
dnsmasq/testing,now 2.92~rc3-1 all [installed]
  small DNS cache proxy and DHCP/TFTP server – system daemon
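If you prefer querying the tools themselves rather than the package catalog, the optional commands below print the runtime versions directly; this is only a quick sanity check and the output format differs from one tool to another.

```bash
# Optional: print the runtime versions reported by the tools themselves
ovs-vsctl --version
nft --version
dnsmasq --version
```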
Part 2 creates the c-3po Open vSwitch bridge and the vlan10 Switched Virtual Interface, which together provide Layer 2 connectivity and a routed gateway for all attached containers.
Note that the example file uses automatic addressing for the enp0s1 host interface. You may need to adjust these settings according to your requirements.
- c-3po is an access layer switch. Quote from Star Wars: "Don't blame me. I'm an interpreter. I'm not supposed to know a power socket from a computer terminal."
- vlan10 is a Switched Virtual Interface that feeds the host routing table and acts as the default gateway for all containers.
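For readers curious about what Netplan's Open vSwitch backend does behind the scenes, the sketch below shows the approximate equivalent ovs-vsctl calls. It is illustrative only: do not run it by hand on a host where Netplan already manages the bridge.

```bash
# Illustrative only: roughly what the Netplan OVS backend provisions
# (Netplan manages c-3po for us, so there is no need to type this manually)
sudo ovs-vsctl add-br c-3po
sudo ovs-vsctl add-port c-3po vlan10 tag=10 \
    -- set interface vlan10 type=internal
```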
Here is a copy of the /etc/netplan/10-enp0s1+ovs.yaml file.
---
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s1:
      dhcp4: true
      dhcp6: false
      accept-ra: true
  bridges:
    c-3po:
      openvswitch: {}
  vlans:
    vlan10:
      id: 10
      link: "c-3po"
      openvswitch: {}
      addresses:
        - "192.0.2.1/24"
        - "fdc0:7a62:a::1/64"
        - "fe80:a::1/64"
Run the following command to apply all the network parameters declared in the /etc/netplan/10-enp0s1+ovs.yaml file.
sudo netplan apply
Finally, check the status of all configured network interfaces.
sudo netplan status
Online state: online
DNS Addresses: 127.0.0.53 (stub)
DNS Search: .
● 1: lo ethernet UNKNOWN/UP (unmanaged)
MAC Address: 00:00:00:00:00:00
Addresses: 127.0.0.1/8
::1/128
● 2: enp0s1 ethernet UP (networkd: enp0s1)
MAC Address: b8:ad:ca:fe:00:01 (Red Hat, Inc.)
Addresses: 198.18.53.1/22 (dynamic, dhcp)
2001:678:3fc:34:baad:caff:fefe:1/64 (dynamic, ra)
fe80::baad:caff:fefe:1/64 (link)
DNS Addresses: 172.16.0.2
2001:678:3fc:3::2
Routes: default via 198.18.52.1 from 198.18.53.1 metric 100 (dhcp)
172.16.0.2 via 198.18.52.1 from 198.18.53.1 metric 100 (dhcp)
198.18.52.0/22 from 198.18.53.1 metric 100 (link)
198.18.52.1 from 198.18.53.1 metric 100 (dhcp, link)
2001:678:3fc:34::/64 metric 100 (ra)
fe80::/64 metric 256
default via fe80::34:1 metric 100 (ra)
● 4: c-3po other UNKNOWN/UP (networkd: c-3po)
MAC Address: 46:2c:c3:2c:02:4c
Addresses: fe80::a807:89ff:fe34:a75b/64 (link)
Routes: fe80::/64 metric 256
● 5: vlan10 other UNKNOWN/UP (networkd: vlan10)
MAC Address: 46:2c:c3:2c:02:4c
Addresses: 192.0.2.1/24
fdc0:7a62:a:0:442c:c3ff:fe2c:24c/64 (ra)
fdc0:7a62:a::1/64 (ra)
fe80:a::1/64 (link)
fe80::442c:c3ff:fe2c:24c/64 (link)
DNS Addresses: 2620:fe::fe
2001:678:3fc:3::2
Routes: 192.0.2.0/24 from 192.0.2.1 (link)
fdc0:7a62:a::/64 metric 256
fdc0:7a62:a::/64 metric 1024 (ra)
fe80::/64 metric 256
fe80:a::/64 metric 256
To enable the host to route traffic between the external network and the Open vSwitch VLAN, IPv4/IPv6 forwarding must be enabled in the kernel. A small set of additional sysctl parameters also improves basic IPv4 filtering.
Create a new file named /etc/sysctl.d/10-routing.conf.
cat << 'EOF' | sudo tee /etc/sysctl.d/10-routing.conf
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv4.conf.all.log_martians = 1
EOF
Make it happen!
sudo sysctl --system | grep net
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.c-3po.rp_filter = 2
net.ipv4.conf.enp0s1.rp_filter = 2
net.ipv4.conf.lo.rp_filter = 2
net.ipv4.conf.ovs-system.rp_filter = 2
net.ipv4.conf.vlan10.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.c-3po.accept_source_route = 0
net.ipv4.conf.enp0s1.accept_source_route = 0
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.ovs-system.accept_source_route = 0
net.ipv4.conf.vlan10.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.c-3po.promote_secondaries = 1
net.ipv4.conf.enp0s1.promote_secondaries = 1
net.ipv4.conf.lo.promote_secondaries = 1
net.ipv4.conf.ovs-system.promote_secondaries = 1
net.ipv4.conf.vlan10.promote_secondaries = 1
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
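As a quick confirmation that routing is really enabled, the two forwarding keys can be read back individually.

```bash
# Both values should be 1 after the sysctl settings are applied
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
```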
Part 4 configures dnsmasq on the container VLAN, enabling new Incus instances to automatically receive IPv4/IPv6 addresses, router advertisements and DNS settings. This turns the Open vSwitch segment into a self-contained, plug-and-play network for containers.
Edit the /etc/dnsmasq.conf file to set the required parameters for addressing containers and resolving names. Please note that running the following command will erase any previous dnsmasq configuration.
cat << 'EOF' | sudo tee /etc/dnsmasq.conf
# Specify Container VLAN interface
interface=vlan10
# Enable DHCPv4 on Container VLAN
dhcp-range=192.0.2.100,192.0.2.200,3h
# Enable IPv6 router advertisements
enable-ra
# Enable SLAAC
dhcp-range=::,constructor:vlan10,ra-names,slaac
# Optional: Specify DNS servers
dhcp-option=option:dns-server,172.16.0.2,9.9.9.9
dhcp-option=option6:dns-server,[2001:678:3fc:3::2],[2620:fe::fe]
# Avoid DNS listen port conflict between dnsmasq and systemd-resolved
port=0
EOF
Don't forget to restart the service after editing the configuration file.
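If you want to catch typos first, dnsmasq can check the configuration file syntax without starting the daemon. This is an optional step.

```bash
# Parses /etc/dnsmasq.conf and reports the syntax check result
sudo dnsmasq --test
```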
sudo systemctl restart dnsmasq.service
Part 5 introduces the minimum necessary nftables configuration to perform source NAT on all packets leaving the host via the enp0s1 interface. This allows containers on the Open vSwitch VLAN to reach external networks using the host’s IP address.
Create a new basic /etc/nftables.conf file that only configures source address translation for all outbound packets passing through the enp0s1 interface.
cat << 'EOF' | sudo tee /etc/nftables.conf
#!/usr/sbin/nft -f
flush ruleset
table inet nat {
  chain postrouting {
    type nat hook postrouting priority 100;
    oifname "enp0s1" masquerade
  }
}
EOF
Don't forget to enable and start the nftables systemd service to load the ruleset.
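Optionally, the ruleset file can be checked for syntax errors without loading it into the kernel.

```bash
# --check (-c) parses and validates the file without applying it
sudo nft -c -f /etc/nftables.conf
```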
sudo systemctl enable --now nftables.service
sudo nft list ruleset
table inet nat {
  chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    oifname "enp0s1" masquerade
  }
}
In Part 6, the Zabbly APT repository is added and the Incus container manager is installed. The unprivileged user is configured so that containers can be created and managed without root access.
We need to add a new package source.
- Add the new repository key
wget -O - https://pkgs.zabbly.com/key.asc | sudo tee /etc/apt/keyrings/zabbly.asc
- Add the new repository parameters
cat << EOF | sudo tee /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: trixie
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF
- Add the Trixie stable version to the Debian sources file
sudo sed -i 's/Suites:[[:space:]]\+testing /Suites: trixie testing /' /etc/apt/sources.list.d/debian.sources
- We are now ready to update the package catalog and install Incus
sudo apt update
sudo apt -y install incus --no-install-recommends
- The normal user `etu` is our default unprivileged user and must belong to the `incus-admin` and `incus` groups.
for grp in incus-admin incus
do
  sudo adduser etu $grp
done
Log out and log back in to make it effective.
- After the new login, the group assignment is correct.
groups
etu adm sudo users incus-admin incus
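As an additional quick check after the new login, you can confirm that the Incus client and daemon respond and that the group membership gives access to the Incus socket. The versions reported will vary with the Zabbly packages installed.

```bash
# Client and server versions (requires access to the Incus socket)
incus version
# Group membership of the current user
id -nG
```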
Part 7 customises the Incus default profile. This means that, rather than using the local Linux bridge proposed by Incus, containers will connect directly to the existing C-3PO Open vSwitch bridge on VLAN 10.
This customization can be done using the incus admin init command, which offers a variety of options. The important thing is to prevent the creation of a local network bridge and instead use our Open vSwitch c-3po switch.
incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Would you like to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, truenas) [default=dir]:
Where should this storage pool store its data? [default=/var/lib/incus/storage-pools/default]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: c-3po
Would you like the server to be available over the network? (yes/no) [default=no]: yes
Addresses to associate with (without ports) [default=all]:
Port to bind to [default=8443]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
We have to change the nictype: from macvlan to bridged and set the vlan: number.
incus profile device set default eth0 nictype bridged
incus profile device set default eth0 vlan 10
incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: c-3po
    type: nic
    vlan: "10"
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
project: default
If we want to make the initialisation of Incus reproducible, we can reuse the YAML dump of this setup. Here are the corresponding commands:
incus admin init --dump > incus-init.yaml
cat incus-init.yaml
---
config:
  core.https_address: '[::]:8443'
networks: []
storage_pools:
- config:
    source: /var/lib/incus/storage-pools/default
  description: ""
  name: default
  driver: dir
storage_volumes: []
profiles:
- config: {}
  description: Default Incus profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: c-3po
      type: nic
      vlan: "10"
    root:
      path: /
      pool: default
      type: disk
  name: default
  project: ""
projects:
- config:
    features.images: "true"
    features.networks: "true"
    features.networks.zones: "true"
    features.profiles: "true"
    features.storage.buckets: "true"
    features.storage.volumes: "true"
  description: Default Incus project
  name: default
certificates: []
cluster_groups: []
Here is the non-interactive Incus initialisation and customization command:
incus admin init --preseed < incus-init.yaml
We are done with the default profile.
In Part 8, a small set of Debian Trixie Incus containers is launched and connected to VLAN 10. It is then verified that each instance has booted correctly with working IPv4 and IPv6 connectivity.
for i in {0..2}
do
incus launch images:debian/trixie c${i}
done
Launching c0
Launching c1
Launching c2
incus ls
+------+---------+--------------------+------------------------------------------+-----------+-------------+
| NAME | STATE | IPv4 | IPv6 | TYPE | SNAPSHOTS |
+------+---------+--------------------+------------------------------------------+-----------+-------------+
| c0 | RUNNING | 192.0.2.183 (eth0) | fdc0:7a62:a:0:1266:6aff:fefb:bbae (eth0) | CONTAINER | 0 |
+------+---------+--------------------+------------------------------------------+-----------+-------------+
| c1 | RUNNING | 192.0.2.187 (eth0) | fdc0:7a62:a:0:1266:6aff:fe7e:cd6d (eth0) | CONTAINER | 0 |
+------+---------+--------------------+------------------------------------------+-----------+-------------+
| c2 | RUNNING | 192.0.2.161 (eth0) | fdc0:7a62:a:0:1266:6aff:fe31:7f36 (eth0) | CONTAINER | 0 |
+------+---------+--------------------+------------------------------------------+-----------+-------------+
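To confirm that each instance really inherited the bridged VLAN 10 NIC from the default profile, an instance's expanded configuration can be displayed. This is an optional check using the standard incus client.

```bash
# Shows the instance configuration with profile devices expanded,
# including the eth0 nic (nictype: bridged, parent: c-3po, vlan: "10")
incus config show c0 --expanded
```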
Part 9 verifies that a newly created container with the correct IPv4/IPv6 addresses can access external package mirrors and communicate with its peers on VLAN 10.
incus exec c0 -- bash
root@c0:~#
root@c0:~# ip addr ls
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 10:66:6a:fb:bb:ae brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.0.2.183/24 metric 1024 brd 192.0.2.255 scope global dynamic eth0
valid_lft 10690sec preferred_lft 10690sec
inet6 fdc0:7a62:a:0:1266:6aff:fefb:bbae/64 scope global mngtmpaddr noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::1266:6aff:fefb:bbae/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
root@c0:~# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://deb.debian.org/debian trixie-updates InRelease [47.3 kB]
Get:3 http://deb.debian.org/debian-security trixie-security InRelease [43.4 kB]
Get:4 http://deb.debian.org/debian-security trixie-security/main amd64 Packages [85.2 kB]
Get:5 http://deb.debian.org/debian-security trixie-security/main Translation-en [53.6 kB]
Fetched 229 kB in 0s (685 kB/s)
All packages are up to date.
root@c0:~# for i in {1..2}; do ping -c2 c$i; done
PING c1 (fdc0:7a62:a:0:1266:6aff:fe7e:cd6d) 56 data bytes
64 bytes from c1 (fdc0:7a62:a:0:1266:6aff:fe7e:cd6d): icmp_seq=1 ttl=64 time=0.341 ms
64 bytes from c1 (fdc0:7a62:a:0:1266:6aff:fe7e:cd6d): icmp_seq=2 ttl=64 time=0.078 ms
--- c1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.078/0.209/0.341/0.131 ms
PING c2 (fdc0:7a62:a:0:1266:6aff:fe31:7f36) 56 data bytes
64 bytes from c2 (fdc0:7a62:a:0:1266:6aff:fe31:7f36): icmp_seq=1 ttl=64 time=0.425 ms
64 bytes from c2 (fdc0:7a62:a:0:1266:6aff:fe31:7f36): icmp_seq=2 ttl=64 time=0.069 ms
--- c2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.069/0.247/0.425/0.178 ms
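Routing and name resolution inside a container can be checked in the same way from the host. The addresses shown should match the dnsmasq ranges and DNS options configured earlier.

```bash
# Default routes and resolver configuration seen from container c0
incus exec c0 -- ip route
incus exec c0 -- ip -6 route
incus exec c0 -- cat /etc/resolv.conf
```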
Part 10 introduces a small helper script that automates routine administration tasks such as upgrades and cleanup by running maintenance commands inside every running container.
To update packages in containers, let's create a shell script that runs commands in each running container.
cat << 'EOF' > run-in-c.sh
#!/usr/bin/env bash
set -euo pipefail
# All command *strings* as given on the CLI
cmds=("$@")
# Read container names into an array
mapfile -t containers < <(incus list --format csv --columns n status=Running)
for c in "${containers[@]}"; do
echo ">>>>>>>>>>>>>>>>> $c"
for cmd in "${cmds[@]}"; do
# Run via a single shell inside the container
incus exec "$c" -- bash -c "$cmd"
done
done
EOF
Then run the usual apt commands within containers.
bash run-in-c.sh \
"apt update" \
"apt -y full-upgrade" \
"apt clean" \
"apt -y autopurge"In Part 11, the Open vSwitch configuration and forwarding tables on the host are inspected to confirm that all container interfaces have been correctly tagged in VLAN 10 and that the MAC and neighbour entries match the expected topology.
Display OvS main switch configuration.
sudo ovs-vsctl show
Please note that all veth container port connections are tagged in VLAN 10.
0119a374-1812-4fe9-a391-8be6031e7d1c
Bridge c-3po
fail_mode: standalone
Port vethc68e3315
tag: 10
Interface vethc68e3315
Port vethdd774d01
tag: 10
Interface vethdd774d01
Port c-3po
Interface c-3po
type: internal
Port veth898bc1a1
tag: 10
Interface veth898bc1a1
Port vlan10
tag: 10
Interface vlan10
type: internal
ovs_version: "3.6.0"Say hello to VLAN 10 neighborhood
ping -c2 ff02::1%vlan10PING ff02::1%vlan10 (ff02::1%vlan10) 56 data bytes
64 bytes from fe80:a::1%vlan10: icmp_seq=1 ttl=64 time=0.089 ms
64 bytes from fe80::1266:6aff:fefb:bbae%vlan10: icmp_seq=1 ttl=64 time=0.796 ms
64 bytes from fe80::1266:6aff:fe7e:cd6d%vlan10: icmp_seq=1 ttl=64 time=0.844 ms
64 bytes from fe80::1266:6aff:fe31:7f36%vlan10: icmp_seq=1 ttl=64 time=0.859 ms
64 bytes from fe80:a::1%vlan10: icmp_seq=2 ttl=64 time=0.072 ms
--- ff02::1%vlan10 ping statistics ---
2 packets transmitted, 2 received, +3 duplicates, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.072/0.532/0.859/0.369 ms
As we already knew, we have three neighbors.
ip nei ls dev vlan10
192.0.2.183 lladdr 10:66:6a:fb:bb:ae STALE
192.0.2.161 lladdr 10:66:6a:31:7f:36 STALE
192.0.2.187 lladdr 10:66:6a:7e:cd:6d STALE
fdc0:7a62:a:0:1266:6aff:fe31:7f36 lladdr 10:66:6a:31:7f:36 STALE
fdc0:7a62:a:0:1266:6aff:fefb:bbae lladdr 10:66:6a:fb:bb:ae STALE
fe80:a::1 lladdr 46:2c:c3:2c:02:4c router STALE
fdc0:7a62:a:0:1266:6aff:fe7e:cd6d lladdr 10:66:6a:7e:cd:6d STALE
fe80::1266:6aff:fe31:7f36 lladdr 10:66:6a:31:7f:36 REACHABLE
fe80::1266:6aff:fefb:bbae lladdr 10:66:6a:fb:bb:ae REACHABLE
fe80::1266:6aff:fe7e:cd6d lladdr 10:66:6a:7e:cd:6d REACHABLE
Display Open vSwitch Content Addressable Memory (CAM) entries.
sudo ovs-appctl fdb/show c-3po
 port  VLAN  MAC                Age
LOCAL 0 46:2c:c3:2c:02:4c 294
3 10 10:66:6a:7e:cd:6d 75
4 10 10:66:6a:31:7f:36 75
1 10 46:2c:c3:2c:02:4c 75
2 10 10:66:6a:fb:bb:ae 75
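For a complementary view of the same topology, the bridge ports can also be listed directly and a single port's VLAN tag queried; vlan10 is used as the example record here.

```bash
# List all ports attached to the c-3po bridge
sudo ovs-vsctl list-ports c-3po
# Show the tag column of one port record
sudo ovs-vsctl get port vlan10 tag
```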
This part introduces the YAML declaration used to instantiate the ovs-incus virtual machine with cloud-init, turning it into a reproducible template that provisions Debian, networking packages, and all required Open vSwitch, routing, and dnsmasq configuration on first boot.
Two YAML variants are provided, reflecting different levels of cloud-init features:
- a minimal configuration that focuses on package installation and basic files,
- an extended configuration that also manages APT policy, additional repositories, and post-configuration commands.
Cloud-init is an open-source initialisation tool that automatically configures a virtual machine at boot time using a YAML file. It handles tasks such as network setup, user creation, SSH keys, APT configuration, file injection, package installation and final orchestration commands, eliminating the need for manual intervention.
Cloud-init has a step-by-step process for starting up, and each step allows you to add your own user data to customise the instance. In this lab, the YAML declarations use these hooks (such as bootcmd, write_files, apt, packages, and runcmd) at exact points in that flow.
This means there is a strict execution order that must be followed when deciding where to place commands, file writes, and package operations. The way user data is written in YAML is closely linked to the boot stages. If a directive is misordered, it can lead to subtle race conditions. For example, this can mean running a command before its package is installed or configuring a service before its configuration files exist.
:::warning
Cloud-init is a powerful automation tool, but relying heavily on its execution order can quickly produce YAML declarations that are hard to read and debug. Subtle ordering dependencies between modules mean small changes in one hook can cause non-obvious side effects later in the boot flow.
This is why this part provides two YAML files at different levels of complexity.
:::
Below is an ordered list focused on the YAML keys we are using, in the order they are typically evaluated within the boot flow.
- bootcmd: Runs early in the boot process on every boot, allowing low-level commands such as preparing keyrings, creating directories, or tweaking the system before package and configuration modules run.
- write_files: Writes configuration files to disk so that services and package tools see the correct configuration when later modules (like apt and packages) execute.
- apt (and apt: sources / apt: conf): Configures APT options, repositories, and non-interactive dpkg behavior, ensuring that subsequent package installation uses the desired sources and policies.
- packages: Installs the requested Debian packages using the configured APT sources, typically after an automatic apt-get update triggered by the module.
- runcmd: Defines commands that are written to a script and executed near the end of the boot sequence, after packages are installed, commonly used for final orchestration like enabling services or running Incus initialisation.
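On a booted instance, the order and duration of these stages can be inspected with cloud-init's own tooling, which is handy when debugging a misordered directive. This is a quick check, assuming console or SSH access to the virtual machine.

```bash
# Overall result of the last cloud-init run
cloud-init status --long
# Per-module timing, in boot order
cloud-init analyze show
```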
The YAML declaration file below defines the virtual machine template that is used to create a dedicated ovs-incus host in the Teaching Private Cloud environment.
This example automatically provisions a Debian Testing image and pre-installs essential networking components, such as nftables and dnsmasq, from the shared lab scripts repository.
This first-level YAML declaration deliberately uses a small subset of cloud-init user-data hooks. The plan is to keep the template easy to read and understand. The trade-off is that some tasks are left to be done later in the lab when the virtual machine has booted, rather than being fully automated at VM boot time.
Here is the list of hooks used here:
- packages: Installs the minimal set of networking packages (nftables and dnsmasq) required for the ovs-incus host.
- write_files: Drops the routing, dnsmasq, and nftables configuration files directly into /etc on first boot.
- runcmd: Applies the Netplan configuration, removes the default network file, and enables the nftables service once all files and packages are in place.
Here is a copy of the ovs+incus-level1.yaml file:
---
kvm:
  vms:
    - vm_name: ovs-incus
      os: linux
      master_image: debian-testing-amd64.qcow2
      force_copy: false
      memory: 2048
      tapnum: 1
      cloud_init:
        force_seed: false
        hostname: ovs-incus
        packages:
          - nftables
          - dnsmasq
        write_files:
          - path: /etc/sysctl.d/10-routing.conf
            content: |
              net.ipv4.conf.default.rp_filter=1
              net.ipv4.conf.all.rp_filter=1
              net.ipv4.ip_forward=1
              net.ipv6.conf.all.forwarding=1
              net.ipv4.conf.all.log_martians = 1
          - path: /etc/dnsmasq.conf
            content: |
              # Specify Container VLAN interface
              interface=vlan10
              # Enable DHCPv4 on Container VLAN
              dhcp-range=192.0.2.100,192.0.2.200,3h
              # Enable IPv6 router advertisements
              enable-ra
              # Enable SLAAC
              dhcp-range=::,constructor:vlan10,ra-names,slaac
              # Optional: Specify DNS servers
              dhcp-option=option:dns-server,172.16.0.2,9.9.9.9
              dhcp-option=option6:dns-server,[2001:678:3fc:3::2],[2620:fe::fe]
              # Avoid DNS listen port conflict between dnsmasq and systemd-resolved
              port=0
          - path: /etc/nftables.conf
            content: |
              #!/usr/sbin/nft -f
              flush ruleset
              table inet nat {
                chain postrouting {
                  type nat hook postrouting priority 100;
                  oifname "enp0s1" masquerade
                }
              }
        netplan:
          network:
            version: 2
            renderer: networkd
            ethernets:
              enp0s1:
                dhcp4: true
                dhcp6: false
                accept-ra: true
            bridges:
              c-3po:
                openvswitch: {}
            vlans:
              vlan10:
                id: 10
                link: c-3po
                addresses:
                  - 192.0.2.1/24
                  - fdc0:7a62:a::1/64
                  - fe80:a::1/64
        runcmd:
          - netplan apply
          - rm /etc/netplan/enp0s1.yaml
          - systemctl enable --now nftables.service
To launch a virtual machine using the YAML declaration file, all we have to do is:
lab-startup.py ovs+incus-level1.yaml
Cloud-init user-data hooks in this second-level YAML declaration are used to create a VM with container management ready to use on first boot, but the configuration is much more complex. Here, the early bootcmd commands prepare APT keyrings and repositories. This makes sure that package installation and Incus-related configuration succeed when later modules run.
Here is the list of hooks used here:
- bootcmd: Prepares APT keyrings and repository files early so that package management is ready before other modules run.
- write_files: Installs configuration files such as dnsmasq, nftables, Netplan, and Incus preseed YAML into their final locations.
- apt (and apt: conf / apt: sources): Declares APT policy and custom repositories so that subsequent package operations use the correct sources and options.
- packages: Installs Incus and all required networking packages using the repositories prepared by bootcmd and apt.
- runcmd: Performs final orchestration steps, including network reloads and Incus initialisation, after all packages and configuration files are in place.
Here is a copy of the ovs+incus-level2.yaml file:
---
kvm:
  vms:
    - vm_name: ovs-incus
      os: linux
      master_image: debian-testing-amd64.qcow2
      force_copy: false
      memory: 2048
      tapnum: 1
      cloud_init:
        force_seed: false
        hostname: ovs-incus
        bootcmd:
          - mkdir -p /etc/apt/keyrings
          - [cloud-init-per, once, zabbly-key, wget,
             -O, /etc/apt/keyrings/zabbly.asc,
             https://pkgs.zabbly.com/key.asc]
          - >
            sed -i 's/Suites:[[:space:]]\+testing /Suites: trixie testing /'
            /etc/apt/sources.list.d/debian.sources
        apt:
          # Preserve dnsmasq.conf file during package install
          # Add --no-install-recommends to incus install
          conf: |
            Dpkg::Options {
              "--force-confdef";
              "--force-confold";
            }
            APT::Install-Recommends "false";
            APT::Install-Suggests "false";
        packages:
          - nftables
          - dnsmasq
          - incus
        write_files:
          - path: /etc/sysctl.d/10-routing.conf
            content: |
              net.ipv4.conf.default.rp_filter=1
              net.ipv4.conf.all.rp_filter=1
              net.ipv4.ip_forward=1
              net.ipv6.conf.all.forwarding=1
              net.ipv4.conf.all.log_martians = 1
          - path: /etc/apt/sources.list.d/zabbly-incus-stable.sources
            content: |
              Enabled: yes
              Types: deb
              URIs: https://pkgs.zabbly.com/incus/stable
              Suites: trixie
              Components: main
              Architectures: amd64
              Signed-By: /etc/apt/keyrings/zabbly.asc
          - path: /etc/dnsmasq.conf
            content: |
              # Specify Container VLAN interface
              interface=vlan10
              # Enable DHCPv4 on Container VLAN
              dhcp-range=192.0.2.100,192.0.2.200,3h
              # Enable IPv6 router advertisements
              enable-ra
              # Enable SLAAC
              dhcp-range=::,constructor:vlan10,ra-names,slaac
              # Optional: Specify DNS servers
              dhcp-option=option:dns-server,172.16.0.2,9.9.9.9
              dhcp-option=option6:dns-server,[2001:678:3fc:3::2],[2620:fe::fe]
              # Avoid DNS listen port conflict between
              # dnsmasq and systemd-resolved
              port=0
          - path: /etc/nftables.conf
            content: |
              #!/usr/sbin/nft -f
              flush ruleset
              table inet nat {
                chain postrouting {
                  type nat hook postrouting priority 100;
                  oifname "enp0s1" masquerade
                }
              }
          - path: /home/etu/incus-init.yaml
            content: |
              config:
                core.https_address: '[::]:8443'
              networks: []
              storage_pools:
              - config:
                  source: /var/lib/incus/storage-pools/default
                description: ""
                name: default
                driver: dir
              storage_volumes: []
              profiles:
              - config: {}
                description: Default Incus profile
                devices:
                  eth0:
                    name: eth0
                    nictype: bridged
                    parent: c-3po
                    type: nic
                    vlan: "10"
                  root:
                    path: /
                    pool: default
                    type: disk
                name: default
                project: ""
              projects:
              - config:
                  features.images: "true"
                  features.networks: "true"
                  features.networks.zones: "true"
                  features.profiles: "true"
                  features.storage.buckets: "true"
                  features.storage.volumes: "true"
                description: Default Incus project
                name: default
              certificates: []
              cluster_groups: []
        netplan:
          network:
            version: 2
            renderer: networkd
            ethernets:
              enp0s1:
                dhcp4: true
                dhcp6: false
                accept-ra: true
            bridges:
              c-3po:
                openvswitch: {}
            vlans:
              vlan10:
                id: 10
                link: c-3po
                addresses:
                  - 192.0.2.1/24
                  - fdc0:7a62:a::1/64
                  - fe80:a::1/64
        runcmd:
          - rm /etc/netplan/enp0s1.yaml
          - netplan apply
          - systemctl enable --now nftables.service
          - adduser etu incus
          - adduser etu incus-admin
          - chown etu:etu /home/etu/incus-init.yaml
          - >
            runuser -u etu --
            incus admin init --preseed <
            /home/etu/incus-init.yaml
To launch a virtual machine using the YAML declaration file, all we have to do is:
lab-startup.py ovs+incus-level2.yaml
Here is an image of the achieved logical topology.
This lab provides comprehensive instructions for setting up unprivileged Incus containers on Open vSwitch (OvS) within a Debian Trixie installation. It covers all the necessary steps, including installing the required tools, configuring a network switch and routing, setting up DHCP and DNS, installing Incus, creating containers, and verifying the setup.
Cloud-init, combined with carefully designed YAML declarations, meets this lab’s initial objectives by turning a generic Debian image into a reproducible Incus host with Open vSwitch ready on first boot.
The two levels of user-data configuration illustrate how progressively richer use of hooks can move from a minimally configured VM to a fully operational container platform without manual post-install steps.
However, procedural automation has limits: tools like cloud-init excel at first-boot customization but do not naturally provide the strong, repeatable idempotency expected from full configuration management or infrastructure-as-code workflows. To reach that goal across the VM lifecycle and fleet, additional building blocks such as Ansible for ongoing configuration and OpenTofu/Terraform for declarative infrastructure provisioning are typically combined with cloud-init, each tool covering a different layer of the automation stack.

