incus launch <image> <instance name>
- Note: you may need to add --config security.nesting=true if using containers (possibly VMs too).
If you happen to have resized storage using profiles, you might still have to remove the root device supplied by the default profile by first running
incus config device remove <instance name> root
then re-assign your profile.
Storage Notes For NixOS
For reference, if using NixOS, you need a minimum of 20-30GB of free space in order for builds to work (possibly more depending on what you're doing). By default, Incus only provisions VMs with about 10GB. To add more, it's probably best to set up a "profile", which for Incus is essentially a config file for containers/VMs. This is what mine looks like:
config:
  limits.cpu: "4"
  limits.memory: 4GB
description: ""
devices:
  root:
    path: /
    pool: incus-pool
    size: 50GiB
    type: disk
name: base-profile
used_by:
- /1.0/instances/devenv-web
project: default
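If you're starting from scratch, a profile like the one above can be created and attached roughly as follows (a sketch; the profile name matches my example, and the image/instance placeholders are yours to fill in):

```shell
# create the profile, then paste/edit the YAML above in $EDITOR
incus profile create base-profile
incus profile edit base-profile

# attach it at launch time
incus launch <image> <instance name> --profile base-profile

# or add it to an existing instance
incus profile add <instance name> base-profile
```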
When running incus admin init you may find the NAT settings a bit off. Incus appears to look for the nftables modules and somehow not find them.
If that happens, you can first check to see if they exist.
# you can find the os kernel name by running uname -r
find /lib/modules/<os kernel name> -name "*masq*"
find /lib/modules/<os kernel name> -name "*nat*"
If the necessary files exist, you should see two results, something along the lines of nft_masq.ko.zst and nft_chain_nat.ko.zst. Run
sudo modprobe nft_masq
sudo modprobe nft_chain_nat
sudo systemctl restart incus
This should allow Incus to proceed with initialization. It was a bit weird on my end: somehow I happened to do an update at the same time that included the required modules; presumably they were also on the previous kernel version, so ¯\_(ツ)_/¯
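To make the modules survive a reboot on most systemd-based distros, you can list them in a modules-load.d file (a sketch; the filename is my own choice, and on NixOS you would set boot.kernelModules in your configuration instead):

```shell
# load the nftables NAT modules automatically at boot
printf 'nft_masq\nnft_chain_nat\n' | sudo tee /etc/modules-load.d/incus-nft.conf
```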
When setting up an unprivileged user, you'll have to make a few other changes.
- First, set up an appropriate range of sub{u,g}ids for the root user. See section 2.2 here.
- Do the same for the unprivileged user.
- Lastly, add the unprivileged user to the incus-admin group.
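Concretely, the changes look something like this; the 1000000:1000000000 range is the one commonly suggested in the Incus docs, so treat the exact numbers as an assumption:

```shell
# give root and the unprivileged user a large subordinate id range
echo 'root:1000000:1000000000'  | sudo tee -a /etc/subuid /etc/subgid
echo "$USER:1000000:1000000000" | sudo tee -a /etc/subuid /etc/subgid

# add the unprivileged user to the incus-admin group
sudo usermod -aG incus-admin "$USER"
```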
You should also check whether your computer is running a firewall. If it is, you will likely have to add rules allowing the bridge interface to reach the internet. See here for common solutions.
This is what I had to do for ufw (taken directly from their docs):
#!/usr/bin/bash
# incusbr0 is the default name for the network bridge
ufw allow in on incusbr0 to any port 67 proto udp
ufw allow in on incusbr0 to any port 547 proto udp
ufw allow in on incusbr0 to any port 53
# allow guest to have access to outbound connections
CIDR4="$(incus network get incusbr0 ipv4.address | sed 's|\.[0-9]\+/|.0/|')"
CIDR6="$(incus network get incusbr0 ipv6.address | sed 's|:[0-9]\+/|:/|')"
ufw route allow in on incusbr0 from "${CIDR4}"
ufw route allow in on incusbr0 from "${CIDR6}"
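For what it's worth, those two sed calls just turn the bridge's own address into its network in CIDR form. A quick sketch of the IPv4 case with a made-up address (the real value comes from incus network get incusbr0 ipv4.address):

```shell
# hypothetical bridge address as Incus would report it
addr="10.158.97.1/24"
# swap the final host octet for 0 to get the network CIDR
echo "$addr" | sed 's|\.[0-9]\+/|.0/|'   # → 10.158.97.0/24
```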
When it comes to doing development in the VM, the best solution seems to be using Incus's mounting system, which allows you to map a folder on your computer to a folder in the VM. What's great is that any changes are synced near-instantaneously. To do so, run
incus config device add <instance> <name for mount> disk source=<path on host> path=<path on vm> shift=true
To unmount run
incus config device remove <instance name> <mount name>
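As a concrete (entirely hypothetical) example, mapping a project folder on the host into the devenv-web instance from the profile above:

```shell
# map ~/projects/my-app on the host to /workspace/my-app in the VM;
# shift=true remaps file ownership so the VM user can write to the mount
incus config device add devenv-web my-app disk \
  source=/home/user/projects/my-app path=/workspace/my-app shift=true

# and to detach it again
incus config device remove devenv-web my-app
```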
Other notes
- It seems like the best way to start something is to generate all the necessary files first, and to ensure any files are added/removed on the VM rather than on the host system.
- For any SQLite-related operations that require file (un)locking, it seems best to:
  - Create a mount directly to your project folder. I tried mounting to a more general location to avoid having to make a mount for each project, but that appeared to cause issues in spite of doing the following:
    - Putting any cache directories / directories that involve file locking/unlocking in another location outside of the mount folder.
    - Making a symbolic link to that folder from where it would sit in your project folder, ie
      ln -sf /home/user/.local/cache/devenv /workspace/.devenv
- Note that things like hot-reloading will likely break when editing files on the host.
- The admin init should take you through the process of setting up networking. However, if you import an image backup, you may have to attach the network again by running
incus network attach <network device name> <instance name>
Incus can back things up in one of two ways:
- snapshots
- full exports
Snapshots are pretty straightforward, but for full exports the command looks something like
incus export <instance> <path to write export to>
export provides some extra options, including:
- --compression - the compression method to use when making the backup
- --instance-only - whether or not to include snapshots in the backup
By default, gzip is the compression method used, but if you happen to have something like zstd available, you can use that instead by passing the command as a string for the compression parameter. For example
incus export <instance> <path to write export to> --compression "zstd -T0 --ultra -22" --instance-only
Though the export ends up larger, the speed difference more than makes up for it.
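For completeness, a tarball produced by incus export can be brought back with incus import; the filename here is hypothetical:

```shell
# restore the exported tarball as an instance
incus import devenv-web-backup.tar.zst

# or restore it under a different name
incus import devenv-web-backup.tar.zst devenv-web-restored
```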
If using NixOS and devenv, and you want a workflow that mounts a host folder into the VM, you may have to remap your project's .devenv folder elsewhere, as it might interfere with how devenv works. To do so:
- make a folder elsewhere, ie
  mkdir -p ~/.cache/devenv/my-project
- make a symlink to .devenv, ie
  ln -s <path to alt location> <path to project>/.devenv
Not sure if it's due to mixing devenv with NixOS or not, but presumably this may need to be done for other distros as well.
NOTICE
Note that, at least with regards to devenv, you may have to mount directly to the project folder on the VM. I tried mapping to a more general location to essentially link to a monorepo folder and had nothing but trouble.
This might be AI-hallucinated, but to "truly" get free space back for the VM, try the following while inside the VM:
# fill the disk with zeros (dd exits once the disk is full), flush, then clean up
dd if=/dev/zero of=/zero.fill bs=1M
sync
rm -f /zero.fill