I’ve been clicking through the Ubuntu installer on Proxmox since 2019 – I even wrote a note to myself back then about the dance. Partition the disk, pick a locale, wait for packages, reboot, SSH in, install the things I always install. Ten minutes per VM, every time.
What finally pushed me to fix it was wanting ephemeral Ubuntu VMs for development – in particular, disposable sandboxes for running AI coding agents. My first instinct was to run Multipass inside a Proxmox VM. That afternoon I fought snap confinement (the daemon couldn’t see /tmp), tildes not expanding inside quoted arguments, and native mounts that happily clobbered the authorized_keys Multipass had just injected. I climbed back out of that rabbit hole and realized the answer was one layer up the stack, not two layers down: Proxmox supports cloud-init natively. Template once, clone in seconds, bootstrap at first boot.
Cloud Images vs ISOs
Ubuntu ships a cloud image as a pre-installed qcow2 disk: noble-server-cloudimg-amd64.img. No installer. No locale picker. No partitioning. Boot it and cloud-init runs on first start, reading a user-data file you provide and configuring the machine – users, SSH keys, packages, arbitrary commands. This is the exact same image AWS, GCP, Azure, and DigitalOcean hand you when you spin up an Ubuntu instance; Canonical just happens to publish it for the rest of us too.
Turn that image into a Proxmox template, and every new VM is a qm clone away. Attach a per-VM cicustom snippet with your cloud-init YAML, resize the disk, start the VM. First shell prompt in under a minute.
Building the Template
Here’s the script I run on the Proxmox host to build the template from scratch. It’s idempotent – destroy the old template, download a fresh image, verify the checksum, rebuild:
#!/bin/bash
set -euo pipefail
# Configuration
VMID=${VMID:-9000}
STORAGE=${STORAGE:-local-lvm}
BRIDGE=${BRIDGE:-vmbr0}
IMAGE_URL="https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
IMAGE_FILE="noble-server-cloudimg-amd64.img"
# Destroy existing template if present
if qm status $VMID &>/dev/null; then
    echo "Destroying existing VM/template $VMID..."
    qm destroy $VMID --purge
fi
# Download and verify image
echo "Downloading Ubuntu Noble cloud image..."
wget -q --show-progress -O "$IMAGE_FILE" "$IMAGE_URL"
wget -q -O SHA256SUMS "https://cloud-images.ubuntu.com/noble/current/SHA256SUMS"
sha256sum --check --ignore-missing SHA256SUMS
echo "Checksum OK"
# Create VM
qm create $VMID \
    --name "ubuntu-2404-template" \
    --ostype l26 \
    --memory 1024 \
    --agent 1 \
    --bios ovmf --machine q35 --efidisk0 ${STORAGE}:0,pre-enrolled-keys=1,ms-cert=2023k \
    --cpu host --sockets 1 --cores 1 \
    --vga serial0 --serial0 socket \
    --net0 virtio,bridge=${BRIDGE}
# Import disk
qm importdisk $VMID "$IMAGE_FILE" $STORAGE
# Configure disks - imported disk lands on disk-1 because efidisk takes disk-0
qm set $VMID \
    --scsihw virtio-scsi-pci \
    --virtio0 ${STORAGE}:vm-${VMID}-disk-1,discard=on,iothread=1 \
    --boot order=virtio0 \
    --scsi1 ${STORAGE}:cloudinit
# Convert to template
qm template $VMID
# Cleanup
rm -f "$IMAGE_FILE" SHA256SUMS
echo "Template $VMID ready."
A few of the choices worth calling out: --agent 1 enables the QEMU guest agent (graceful shutdown, IP reporting). --vga serial0 --serial0 socket routes the console to serial, which means qm terminal $VMID gives you a working shell – essential when cloud-init falls over and you need to see what happened. And the SHA256SUMS check is not optional; you’re baking this image into every future VM.
The image is also shim-signed, so pre-enrolled-keys=1 on the efidisk gets you secure boot out of the box – the same posture you’d get on AWS or GCP. Flip it to 0 if you want to enroll your own keys instead. The ms-cert=2023k part matters too: Microsoft’s 2011 UEFI CA certs expire in June 2026, and without that option Proxmox will mint the efidisk with only the old certs, nagging you at every VM start. Setting it now means fresh clones are ready for the cert rotation without manual qm enroll-efi-keys calls later.
The efidisk Gotcha
The gotcha: when you add --efidisk0 before importing the cloud image, the EFI vars disk takes disk-0 and the imported OS disk lands on disk-1. If you mindlessly script --virtio0 ${STORAGE}:vm-${VMID}-disk-0 you’ll “successfully” boot the EFI variable partition, watch it fail silently, and spend twenty minutes running qm config trying to figure out why. The script above references disk-1 for a reason.
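If you'd rather not hardcode the suffix at all, you can ask `qm config` which volume the import actually registered — `qm importdisk` attaches the image as an `unusedN` entry. A small sketch; the helper name is mine:

```shell
# Hypothetical helper: pull the just-imported volume ID out of `qm config`
# output instead of guessing the disk-N suffix. `qm importdisk` registers the
# image as an unusedN entry, so we grab the first one.
find_imported_disk() {
    awk -F': ' '/^unused[0-9]+/ { print $2; exit }'
}

# On the Proxmox host, after `qm importdisk`:
#   qm set "$VMID" --virtio0 "$(qm config "$VMID" | find_imported_disk),discard=on,iothread=1"
```

This keeps the script correct even if a storage backend numbers the volumes differently.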
Cloning and Customizing
Once the template exists, spinning up a new VM is a short script:
#!/bin/bash
set -euo pipefail
# Usage: ./launch-vm.sh <vmid> <name> [disk_size]
VMID=${1:?Usage: $0 <vmid> <name> [disk_size]}
NAME=${2:?Usage: $0 <vmid> <name> [disk_size]}
DISK_SIZE=${3:-20G}
# Accept a bare number as gigabytes
[[ "$DISK_SIZE" =~ ^[0-9]+$ ]] && DISK_SIZE="${DISK_SIZE}G"
TEMPLATE_ID=${TEMPLATE_ID:-9000}
STORAGE=${STORAGE:-local-lvm}
CORES=${CORES:-4}
MEMORY=${MEMORY:-4096}
CLOUDINIT_SNIPPET=${CLOUDINIT_SNIPPET:-"local:snippets/ai-worker.yaml"}
# Clone template
echo "Cloning template $TEMPLATE_ID -> VM $VMID ($NAME)..."
qm clone $TEMPLATE_ID $VMID --name "$NAME" --full --storage $STORAGE
# Configure VM
qm set $VMID \
    --cores $CORES \
    --memory $MEMORY \
    --balloon 0 \
    --ipconfig0 ip=dhcp \
    --cicustom "user=${CLOUDINIT_SNIPPET}"
# Resize disk to absolute size (qm resize grows, never shrinks)
qm resize $VMID virtio0 $DISK_SIZE
# Start
qm start $VMID
echo "VM $VMID ($NAME) started with ${DISK_SIZE} disk."
echo "Watch boot: qm terminal $VMID"
The interesting flag is --cicustom. Proxmox’s built-in cloud-init panel handles the basics (SSH keys, DNS, IP, hostname) but cicustom lets you point at a full cloud-init YAML snippet for users, packages, runcmd, write_files – anything cloud-init supports. Snippets live in /var/lib/vz/snippets/ by default.
One caveat if you’re running a Proxmox cluster: /var/lib/vz/snippets/ is per-node. If you want a snippet available across nodes, put it on shared storage (CephFS, NFS) and reference it through that storage’s snippet path.
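Getting the snippet into place is just a copy, with one catch: the storage you reference has to list "Snippets" among its allowed content types (configurable in the GUI or via pvesm). A sketch — the helper name is mine, and the path is the default for the local storage:

```shell
# Sketch: stage a cloud-init snippet on the Proxmox host. stage_snippet is a
# hypothetical helper; /var/lib/vz/snippets is the default path for 'local'
# storage, which must have the snippets content type enabled.
stage_snippet() {
    local src=$1 dir=${2:-/var/lib/vz/snippets}
    mkdir -p "$dir"
    cp "$src" "$dir/"
}

# On the host:
#   stage_snippet ai-worker.yaml
#   qm set 123 --cicustom "user=local:snippets/ai-worker.yaml"
```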
The qm resize step matters too. Cloud images ship with a small root disk (a few GB). cloud-init will expand the filesystem to fill the disk on first boot, but only if the disk itself is bigger, so resize before start.
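Because qm resize only ever grows a disk, a wrapper script can sanity-check the requested size before calling it. A hedged sketch of the unit conversion that makes the comparison possible — the helper name is mine, and it only handles G/M suffixes:

```shell
# Hypothetical helper: convert a qm-style size ("20G", "512M", or raw bytes)
# to bytes, so a wrapper can refuse a request smaller than the current disk
# instead of letting `qm resize` error out mid-launch.
to_bytes() {
    local size=$1
    case $size in
        *G) echo $(( ${size%G} * 1024 * 1024 * 1024 )) ;;
        *M) echo $(( ${size%M} * 1024 * 1024 )) ;;
        *)  echo "$size" ;;
    esac
}
```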
A Minimal Cloud-Init Snippet
Here’s a stripped-down example of what goes in the snippet file – Docker, an ubuntu user with passwordless sudo, and SSH keys pulled from GitHub:
#cloud-config
packages:
  - qemu-guest-agent
  - ca-certificates
  - curl
package_update: true
package_upgrade: true
users:
  - name: ubuntu
    groups: [sudo]
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id:
      - gh:vpetersson
runcmd:
  - curl -fsSL https://get.docker.com | sh
  - usermod -aG docker ubuntu
  - systemctl enable --now qemu-guest-agent
final_message: "Bootstrap complete."
The coolest bit here is ssh_import_id: gh:vpetersson. Cloud-init pulls your public SSH keys straight from GitHub at first boot. No more copying authorized_keys around, no key-wrangling scripts.
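Under the hood there's no magic: GitHub serves every account's public keys at a well-known URL, and ssh-import-id just fetches it. The manual equivalent, as a sketch:

```shell
# What ssh-import-id gh:<user> does, roughly: GitHub publishes each account's
# public SSH keys at https://github.com/<user>.keys.
gh_keys_url() {
    echo "https://github.com/$1.keys"
}

# Manual equivalent inside the guest (network required):
#   curl -fsSL "$(gh_keys_url vpetersson)" >> ~/.ssh/authorized_keys
```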
My production snippet adds uv, bun, gh, and Claude Code for the AI-agent sandbox – the pattern is the interesting bit; swap in whatever toolchain your VMs need.
Why a VM Per Project
The reason this workflow matters more than it used to is simple: I don’t trust the sandboxing in AI coding agents. Every agent I’ve used has, at some point, done something I didn’t ask for – misread an instruction, skipped a confirmation, or blasted past an allow-list it was supposed to respect. Treating the agent’s built-in sandbox as a real security boundary is a bet I’ve stopped making.
So the pattern I’ve settled on is one VM (or LXC container on Proxmox) per project. Inside that VM I’m happy to let the agent run in whatever “dangerous” / auto-approve mode the tool offers, because the blast radius is bounded by the VM itself. If something goes sideways – rewrites my shell config, rm -rfs the wrong directory, pushes junk to a remote – it’s confined to a box I can qm destroy and rebuild from the template in seconds.
Each environment also gets its own SSH key pair – fresh keys for git operations and commit signing, never a copy of my personal key. If an agent ever does go rogue, the git history tells me exactly which environment pushed what. The audit trail survives even if the VM doesn’t.
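Generating that scoped identity is only a few commands. A sketch using git's SSH-based commit signing — the key path and comment string are my own placeholders:

```shell
# Sketch: one fresh ed25519 key per environment, used for both git pushes and
# commit signing (git >= 2.34 accepts SSH keys as signing keys). The path and
# comment are arbitrary placeholders.
KEYFILE=${KEYFILE:-$HOME/.ssh/agent-sandbox}
mkdir -p "$(dirname "$KEYFILE")"
ssh-keygen -q -t ed25519 -f "$KEYFILE" -N "" -C "agent-sandbox"

git config --global gpg.format ssh
git config --global user.signingkey "${KEYFILE}.pub"
git config --global commit.gpgsign true
```

Add the public key to the forge as both an authentication key and a signing key, and every commit from that environment is attributable to it.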
This post was drafted by Claude Code running in an LXC container set up exactly this way – dedicated, scoped SSH keys, happy to run in dangerous mode because the blast radius stops at the container boundary.
Multipass Still Has a Place
Since this post opened with me abandoning Multipass, let me close the loop honestly: Multipass is a great tool, and I still use it on my laptop. multipass launch --cloud-init init.yaml 24.04 is the fastest way to get a throwaway Ubuntu shell on bare-metal Linux or macOS, and the same cloud-init file works in both worlds.
The other thing Multipass gets right is the ergonomics around ephemeral workflows. multipass launch, multipass shell, multipass transfer <file> instance:/path, multipass delete --purge – shunting files in and out of a short-lived instance is basically a one-liner in each direction. On a full Proxmox VM you’re back to scp, SSH keys, firewall rules, and (if the VM is short-lived) orchestrating all of that around a clone/destroy cycle. It’s doable, but there’s real overhead. For a quick “spin up an Ubuntu sandbox, poke at it, grab the output, throw it away” loop, Multipass wins on sheer friction.
The mismatch was the context. Running Multipass inside a Proxmox VM means nesting KVM twice, fighting snap confinement on every file transfer, and reinventing primitives Proxmox already gives you – templates, cloning, per-VM cloud-init injection. The heuristic I landed on: if your workstation is the host, Multipass. If Proxmox is the host, use the templates.
What’s Next
The 2019 version of me would be delighted. The primitives haven’t really changed – qm importdisk, OVMF, cloud images all existed back then – but the workflow is night and day. The obvious next step is to push all of this into OpenTofu via the bpg/proxmox provider, so the template, the snippet, and the VM lifecycle are all declarative and version-controlled. That’s probably the next post.
Enjoyed this post? Check out my podcast!
If you found this interesting, you might enjoy "Nerding Out with Viktor" - my podcast where I dive deep into tech, entrepreneurship, and security with industry experts.