In this guide we will create a Kubernetes cluster using Proxmox.

Video Walkthrough

To see a live demo of this writeup, watch the video walkthrough on YouTube.

Installation

How to Get Proxmox

It is assumed that you have already installed Proxmox onto the server you wish to create Talos VMs on. Visit the Proxmox downloads page if necessary.

Install talosctl

You can download talosctl on MacOS and Linux via:
brew install siderolabs/tap/talosctl
For manual installation and other platforms please see the talosctl installation guide.
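To verify the installation, print the client version (this runs locally and does not contact a cluster):
talosctl version --client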

Download ISO Image

In order to install Talos in Proxmox, you will need the ISO image from Image Factory.
mkdir -p _out/
curl https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/<version>/metal-<arch>.iso -L -o _out/metal-<arch>.iso
Replace <version> with the desired Talos release tag and <arch> with your CPU architecture (typically amd64).
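For example, assuming Talos v1.8.0 on amd64 (substitute the current release as appropriate):
curl https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.8.0/metal-amd64.iso -L -o _out/metal-amd64.iso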

QEMU guest agent support (iso)

  • If you need the QEMU guest agent so Proxmox can perform clean guest shutdowns of your Talos VMs, you will need a custom ISO
  • To get this, navigate to https://factory.talos.dev/
  • Scroll down and select your Talos version
  • Then tick the box for siderolabs/qemu-guest-agent and submit
  • This will provide you with a link to the bare metal ISO
  • Among the links the factory generates, the ones we’re interested in are the bare metal ISO and the factory.talos.dev/installer image reference
  • Download the above ISO (this will most likely be amd64 for you)
  • Take note of the factory.talos.dev/installer URL as you’ll need it later
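The installer reference has the form:
factory.talos.dev/installer/<schematic-id>:<version>
where <schematic-id> is generated from your selected extensions, so it will differ from the default ID used for the plain ISO above.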

Upload ISO

From the Proxmox UI, select the “local” storage and enter the “Content” section. Click the “Upload” button, select the ISO you downloaded previously, then hit “Upload”.

Create VMs

Before starting, familiarise yourself with the system requirements for Talos and assign VM resources accordingly. Use the following baseline settings for Proxmox VMs running Talos:
| Setting | Recommended Value | Notes |
| --- | --- | --- |
| BIOS | ovmf (UEFI) | Modern firmware, Secure Boot support, better hardware compatibility |
| Machine | q35 | Modern PCIe-based machine type with better device support |
| CPU Type | host | Enables advanced instruction sets (AVX-512, etc.), best performance. Alternative: kvm64 with feature flags for Proxmox < 8.0 |
| CPU Cores | 2+ (control plane), 4+ (workers) | Minimum 2 cores required |
| Memory | 4GB+ (control plane), 8GB+ (workers) | Minimum 2GB required |
| Disk Controller | VirtIO SCSI (NOT “VirtIO SCSI Single”) | Single controller can cause bootstrap hangs (#11173) |
| Disk Format | Raw (performance) or QCOW2 (features/snapshots) | Raw preferred for performance |
| Disk Cache | Write Through (safe default) | Or None for clustered environments |
| Network Model | virtio | Paravirtualized driver, best performance (up to 10 Gbit) |
| EFI Disk | 4MB (for OVMF) | Required for UEFI firmware, stores Secure Boot keys |
| Ballooning | Disabled | Talos doesn’t support memory hotplug |
| RNG Device | VirtIO RNG (optional) | Better entropy for security |
Important: When configuring the disk controller, use VirtIO SCSI (not “VirtIO SCSI Single”). Using “VirtIO SCSI Single” can cause bootstrap to hang or prevent disk discovery. See issue #11173 for details.
Create a new VM by clicking the “Create VM” button in the Proxmox UI. Fill out a name for the new VM. In the “OS” tab, select the ISO we uploaded earlier. In the “System” tab:
  • Set BIOS to ovmf (UEFI) for modern firmware and Secure Boot support
  • Set Machine to q35 for modern PCIe-based machine type
  • Add EFI Disk (4MB) for persistent UEFI settings and Secure Boot key storage
In the “Hard Disk” tab:
  • Set Bus/Device to VirtIO SCSI (NOT “VirtIO SCSI Single”)
  • Set Storage to your main storage pool
  • Set Format to Raw (better performance) or QCOW2 (features/snapshots)
  • Set Size based on your workload requirements (adjust based on CSI and application needs)
  • Set Cache to Write Through (safe default) or None for clustered environments
  • Enable Discard (TRIM support) if using SSD storage
  • Enable SSD emulation if using SSD storage
In the “CPU” section:
  • Set Cores to 2+ for control planes, 4+ for workers
  • Set Sockets to 1 (keep simple)
  • Set Type to host (best performance, enables advanced instruction sets)
    • Alternative for Proxmox < 8.0: Use kvm64 with feature flags by adding to /etc/pve/qemu-server/<vmid>.conf:
      args: -cpu kvm64,+cx16,+lahf_lm,+popcnt,+sse3,+ssse3,+sse4.1,+sse4.2
      
    • Note: host CPU type prevents live VM migration but provides best performance
In the “Memory” section:
  • Set Memory to 4GB+ for control planes, 8GB+ for workers (minimum 2GB required)
  • Disable Ballooning (can cause issues with Talos memory detection)
In the “Network” section:
  • Set Model to virtio (paravirtualized driver, best performance)
  • Set Bridge to your network bridge (e.g., vmbr0)
  • Verify the VM is set to come up on the bridge interface
Tip: Enable a serial console (ttyS0) in Proxmox VM settings to see early boot logs and troubleshoot network connectivity issues. This is especially helpful when debugging DHCP timing or bridge configuration problems. Set Serial port to ttyS0 in Proxmox and add console=ttyS0 if you’re customizing kernel args.
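For example, to attach a serial socket to an existing VM from the Proxmox host shell (a sketch assuming VM ID 100):
qm set 100 --serial0 socket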
Finish creating the VM by clicking through the “Confirm” tab and then “Finish”. Repeat this process for a second VM to use as a worker node, and again for any additional nodes you desire.
Note: Talos doesn’t support memory hot plugging. If creating the VM programmatically, don’t enable memory hotplug on your Talos VMs. Doing so will cause Talos to be unable to see all available memory and to have insufficient memory to complete installation of the cluster.
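If you prefer to create VMs programmatically, the recommended settings above translate roughly to the following qm invocation on the Proxmox host (a sketch assuming VM ID 100, storage pool local-lvm, bridge vmbr0, and an amd64 ISO on local storage; adjust names, sizes, and counts for your environment):
# virtio-scsi-pci is “VirtIO SCSI” in the UI (not “VirtIO SCSI Single”)
qm create 100 --name talos-control-plane-1 \
  --bios ovmf --machine q35 \
  --efidisk0 local-lvm:1,efitype=4m \
  --cpu host --cores 2 --sockets 1 \
  --memory 4096 --balloon 0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32,cache=writethrough,discard=on,ssd=1 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/metal-amd64.iso,media=cdrom \
  --serial0 socket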

Start Control Plane Node

Once the VMs have been created and updated, start the VM that will be the first control plane node. This VM will boot the ISO image specified earlier and enter “maintenance mode”.

With DHCP server

Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received. Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP for the rest of this guide. If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4.

Without DHCP server

To apply machine configurations in maintenance mode, the VM must have an IP address on the network, so you can set one manually at boot time. Press e at the boot menu and add the IP parameters to the kernel command line. The format is:
ip=<client-ip>:<srv-ip>:<gw-ip>:<netmask>:<host>:<device>:<autoconf>
For example, if $CONTROL_PLANE_IP is 192.168.0.100 and the gateway is 192.168.0.1:
linux /boot/vmlinuz init_on_alloc=1 slab_nomerge pti=on panic=0 consoleblank=0 printk.devkmsg=on earlyprintk=ttyS0 console=tty0 console=ttyS0 talos.platform=metal ip=192.168.0.100::192.168.0.1:255.255.255.0::eth0:off
Then press Ctrl-x or F10 to boot with the modified parameters.

Generate Machine Configurations

With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes. Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out
This will create several files in the _out directory: controlplane.yaml, worker.yaml, and talosconfig.
Note: The Talos config by default will install to /dev/sda. Depending on your setup, the virtual disk may be attached at a different path, e.g. /dev/vda. You can check for disks by running the following command:
talosctl get disks --insecure --nodes $CONTROL_PLANE_IP
Update controlplane.yaml and worker.yaml config files to point to the correct disk location.
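For example, if the disk is attached as /dev/vda, the relevant excerpt of each machine config looks like:
machine:
  install:
    disk: /dev/vda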

QEMU guest agent support

For QEMU guest agent support, you can generate the config with the custom install image:
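talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out --install-image factory.talos.dev/installer/<schematic-id>:<version>
Here <schematic-id> is the ID from the factory.talos.dev/installer URL you noted earlier.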
Important: Enable QEMU Guest Agent in Proxmox only if you built the ISO with the siderolabs/qemu-guest-agent extension in Image Factory. If you’re using a standard Talos ISO without this extension, leave QEMU Guest Agent disabled in Proxmox VM settings. Enabling it without the extension will only generate log spam and won’t provide any functionality. See: Image Factory for building a custom ISO with extensions.
  • If you did include the extension, go to your VM → Options and set QEMU Guest Agent to Enabled.
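Equivalently, you can enable the agent from the Proxmox host shell (assuming VM ID 100):
qm set 100 --agent enabled=1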

Create Control Plane Node

Using the controlplane.yaml generated above, you can now apply this config using talosctl. Issue:
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml
You should now see some action in the Proxmox console for this VM. Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM. The VM will remain in stage Booting until the bootstrap is completed in a later step.
Note: This process can be repeated multiple times to create an HA control plane.

Create Worker Node

Create at least a single worker node using a process similar to the control plane creation above. Start the worker node VM and wait for it to enter “maintenance mode”. Take note of the worker node’s IP address, which will be referred to as $WORKER_IP. Then issue:
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml
Note: This process can be repeated multiple times to add additional workers.

Using the Cluster

Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster. First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
For example, to view currently running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace. To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.

Bootstrap Etcd

Once talosctl is configured, bootstrap etcd on the control plane node:
talosctl bootstrap
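Optionally, wait for the cluster to report healthy before proceeding (talosctl health runs a series of checks against the configured endpoint and node):
talosctl health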

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:
talosctl kubeconfig .
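You can then verify access with kubectl, since the kubeconfig is written to ./kubeconfig in the current directory:
kubectl --kubeconfig=./kubeconfig get nodes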

Troubleshooting

Cluster Creation Issues

If talosctl cluster create fails with disk controller errors:
  • “virtio-scsi-single disk controller is not supported”: This disk controller type causes Talos bootstrap to hang. Use virtio or scsi instead:
    # Wrong - will be rejected
    talosctl cluster create --disks virtio-scsi-single:10GiB
    
    # Correct - use virtio or scsi
    talosctl cluster create --disks virtio:10GiB
    talosctl cluster create --disks scsi:10GiB
    

Network Connectivity Issues

If nodes fail to obtain IP addresses or show “network is unreachable” errors:
  1. Verify bridge interface: Ensure the bridge interface (e.g., vmbr0) exists and is UP before starting VMs
    ip link show vmbr0
    
  2. Check DHCP server: Ensure DHCP server is running and reachable from the bridge network (see the tcpdump sketch after this list)
  3. Firewall rules: If Proxmox VM firewall is enabled, allow DHCP traffic (UDP ports 67/68). If you enforce further filtering, ensure control-plane/API connectivity per your environment’s policy (see Talos networking docs).
  4. VLAN configuration: Ensure VLAN tags match between bridge configuration, VM network settings, and switch configuration
  5. Serial console: Enable serial console to view early boot logs and network initialization messages
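To confirm that DHCP requests from the VMs are actually reaching the bridge (step 2 above), you can capture DHCP traffic on the Proxmox host, assuming tcpdump is installed:
tcpdump -ni vmbr0 port 67 or port 68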

Disk Controller Issues

  • Configuration rejected: If you see “virtio-scsi-single disk controller is not supported”, use --disks virtio:10GiB instead of --disks virtio-scsi-single:10GiB
  • Bootstrap hangs: If bootstrap hangs or disks aren’t discovered, verify you’re using VirtIO SCSI (not “VirtIO SCSI Single”)
  • Disk not found: Check disk path using talosctl get disks --insecure --nodes $CONTROL_PLANE_IP and update install.disk in machine config if needed (e.g., install.disk: /dev/vda)

Secure Boot

For Secure Boot setup, see the Secure Boot documentation.

Cleaning Up

To clean up, simply stop and delete the virtual machines from the Proxmox UI.