Setting up Headscale and an Exit Node

With my new Proxmox box set up, I wanted to configure my own VPN so I could securely access my home network while away and also route my traffic through my home internet connection. This would let me benefit from my AdGuard setup, maintain a consistent “home” location, and have a fully private connection anywhere.

After some research, I landed on WireGuard, and then on Tailscale, which builds on WireGuard with automatic key management and device coordination. Even better, I learned that I could self‑host the control server using Headscale, giving me a completely independent, private VPN mesh.

Installing Headscale

Running the Install Script

The initial setup was straightforward thanks to the script available on community-scripts.org. I selected the root node, opened a shell and ran the “Headscale” script.

  • Enable Diagnostics → No, Opt out
  • Install type → Default
  • Storage pool for container template → Local
  • Storage pool for container → local-lvm
  • Add headscale-admin ui → Yes

The script created the LXC container, downloaded the Debian template, installed Headscale and the admin UI, and confirmed network connectivity.

Once complete, it provided URLs for the API and admin interface. I opened the admin UI to confirm it was running, then created a DHCP reservation so the container would always get the same IP.

Editing the Config File

With the container running, the next step was configuring Headscale itself. The script places the config at:

/etc/headscale/config.yaml

I opened a shell on the Headscale container, installed vim, and edited the file:

apt install vim
vim /etc/headscale/config.yaml

The only change I made was updating:

server_url: https://private.rirak.com

This will be the public entry point for my VPN.

DNS & Reverse Proxy Setup

With the server URL set, I needed DNS and SSL in place so clients could reach Headscale securely.

Creating DNS Records

In Cloudflare, I added an A record:

  • Name: private
  • Target: internal IP of the Headscale container
  • Proxy: DNS only
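As a quick sanity check (assuming a machine with dig available), the record can be queried directly; since it is DNS-only and points at a private address, it should return the container's internal IP:

# should print the Headscale container's internal IP
dig +short private.rirak.com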

Creating the Initial SSL Certificate (DNS Challenge)

I use Nginx Proxy Manager for reverse proxying and SSL certificates.

Because this was a new domain with no existing certificate, I used the DNS challenge for the initial issuance. The DNS challenge works even before the proxy is serving traffic, which makes it ideal for initial certificate creation. This required creating a Cloudflare API token with:

DNS → Edit
Zone → Read
Zone DNS Settings → Read

In Cloudflare, I went to Manage Account → Account API Tokens → Create Token and created a token with the permissions listed above. After clicking next to review and creating the token, the site displays the key, which I saved for authentication in the next step.
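Before handing the key to Nginx Proxy Manager, it can optionally be checked against Cloudflare's token-verify endpoint (replace <TOKEN> with the key from above):

# Cloudflare suggests a similar curl check when the token is created
curl -s -H "Authorization: Bearer <TOKEN>" \
  https://api.cloudflare.com/client/v4/user/tokens/verify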

Next it was time to jump to Nginx Proxy Manager. I clicked “Add Proxy Host” and filled out the form.

  • Domain Name: private.rirak.com
  • Scheme: http
  • Forward Hostname / IP: Headscale container IP
  • Forward Port: 80
  • Websocket Support: Enabled

Then under the SSL Tab

  • SSL Certificate: Request New Certificate
  • Force SSL: Enabled
  • Use DNS Challenge: Enabled
  • DNS Provider: Cloudflare
  • Credentials File Content: dns_cloudflare_api_token=XXX

After saving, NPM generated the certificate and the UI showed that the certificate was active.

Switching to HTTP Challenge

Once the certificate existed, I deleted the proxy entry and recreated it — this time without the DNS challenge. Now it can renew via the HTTP challenge.

I also deleted the old certificate under the certificates tab and went back to Cloudflare to remove the API token.

With that configured, I rebooted Headscale to make sure it picked up the new configuration. Headscale serves a Windows help page for configuring connections, which makes for an easy health check. I visited https://private.rirak.com/windows and the help page came up, confirming everything was working correctly.
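For reference, the same check can be done from a shell. This sketch assumes the install set Headscale up as a systemd service (the Debian package does) and that curl is available:

# restart to pick up the new server_url (assumes a systemd-managed install)
systemctl restart headscale
# a 200 response for the Windows help page confirms the proxy and certificate are working
curl -I https://private.rirak.com/windows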

Setting Up the Headscale Admin UI

With Headscale configured, I wanted to try out the Admin UI. The admin UI is available at:

http://<LXC-IP>/admin/

Accessing it through the external domain returned a 403 (likely intentional), but the internal IP worked fine. To authenticate the UI, I needed to create an API key, so back in the Headscale node shell I ran:

headscale apikey create

This returned an API key, which I copied and saved for use in the UI. The UI opened to a settings page that asked for the API key. Optionally, I could override the API URL, but it was already set to the correct IP.

I saved the settings and refreshed the page. The full sidebar now populated, confirming that authentication worked.

Creating a User

Next I needed to set up a user for myself, so that I could move on to configuring devices. There are two ways to do this.

  • Via UI
    • Navigate to the “Users” tab
    • Click create
    • Enter Name
    • Click the checkmark to create
  • Via CLI
    • headscale users create alex
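Either way, the new user can be confirmed afterwards from the Headscale shell:

headscale users list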

Device Setup (Android)

With the server ready, it was time to connect my first device. First, I downloaded the Tailscale app from the Play Store. When the app first opens, it requests permission to set up a VPN connection, which I accepted. Next, I was brought to a login screen, but since I wanted to use my own control server rather than the normal Tailscale control server, I backed out of the login page. I then tapped the cog in the top right corner to enter the settings page.

Then I selected “Accounts”, clicked the 3 dots in the top right corner, and selected “Use an alternate server”

On the screen which came up, I entered the URL of my Headscale server. Upon clicking “Add Account” I was redirected to the “Machine Registration Page” which provided a registration command. This is where an Admin needs to register this device.

I took the registration command it gave me and jumped over to a shell on the Headscale node. I swapped in my user, then ran the registration command:

headscale nodes register --key <KEY> --user alex

It auto-assigned my device the random hostname “invalid-dfborob5”. Apparently this is a common issue with Android devices on Tailscale, so I decided to rename it. First I listed the devices, then I ran a rename command (where identifier is the ID of the node):

headscale nodes list
headscale nodes rename alex-s25 --identifier 1

With that complete, I checked the Admin UI, and confirmed that I was able to see my new device node there.

Back on the phone, I was asked for notification permission, and then it showed the main screen confirming I was connected.

Device Setup (Debian & Exit Node)

With my phone connected to the VPN, the next step was setting up an exit node so the phone could route all its traffic through my home network. For this, I used a Debian LXC container running the Tailscale client. I chose an LXC container instead of a full VM because it’s lightweight and more than sufficient for running Tailscale. This part was more involved, since I had to configure the container itself before installing Tailscale.

Downloading the Debian Template

I decided to use Debian 12 because it’s well‑documented, stable, and works cleanly with Tailscale. Proxmox has an official template repository, so downloading it was easy:

  • Under the root node, in the storage section, select “Local” and then in that window “CT Templates”
  • Click “Templates”
  • Search for “Debian 12”
  • Select it and click “Download”

Creating the LXC Container

Next I created a new container using the Debian 12 template. I right-clicked on my root node and selected “Create CT”. I then filled out the settings and chose the Debian 12 image I had downloaded.

Once the container was created, I selected the container → Options, double-clicked Features, enabled Nesting, and clicked “OK”. Nesting is required for systemd-networkd and TUN to work inside the container.
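The same feature can also be toggled from the Proxmox host shell with pct; my container's ID is 105, the same ID that shows up in the TUN config step below:

pct set 105 --features nesting=1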

Enabling Networking

When I first booted the container, it didn't get an IP address; LXC containers sometimes need explicit network configuration. I opened a shell on the Debian machine and created a systemd-networkd file:

touch /etc/systemd/network/10-eth0.network

In that file I added the following content:

[Match]
Name=eth0

[Network]
DHCP=yes

I saved the file, then restarted networking:

systemctl restart systemd-networkd

I also wanted to make sure this happens automatically on each boot, so I ran the following commands:

systemctl enable systemd-networkd
systemctl enable systemd-networkd-wait-online

After a reboot, the container correctly received an IP.
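To double-check, a couple of commands confirm the interface is being managed by systemd-networkd and has a lease:

ip addr show eth0        # should now show a DHCP-assigned address
networkctl status eth0   # should report the link as configured/routable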

Enabling TUN Support

Next I needed to set up TUN, since Tailscale uses a TUN device to create its encrypted WireGuard tunnel. LXC containers don't expose /dev/net/tun by default, so I had to enable it manually. I opened a shell on the root node and opened up the container's config:

vim /etc/pve/lxc/105.conf

I added:

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc.apparmor.profile: unconfined

The AppArmor change is necessary because TUN access is blocked by the default profile. After restarting the container, I verified TUN was available:

ls -l /dev/net/tun

Installing Tailscale

With the container set up, it was time to install Tailscale. My container did not have curl installed so first I installed that and then Tailscale:

apt install curl
curl -fsSL https://tailscale.com/install.sh | sh

The install took a bit of time but finished successfully. Next I brought up the client:

tailscale up --login-server https://private.rirak.com

This produced a registration URL. I copied that URL from the shell, pasted it into my browser, and that brought me to the familiar “Device Registration” page with a node registration command. I opened a new tab, opened a shell on the Headscale node, and ran the registration command:

headscale nodes register --key <KEY> --user home

The shell on the Debian machine now displayed a success message. Jumping back to the admin UI, I could now see two nodes.
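The same confirmation is available from the Debian container itself; tailscale status lists the node and the peers it can see:

tailscale status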

Configuring Exit Node

With the Debian container registered, I could now turn it into a fully functional exit node.

Advertising and Approving Routes

By default, Tailscale won't route traffic through a node unless it explicitly advertises itself as an exit node. In the Headscale shell, listing the current routes (headscale nodes list-routes) showed that there were none available.

Back in the Debian machine shell, I updated the client to advertise itself as an exit node. I also disabled Tailscale DNS because I already use AdGuard on my home network and didn't want Tailscale to override my DNS settings:

tailscale set --advertise-exit-node
tailscale set --accept-dns=false

Going back to the Headscale shell and re-running the same list-routes command now shows the routes advertised by the node, but they are not approved yet; Tailscale requires admin approval for any advertised routes.

I approved the route (-i is the node ID and -r is the specific route):

headscale nodes approve-routes -i 2 -r "0.0.0.0/0"

Listing the routes again shows them as approved, and the node is now recognized as an exit node.

Enabling IP Forwarding

Next, I needed to enable IP forwarding on the Debian machine. Without it, the node can advertise routes but won't actually forward packets. In the Debian machine shell:

echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding=1" >> /etc/sysctl.conf
sysctl -p
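Reading the values back confirms both settings took effect (each should report 1):

sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding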

Enabling NAT

Finally, NAT is required so traffic from the VPN can reach the internet:

apt install iptables -y
iptables -t nat -A POSTROUTING -o tailscale0 -j MASQUERADE
iptables -A FORWARD -i tailscale0 -j ACCEPT
iptables -A FORWARD -o tailscale0 -j ACCEPT

and to make it persistent across restarts

apt install iptables-persistent -y
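If the rules change later, they can be written out again with the netfilter-persistent helper that ships with the package:

netfilter-persistent save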

This ensures traffic from the VPN is translated correctly and survives reboots.

Testing the Exit Node

With all that setup complete, it was time to see if my phone could use the exit node. I disconnected my phone from Wi-Fi and checked my public IP. I then tapped the “Exit Node” banner in the Tailscale app and selected tailscale-node. Refreshing the IP check showed my home IP, confirming everything worked.

Exposing Local Devices

One last tweak I wanted to make was to allow access to my home router via the VPN. With my current setup, there is no way to reach the router config from outside the house, so this would make that possible.

On the Debian machine shell, I ran the following command to start advertising a route to the router:

tailscale set --advertise-routes=192.168.1.1/32

Back in the Headscale shell, listing routes again showed a new unapproved route. I approved it the same way as before; the previously approved route needs to be included in the command as well to keep it approved.

headscale nodes approve-routes -i 2 -r "0.0.0.0/0,192.168.1.1/32"

Now, back on my Android device, when connected to the VPN and using the exit node, I can go to 192.168.1.1 and reach my router's config page. One thing to note here: if the client is on a network that also uses 192.168.1.x, local routing will override the VPN route, but on a mobile or commercial network this should work fine.

Wrap Up

The setup was involved and took a few attempts to figure out but now I have a fully private, self‑hosted VPN mesh with a working exit node and remote access to key devices. Next, I’m considering adding Tailscale to more containers and experimenting with RustDesk over the VPN.

Proxmox: Adding an SSD and Configuring Backups

With all my VMs and containers moved over to the new Proxmox machine, the old server was finally free and so was its hardware. It had been running on an Inland 1 TB NVMe SSD that was more than a few years old, but still usable. Rather than let it sit around, I figured it would make a great secondary drive for the new server, especially for backups.

Adding the SSD

Hardware Installation

I pulled the SSD out of the old machine and opened up the mini PC. The case was straightforward to open, and the empty M.2 slot was easy to spot. I inserted the SSD, secured it with a screw, and added a thermal pad on top for good measure.

Drive Configuration

With the drive installed, I reassembled the mini PC and booted it up. From a root shell, I ran lsblk to identify the new drive. In my case, it showed up as nvme1n1, still containing the old partitions from when it was the boot drive in the previous machine.

To clean it up, I went to: Node → Disks → (select drive) → Wipe Disk
Once wiped, I clicked Initialize Disk with GPT to give it a fresh partition table.

I planned to split the drive into two halves — one for backups and one for VM storage — but decided to reboot first. When the machine came back up, the drive names had swapped (nvme1n1 became nvme0n1), which threw me off for a moment. Apparently this is normal behavior depending on PCIe enumeration order.

Since the Proxmox UI doesn’t support partitioning NVMe drives directly, I switched back to the shell and created two partitions manually:

parted /dev/nvme0n1 --script mklabel gpt
parted /dev/nvme0n1 --script mkpart primary 0% 500GB
parted /dev/nvme0n1 --script mkpart primary 500GB 100%

A quick lsblk confirmed the new partitions.

Adding the Drive to Proxmox

For the VM storage, I wanted an LVM-Thin pool. I went to Node → Disks → LVM-Thin → Create: Thinpool. I selected the first new partition, gave it a name, and then clicked “Create”. Proxmox reformatted the partition and added it to the storage list. Now, when creating a new VM, I can choose secondary-storage as the disk location.

Next I wanted to add the second partition to be used for backups. I went to Node → Disks → Directory → Create: Directory. I selected the second partition, named it, set the filesystem to ext4, and clicked “Create”. Similar to before, this reformatted the partition and added it to the storage list.

Setting up backups

With the new drive ready, I set up an automated backup schedule. I wanted all my machines backed up, so I went to: Datacenter → Backup → Add.

  • Storage: local-backups (the new backup drive)
  • Mode: Stop
  • Schedule: sat *-1..7 04:00
  • Selection Mode: All
  • Compression: ZSTD

In the “Retention” tab, I configured how many backups to keep, then clicked “Create”.

Finally, I selected the new backup job and clicked “Run now” to test it. Once it completed successfully, everything was set.
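For reference, a rough one-off equivalent of that job can be run from the node's shell with vzdump; this is just a sketch mirroring the settings above, not something the scheduled job requires:

vzdump --all --mode stop --compress zstd --storage local-backups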

The new Proxmox box now has all the initial setup I need to run my critical containers. From here, I’m planning to explore additional containers and continue growing the homelab.

Proxmox: Restoring VMs

With my new Proxmox host set up, it was time to bring over my existing VMs and containers. Since I had already experimented with Proxmox on my old desktop, I had a handful of machines ready to go — Home Assistant, a few LXCs, and some supporting services. I might do a separate post on how those were originally set up, but this felt like the perfect chance to test the backup and restore process for real.

I’d configured backups during my initial Proxmox trial, but you never really know how reliable a backup system is until you have to depend on it. My plan was simple: copy the backups to a USB drive, plug it into the new machine, and see how smoothly the restore process went.

Setting up the USB

I had a 32 GB USB drive lying around that seemed big enough for the backups. It was formatted as NTFS from its previous life in Windows, but since I’d be using it between Linux servers, I decided to reformat it as ext4.

With the new server still on my workbench, I plugged the drive in and opened a root shell through the Proxmox web UI. Running lsblk gave me a list of all attached drives, and I identified the USB stick by its size (sda).

After unmounting it just to be safe (umount /dev/sda1), I wiped the old filesystem:

wipefs -a /dev/sda

Then created a fresh ext4 filesystem:

mkfs.ext4 /dev/sda

A quick lsblk confirmed the drive now had a clean ext4 filesystem at the root, with no partitions — exactly what I wanted.

Configuring the drive for use

Next, I moved the USB stick to the old Proxmox server so I could copy the backups onto it. This machine had multiple drives, so the USB showed up as sdb. Another lsblk made it easy to spot.

Before I could access it, I needed to mount it:

mkdir -p /mnt/usb
mount /dev/sdb /mnt/usb

With the drive mounted, I added it to Proxmox as a storage location: Datacenter → Storage → Add → Directory

  • ID: usb-backup
  • Directory: /mnt/usb
  • Content: Backup
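The same storage entry can also be created from the shell with pvesm, if you prefer the CLI:

pvesm add dir usb-backup --path /mnt/usb --content backup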

Once added, it appeared in the sidebar and was ready to use.

Creating the backup

My existing backups were a few days old, so I figured I might as well generate a fresh one. I selected my Home Assistant VM and went to: Backup → Backup Now

  • Storage: usb-backup
  • Mode: Stop
  • Compression: ZSTD

When the backup finished, I shut down the VM so it wouldn't make any further changes or conflict with the restored copy on the new machine.

Back in the root shell, I unmounted the USB drive:

umount /mnt/usb

The first attempt hung for a bit, so I canceled it and tried again. The second attempt returned immediately. With the backup created, it was time to try restoring it.

Restoring the backup

On the new Proxmox host, I repeated the same steps as before: identify the USB drive, mount it, and add it as a storage directory. Once that was done, I selected “usb-backup” in the sidebar and opened the “Backups” tab. The backup file appeared right away.

Clicking “Restore” brought up the restore dialog. I changed the storage to local-lvm so the VM would be created in the right place, and I made sure the VM ID matched the original. Everything else I left at the defaults.
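For the record, the same restore could be done from the shell with qmrestore; the archive name and VM ID below are placeholders for the actual backup file and original ID:

# archive name and VM ID are placeholders
qmrestore /mnt/usb/dump/vzdump-qemu-<ID>-<timestamp>.vma.zst <ID> --storage local-lvm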

After the restore completed, I opened the VM’s “Hardware” tab and double‑checked the network device. I wanted the MAC address to match the old machine so my router would assign the same IP. Once that was set, I booted the VM.

To my surprise, it came up without any issues. The router recognized it, assigned the correct IP, and Home Assistant picked up right where it left off. All my devices and automations continued running as if nothing had ever changed. I was genuinely impressed with how smooth the process was.

With the first restore successful, I unmounted the USB drive again and repeated the process for my three LXC containers (Nginx Proxy Manager, AdGuard, and Uptime Kuma). I only needed to mount and unmount the USB drive — the rest of the setup didn’t need to be repeated each time. Just like the VM, each restore was quick and uneventful. It gave me a lot more confidence in Proxmox’s backup system — it just works.

Proxmox Setup

A couple months ago, my Home Assistant instance — which had been running happily on a Raspberry Pi — suddenly stopped responding. After some digging, I discovered the SD card had died and taken the whole system with it. I’d been meaning to try Proxmox for a while, and this felt like the perfect excuse to finally jump in.

I had an old desktop in the basement, so I installed Proxmox on it and spun up a Home Assistant VM along with a few containers. It worked surprisingly well, but the machine only had 2 cores (4 threads) and 16 GB of RAM. After getting a taste of what Proxmox could do, I wanted something with more headroom — and this time I wanted to document the process properly instead of stumbling through it.

Choosing the Hardware

I looked at a few mini PCs and ended up finding a KAMRUI Pinova P2 Mini PC on sale for about $500. It looked like a great fit for a compact homelab node:

  • 12 cores / 16 threads
  • 32 GB DDR4
  • 1 TB NVMe SSD
  • Windows 11 Pro preinstalled (OEM key – I saved it just in case, though I don’t expect to use it)

For the price, it seemed like a solid Proxmox box, so I grabbed it.

Preparing for the Install

With the hardware picked out, the next step was getting Proxmox onto a USB stick so I could install it. I downloaded the latest Proxmox VE ISO (9.1 at the time of writing) from proxmox.com and flashed it onto a USB drive using Balena Etcher.
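For what it's worth, on a Linux machine the same flash can be done with dd instead of Etcher; the ISO name and target device here are placeholders, and it's worth double-checking the device with lsblk first:

# ISO name and /dev/sdX are placeholders; pick the correct target device
dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync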

With the installer ready, it was time to convince the Mini PC to actually boot from USB.

Installing Proxmox

When I first powered it on, the machine went straight into Windows without giving me a chance to enter the BIOS. After some digging, I found the “Advanced Startup Options” menu in Windows, which let me reboot into “Windows Recovery”, which in turn gave me the option to reboot directly into the BIOS.

Once there, I changed the boot order so USB devices were first. After saving and rebooting, the Proxmox installer finally appeared.

I followed along with a great walkthrough from WunderTech (https://www.youtube.com/watch?v=lFzWDJcRsqo). The installation steps were straightforward:

  • Select Graphic Installer
  • Accept EULA
  • Select Disk to install on
    • I only have the SSD it came with installed
    • In “Options” you can select the file system
      • I stuck with ext4 since it’s simple and reliable, and I didn’t need anything more advanced for this setup
  • Set country and timezone
  • Create root password + email
    • This is the root password we will use to access the proxmox instance
    • The email is used for admin notifications
  • Set network settings
    • If we want to choose a specific management interface we can; otherwise just select the first one
    • Set the IP Address (CIDR) which we want the host to request
      • We still need to set up a DHCP assignment on the router later on
      • I'm using 192.168.1.10
    • Set hostname to some internal host name
      • This will be the name of the node in the interface
      • I'm using proxmox2.lan
    • Set Gateway and DNS server to our router IP
  • Review all the selected settings and click Install
  • After the install, reboot the machine

First Boot & Basic Setup

After rebooting, Proxmox displayed the URL for the web interface. I opened it, accepted the self‑signed certificate warning, and logged in as root using the password set up previously.

I then switched to “dark mode” by clicking on my profile in the top right corner, selecting “color theme” and then selecting “Proxmox Dark”.

Running the Post‑Install Script

Next, I used the “PVE Post Install” script from community-scripts.org. I selected the node (proxmox2), opened a shell and ran the “PVE Post Install” script. This was very similar to what was shown in the video referenced earlier but the questions were worded a bit differently.

  • I didn't have to update the sources because it told me they were already set
    • but if they were not, I would have told it to do so
  • Disable enterprise repo?
    • Yes
  • Disable ceph repo?
    • Yes
  • Add no-subscription repo?
    • Yes
  • Add PveTest repo?
    • No
  • Disable subscription nag?
    • Yes
  • Disable high availability?
    • Yes
  • Disable corosync?
    • Yes (this seems to be related to high availability)
  • Update Proxmox?
    • Yes

After it finished, I rebooted and confirmed there were no pending updates.

Setting Up DHCP Reservation

Next, I wanted to create a DHCP reservation on my router. I prefer to give my homelab machines static DHCP leases so their IPs never change, which helps keep things predictable. Oddly, I couldn't find the device listed under the IP it was using. Eventually I realized I needed to search by MAC address. To find the MAC address, I ran ip link in the root shell and looked for vmbr0 -> link/ether <MAC>. I later figured out it's also shown under Node → Network → nic0 → Alternative Name.

Once I had the MAC, I found the DHCP entry; the router had already handed the machine a different IP than expected, which explained why I couldn't find it earlier. I set the lease to static and updated it to the IP I wanted.

Notification Setup

Finally, Proxmox can send alerts for backups, errors, and updates, so I wanted to make sure I’d actually see them. I created a dedicated email account on my personal mail server and used that to configure SMTP. I went to Datacenter → Notifications → Notification Targets → Add → SMTP and filled out the settings.

  • Endpoint name: Rirak-Mail-Server
  • Server: mail.rirak.com
  • Username: [email protected]
  • Password: <email password>
  • From Address: [email protected]
  • Recipient: root@pam
    • This means the root user. Others could be added too.

After saving, I sent a test email to confirm everything worked. Then I updated the default matcher: Notification Matchers → Default Matcher → Modify → Targets to Notify → Select Rirak-Mail-Server.

Wrap Up

What started as a dead SD card turned into a full Proxmox rebuild — and a much more capable homelab node. The KAMRUI Mini PC has plenty of headroom, and the setup process was smoother than I expected once I got past the BIOS quirks.

Next up, I’ll migrate my Home Assistant VM and start moving the rest of my services over. I’m also planning to document the VM setup, backups, and container configuration as I go.