Shelf Lighting

A few weeks ago, my parents picked up some new shelves they wanted to use for decorations. While helping them set everything up, my father and I quickly realized the shelves would look much nicer with lighting. LED strips immediately came to mind — simple, clean, and easy to control with an ESP8266 running ESPHome.

I sketched out a quick design and got to work. The only oddity was that I ended up using an IRLZ44N MOSFET, which is wildly oversized for this project but the best choice I could find for 3.3 V logic-level control. Because of the shelf layout, I also decided to run two LED strips in parallel.

Materials

Most of the parts came from my existing stash, but I did need to buy the LED strip, power supply and transistors.

  • ESP8266 (Wemos D1 Mini clone) – pack of 10 from Amazon
  • 6.56FT 640LEDs 5V LED Strip – from Amazon
  • 5V 3A 15W Power Supply Adapter – from Amazon
  • IRLZ44N MOSFET Transistor – pack of 10 from Amazon
  • 330 ohm Resistor – pack of 100 from Amazon
  • 22 AWG solid core hookup wire – for connections (from Amazon)
  • Breadboard & jumper wires – for prototyping
  • Soldering iron – to make permanent connections
  • 3D printer (Creality CR-6 SE) – for the controller case

Optional but nice to have:

  • Heat shrink tubing – to clean up exposed joints (from Amazon)
  • JST connectors & crimping tool – for detachable, neat wiring (from Amazon)
  • Proto board – for organizing components and wiring (from Amazon)
  • Mount Screw Terminal Block Connector – for cleaner proto board connections (from Amazon)

Flashing the Board

Once everything arrived, I grabbed a D1 Mini from my parts bin and started flashing ESPHome. Historically, I’ve had trouble getting ESPHome to talk to CH340‑based boards on Windows 11 — and this time was no different. Last time, I pulled out an old Windows 10 laptop to flash the board, but this time I decided to dig a little deeper and fix my Windows 11 setup.

After digging around, I found a Reddit thread recommending a different CH340G driver version. The GitHub repo looked sketchy at first, but it turned out to be from the official NodeMCU account. I uninstalled the old driver, installed the new one, and suddenly the ESPHome Web Flasher connected instantly.

I flashed the initial ESPHome image, connected it to Wi‑Fi, and moved on.

Configuring the Board

Initial Setup

With the device online, ESPHome Device Builder immediately discovered it. I took control of the device, renamed it, compiled the config, and flashed it OTA.

With the flash complete, it was time to customize the configuration. I added some basic sensors for Wi‑Fi signal and uptime, and gave the device a static IP (which required configuring a DHCP reservation in the router). The flashing tool usually relies on mDNS to discover devices, but I have mDNS disabled, so I use a static IP instead.

Here is my initial config (most of it is self-explanatory, but I’ve gone through each section in detail in a previous post).
esphome:
  name: "esp-dining-room-shelf-light"
  friendly_name: Dining Room Shelf LED Strip
  min_version: 2025.11.0
  name_add_mac_suffix: false

esp8266:
  board: d1_mini

# Enable logging
logger:

# Enable Home Assistant API
api:
  encryption:
    key: !secret api_key

# Allow Over-The-Air updates
ota:
  platform: esphome
  password: !secret ota_password

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  manual_ip:
    static_ip: 192.168.1.198
    gateway: 192.168.1.1
    subnet: 255.255.255.0
  ap:
    password: !secret ap_password

binary_sensor:
  - platform: status
    name: "Controller Status"

sensor:
  - platform: wifi_signal
    name: "WiFi Signal Sensor"
  - platform: uptime
    type: seconds
    name: Uptime Sensor

text_sensor:
  - platform: wifi_info
    ssid:
      name: ESP Connected SSID
    mac_address:
      name: ESP Mac Wifi Address
  - platform: version
    name: "ESPHome Version"
    hide_timestamp: true
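
The config above pulls several values from ESPHome’s secrets file. As a reference, a matching secrets.yaml would look something like this (all values here are placeholders, not my real ones):

```yaml
# secrets.yaml (placeholder values only)
api_key: "32-byte-base64-key-from-esphome"
ota_password: "long-random-ota-password"
wifi_ssid: "MyHomeNetwork"
wifi_password: "my-wifi-password"
ap_password: "fallback-ap-password"
```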

Home Assistant Configuration

With the basic sensors set up, it was time to bring the device into Home Assistant. I went to Settings → Devices & Services, and since I already had the ESPHome integration installed, Home Assistant immediately detected the new board.

I clicked the button to add the new device, assigned a location and then added it to Home Assistant. I then went to the device page where I was able to see the new device and all the sensors I had configured. At this point, the board was talking to Home Assistant but it couldn’t control anything yet.

Configuring the Light Entity

With the basic tests complete, it was time to configure the light entity so that the board would be able to control an LED strip. Back in ESPHome Device Builder, I added a PWM output on GPIO4 (D2) and created a monochromatic light entity tied to that output.

# --- OUTPUT DRIVER ---
output:
  - platform: esp8266_pwm
    id: white_strip_output
    pin: GPIO4           # D2
    frequency: 1000 Hz   # Smooth dimming, no audible noise

# --- LIGHT ENTITY ---
light:
  - platform: monochromatic
    name: "Dining Room Shelf LED Strip"
    output: white_strip_output
    default_transition_length: 0.3s
    gamma_correct: 1.0      # COB strips look more natural with gamma 1.0
    restore_mode: RESTORE_AND_OFF

After compiling and flashing, Home Assistant showed the new light entity (clicking on it displays the dimmer).

Hardware Setup

With the software side complete, I moved on to the hardware. I decided to mount everything on a small proto board using screw terminals for clean wiring. After some layout planning, I settled on a final arrangement.

Soldering the Board

I started by placing all the components on the proto board to confirm spacing and plan out the wiring on the back.

Once satisfied, I pulled everything off and soldered the components one by one. First came the resistor and the 3-pin screw mount terminal.

Next I added the transistor and the bottom 2-pin screw terminal, and finally the top 2-pin screw terminal. I reused the resistor legs to bridge the gate connection. They were perfectly positioned, and it saved me from running a tiny jumper wire.

That was the easy part, next came all the connections on the back. First I added the ground cables, then the drain for the transistor and finally the power wires. This part took the longest, but the end result was tidy and solid.

Soldering The ESP8266

Compared to the proto board, soldering wires to the D1 Mini was trivial. A few quick joints and it was ready.

Testing the Connections

Before committing to installation, I tested everything on the bench. I used alligator clips to connect an LED strip and verified power delivery, MOSFET switching and dimming behavior.

Everything worked as expected. I then soldered leads onto the LED strips themselves and added heat shrink for strain relief.

Creating a Case

With all the hardware ready, I needed a case to mount this on the back of the shelf. I reused an ESP8266 case design from a previous project and modeled a matching enclosure for the proto board. I then cut out slots for all the connections I needed and added some mounting holes on the side so that I could attach it.

With the model ready, I printed the parts and test fit everything.

Then I put the ESP8266 and proto board into the case, connected them up, and attached a barrel jack adapter for power.

It all fit nicely, so I went ahead and printed some covers for the case.

One corner post interfered with a screw terminal, so I trimmed it slightly. Once assembled, the case looked clean and compact.

Final Assembly and Testing

With everything looking good, I wanted to do one more test on the workbench before moving forward with installation. Since I planned to run two LED strips in parallel, I braided two sets of power wires and added JST connectors to the strips and the splitter.

A final bench test confirmed everything worked with both strips connected.

Installation

With everything tested it was time to proceed with the installation. Here’s the shelf before installation.

I cleared the shelves, removed the glass, and flipped the frame over to attach the LEDs. I then peeled the backing and attached the two LED strips along the center supports of the shelf. I’m not adding any additional mounting hardware for now, so time will tell how well the sticky backing holds up. I then routed the power cables over the back wall and secured them with some sticky wire clips.

Then I found some screws and attached the case containing the ESP and proto board. The splitter I created for the LED strips had a bit of extra wire, which I tucked away (better to have extra than not enough). I connected all the wires and secured them with wire clips as well.

A quick test showed the lighting worked beautifully.

After reinstalling the glass and decorations, the final effect looked great. Here is how it looked (the camera struggled a bit with glare).

Wrap up

The finished project turned out really nicely, and my parents were thrilled with the result. It also leaves room for future Home Assistant automations, like turning the lights on at sunset or when motion is detected.
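
As a sketch of what that could look like, here is a minimal Home Assistant automation for the sunset case (the entity ID is my guess based on the light’s name; adjust it to match the actual entity):

```yaml
# Hypothetical automation; entity_id assumed from the configured light name
automation:
  - alias: "Shelf lights on at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.dining_room_shelf_led_strip
        data:
          brightness_pct: 60
          transition: 2
```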

What I liked most about this build is that it was a practical application of things I’ve done before. Nothing overly complex, but a clean, satisfying project that solved a real need.

Setting up Headscale and an Exit Node

With my new Proxmox box set up, I wanted to configure my own VPN so I could securely access my home network while away and also route my traffic through my home internet connection. This would let me benefit from my AdGuard setup, maintain a consistent “home” location, and have a fully private connection anywhere.

After some research, I landed on WireGuard, and then on Tailscale, which builds on WireGuard with automatic key management and device coordination. Even better, I learned that I could self‑host the control server using Headscale, giving me a completely independent, private VPN mesh.

Installing Headscale

Running the Install Script

The initial setup was straightforward thanks to the script available on community-scripts.org. I selected the root node, opened a shell and ran the “Headscale” script.

  • Enable Diagnostics → No, Opt out
  • Install type → Default
  • Storage pool for container template → Local
  • Storage pool for container → local-lvm
  • Add headscale-admin ui → Yes

The script created the LXC container, downloaded the Debian template, installed Headscale and the admin UI, and confirmed network connectivity.

Once complete, it provided URLs for the API and admin interface. I opened the admin UI to confirm it was running, then created a DHCP reservation so the container would always get the same IP.

Editing the Config File

With the container running, the next step was configuring Headscale itself. The script places the config at:

/etc/headscale/config.yaml

I opened a shell on the Headscale container, installed vim, and edited the file:

apt install vim
vim /etc/headscale/config.yaml

The only change I made was updating:

server_url: https://private.rirak.com

This will be the public entry point for my VPN.

DNS & Reverse Proxy Setup

With the server URL set, I needed DNS and SSL in place so clients could reach Headscale securely.

Creating DNS Records

In Cloudflare, I added an A record:

  • Name: private
  • Target: internal IP of the Headscale container
  • Proxy: DNS only

Creating the Initial SSL Certificate (DNS Challenge)

I use Nginx Proxy Manager for reverse proxying and SSL certificates.

Because this was a new domain with no existing certificate, I used the DNS challenge for the initial issuance. The DNS challenge works even before the proxy is serving traffic, which makes it ideal for initial certificate creation. This required creating a Cloudflare API token with:

DNS → Edit
Zone → Read
Zone DNS Settings → Read

In Cloudflare, I went to Manage Account → Account API Tokens → Create Token and created a token with the permissions above. After reviewing and creating the token, the site displayed the key, which I saved for auth in the next step.

Next it was time to jump to Nginx Proxy Manager. I clicked “Add Proxy Host” and filled out the form.

  • Domain Name: private.rirak.com
  • Scheme: http
  • Forward Hostname / IP: Headscale container IP
  • Forward Port: 80
  • Websocket Support: Enabled

Then under the SSL Tab

  • SSL Certificate: Request New Certificate
  • Force SSL: Enabled
  • Use DNS Challenge: Enabled
  • DNS Provider: Cloudflare
  • Credentials File Content: dns_cloudflare_api_token=XXX

After saving, NPM generated the certificate and the UI showed that the certificate was active.

Switching to HTTP Challenge

Once the certificate existed, I deleted the proxy entry and recreated it — this time without the DNS challenge. Now it can renew via the HTTP challenge.

I also deleted the old certificate under the certificates tab and went back to Cloudflare to remove the API token.

With that configured, I rebooted Headscale to make sure it would pick up the new configuration. Headscale serves a Windows setup help page, which makes for a good way to check that it’s running. I visited https://private.rirak.com/windows and the help page came up, confirming the configuration was working correctly.
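
For a scriptable version of that check, something like this should return 200 once the proxy and certificate are working (assuming the same domain):

```shell
# Expect HTTP 200 from the Headscale Windows help page
curl -s -o /dev/null -w "%{http_code}\n" https://private.rirak.com/windows
```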

Setting Up the Headscale Admin UI

With Headscale configured, I wanted to try out the Admin UI. The admin UI is available at:

http://<LXC-IP>/admin/

Accessing it through the external domain returned a 403 (likely intentional), but the internal IP worked fine. To get the UI authenticated, I needed to create an API key, so back in the Headscale node shell I ran:

headscale apikey create

This returned an API key, which I copied and saved for use in the UI. The UI opened to a settings page which asked for the API key. Optionally I could override the API URL, but it was already set to the correct IP.

I saved the settings and refreshed the page. The full sidebar now populated, confirming that the authentication worked.

Creating a User

Next I needed to set up a user for myself, so that I could move on to configuring devices. There are two ways to do this.

  • Via UI
    • Navigate to the “Users” tab
    • Click create
    • Enter Name
    • Click the checkmark to create
  • Via CLI
    • headscale users create alex
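
Either way, the result can be verified from the Headscale shell:

```shell
# List all users registered with Headscale
headscale users list
```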

Device Setup (Android)

With the server ready, it was time to connect my first device. First, I downloaded the Tailscale app from the Play Store. When you first open the app, it requests permission to set up a VPN connection, which I accepted. Next I was brought to a login screen, but since I want to use my own control server rather than the normal Tailscale control server, I exited out of the login page. I then tapped the cog in the top right corner to enter the settings page.

Then I selected “Accounts”, clicked the 3 dots in the top right corner, and selected “Use an alternate server”.

On the screen which came up, I entered the URL of my Headscale server. Upon clicking “Add Account” I was redirected to the “Machine Registration Page” which provided a registration command. This is where an Admin needs to register this device.

I took the registration command it gave me and jumped over to the shell on the Headscale node. I updated the user and ran the registration command:

headscale nodes register --key <KEY> --user alex

It auto-assigned my device the random hostname “invalid-dfborob5”. Apparently this is a common issue with Android devices on Tailscale, so I decided to rename it. First I listed the devices, then I ran a rename command (where identifier is the ID of the node):

headscale nodes list
headscale nodes rename alex-s25 --identifier 1

With that complete, I checked the Admin UI, and confirmed that I was able to see my new device node there.

Back on the phone, I was asked for notification permission, and then it showed the main screen confirming I was connected.

Device Setup (Debian & Exit Node)

With my phone connected to the VPN, the next step was setting up an exit node so the phone could route all its traffic through my home network. For this, I used a Debian LXC container running the Tailscale client. I chose an LXC container instead of a full VM because it’s lightweight and more than sufficient for running Tailscale. This part was more involved, since I had to configure the container itself before installing Tailscale.

Downloading the Debian Template

I decided to use Debian 12 because it’s well‑documented, stable, and works cleanly with Tailscale. Proxmox has an official template repository, so downloading it was easy:

  • Under the root node, in the storage section, select “Local” and then in that window “CT Templates”
  • Click “Templates”
  • Search for “Debian 12”
  • Select it and click “Download”

Creating the LXC Container

Next I created a new container using the Debian 12 template. I right-clicked on my root node and selected “Create CT”. I then filled out the settings and chose the Debian 12 image I had downloaded.

Once the container was created, I selected the container → Options and double-clicked “Features”. I enabled nesting, then clicked “OK”. This is required for systemd‑networkd and TUN.
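
If you prefer the CLI, the same feature can be enabled from a shell on the Proxmox host (assuming container ID 105, which is the ID used later in this post):

```shell
# Enable nesting on LXC container 105
pct set 105 --features nesting=1
```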

Enabling Networking

When I first booted the container, it didn’t get an IP address. LXC containers sometimes need explicit network configuration. I opened a shell on the Debian machine and created a systemd‑networkd file:

touch /etc/systemd/network/10-eth0.network

In that file, I added the following content:

[Match]
Name=eth0

[Network]
DHCP=yes

I saved the file, then restarted networking:

systemctl restart systemd-networkd

I also wanted to make sure this happens automatically on each boot, so I ran the following commands:

systemctl enable systemd-networkd
systemctl enable systemd-networkd-wait-online

After a reboot, the container correctly received an IP.

Enabling TUN Support

Next I needed to set up TUN, since Tailscale uses a TUN device to create its encrypted WireGuard tunnel. LXC containers don’t expose /dev/net/tun by default, so I had to enable it manually. I opened a shell on the root node and opened the container config:

vim /etc/pve/lxc/105.conf

I added:

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc.apparmor.profile: unconfined

The AppArmor change is necessary because TUN access is blocked by the default profile. After restarting the container, I verified TUN was available:

ls -l /dev/net/tun

Installing Tailscale

With the container set up, it was time to install Tailscale. My container did not have curl installed, so I installed that first and then Tailscale:

apt install curl
curl -fsSL https://tailscale.com/install.sh | sh

The install took a bit of time but finished successfully. Next I brought up the client:

tailscale up --login-server https://private.rirak.com

This produced a registration URL. I copied the URL from the shell, pasted it into my browser, and that brought me to the familiar “Device Registration” page with a node registration command. I opened a new tab, opened a shell to the Headscale node, and ran the registration command:

headscale nodes register --key <KEY> --user home

The shell on the Debian machine now displayed a success message. Jumping back to the admin UI, I could see two nodes.

Configuring Exit Node

With the Debian container registered, I could now turn it into a fully functional exit node.

Advertising and Approving Routes

By default, Tailscale won’t route traffic through a node unless it explicitly advertises itself as an exit node. In the Headscale shell, I listed the current routes to confirm there were none available (headscale nodes list-routes).

Back in the Debian machine shell, I updated the client to advertise itself as an exit node. I disabled Tailscale DNS because I already use AdGuard on my home network and didn’t want Tailscale to override my DNS settings.

tailscale set --advertise-exit-node
tailscale set --accept-dns=false

Going back to the Headscale shell and re-running the same list-routes command now returned the routes being advertised by the node, but they were not yet approved. Headscale requires admin approval for any advertised routes.

I went ahead and approved the route (-i is the node ID and -r is the specific route):

headscale nodes approve-routes -i 2 -r "0.0.0.0/0"

Listing the routes again showed them as approved, and the node is now recognized as an exit node.

Enabling IP Forwarding

Next we need to enable IP forwarding on the Debian machine. Without this, the node can advertise routes but won’t forward packets. In the Debian machine shell:

echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding=1" >> /etc/sysctl.conf
sysctl -p
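
To confirm the settings took effect, both values can be read back (they should now report 1 on the exit node):

```shell
sysctl net.ipv4.ip_forward
sysctl net.ipv6.conf.all.forwarding
```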

Enabling NAT

Finally, NAT is required so traffic from the VPN can reach the internet. Note that the MASQUERADE rule belongs on the LAN-facing interface (eth0 in my container), since that is where the traffic exits:

apt install iptables -y
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i tailscale0 -j ACCEPT
iptables -A FORWARD -o tailscale0 -j ACCEPT

To make the rules persistent across restarts:

apt install iptables-persistent -y

This ensures traffic from the VPN is translated correctly and survives reboots.
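
One caveat: iptables-persistent saves whatever rules exist when it is installed. If the rules change later, they need to be re-saved manually:

```shell
# Re-save the current rules so they survive the next reboot
netfilter-persistent save
```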

Testing the Exit Node

With all that setup complete, it was time to see if my phone would be able to use the exit node. I disconnected my phone from Wi‑Fi and checked my public IP. I then clicked the “Exit Node” banner in the Tailscale app and selected tailscale-node. Refreshing the IP check showed my home IP confirming everything worked.

Exposing Local Devices

The last tweak I wanted to make was to allow access to my home router via the VPN. With my current setup, there is no way to access the router config from outside the house, so this would make that possible.

On the Debian machine shell, I ran the following command to start exposing a route to the router.

tailscale set --advertise-routes=192.168.1.1/32

Back in the Headscale shell, listing routes again showed a new unapproved route. I approved it as before. Note that the previously approved route needs to be listed too, since the command replaces the full set of approved routes.

headscale nodes approve-routes -i 2 -r "0.0.0.0/0,192.168.1.1/32"

Now back on my Android device, when connected to the VPN and using the exit node, I can go to 192.168.1.1 and reach my router’s config page. One thing to note: if the client is on a network that also uses 192.168.1.x, local routing will override the VPN route, but on a mobile or public network this should work fine.

Wrap Up

The setup was involved and took a few attempts to figure out but now I have a fully private, self‑hosted VPN mesh with a working exit node and remote access to key devices. Next, I’m considering adding Tailscale to more containers and experimenting with RustDesk over the VPN.

Proxmox: Adding an SSD and Configuring Backups

With all my VMs and containers moved over to the new Proxmox machine, the old server was finally free and so was its hardware. It had been running on an Inland 1 TB NVMe SSD that was more than a few years old, but still usable. Rather than let it sit around, I figured it would make a great secondary drive for the new server, especially for backups.

Adding the SSD

Hardware Installation

I pulled the SSD out of the old machine and opened up the mini PC. The case was straightforward to open, and the empty M.2 slot was easy to spot. I inserted the SSD, secured it with a screw, and added a thermal pad on top for good measure.

Drive Configuration

With the drive installed, I reassembled the mini PC and booted it up. From a root shell, I ran lsblk to identify the new drive. In my case, it showed up as nvme1n1, still containing the old partitions from when it was the boot drive in the previous machine.

To clean it up, I went to: Node → Disks → (select drive) → Wipe Disk
Once wiped, I clicked Initialize Disk with GPT to give it a fresh partition table.

I planned to split the drive into two halves — one for backups and one for VM storage — but decided to reboot first. When the machine came back up, the drive names had swapped (nvme1n1 became nvme0n1), which threw me off for a moment. Apparently this is normal behavior depending on PCIe enumeration order.

Since the Proxmox UI doesn’t support partitioning NVMe drives directly, I switched back to the shell and created two partitions manually:

parted /dev/nvme0n1 --script mklabel gpt
parted /dev/nvme0n1 --script mkpart primary 0% 500GB
parted /dev/nvme0n1 --script mkpart primary 500GB 100%

A quick lsblk confirmed the new partitions.

Adding the Drive to Proxmox

For the VM storage, I wanted an LVM-Thin pool. I went to Node → Disks → LVM-Thin → Create: Thinpool. I selected the first new partition, gave it a name, and then clicked “Create”. Proxmox reformatted the partition and added it to the storage list. Now, when creating a new VM, I can choose secondary-storage as the disk location.

Next I wanted to add the second partition to be used for backups. I went to Node → Disks → Directory → Create: Directory. I selected the second partition, named it, set the filesystem to ext4, and clicked “Create”. Similar to before, this reformatted the partition and added it to the storage list.

Setting up backups

With the new drive ready, I set up an automated backup schedule. I wanted all my machines backed up, so I went to: Datacenter → Backup → Add.

  • Storage: local-backups (the new backup drive)
  • Mode: Stop
  • Schedule: sat *-1..7 04:00 (the first Saturday of each month at 04:00)
  • Selection Mode: All
  • Compression: ZSTD

In the “Retention” tab, I configured how many backups to keep, then clicked “Create”.

Finally, I selected the new backup job and clicked “Run now” to test it. Once it completed successfully, everything was set.

The new Proxmox box now has all the initial setup I need to run my critical containers. From here, I’m planning to explore additional containers and continue growing the homelab.

Proxmox: Restoring VMs

With my new Proxmox host set up, it was time to bring over my existing VMs and containers. Since I had already experimented with Proxmox on my old desktop, I had a handful of machines ready to go — Home Assistant, a few LXCs, and some supporting services. I might do a separate post on how those were originally set up, but this felt like the perfect chance to test the backup and restore process for real.

I’d configured backups during my initial Proxmox trial, but you never really know how reliable a backup system is until you have to depend on it. My plan was simple: copy the backups to a USB drive, plug it into the new machine, and see how smoothly the restore process went.

Setting up the USB

I had a 32 GB USB drive lying around that seemed big enough for the backups. It was formatted as NTFS from its previous life in Windows, but since I’d be using it between Linux servers, I decided to reformat it as ext4.

With the new server still on my workbench, I plugged the drive in and opened a root shell through the Proxmox web UI. Running lsblk gave me a list of all attached drives, and I identified the USB stick by its size (sda).

After unmounting it just to be safe (umount /dev/sda1), I wiped the old filesystem:

wipefs -a /dev/sda

Then created a fresh ext4 filesystem:

mkfs.ext4 /dev/sda

A quick lsblk confirmed the drive now had a clean ext4 filesystem at the root, with no partitions — exactly what I wanted.

Configuring the drive for use

Next, I moved the USB stick to the old Proxmox server so I could copy the backups onto it. This machine had multiple drives, so the USB showed up as sdb. Another lsblk made it easy to spot.

Before I could access it, I needed to mount it:

mkdir -p /mnt/usb
mount /dev/sdb /mnt/usb

With the drive mounted, I added it to Proxmox as a storage location: Datacenter → Storage → Add → Directory

  • ID: usb-backup
  • Directory: /mnt/usb
  • Content: Backup

Once added, it appeared in the sidebar and was ready to use.

Creating the backup

My existing backups were a few days old, so I figured I might as well generate a fresh one. I selected my Home Assistant VM and went to: Backup → Backup Now

  • Storage: usb-backup
  • Mode: Stop
  • Compression: ZSTD

When the backup finished, I shut down the VM so it would not make any further updates or conflict with the new VM I was about to restore.

Back in the root shell, I unmounted the USB drive:

umount /mnt/usb

The first attempt hung for a bit, so I canceled it and tried again. The second attempt returned immediately. With the backup created, it was time to try restoring it.

Restoring the backup

On the new Proxmox host, I repeated the same steps as before: identify the USB drive, mount it, and add it as a storage directory. Once that was done, I selected “usb-backup” in the sidebar and opened the “Backups” tab. The backup file appeared right away.

Clicking “Restore” brought up the restore dialog. I changed the storage to local-lvm so the VM would be created in the right place, and I made sure the VM ID matched the original. Everything else I left at the defaults.

After the restore completed, I opened the VM’s “Hardware” tab and double‑checked the network device. I wanted the MAC address to match the old machine so my router would assign the same IP. Once that was set, I booted the VM.

To my surprise, it came up without any issues. The router recognized it, assigned the correct IP, and Home Assistant picked up right where it left off. All my devices and automations continued running as if nothing had ever changed. I was genuinely impressed with how smooth the process was.

With the first restore successful, I unmounted the USB drive again and repeated the process for my three LXC containers (Nginx Proxy Manager, AdGuard, and Uptime Kuma). I only needed to mount and unmount the USB drive — the rest of the setup didn’t need to be repeated each time. Just like the VM, each restore was quick and uneventful. It gave me a lot more confidence in Proxmox’s backup system — it just works.

Proxmox Setup

A couple months ago, my Home Assistant instance — which had been running happily on a Raspberry Pi — suddenly stopped responding. After some digging, I discovered the SD card had died and taken the whole system with it. I’d been meaning to try Proxmox for a while, and this felt like the perfect excuse to finally jump in.

I had an old desktop in the basement, so I installed Proxmox on it and spun up a Home Assistant VM along with a few containers. It worked surprisingly well, but the machine only had 2 cores (4 threads) and 16 GB of RAM. After getting a taste of what Proxmox could do, I wanted something with more headroom — and this time I wanted to document the process properly instead of stumbling through it.

Choosing the Hardware

I looked at a few mini PCs and ended up finding a KAMRUI Pinova P2 Mini PC on sale for about $500. It looked like a great fit for a compact homelab node:

  • 12 cores / 16 threads
  • 32 GB DDR4
  • 1 TB NVMe SSD
  • Windows 11 Pro preinstalled (OEM key – I saved it just in case, though I don’t expect to use it)

For the price, it seemed like a solid Proxmox box, so I grabbed it.

Preparing for the Install

With the hardware picked out, the next step was getting Proxmox onto a USB stick so I could install it. I downloaded the latest Proxmox VE ISO (9.1 at the time of writing) from proxmox.com and flashed it onto a USB drive using Balena Etcher.

With the installer ready, it was time to convince the Mini PC to actually boot from USB.

Installing Proxmox

When I first powered it on, the machine went straight into Windows without giving me a chance to enter the BIOS. After some digging, I found the “Advanced Startup Options” menu in Windows, which let me reboot into “Windows Recovery”, which in turn gave me an option to reboot directly into the BIOS.

Once there, I changed the boot order so USB devices were first. After saving and rebooting, the Proxmox installer finally appeared.

I followed along with a great walkthrough from WunderTech (https://www.youtube.com/watch?v=lFzWDJcRsqo). The installation steps were straightforward:

  • Select Graphic Installer
  • Accept EULA
  • Select Disk to install on
    • I only have the SSD it came with installed
    • In “Options” you can select the file system
      • I stuck with ext4 since it’s simple and reliable, and I didn’t need anything more advanced for this setup
  • Set country and timezone
  • Create root password + email
    • This is the root password we'll use to access the Proxmox web interface
    • The email is used for admin notifications
  • Set network settings
    • If we want to choose a specific management interface we can; otherwise just select the first one
    • Set the IP address (CIDR) we want the host to use
      • We still need to set up a DHCP reservation on the router later on
      • I'm using 192.168.1.10
    • Set the hostname to some internal host name
      • This will be the name of the node in the interface
      • I'm using proxmox2.lan
    • Set Gateway and DNS server to our router IP
  • Review all the selected settings and click Install
  • After the install, reboot the machine

First Boot & Basic Setup

After rebooting, Proxmox displayed the URL for the web interface. I opened it, accepted the self‑signed certificate warning, and logged in as root using the password set up previously.

I then switched to “dark mode” by clicking on my profile in the top right corner, selecting “color theme” and then selecting “Proxmox Dark”.

Running the Post‑Install Script

Next, I ran the “PVE Post Install” script from community-scripts.org: I selected the node (proxmox2), opened a shell, and ran the script. The flow was very similar to the video referenced earlier, but the questions were worded a bit differently.

  • I didn't have to update sources, because the script reported they were already set
    • but if they weren't, I would have told it to do so
  • Disable enterprise repo?
    • Yes
  • Disable ceph repo?
    • Yes
  • Add no-subscription repo?
    • Yes
  • Add PveTest repo?
    • No
  • Disable subscription nag?
    • Yes
  • Disable high availability?
    • Yes
  • Disable corosync?
    • Yes (this seems to be related to high availability)
  • Update Proxmox?
    • Yes

After it finished, I rebooted and confirmed there were no pending updates.

Setting Up DHCP Reservation

Next, I wanted to create a DHCP reservation on my router. I prefer to give my homelab machines static DHCP leases so their IPs never change, which keeps things predictable. Oddly, I couldn't find the device listed under the IP it was using. Eventually I realized I needed to search by MAC address. To find the MAC address, I ran ip link in the root shell and looked for the link/ether <MAC> line under vmbr0. I later discovered you can also find it in the UI under Node → Network → nic0 → Alternative Name.

Once I had the MAC, I found the DHCP entry; the router had handed the machine a different IP than I expected, which explained why I couldn't find it at first. I set the lease to static and updated it to the IP I wanted.

Notification Setup

Finally, Proxmox can send alerts for backups, errors, and updates, so I wanted to make sure I’d actually see them. I created a dedicated email account on my personal mail server and used that to configure SMTP. I went to Datacenter → Notifications → Notification Targets → Add → SMTP and filled out the settings.

  • Endpoint name: Rirak-Mail-Server
  • Server: mail.rirak.com
  • Username: [email protected]
  • Password: <email password>
  • From Address: [email protected]
  • Recipient: root@pam
    • This means the root user. Others could be added too.

After saving, I sent a test email to confirm everything worked. Then I updated the default matcher: Notification Matchers → Default Matcher → Modify → Targets to Notify → Select Rirak-Mail-Server.

Wrap Up

What started as a dead SD card turned into a full Proxmox rebuild — and a much more capable homelab node. The KAMRUI Mini PC has plenty of headroom, and the setup process was smoother than I expected once I got past the BIOS quirks.

Next up, I’ll migrate my Home Assistant VM and start moving the rest of my services over. I’m also planning to document the VM setup, backups, and container configuration as I go.

Laundry Monitoring Automation, Part 3: Final Config and Installation

In Part 2, I took the breadboard proof-of-concept from Part 1 and turned it into real hardware — soldered connections, 3D-printed cases, and wiring neat enough to survive in the laundry room.

With the hardware ready, it was finally time to move out of the workshop and into the basement. This part of the project was about making it real: mounting the controller and sensors on the machines, dialing in the vibration sensitivity, and wiring up the logic inside Home Assistant so it actually did something useful.

Updating the ESPHome Configuration

Before mounting everything, I wanted to make sure the ESPHome firmware was current. When I opened my config, I was greeted with a big yellow deprecation warning: my old API key format was no longer supported.

I also took the opportunity to add a couple of extra “maintenance” sensors — WiFi signal and uptime — so I could keep an eye on the device’s health once it was mounted out of reach.

After generating a new encryption key to replace the API key and updating the YAML, I flashed the board and it reconnected to Home Assistant without a hitch.

Full ESPHome Configuration

# Config for the ESP Device
esphome:
  name: esp-laundry-bot
  friendly_name: ESP Laundry Bot
  min_version: 2025.5.0
  name_add_mac_suffix: false

# Specifies the board being used
esp8266:
  board: esp01_1m

# Enable logging on the serial port
# This is useful for seeing logs in HA and the Device Builder
logger:

# Enable the ESP API
# HA and the Device Builder use this to read the device
api:
  encryption:
    key: !secret api_key

# Allow Over-The-Air updates from the Device Builder
# Also set a password for OTA for security
ota:
- platform: esphome
  password: !secret ota_password

# Wifi config to connect to my network
# Sets a static IP to avoid needing to obtain an IP
# Also sets a password for the backup Access Point
wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  manual_ip:
    static_ip: 192.168.1.251
    gateway: 192.168.1.1
    subnet: 255.255.255.0
  ap:
    password: !secret ap_password

binary_sensor:
# Status of the device
  - platform: status
    name: "LaundryBot"
# Config for attached SW-420 sensor
  - platform: gpio
    pin: GPIO4 #D2
    name: "washer"
    device_class: vibration
    filters:
      - delayed_on: 10ms
      - delayed_off: 1sec
# Config for other attached SW-420 sensor
  - platform: gpio
    pin: GPIO14 #D5
    name: "dryer"
    device_class: vibration
    filters:
      - delayed_on: 10ms
      - delayed_off: 1sec

sensor:
# Sensor for the strength of the WiFi signal
  - platform: wifi_signal
    name: "WiFi Signal Sensor"
# Sensor providing the device uptime in seconds
  - platform: uptime
    type: seconds
    name: Uptime Sensor

text_sensor:
# Sensor providing the WiFi SSID and the device's Mac Address
  - platform: wifi_info
    ssid:
      name: ESP Connected SSID
    mac_address:
      name: ESP Mac Wifi Address
# Sensor providing the current ESPHome Version on the device
  - platform: version
    name: "ESPHome Version"
    hide_timestamp: true

[collapse]

Mounting the Controller and Sensors

With the firmware updated, it was finally time to leave the workbench. I mounted the ESP case on the wall near the washer and dryer, close enough to power but still out of the way, and attached the sensors to the back of the machines with double-sided Gorilla tape.

Once everything was mounted, I routed the cables neatly back to the controller and connected them using the JST plugs I’d added earlier.

With the hardware in place, I tweaked the tiny potentiometers on the SW-420 boards to dial in their sensitivity. A little turn too far caused constant false positives; too far the other way and they’d miss vibrations entirely. After a few small adjustments while watching Home Assistant logs, both sensors settled into a sweet spot.

Seeing the cases mounted neatly and the sensors responding reliably made the project finally feel like a real, working system.

Creating Home Assistant Automations

In our house, everyone does their own laundry, so a simple “send a notification to all phones” wasn’t going to cut it. I wanted a way for each person to decide whether they wanted to be notified for a particular load.

After some experimenting, I landed on a system that was a bit more involved but felt natural: as soon as the machine starts running, Home Assistant sends an actionable notification to everyone. That notification gives each person the option to opt in or opt out for that cycle.

Behind the scenes, a helper list keeps track of who tapped “yes.” Each tap comes back to Home Assistant as a mobile app notification action event, and when the machine finishes, the automation checks that list, sends notifications only to those who opted in, and then clears the list for next time.

For simplicity, I duplicated this setup for both the washer and the dryer rather than trying to build a single generic version.

Implementation

To make the opt-in system work, I started by creating two input_text helpers in Home Assistant. The first, Laundry Notification Users, holds a list of everyone who could be notified. The second, Opted-in Washer Notification Users, stores just the people who actually opted in for the current cycle.

With the helpers in place, I broke the logic into three separate automations:

  • Detect when the machine starts and send the initial actionable notification to everyone
  • Handle the responses to that notification, adding anyone who tapped “yes” to the Opted-in list.
  • Detect when the machine finishes and send the final notification only to those on the Opted-in list, then clear the list for the next cycle.

Detect machine running & send notification

When the washer (or dryer) has been vibrating for 5 minutes, Home Assistant sends an actionable notification to everyone in the Laundry Notification Users list. That notification includes Yes/No buttons so each person can opt in or out for that cycle. The 5-minute delay helps eliminate false positives.

Washer Started Notification

alias: Washer Started Notification
description: ""
triggers:
  - trigger: state
    entity_id:
      - binary_sensor.esp_laundry_bot_washer
    from: "off"
    to: "on"
    for:
      hours: 0
      minutes: 5
      seconds: 0
conditions: []
actions:
  - repeat:
      for_each: "{{ users }}"
      sequence:
        - data:
            message: Washer is running. Notify you when it's done?
            data:
              actions:
                - action: washer_notification_yes_{{ repeat.item }}
                  title: "Yes"
                - action: washer_notification_no_{{ repeat.item }}
                  title: "No"
          action: notify.notify_{{ repeat.item }}
variables:
  users: >
    {{ states('input_text.laundry_notification_users').split(',') | map('trim')
    | reject('equalto', '') | list }}
mode: single

[collapse]

Handle notification responses

Next came the response handler. When someone taps one of the buttons on the actionable notification, Home Assistant fires a mobile_app_notification_action event. My automation listens for those events.

When processing the event, I parse the action ID to extract the user’s name (which I built into the action when sending the notification). If the action starts with “yes,” I add that user to the opt-in helper list. If the action starts with “no,” I do the opposite — check the helper list and remove the user if they’re on it.
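The Jinja templates in the automation below are dense, so here's the same add/remove logic modeled as a plain Python function (the function and its names are illustrative; the real state lives in the input_text helper):

```python
def handle_action(action: str, opted_in_csv: str) -> str:
    """Mirror the response handler: parse the action ID and add or remove
    the user from the comma-separated helper value."""
    users = [u.strip() for u in opted_in_csv.split(",") if u.strip()]
    user = action.split("_")[-1]  # the user name baked into the action ID
    if action.startswith("washer_notification_yes_"):
        if user not in users:
            users.append(user)
    elif action.startswith("washer_notification_no_"):
        users = [u for u in users if u != user]
    return ",".join(users)

print(handle_action("washer_notification_yes_alice", ""))          # → alice
print(handle_action("washer_notification_no_alice", "alice,bob"))  # → bob
```

One caveat this shares with the Jinja version: a user name containing an underscore would confuse the split('_')[-1] parsing, so names need to stay underscore-free.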

Handle Washer Notification Responses

alias: Handle Washer Notification Responses
description: ""
triggers:
  - event_type: mobile_app_notification_action
    trigger: event
conditions:
  - condition: template
    value_template: |
      {{ trigger.event.data.action.startswith('washer_notification_yes_') or
         trigger.event.data.action.startswith('washer_notification_no_') }}
actions:
  - variables:
      user: "{{ trigger.event.data.action.split('_')[-1] }}"
  - choose:
      - conditions:
          - condition: template
            value_template: >-
              {{
              trigger.event.data.action.startswith('washer_notification_yes_')
              }}
        sequence:
          - data:
              entity_id: input_text.opted_in_washer_notification_users
              value: >
                {% set users =
                states('input_text.opted_in_washer_notification_users').split(',')
                | map('trim') | list %} {% if user not in users %}
                  {{ (users + [user]) | join(',') }}
                {% else %}
                  {{ users | join(',') }}
                {% endif %}
            action: input_text.set_value
      - conditions:
          - condition: template
            value_template: >-
              {{ trigger.event.data.action.startswith('washer_notification_no_')
              }}
        sequence:
          - data:
              entity_id: input_text.opted_in_washer_notification_users
              value: >
                {% set users =
                states('input_text.opted_in_washer_notification_users').split(',')
                | map('trim') | list %} {{ users | reject('equalto', user) |
                join(',') }}
            action: input_text.set_value
mode: single

[collapse]

Detect machine done & send final notification

Finally, I needed an automation to close the loop. When the washer (or dryer) stops running, Home Assistant already flips the “running” boolean off. My automation watches for that state change, but to avoid false triggers from short pauses I make it wait two minutes before firing.

When it does run, it pulls the Opted-in list from the helper and loops through each person, sending a “Finished” notification only to those who opted in. Once the messages go out, it clears the helper list so the next load starts fresh.

Washer Done Notification

alias: Washer Done Notification
description: ""
triggers:
  - entity_id: binary_sensor.esp_laundry_bot_washer
    from: "on"
    to: "off"
    for:
      minutes: 2
    trigger: state
conditions: []
actions:
  - repeat:
      for_each: "{{ users }}"
      sequence:
        - data:
            message: Washer is done!
          action: notify.notify_{{ repeat.item }}
  - data:
      entity_id: input_text.opted_in_washer_notification_users
      value: ""
    action: input_text.set_value
variables:
  users: >
    {{ states('input_text.opted_in_washer_notification_users').split(',') |
    map('trim') | reject('equalto', '') | list }}
mode: single

[collapse]

Wrap Up

What started as a quiet frustration — missed laundry beeps in the basement — turned into a build that taught me a ton. Along the way I learned how finicky vibration sensors can be, how ESPHome has matured into a powerful tool, and how breaking big problems into smaller automations saves endless headaches.

This final phase — installing, tuning, and wiring up the automations — was the most satisfying. Seeing the notifications pop up exactly when the machines finished felt like magic, even though I knew all the YAML and solder joints behind it.

There are still things I’d like to refine. I could make the opt-in logic more generic so I don’t have two separate sets of helpers and automations for washer and dryer. I’ll also have to see how reliable the vibration sensors prove to be. But for now, it works: a tidy little system that reliably tells us when our laundry is done and only pings the people who care.

If you missed the earlier parts, you can find them here: Part 1: From Idea to First Signals and Part 2: Building.

Laundry Monitoring Automation, Part 2: Soldering and 3D Printing

In Part 1, I built a proof-of-concept using an ESP8266 and vibration sensors to detect washer and dryer activity. The breadboard prototype had proved the idea worked, but it looked more like a science fair project than a laundry automation device. It was time to make it permanent. That meant dragging out the soldering iron, wrestling with too-thick wires, and learning to use heat shrink tubing along the way.

Of course, once the wiring was done, I needed a proper home for the controller and sensors. My 3D printer got a workout as I iterated through case designs — some good, some cracked, and some that never quite fit.

This part of the project was all about turning a fragile prototype into something that could survive real use, and as usual, the process wasn’t as straightforward as I first imagined.

Soldering

The Controller

With the breadboard test behind me, it was time to make things permanent. The ESP8266 D1 Mini only has one 5V pin and one ground pin, but I needed power for both sensors. My first thought was to try cramming two wires into a single hole — turns out, they were too thick for that.

So I improvised: I soldered one wire in normally, then cut the excess and attached another wire from the backside. Not the prettiest solution, but it worked.

Once everything was connected, I looked over the joints. Most were fine, but the extra wires I tacked onto the back left exposed sections that I wasn’t happy with. At first, I figured I’d just leave them — but the more I looked, the more it bothered me. This was the perfect excuse to try out some heat shrink tubing.

A quick Amazon order later, I slipped the tubing on, heated it up with the side of my soldering iron, and the results looked so much cleaner. Definitely worth the extra step.

It wasn’t flawless, but at least now the controller wiring looked sturdy enough to move on to the next stage.

The Sensors

Next up were the SW-420 vibration sensors. These modules came with header pins pre-soldered, which was convenient for breadboarding but not ideal for my ultimate use case. I considered removing the headers, but they were stubbornly attached, and I didn’t want to risk damaging the board.

So instead, I decided to solder the wires directly onto the existing headers. This worked, but it was fiddly; the joints ended up with more exposed metal than I liked, and the connections didn’t look as neat as I wanted. Once again, heat shrink tubing came to the rescue. Sliding it over the messy joints and shrinking it down made everything look far cleaner — and much sturdier.

The sensors were now wired up solidly and ready to be paired with the controller.

3D Printing

The ESP Case

With the wiring done, I needed a proper case for the controller. I dug up an old model I had lying around and modified it with a few cutouts for the wires. Version 1 came off the printer looking decent… until I tried fitting the ESP inside.

The holes were too small, the support platform blocked the bottom wires, and the cover clips pressed right on top of my solder joints. In short: it didn’t fit.

So I started over. For Version 2, I rebuilt the case from scratch. I fixed the tilt in the model, added full-height holes for the wires, moved the clips away from the solder joints, and even added some lettering to the cover.

The new case worked better, and the D1 Mini and cables fit, but the cover gave me headaches. I printed it with supports, which fused too tightly, and I ended up cracking it while trying to pry them apart. Not ideal. The ESP also didn't sit as deep or as snugly as I'd hoped, since the wire underneath kept it from going all the way down, but it was good enough.

I wasn’t ready to give up on the lettering, though. For Version 3, I changed the design so the text was cut into the cover instead of raised, and I printed it without support. This time, it came off cleanly without cracking.

The lettering was subtle, but it worked, and the case finally felt solid enough to house the controller.

The Sensor Case

After finishing the controller case, I turned my attention to the vibration sensors. I found a case model on Thingiverse that looked promising, with cutouts for the adjustment screw and indicator LEDs.

I printed it out… and the sensor didn’t fit. Too tight, wrong alignment, and no room to slide it in cleanly. So, back to the design software. I stretched the model, shifted the pillar that holds the sensor, and shortened the height a bit so it would sit snug. The first reprint was better, but still not quite right. It took one more iteration to finally get the case fitting the SW-420 modules properly.

The result was a simple but sturdy enclosure that kept the sensors secure while still exposing the adjustment screw. It wasn’t fancy, but it worked — and the sensors finally had a proper home.

Finishing Touches

With the controller and sensors each in their own cases, the last step was cleaning up the wiring. At this point, I had a jumble of wires sticking out in all directions, and I didn’t want to leave them loose.

So, I twisted the wires together into neat bundles, then crimped on JST connectors. That way, the sensors could be easily plugged in or disconnected without having to resolder anything.

It was a small detail, but it made the whole project feel much more polished. The wiring was tidy, modular, and finally looked like something I could install next to the washer and dryer.

Ready to Install

After plenty of soldering, shrinking, printing, and re-printing, the hardware was finally taking shape. The ESP had a case that fit, the sensors had snug enclosures, and the wiring was cleaned up with connectors.

What started as a breadboard full of loose jumpers now looked like a real device. It was finally ready to move out of the workshop and into the laundry room.

In the next part, I’ll tackle the final configuration, mounting the sensors to the machines, and getting the system up and running in Home Assistant.

Laundry Monitoring Automation, Part 1: From Idea to First Signals

The Problem

For a while, I’ve wished there was a way to get reliable notifications when my washer and dryer finished. Both machines live in the basement, and the end-of-cycle beeps aren’t loud enough to reach upstairs.

The obvious solution was to plug them into a smart outlet and track power usage. But my dryer runs on a 240-volt outlet, and I couldn’t find a reasonably priced smart plug that could handle it.

That meant I’d need some sort of sensor. The two common options I kept coming across were:

  • a sound sensor to listen for the end-of-cycle chime
  • a vibration sensor that would detect the machine running and then stopping.

I didn’t like the idea of listening for sounds — it felt finicky and prone to mistakes — so vibration sensors it was. The hardware side seemed simple enough, but the thought of writing custom software to make sense of the readings had me procrastinating for months.

Choosing the Approach

The turning point came when I revisited ESPHome. I’d tried it a few years ago for a different project, but at the time the flashing process was confusing and I couldn’t quite get it working. I eventually gave up and went with Tasmota, which was easier back then.

Fast forward to today: ESPHome has grown up a lot. It integrates directly with Home Assistant, flashing devices is much smoother, and I’d recently had a good experience setting up an ESPHome Bluetooth proxy. That gave me the confidence to give it another shot.

Around that same time, I found a project on GitHub called LaundryBot, which was almost exactly what I’d been planning. The author used an ESP32, but since I had a pile of ESP8266 boards on hand (and I didn’t need Bluetooth anyway), I figured I’d adapt it.

So, the plan became: a pair of SW-420 vibration sensors feeding into an ESP8266 running ESPHome, reporting washer and dryer status straight into Home Assistant. With the concept settled, it was time to see if I could actually get it working.

Materials

For this project, I mostly used parts I already had lying around, with a few extras picked up along the way:

  • ESP8266 (Wemos D1 Mini clone) – pack of 10 from Amazon
  • SW-420 vibration sensor modules – pack of 5 from Amazon
  • 22 AWG solid core hookup wire – for sensor connections (from Amazon)
  • Breadboard & jumper wires – for prototyping
  • Soldering iron – to make permanent connections
  • 3D printer (Creality CR-6 SE) – for controller and sensor cases

Optional but nice to have:

  • Heat shrink tubing – to clean up exposed joints (from Amazon)
  • JST connectors & crimping tool – for detachable, neat wiring (from Amazon)

Initial Setup

With the plan in place, I grabbed a Wemos D1 Mini (an ESP8266 clone) from my parts bin and started setting up ESPHome. The GitHub repo I found suggested wiring up the sensor to an LED on the board for testing, but since my board didn’t have that, I decided to skip ahead and just try flashing ESPHome directly.

At first, everything looked promising: I plugged in the D1 Mini, Windows recognized it, and the drivers installed without complaint. But when I tried flashing through the ESPHome Device Builder in Home Assistant, it refused to connect.

I thought maybe the board wasn’t in flash mode, so I tried all sorts of combinations: grounding D3, grounding D8, adding resistors, swapping cables, different USB ports, even trying different boards. Nothing. After a few hours of chasing advice from forums and blog posts, I was ready to call the board defective.

In a last-ditch attempt, I dusted off my old Windows 10 laptop, plugged the board in, and opened the ESPHome Web Flasher (link). To my complete surprise, it connected instantly and flashed on the first try. Just like that, the board came alive, connected to WiFi, and popped up in Home Assistant.

That was the breakthrough I needed to put the project back on track — and finally get me excited to see some real signals.

Proof of Concept

Setting Up the Software

With the firmware running, I wanted to see if the ESP could actually talk to Home Assistant. To start small, I added a few simple ESPHome sensors: device status, WiFi signal, and uptime. That way I’d know right away if the board was online and working.

binary_sensor:
  - platform: status
    name: "LaundryBot"
sensor:
  - platform: wifi_signal
    name: "WiFi Signal"
  - platform: uptime
    type: seconds
    name: "Uptime Sensor"

The first time those values popped up in Home Assistant, it felt like a small victory — proof that the setup was alive.

Next came the vibration sensors. I Googled which pins were safe on the D1 Mini (link) and picked GPIO4 (D2) and GPIO14 (D5). I added them as binary sensors for “Washer” and “Dryer,” then flashed the config again.

binary_sensor:
  - platform: status
    name: "LaundryBot"
  - platform: gpio
    pin: GPIO4 #D2
    name: "washer"
    device_class: vibration
    filters:
      - delayed_on: 10ms
      - delayed_off: 1sec
  - platform: gpio
    pin: GPIO14 #D5
    name: "dryer"
    device_class: vibration
    filters:
      - delayed_on: 10ms
      - delayed_off: 1sec

Seeing those two new entities appear in Home Assistant — even if they were still “unavailable” with nothing connected — felt like another big step forward.

Trying out the Hardware

With the software set, it was time for some hands-on testing. I set everything up on a breadboard: ESP resting on loose headers, sensors connected with jumper wires.

When I gave the sensors a shake… nothing. I thought I’d wired them wrong. After a little research, I learned the SW-420 is only sensitive along one axis. In other words, I’d been shaking them the wrong way.

Once I slid the breadboard back and forth along the sensor’s axis, the logs lit up with vibration events.

Not everything was perfect — sometimes a sensor would “stick” in the high state and needed another shake to reset — but seeing those signals show up in Home Assistant was the real proof I needed. The idea was solid, and it was time to start building it for real.

Ready to Build

After a few dead ends and plenty of trial and error, I had a working proof of concept: an ESP8266 on a breadboard, two vibration sensors, and signals flowing into Home Assistant. The setup was imperfect but it was enough to confirm the approach would work.

In the next part, I’ll take this breadboard prototype and turn it into proper hardware: soldered connections, 3D-printed cases for the controller and sensors, and wiring that can stand up to real use in the laundry room.