Packet Radio, Part 1: The Stack

This is the first in a short series about VHF packet radio. Later posts will cover a specific station build and on-air operation; this one is a reference — a map of the protocols and software involved, and how they relate. Packet has a lot of acronyms. Most of them make more sense once you see how they layer onto each other — and which pieces are drop-in alternatives, like Direwolf standing in for a hardware TNC.

HF packet has its own quirks (different tones, LSB instead of FM, sparser activity) and may get its own post later.

Packet Radio and the OSI model

Here’s the short version; the rest of the post walks down the stack.

OSI layer         Packet-radio equivalent                              Example software / hardware
7 (Application)   BBS (FBB, BPQMail)                                   BPQMail, BPQTerminal, Paracon
4 (Transport)     NET/ROM circuits                                     BPQ32
3 (Network)       NET/ROM routing (NODES broadcasts, quality metric)   BPQ32
2 (Data link)     AX.25 (UI frames + connected mode)                   Direwolf or BPQ32’s AX.25 stack
1 (Physical)      1200 baud AFSK / Bell 202 tones over FM VHF          VHF radio + hardware or software TNC

Layer 1 — Physical: the radio and the modem

At the bottom is an ordinary VHF FM radio, used as a pipe for audio tones. The computer generates two audio tones — one called mark and one called space — feeds that audio into the radio’s mic input, and the radio transmits it as ordinary FM voice. It has no idea it’s carrying data. The receiving station does the reverse. This scheme of encoding data by shifting between audio frequencies is called audio frequency-shift keying, or AFSK.

The bits are not a direct 1:1 mapping to the two tones. Amateur packet uses NRZI encoding on top of AFSK: per Wikipedia’s Packet radio article, “a data zero bit is encoded by a change in tones and a data one bit is encoded by no change in tones.” The data rides in the transitions between mark and space, not in which tone is present at any given instant.

The dominant flavor on VHF is 1200 bits per second using Bell 202 tones — 1200 Hz for mark, 2200 Hz for space — a standard originally designed for 1970s telephone modems (Wikipedia: Packet radio). The practical consequence is that 1200 bit/s is very slow by modern standards, which is why almost everything above this layer is built around short messages and efficient retries rather than throughput.
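To make the speed concrete, here is a back-of-envelope airtime calculation in shell arithmetic. The 256-byte payload is an arbitrary example, and the math ignores HDLC flags, the AX.25 header, and TXDelay, so a real frame takes noticeably longer:

```shell
# Milliseconds of airtime for a 256-byte payload at 1200 bit/s:
echo $(( 256 * 8 * 1000 / 1200 ))   # -> 1706
```

Well over a second and a half for a quarter kilobyte, before any protocol overhead or retries.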

Between the radio and the computer sits a TNC (Terminal Node Controller). A TNC does two jobs:

  1. Modem — generate Bell 202 tones on transmit, decode them on receive.
  2. Framer — group bits into discrete, self-contained units called AX.25 frames. A frame bundles a destination address, a source address, a payload, and a checksum into one atomic chunk that either arrives intact or is thrown out.

Everything above this layer thinks in frames, not bits.

A TNC can be dedicated hardware — the Mobilinkd TNC3 is a common modern example — or it can be software running on the host computer, paired with a USB sound-card-plus-PTT interface like the Digirig Mobile for audio and PTT to the radio. The software TNC most stations reach for today is Direwolf by WB2OSZ, which describes itself as a “software ‘soundcard’ AX.25 packet modem/TNC and APRS encoder/decoder.”

Direwolf exposes its framed output to host applications over two interfaces: the KISS protocol (over a TCP port or a serial/pseudo-terminal link) and the AGWPE network protocol.

Layer 2 — Data link: AX.25

AX.25 is the link-layer protocol that amateur packet runs on. It was derived from layer 2 of the X.25 protocol suite in the early 1980s and adapted for amateur use — most notably, its addresses are callsigns instead of the numeric addresses X.25 used.

SSIDs. Each AX.25 address is a callsign plus a 4-bit Secondary Station Identifier, giving a range of 0 through 15. SSIDs let a single licensee run several logically distinct services at the same station — one sysop’s station might present as W9CPZ-4 for the NET/ROM node switch, W9CPZ-11 for the BBS, and W9CPZ-12 for a chat server.
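The callsign-plus-SSID format splits cleanly on the hyphen; a trivial shell illustration, using the hypothetical station address from above:

```shell
# Split an AX.25 address into base callsign and SSID:
addr="W9CPZ-11"
echo "${addr%-*}"   # base callsign -> W9CPZ
echo "${addr#*-}"   # SSID          -> 11
```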

Two modes of operation. AX.25 supports both a connected virtual-circuit mode and a connectionless datagram mode:

  • UI frames (unconnected) — broadcast, one-shot, no acknowledgement. If somebody hears them, great; if not, they’re gone. APRS, beacons, and NET/ROM’s own routing broadcasts all ride on UI frames.
  • Connected mode — a reliable, ordered, point-to-point session between two specific stations. “Connecting to” another packet station means setting up one of these.

Digipeaters: a layer-2 repeater

Before NET/ROM existed, the only way to extend a packet station’s reach beyond one RF hop was the digipeater: Wikipedia defines one as “a repeater node in a packet radio network [that] performs a store and forward function, passing on packets of information from one node to another.”

Mechanically, this works through AX.25’s address field: alongside source and destination, a frame can carry a list of digipeater callsigns to be relayed through. A station that sees its own callsign in that list rebroadcasts the frame; a station that doesn’t, ignores it. The sender specifies the path in advance, every frame carries it, and the digipeaters themselves hold no state about the network.
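The decision rule is simple enough to sketch in a few lines of shell. This is a toy illustration, not real TNC code; MYCALL and the path list are made up:

```shell
MYCALL="W9CPZ-1"
path="WIDE1-1,W9CPZ-1"   # hypothetical digipeater list carried in a frame

# Rebroadcast only if my own callsign appears somewhere in the path list;
# the surrounding commas force an exact match on the whole callsign.
case ",$path," in
  *",$MYCALL,"*) echo "rebroadcast frame" ;;
  *)             echo "ignore frame" ;;
esac
```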

This is the classic layer-2 extension mechanism, and it still exists, but it has obvious limits: the sender has to know the path ahead of time, and there is no way to route around a hop that becomes unreachable. That is the problem NET/ROM set out to solve.

Layer 3 — Network: NET/ROM

NET/ROM, a protocol in wide amateur use, adds a routing layer on top of AX.25. NET/ROM nodes form a mesh: each one maintains a table of the other nodes it knows about, learns new ones by listening, and connects user-visible “node switches” together with internode AX.25 links.

Each node periodically broadcasts a NODES frame — a UI frame listing which other nodes it can reach and how good each path looks. Every other node that hears it updates its routing table accordingly. Packet-Radio.net describes the mechanism as “self learning, building routing tables from broadcasts heard from other nodes.”

The “how good” part is the quality metric — a number from 0 to 255 that, per Packet-Radio.net, “is used by the software to select the route to use when more than one route exists between two nodes.” Higher is better. The standard default for a freshly heard neighbor is 192. When a route spans multiple hops, the route quality is the product of the link qualities divided by 256 — so two hops at quality 192 come out to 144, and the numbers degrade as hops chain together.
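The arithmetic is easy to check with shell integer math:

```shell
# Route quality: multiply the link qualities, dividing by 256 per extra hop.
echo $(( 192 * 192 / 256 ))              # two hops at 192   -> 144
echo $(( 192 * 192 * 192 / 256 / 256 ))  # three hops at 192 -> 108
```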

From the user’s perspective, the topology does not have to be known in advance. Connect to the local NET/ROM node, type C DESTINATION, and the network finds the path.

Digipeater vs NET/ROM node, at a glance

Aspect               Digipeater            NET/ROM node
Layer                2 (data link)         3/4 (network + transport)
Knows the topology   No                    Yes — learns from NODES broadcasts
Path selection       Caller specifies it   Node chooses, based on quality metric
Failure handling     Silent drop           Can try alternate routes if one exists
Typical user call    C DEST VIA W9CPZ      C DEST (connect to local node first)

Layer 4 — Transport

NET/ROM does not stop at routing. It also provides a transport-layer circuit — a reliable, ordered, end-to-end connection between two nodes, layered on top of whatever AX.25 hops it took to get there. “Reliable and ordered” is the same guarantee TCP makes: data sent at one end comes out the other end exactly once, in the order it was sent, with retransmissions hidden from the application.

Unlike TCP, though, NET/ROM preserves message boundaries. The Linux netrom(4) man page notes that NET/ROM “only supports connected mode” and presents itself to applications as a SOCK_SEQPACKET socket, which socket(2) defines as “a sequenced, reliable, two-way connection-based data transmission path … for datagrams of fixed maximum length.” TCP by contrast uses SOCK_STREAM, which is an undifferentiated byte stream where send boundaries are not preserved.

Layer 7 — Applications

On top of that transport sit the actual applications that make packet radio useful. The dominant one — and the focus of the rest of this series — is the BBS (Bulletin Board System), a multiuser mailbox and message store. FBB (F6FBB) and BPQMail by G8BPQ are the two most commonly seen on the air today.

Hierarchical BBS addressing

BBSes add their own addressing on top of everything else: the hierarchical format CALL.#REGION.STATE.COUNTRY.CONTINENT, as documented by Ohio Packet. A concrete example from my area is W9CPZ.#WWA.WA.USA.NOAM. The # prefix on the local-area component is a convention adopted to avoid collisions with numeric postal codes used as local identifiers in some countries.

The forwarding rule is postal-routing in reverse: broadest on the right, most specific on the left. Each BBS along the way consults a forward file of suffixes it knows how to forward toward. Per qsl.net/w6fff, the BBS “will attempt to find a match … starting with the left-most item in the address field” and falls back rightward when a more specific component doesn’t match. The design goal, per Ohio Packet, is that “an intermediate BBS system does not need to know how to get directly to the destination BBS, only the next ‘hop.’”

Concretely: a neighboring BBS that knows about #WWA (Western Washington) keeps a W9CPZ.#WWA.WA.USA.NOAM message local; one that only recognizes .USA.NOAM punts it toward a long-haul gateway.
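That matching rule can be sketched in a few lines of shell. This is a toy illustration only, not real BBS forwarding code; the address is the example from above and the KNOWN list is invented:

```shell
#!/bin/bash
# Walk the address left to right (most specific first) and stop at the
# first component this imaginary BBS knows how to forward toward.
ADDR="W9CPZ.#WWA.WA.USA.NOAM"
KNOWN="#WWA USA"      # hypothetical suffixes in this station's forward file

route="no route"
rest="$ADDR"
while [ -n "$rest" ]; do
  part="${rest%%.*}"                 # left-most remaining component
  for k in $KNOWN; do
    if [ "$part" = "$k" ]; then
      route="forward toward $k"
      break 2                        # matched: stop searching
    fi
  done
  [ "$rest" = "$part" ] && break     # no dot left: address exhausted
  rest="${rest#*.}"                  # fall back one component rightward
done
echo "$route"                        # -> forward toward #WWA
```

A BBS whose forward file lacked #WWA would skip it and match the broader USA component instead, handing the message to its long-haul gateway.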

This addressing is an application-layer convention. NET/ROM moves individual sessions between two specific node callsigns; the BBS hierarchy describes where a message should ultimately land, regardless of which links carried it. A message may pass through several BBSes on its way, each opening its own connected-mode session to the next — sometimes a direct AX.25 link, sometimes a NET/ROM circuit over multiple hops — until it finally lands in a mailbox.

Coming up in part 2

Part 2 will walk through a specific VHF station build: the radio, the TNC, and a minimal bpq32.cfg. Part 3 will cover the on-air side — learning routes dynamically from a neighboring NET/ROM node, bringing up the BBS, and forwarding mail to another sysop.

Sharing Claude Code Project Configs Across Claude Squad Sessions

I’ve been using Claude Code, Anthropic’s CLI for AI-assisted coding, to help with a side project. Recently I started running multiple sessions in parallel using claude-squad, which spins up isolated instances via git worktrees. It’s a great workflow, but I ran into a minor annoyance.

The Problem

When using claude-squad, each worktree gets its own .claude/ directory. Claude Code stores per-project permissions in .claude/settings.local.json - this file stays in .gitignore since it contains local preferences rather than project configuration.

The problem: every time claude-squad creates a new worktree, you start with a fresh settings.local.json. Any permissions you’ve granted (like allowing ./gradlew build to run without prompting) need to be re-approved in each session.

$ git worktree list
/Users/youruser/checkouts/myproject b44b5eb [main]
/Users/youruser/.claude-squad/worktrees/feature-branch bd9bc1a [feature-branch]
/Users/youruser/.claude-squad/worktrees/bugfix a1c2d3e [bugfix]

Three worktrees means three separate permission configs. Tedious.

The Solution

Git’s post-checkout hook fires when a new worktree is created. By detecting when the previous HEAD is null (all zeros), we can identify new worktree creation and automatically symlink the settings file back to the main worktree.

Create the hook

Add this script to .git/hooks/post-checkout:

#!/bin/bash

# post-checkout hook: Symlink shared Claude settings for new worktrees
#
# When a new worktree is created via `git worktree add`, this hook
# creates a symlink to the main worktree's .claude/settings.local.json
# so all Claude Code instances share the same permissions config.

PREV_HEAD="$1"
NEW_HEAD="$2"
BRANCH_CHECKOUT="$3"

# Detect new worktree: previous HEAD is null (all zeros)
NULL_REF="0000000000000000000000000000000000000000"

if [ "$PREV_HEAD" = "$NULL_REF" ]; then
    # This is a new worktree or clone
    MAIN_WORKTREE="$HOME/checkouts/myproject"
    SHARED_SETTINGS="$MAIN_WORKTREE/.claude/settings.local.json"
    LOCAL_CLAUDE_DIR=".claude"
    LOCAL_SETTINGS="$LOCAL_CLAUDE_DIR/settings.local.json"

    # Only proceed if we're in a worktree (not the main worktree itself)
    CURRENT_DIR="$(pwd)"
    if [ "$CURRENT_DIR" != "$MAIN_WORKTREE" ] && [ -f "$SHARED_SETTINGS" ]; then
        mkdir -p "$LOCAL_CLAUDE_DIR"

        # Remove existing file if present (not a symlink)
        if [ -f "$LOCAL_SETTINGS" ] && [ ! -L "$LOCAL_SETTINGS" ]; then
            rm "$LOCAL_SETTINGS"
        fi

        # Create symlink if not already present
        if [ ! -L "$LOCAL_SETTINGS" ]; then
            ln -s "$SHARED_SETTINGS" "$LOCAL_SETTINGS"
            echo "Created symlink: $LOCAL_SETTINGS -> $SHARED_SETTINGS"
        fi
    fi
fi

Update MAIN_WORKTREE to point to your primary checkout location.

Make it executable

chmod +x .git/hooks/post-checkout

Fix existing worktrees (optional)

For worktrees that already exist, create the symlink manually:

mkdir -p .claude
ln -sf ~/checkouts/myproject/.claude/settings.local.json .claude/settings.local.json

Verification

Create a new worktree and check for the symlink:

$ git worktree add ../test-worktree main
Created symlink: .claude/settings.local.json -> /Users/youruser/checkouts/myproject/.claude/settings.local.json

$ ls -la ../test-worktree/.claude/settings.local.json
lrwxr-xr-x 1 youruser staff 69 Dec 29 16:57 .claude/settings.local.json -> /Users/youruser/checkouts/myproject/.claude/settings.local.json

$ git worktree remove ../test-worktree

To verify Claude Code is actually reading from the symlink, you can add a test permission to the main settings file and confirm it works in the worktree session without prompting.

Result

All worktrees now automatically share the same Claude Code permissions. Grant a permission once in any session, and it applies everywhere. The hook is idempotent (safe to run multiple times) and non-destructive (won’t overwrite existing non-symlink files without explicit removal).

One less thing to think about when spinning up parallel Claude sessions.

Replacing the batteries in a Cyberpower UPS

When the batteries in my aging Cyberpower CP1500PFCLCD finally gave up the ghost, I was faced with a decision: spend nearly as much as the UPS is worth on a replacement battery pack, or find a cheaper alternative.

The Problem

The CP1500PFCLCD is an old but still functional UPS. While official and aftermarket replacement battery packs are readily available, they cost more than the current value of the unit itself. Official Cyberpower replacement packs run around $50-70, and third-party compatible sets are in a similar price range. For a UPS that could be replaced entirely for not much more, putting that money into new batteries of mediocre capacity seemed like poor economics.

The stock battery configuration uses two 12V 9Ah sealed lead-acid batteries connected in series to provide the 24V system voltage. A quick voltage test confirmed the 24V configuration:

Old battery pack voltage test showing 26.29V on a Fluke multimeter

The meter shows 26.29V on the old pack - close to nominal, but the battery runtime had dropped to a measly 5 minutes under light load.

The Solution

Rather than purchasing a wholly new UPS, I went to the local auto parts store looking for the cheapest 12V pair I could buy. The 12V 18Ah sealed lead-acid batteries I found have double the amp-hour capacity of the stock batteries. By trading in the old batteries, I avoided paying the core fee on the new ones.

New ES17-12 batteries from O'Reilly Auto Parts displayed with old battery cores and Super Start battery box

The catch? The new batteries are physically larger than the stock batteries and won’t fit inside the UPS case. The solution was a cheap external battery box which provides plenty of room for the larger cells.

Installation

The installation process is straightforward if you’re comfortable working with low-voltage DC wiring:

Preparing the External Battery Box

The first step was wiring the new batteries in series to produce the required 24V. As you can see, there’s plenty of extra space in the box for future expansion:

ES17-12 batteries installed in external battery box with initial series wiring showing extra space for future expansion

The batteries are connected in series - positive terminal of the first battery to negative terminal of the second battery. The remaining positive and negative terminals become the output that connects to the UPS.

Routing the Leads

After completing the internal wiring, I routed the output leads through the side port of the battery box and protected them with some nylon wire loom:

Completed battery box wiring showing series connection with blue wire loom protecting the output leads

Testing and Connection

Before connecting to the UPS, I verified the voltage output from the new battery configuration:

Voltage test of new battery configuration showing 25.48V on Fluke multimeter, confirming proper assembly

The meter shows 25.48V - right where it should be for a freshly assembled 24V battery pack. With voltage confirmed and polarity verified, I connected the leads to the UPS and powered it on.

Results

Completed installation with external battery box connected to Cyberpower CP1500PFCLCD UPS via wire loom protected leads, UPS display showing 100% charge

The UPS immediately recognized the new batteries and began charging. The results exceeded expectations:

  • Battery runtime estimate jumped from approximately 5 minutes to approximately 70 minutes
  • The UPS recognized and began charging the new batteries without issue
  • The external battery box has additional room for another pair of batteries if I choose to expand capacity further in the future
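As a sanity check on those numbers, the nameplate pack energy only doubles (24 V system voltage times amp-hours), so the leap from roughly 5 to roughly 70 minutes says more about how far the old cells had degraded than about the new pack:

```shell
# Nameplate pack energy in watt-hours at the 24 V system voltage:
echo $(( 24 * 9 ))    # stock pack: 216 Wh
echo $(( 24 * 18 ))   # new pack:   432 Wh
```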

Tradeoffs

There is one notable drawback to using significantly larger capacity batteries:

  • The UPS takes considerably longer to fully charge the 18Ah batteries compared to the original 9Ah pack. This is expected given the doubled capacity, but it’s worth noting if you experience frequent power outages.

Conclusion

For the cost of the batteries and a battery box, I’ve extended the life of an otherwise functional UPS and more than doubled its runtime. While this approach sacrifices the compact form factor of the original design, it provides excellent value and leaves room for future expansion. The longer charge time is a minor inconvenience compared to the extended runtime during outages.

Installing Windows 10 in Qemu with KVM

These are the steps I arrived at to install a Windows 10 guest on an Ubuntu 18.04 host, using virt-install (installed with the virt-manager package) to create a Qemu VM with KVM acceleration and VirtIO drivers.

Prerequisites

Create a directory to work in and install the tools we will need. The virtio-win ISO image contains the drivers we will need in order to make Windows bootable.

mkdir windows
cd windows/
sudo apt-get install virt-manager
wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.160-1/virtio-win-0.1.160.iso

You will also need to copy your Windows 10 installer ISO into your windows/ directory.

Disk Setup

This creates a blank image that will be attached as a virtual hard drive to the Windows instance. We use qcow2 because it supports some nice extensions above those on the raw format, like thin provisioning of storage.

qemu-img create -f qcow2 windows_10_x64.qcow2 75G
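The thin provisioning is easy to observe: the file starts out nearly empty even though the guest will see the full 75G. A quick check with a scratch image (the filename here is illustrative):

```shell
# A freshly created qcow2 occupies almost no disk space:
qemu-img create -f qcow2 scratch.qcow2 75G
qemu-img info scratch.qcow2   # reports virtual size: 75G
du -h scratch.qcow2           # on-disk size: a few hundred KB
rm scratch.qcow2
```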

Creating the VM

Call virt-install to create the VM.

virt-install \
--name=windows10 \
--ram=2048 \
--cpu=host \
--vcpus=2 \
--os-type=windows \
--os-variant=win10 \
--disk path=windows_10_x64.qcow2,format=qcow2,bus=virtio \
--disk en_windows_10_enterprise_x64_dvd_6851151.iso,device=cdrom,bus=ide \
--disk virtio-win-0.1.160.iso,device=cdrom,bus=ide \
--network network=default,model=virtio \
--graphics vnc,password=trustworthypass,listen=0.0.0.0

Verify the running VMs with the virsh command:

~/windows $ virsh list
Id Name State
----------------------------------------------------
4 windows10 running

Connecting to the VM

Now that the VM is running, you need to connect to it and install Windows. VNC to the host machine on port 5900 using the password you specified in the virt-install command. You did change that password, didn’t you?

On Mac OS you can use open to launch the built-in VNC client.

open vnc://<host>:5900

Windows Install

There will be a few extra steps to install Windows above and beyond a normal install. Because the accelerated VirtIO drivers required to interface with the virtual disk controller are not bundled with Windows, we need to load them into the installer before it will be able to talk to the virtual hard drive.

When asked to do an Upgrade or a Custom install, select Custom Install.
Windows Setup screen showing Custom Install option selected

Select Load Driver to point the installer to your driver file.
Windows Setup disk selection screen with no drives found, highlighting the Load Driver button

Navigate to E:\viostor\w10\amd64
Browse for Folder dialog showing viostor driver folder structure with w10/amd64 directory selected

There should only be one option, the VirtIO SCSI Controller.
Driver selection screen showing Red Hat VirtIO SCSI controller driver

The installer should now see your virtual disk. Hit next to let Windows automatically partition it.
Windows disk selection screen now showing Drive 0 Unallocated Space with 75.0 GB available

Next steps

Once the installer finishes and you get into Windows, you may want to do a few more things:

  • Install the other drivers on the virtio-win ISO (network, etc…)
  • Apply Updates
  • Enable RDP

Connecting the First Alert Zcombo Smoke Alarm to Home Assistant

After some trial and error, I was able to connect my First Alert Zcombo Alarms to Home Assistant via a USB Aeotec Z-Stick.

Ensuring the device is excluded

If the device was previously added to the network, or if you just want to be sure you are starting from scratch, start by excluding the device.

  • Slide the battery tray out on the alarm.
  • In the Home Assistant Z-Wave console, press the Remove Node button.
  • While holding the Test button on the alarm, slide in the battery tray. Continue to hold the Test button for a few seconds, releasing after the first beep.
  • Wait 10-20 seconds for the Alarm to beep a second time. This indicates the exclusion worked.

Joining the network

The procedure to add the device back to the network is very similar.

  • Slide the battery tray out on the alarm.
  • In the Home Assistant Z-Wave console, press the Add Node button.
  • While holding the Test button on the alarm, slide in the battery tray. Continue to hold the Test button for a few seconds, releasing after the first beep.
  • Wait 10-20 seconds for the Alarm to beep a second time. This indicates the network join has succeeded. If you hear three bursts of three tones, this is an error and you should exclude the device and try again.

If you are adding multiple devices, I recommend setting a descriptive name for the device just added in the entity registry before adding more devices. Without setting a custom name right away, you will end up with multiple devices with the same name which will be difficult to sort out later.

Healing the network

The Zcombo devices are somewhat unusual in that they do not seem to support waking up on an interval to report their status. Most battery-operated Z-Wave devices can be configured to wake on an interval, but I have yet to find a way to make that work for the Zcombo. This lack of wake support seems to be corroborated by other people on the internet.

However, if you want to heal the network or manually force the device to wake up and report its status, this is possible. Simply repeat the battery slide/Test button procedure as before.

  • Slide the battery tray out on the alarm.
  • If you want to heal the network, you can press the Heal Network button in the Home Assistant Z-Wave console. If you are not concerned with healing and just want the device to report its status, you do not need to press any Home Assistant buttons.
  • While holding the Test button on the alarm, slide in the battery tray. Continue to hold the Test button for a few seconds, releasing after the first beep.
  • Wait 10-20 seconds for the Alarm to beep a second time. This indicates the device has woken up.

If all has gone well, the device’s status in Home Assistant will change to Initializing (Session). This will persist for 15-30 seconds, then the device will switch to the Sleeping state.

ESP8266 and Micropython

In traditional “cheap electronics” fashion, the unit I’ve purchased is verbosely marketed on Amazon as

HiLetgo 2pcs ESP8266 NodeMCU LUA CP2102 ESP-12E Internet WIFI Development Board Open source Serial Wireless Module Works Great with Arduino IDE/Micropython

In more helpful terms:

  • ESP-12E is the SMT module on the board, which consists of an ESP8266 MCU
  • CP2102 is the USB to UART chip on the board
  • NodeMCU is the firmware that comes on the board, which contains an embedded Lua interpreter

Flashing MicroPython

The tutorials on the MicroPython site list a flashing command similar to the one below. However, those commands did not work for me. A helpful Amazon reviewer pointed out setting --flash_mode dio is required in order to correctly flash this particular board.

Reportedly it is also important to erase the flash before writing a new image. This overwrites all blocks with 0xFF bytes.

esptool.py --port /dev/ttyUSB0 --baud 115200 erase_flash
esptool.py --port /dev/ttyUSB0 --baud 115200 write_flash --flash_size=detect --flash_mode dio 0 esp8266-20171101-v1.9.3.bin

After flashing is complete, open the serial REPL:

screen /dev/ttyUSB0 115200

Or access the REPL through the Python app rshell:

pip3 install rshell
rshell -p /dev/ttyUSB0 -b 115200 repl

Micropython REPL Keyboard Shortcuts

Once in the REPL, there are a number of useful keyboard shortcuts:

  • Ctrl-A on a blank line will enter raw REPL mode. This is like a permanent paste mode, except that characters are not echoed back.
  • Ctrl-B on a blank line goes to normal REPL mode.
  • Ctrl-C cancels any input, or interrupts the currently running code.
  • Ctrl-D on a blank line will do a soft reset.