Sidecar-First: Making the Personal Stack Distributable
The audit post left the personal stack pinned to a single machine reachable over a single mesh. That works for me, but it doesn’t generalize. If I wanted to give this to a friend — or run it on a second device of my own — the current shape isn’t distributable.
This is the recommendation for what distributable looks like.
Sidecar-first: the API is the unit, not the transport
The mistake I want to avoid is welding the API to a specific network. Right now the assumption is “you reach the service over Tailscale.” That’s a transport choice baked into the service. It shouldn’t be.
The fix is to ship the service as a sidecar — a Docker container that exposes its API on a local port and doesn’t care how the client got there. Tailscale, plain LAN, a Tor onion, a reverse SSH tunnel, a Cloudflare Tunnel — all the same to the sidecar. The transport is somebody else’s problem.
That gives a clean layering:
- The sidecar — the actual service, in a container, listening on `127.0.0.1:<port>`.
- The transport — whatever gets a client packet to that port. Pluggable.
- The client — the phone or laptop, which only needs an address and a key.
Once the sidecar is transport-agnostic, distributing the stack stops being a network problem and becomes a packaging problem.
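To make the transport-agnosticism concrete, here is a minimal sketch in Python, standing in for whatever the real containerized service is: a loopback-only HTTP server that authenticates a pre-shared bearer token and never looks at how the packet got there. The handler name, port, and header scheme are illustrative assumptions, not part of any real API.

```python
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

# Pre-shared secret generated at install time; the pairing QR carries it.
API_KEY = secrets.token_urlsafe(32)

class SidecarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The handler never asks how the packet arrived: LAN, Tailscale,
        # Tor, and an SSH tunnel all look identical from here.
        if self.headers.get("Authorization") != f"Bearer {API_KEY}":
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port: int = 8080):
    # Bind to loopback only; getting packets to this port is the
    # transport layer's job, not the service's.
    HTTPServer(("127.0.0.1", port), SidecarHandler).serve_forever()
```

The only coupling the service has to the outside world is the loopback bind and the secret; everything above it is swappable.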
The pairing ritual: a QR code, like every messenger got right
Pairing a new device is where most self-hosted projects fall apart. Typing IP addresses and pasting tokens into mobile apps is hostile. Signal, WhatsApp Web, Tailscale itself — they all converged on the same UX: you scan a QR code on a trusted device.
The pairing payload is small. Two fields:
- The current reachable address. Whatever transport is active — `100.x.y.z:8080` for Tailscale, `192.168.1.20:8080` for the LAN, `abc...onion:80` for Tor.
- A unique secret. A JWT or pre-shared key generated by the sidecar at install time. The sidecar accepts requests authenticated by it; nothing else gets in.
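As a sketch (the field names here are my own, not a spec), the payload is just those two values serialized small enough to fit comfortably in a QR code; rendering the string as an actual QR image is left to whatever library the installer uses:

```python
import json
import secrets

def make_pairing_payload(address: str) -> str:
    """Build the pairing payload: the currently reachable address plus a
    fresh pre-shared secret. The sidecar stores the same secret and uses
    it to authenticate every request from the paired device."""
    return json.dumps({
        "address": address,                   # whatever transport is active
        "secret": secrets.token_urlsafe(32),  # rotate this to revoke access
    })

payload = make_pairing_payload("192.168.1.20:8080")
```

Revocation falls out of the design: generate a new secret on the sidecar, and every device that should keep access re-scans.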
The phone scans, stores both, and is paired. Re-pairing is the same ritual. Revoking is rotating the secret on the sidecar and re-scanning on the devices that should still have access.
This is the part that makes the stack give-able. A friend installs the sidecar on their Mac mini, runs one command, scans a QR with their phone. They are now running their own copy. No account, no cloud, no server-side state I have to operate.
The bridge binary: opinionated defaults, escape hatches
The transport layer needs an opinionated default or nobody finishes the install. The default I’d ship is Tailscale, because it works on every platform, the auth flow is clean, and the free tier covers any single-person mesh.
But Tailscale is a default, not a requirement. The packaging that owns this choice is a small CLI — call it soul-bridge — whose only job is “set up the transport for this sidecar.”
soul-bridge init # interactive, picks Tailscale by default
soul-bridge init --headscale # point at a self-hosted Headscale coordinator
soul-bridge init --nebula # use Slack's Nebula instead
soul-bridge init --lan-only # no overlay, just the local network
soul-bridge status # which transports are currently up
soul-bridge address # print the current best address (for QR rendering)
The bridge owns the messy parts: bringing up the tunnel, registering the node, writing the right address into the QR payload, telling the sidecar what its currently reachable address is.
Two-tier shape: easy mode is Tailscale, pro mode is Headscale or Nebula. Same sidecar underneath either. Same pairing UX on the phone. The transport is a swappable lower layer.
The hub idea: try local first, fall back gracefully
The point of all this is that when I walk out the door in the morning, the app on my phone should just work. No mode switching, no “you’re on Wi-Fi now, please reconnect.”
The client picks the best available transport on a fallback chain:
- Local Wi-Fi first. If the sidecar is reachable on the LAN, use it. Lowest latency, no overlay overhead, traffic never leaves the house.
- Tailscale or your chosen overlay second. Off the LAN, the mesh takes over. Same API, same key, different route.
- A self-hosted relay third, if one exists. If the home connection is down or the device is on a network that blocks WireGuard, an optional relay (running on a small VPS, owned by you) can broker the connection. This is the only tier that adds a third-party-shaped component, and even then the third party is you.
The client doesn’t need to know which tier it’s on. It tries them in order, picks the first that responds, caches the choice, and re-evaluates when the network changes. The user-facing experience is “the app works”; the engineering underneath is a transport selector.
Why this shape
The constraint I keep coming back to: I don’t want to be in the operations business for anyone else’s copy of this stack. If a friend installs it, their stack should not depend on mine being up.
Sidecar-first gives that. The container is theirs. The pairing secret is theirs. The transport choice is theirs. The relay, if they want one, is theirs. The only thing I ship is the package and the bridge binary, both stateless.
It also gives me a clean migration path for my own setup. The sidecar I run today on the Mac mini is the same image a future me runs on a colocated box, or on a Pi at my parents’ house, or on a tiny VPS for the relay tier. The packaging is the contract; everything else is a deployment detail.
That’s the recommendation: transport-agnostic sidecar, QR-based pairing, opinionated bridge binary, fallback chain on the client. Build the unit, then let the network be a configuration choice.