audience: operators
# End-to-end deploy example — one TDX host
A worked, copy-pasteable runbook that stands up a complete zipnet
instance on a single TDX-capable host reachable at
`ubuntu@tdx-host`. The topology is the minimum viable deployment:
three committee servers and one reference publisher running as
separate TDX guests, plus the aggregator as one non-TDX process,
all co-located on the same physical host.
Use this recipe for staging, integration, or a demo. For production, split the three committee servers onto three independently-operated TDX hosts — the steps per host are identical; only the bootstrap wiring changes.
## What you are about to build
```
ubuntu@tdx-host (one physical TDX server)
┌──────────────────────────────────────────────────────────────┐
│  TDX guest #1       TDX guest #2       TDX guest #3          │
│  zipnet-server-1    zipnet-server-2    zipnet-server-3       │
│         │                  │                  │              │
│         └── Raft / mosaik group (committee) ──┘              │
│                            │                                 │
│              ┌─────────────▼──────────────┐                  │
│              │ zipnet-aggregator (no TDX) │                  │
│              └─────────────┬──────────────┘                  │
│                            │                                 │
│               ┌────────────▼────────────┐                    │
│               │      TDX guest #4       │                    │
│               │  zipnet-client (demo)   │                    │
│               └─────────────────────────┘                    │
└──────────────────────────────────────────────────────────────┘
```
The instance name used throughout is `demo.tdx`. Swap it for your
own namespaced name before running anything in production
(`<org>.<purpose>.<env>`; see
Quickstart — naming the instance).
## Prerequisites
On your workstation:
- A checkout of this repo.
- Rust 1.93 (`rustup show` confirms `rust-toolchain.toml`).
- SSH access to the host: `ssh ubuntu@tdx-host` returns a shell.
- `scp` and `rsync` available locally.
On `ubuntu@tdx-host`:
- Bare-metal or cloud host with Intel TDX enabled in BIOS and a TDX kernel installed. `ls /dev/tdx_guest` exists on the host and the kernel module `kvm_intel` is loaded with `tdx=Y`. If you are unsure, run `dmesg | grep -i tdx`.
- `qemu-system-x86_64` at a version the mosaik launcher supports (8.2+). The launcher script will tell you if the local QEMU is too old.
- A user that can access `/dev/kvm` and `/dev/tdx_guest` without root. On Ubuntu, add `ubuntu` to the `kvm` and `tdx` groups.
- `tmux` (used below to keep each role’s logs visible). Any process supervisor works — systemd user units, `screen`, `nohup`. The commands that follow use `tmux` because it is the lowest-ceremony option.
- Outbound UDP to the internet for iroh / QUIC and mosaik relays. No inbound ports need to be opened — mosaik’s hole-punching layer handles reachability.
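The QEMU version floor is worth checking before the first launch rather than letting the launcher fail mid-boot. A minimal sketch — `qemu_version_ok` is a helper invented for this runbook, not part of the mosaik tooling:

```sh
# Check a "major.minor[.patch]" QEMU version string against the 8.2 floor.
qemu_version_ok() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -gt 8 ] || { [ "$major" -eq 8 ] && [ "$minor" -ge 2 ]; }
}

# On the host you would feed it the real version, e.g.:
#   qemu_version_ok "$(qemu-system-x86_64 --version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)"
qemu_version_ok "8.2.1" && echo "8.2.1 ok"
qemu_version_ok "7.2.0" || echo "7.2.0 too old"
```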
Two small decisions fixed for this example:
| Knob | Value used here | Why |
|---|---|---|
| `ZIPNET_INSTANCE` | `demo.tdx` | Short, obvious, collision-unlikely. Rename freely. |
| `ZIPNET_COMMITTEE_SECRET` | `openssl rand -hex 32` once, pasted into the env for all three servers | Shared admission secret for the committee. Clients and the aggregator must not see this value. |
| `ZIPNET_MIN_PARTICIPANTS` | `1` | So the single demo client triggers rounds. Raise to `>=2` for real anonymity. |
| `ZIPNET_ROUND_PERIOD` | `3s` | Enough headroom on a shared host to see logs land in order. |
## Step 1 — Build the TDX artifacts on your workstation
From the repo root, build everything release-mode. The `build.rs`
scripts in `zipnet-server` and `zipnet-client` invoke the mosaik
TDX builder and drop launchable artifacts under
`target/release/tdx-artifacts/`.

```sh
cargo build --release
```
When this finishes you have:
```
target/release/
  zipnet-aggregator                  # plain binary; runs on any host
  tdx-artifacts/
    zipnet-server/ubuntu/
      zipnet-server-run-qemu.sh      # self-extracting launcher
      zipnet-server-mrtd.hex         # 48-byte committee measurement
      zipnet-server-vmlinuz
      zipnet-server-initramfs.cpio.gz
      zipnet-server-ovmf.fd
    zipnet-client/alpine/
      zipnet-client-run-qemu.sh
      zipnet-client-mrtd.hex         # 48-byte client measurement
      zipnet-client-vmlinuz
      zipnet-client-initramfs.cpio.gz
      zipnet-client-ovmf.fd
```
Record both `mrtd.hex` values — these are the MR_TDs you will
publish to readers alongside the instance name.
```sh
SERVER_MRTD=$(cat target/release/tdx-artifacts/zipnet-server/ubuntu/zipnet-server-mrtd.hex)
CLIENT_MRTD=$(cat target/release/tdx-artifacts/zipnet-client/alpine/zipnet-client-mrtd.hex)
echo "committee MR_TD: $SERVER_MRTD"
echo "client MR_TD: $CLIENT_MRTD"
```
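Everything downstream hangs off these two values, so a quick shape check is cheap insurance: a 48-byte measurement serializes to exactly 96 hex characters. A sketch — `mrtd_ok` is a helper invented here, not part of the toolchain:

```sh
# A 48-byte MR_TD serializes to exactly 96 lowercase hex characters.
mrtd_ok() {
  case "$1" in
    *[!0-9a-f]*|"") return 1 ;;   # non-hex character or empty: reject
  esac
  [ "${#1}" -eq 96 ]              # 48 bytes == 96 hex chars
}

# Example with a placeholder value (real values come from the mrtd.hex files):
SAMPLE=$(printf 'ab%.0s' $(seq 48))   # 96 hex chars
mrtd_ok "$SAMPLE" && echo "measurement looks well-formed"
```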
## Step 2 — Copy artifacts to the host
```sh
ssh ubuntu@tdx-host 'mkdir -p ~/zipnet/{server,client,aggregator,logs}'

rsync -avz --delete \
  target/release/tdx-artifacts/zipnet-server/ubuntu/ \
  ubuntu@tdx-host:~/zipnet/server/
rsync -avz --delete \
  target/release/tdx-artifacts/zipnet-client/alpine/ \
  ubuntu@tdx-host:~/zipnet/client/

scp target/release/zipnet-aggregator \
  ubuntu@tdx-host:~/zipnet/aggregator/
```
The launcher scripts are self-extracting — they embed kernel,
initramfs, and OVMF. You do not need to copy the raw `vmlinuz` /
`initramfs` / `ovmf.fd` files unless you plan to repackage.
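A truncated or corrupted copy tends to surface much later as a boot or attestation failure, so it is cheaper to digest-compare the launcher now. A local sketch, with temp files standing in for the artifact and its copy (on the real host you would run `sha256sum` over the remote path via `ssh`):

```sh
# Digest-compare two copies of a file; temp files stand in for the local
# launcher and the copy that landed on the host.
src=$(mktemp); dst=$(mktemp)
echo "launcher bytes" > "$src"
cp "$src" "$dst"
a=$(sha256sum "$src" | cut -d' ' -f1)
b=$(sha256sum "$dst" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "copy verified"
rm -f "$src" "$dst"
```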
## Step 3 — Pick a committee secret
On the TDX host, generate the shared committee secret once and park it in a file you will source into each server’s environment. Anyone with this value can join the committee, so treat it as a root credential.
```sh
ssh ubuntu@tdx-host

# on the host
umask 077
openssl rand -hex 32 > ~/zipnet/committee-secret
chmod 600 ~/zipnet/committee-secret
```
## Step 4 — Start the first committee server and capture its PeerId
The first server has no one to bootstrap against, so it starts
without `ZIPNET_BOOTSTRAP`. Its startup line prints
`peer=<hex>…` — capture that and reuse it as the bootstrap hint for
every following process.
Open a tmux session on the host and start server 1:
```sh
# on the host
tmux new-session -d -s zipnet-s1 -n server-1
tmux send-keys -t zipnet-s1:server-1 "
  ZIPNET_INSTANCE=demo.tdx \
  ZIPNET_COMMITTEE_SECRET=\$(cat ~/zipnet/committee-secret) \
  ZIPNET_SECRET=server-1-seed \
  ZIPNET_MIN_PARTICIPANTS=1 \
  ZIPNET_ROUND_PERIOD=3s \
  ZIPNET_ROUND_DEADLINE=15s \
  RUST_LOG=info,zipnet_node=info \
  ~/zipnet/server/zipnet-server-run-qemu.sh 2>&1 | tee ~/zipnet/logs/server-1.log
" C-m
```
Wait five or ten seconds for the TDX guest to come up, then pull the PeerId out of the log:
```sh
# on the host
BOOTSTRAP=$(grep -oE 'peer=[0-9a-f]{10,}' ~/zipnet/logs/server-1.log | head -1 | cut -d= -f2)
echo "bootstrap peer: $BOOTSTRAP"
```
If `$BOOTSTRAP` is empty, the guest has not finished booting — the
first round of QEMU + TDX can take 30 s on a cold host. Re-run the
grep after a beat.
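If you script the bring-up, the grep-and-cut is worth wrapping in a tiny function. A sketch, demonstrated on a sample announce line (the peer value below is invented):

```sh
# extract_peer: pull the hex PeerId out of a "zipnet up" announce line.
extract_peer() {
  grep -oE 'peer=[0-9a-f]{10,}' | head -n 1 | cut -d= -f2
}

# Sample line shaped like the server banner (values invented):
line='INFO zipnet up: network=demo instance=demo.tdx peer=9f3c0a77d2e14b66aa10'
echo "$line" | extract_peer   # → 9f3c0a77d2e14b66aa10
```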
**What if I don’t see the `peer=` line?** The self-extracting launcher prints its own boot banner first. The zipnet line (`zipnet up: network=<universe> instance=demo.tdx peer=...`) only appears once the binary inside the guest has announced. If it is still missing after a minute, `less ~/zipnet/logs/server-1.log` and look for QEMU-level errors — typically TDX not enabled, or `/dev/kvm` permissions.
## Step 5 — Start the remaining two committee servers
Each server gets a distinct `ZIPNET_SECRET` (so each derives a
unique PeerId) and bootstraps against server 1.
```sh
# on the host — still inside your SSH session
tmux new-session -d -s zipnet-s2 -n server-2
tmux send-keys -t zipnet-s2:server-2 "
  ZIPNET_INSTANCE=demo.tdx \
  ZIPNET_COMMITTEE_SECRET=\$(cat ~/zipnet/committee-secret) \
  ZIPNET_SECRET=server-2-seed \
  ZIPNET_BOOTSTRAP=$BOOTSTRAP \
  ZIPNET_MIN_PARTICIPANTS=1 \
  ZIPNET_ROUND_PERIOD=3s \
  ZIPNET_ROUND_DEADLINE=15s \
  RUST_LOG=info,zipnet_node=info \
  ~/zipnet/server/zipnet-server-run-qemu.sh 2>&1 | tee ~/zipnet/logs/server-2.log
" C-m

tmux new-session -d -s zipnet-s3 -n server-3
tmux send-keys -t zipnet-s3:server-3 "
  ZIPNET_INSTANCE=demo.tdx \
  ZIPNET_COMMITTEE_SECRET=\$(cat ~/zipnet/committee-secret) \
  ZIPNET_SECRET=server-3-seed \
  ZIPNET_BOOTSTRAP=$BOOTSTRAP \
  ZIPNET_MIN_PARTICIPANTS=1 \
  ZIPNET_ROUND_PERIOD=3s \
  ZIPNET_ROUND_DEADLINE=15s \
  RUST_LOG=info,zipnet_node=info \
  ~/zipnet/server/zipnet-server-run-qemu.sh 2>&1 | tee ~/zipnet/logs/server-3.log
" C-m
```
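The five env vars repeated verbatim across the three server commands can be factored into a file that each command sources first. A sketch — `~/zipnet/common.env` would be the natural path on the host; the snippet uses `mktemp` so it runs anywhere:

```sh
# Shared, non-secret settings for all three committee servers.
ENVFILE=$(mktemp)   # stand-in for ~/zipnet/common.env on the host
cat > "$ENVFILE" <<'EOF'
export ZIPNET_INSTANCE=demo.tdx
export ZIPNET_MIN_PARTICIPANTS=1
export ZIPNET_ROUND_PERIOD=3s
export ZIPNET_ROUND_DEADLINE=15s
export RUST_LOG=info,zipnet_node=info
EOF

. "$ENVFILE"
echo "instance: $ZIPNET_INSTANCE"   # → instance: demo.tdx
```

Each server command then only adds its per-node values: `ZIPNET_SECRET`, `ZIPNET_COMMITTEE_SECRET`, and (for servers 2 and 3) `ZIPNET_BOOTSTRAP`.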
Within 15–30 s, one of the three servers should log
`committee: opening round at index I_1`. That one is the current
Raft leader; the other two are followers. Which server wins the
election is not deterministic — do not special-case the first
server as “always the leader”.
Confirm the committee is healthy:
```sh
# on the host
grep -E 'zipnet up|leader|round' ~/zipnet/logs/server-*.log | tail -20
```
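If you want the single-leader property checked mechanically rather than by eyeball, count leadership claims across the server logs — a healthy committee shows exactly one. A sketch over invented sample lines (`leaders` is a local helper; the metric name is the one used in the verification table in Step 8):

```sh
# Count servers claiming Raft leadership; expect exactly 1.
leaders() { grep -c 'mosaik_groups_leader_is_local = 1'; }

printf '%s\n' \
  'server-1 mosaik_groups_leader_is_local = 0' \
  'server-2 mosaik_groups_leader_is_local = 1' \
  'server-3 mosaik_groups_leader_is_local = 0' \
  | leaders   # → 1
```

On the host you would pipe `grep -h leader_is_local ~/zipnet/logs/server-*.log` through it instead of the sample lines.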
## Step 6 — Start the aggregator
The aggregator is the only non-TDX process. It bootstraps against any committee server and must not be given the committee secret.
```sh
# on the host
tmux new-session -d -s zipnet-agg -n aggregator
tmux send-keys -t zipnet-agg:aggregator "
  ZIPNET_INSTANCE=demo.tdx \
  ZIPNET_SECRET=aggregator-seed \
  ZIPNET_BOOTSTRAP=$BOOTSTRAP \
  ZIPNET_FOLD_DEADLINE=2s \
  RUST_LOG=info,zipnet_node=info \
  ~/zipnet/aggregator/zipnet-aggregator 2>&1 | tee ~/zipnet/logs/aggregator.log
" C-m
```
A healthy aggregator settles quickly and logs
`aggregator booting; waiting for collections to come online`
within a few seconds.
## Step 7 — Start the reference client
```sh
# on the host
tmux new-session -d -s zipnet-c1 -n client-1
tmux send-keys -t zipnet-c1:client-1 "
  ZIPNET_INSTANCE=demo.tdx \
  ZIPNET_BOOTSTRAP=$BOOTSTRAP \
  ZIPNET_MESSAGE='hello from ubuntu@tdx-host' \
  ZIPNET_CADENCE=1 \
  RUST_LOG=info,zipnet_node=info \
  ~/zipnet/client/zipnet-client-run-qemu.sh 2>&1 | tee ~/zipnet/logs/client-1.log
" C-m
```
Within one `ZIPNET_ROUND_PERIOD` (3 s here) after the aggregator
bonds, the Raft leader should print:
```
INFO zipnet_node::committee: committee: opening round at index I_1
INFO zipnet_node::roles::server: submitted partial unblind at I_2
INFO zipnet_node::committee: committee: round finalized round=r1 participants=1
```
## Step 8 — Verify end-to-end
From the host, tail all four log streams at once:
```sh
# on the host
tail -F ~/zipnet/logs/server-*.log ~/zipnet/logs/aggregator.log ~/zipnet/logs/client-1.log
```
You are looking for:
| Signal | Where | Meaning |
|---|---|---|
| `zipnet up: network=<universe> instance=demo.tdx` | every role | Universe join and instance binding succeeded. |
| `mosaik_groups_leader_is_local = 1` on exactly one server (Prometheus or log line) | server logs | Committee has a single Raft leader. |
| `aggregator: forwarded aggregate to committee round=rN participants=1` | aggregator | Client envelopes reached the aggregator and were folded. |
| `committee: round finalized round=rN participants=1` | whichever server is leader | End-to-end round closed; broadcast published into the Broadcasts collection. |
Once you see `round finalized` with a non-zero `participants`
count, the topology is working.
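“Working” can also be asserted mechanically: count finalized rounds whose participant count is non-zero. A sketch over invented sample lines in the log format quoted above (`count_finalized` is a local helper):

```sh
# Count "round finalized" lines whose participants field is >= 1.
count_finalized() {
  grep -oE 'round finalized round=r[0-9]+ participants=[0-9]+' \
    | awk -F'participants=' '$2 >= 1 { n++ } END { print n + 0 }'
}

printf '%s\n' \
  'INFO zipnet_node::committee: committee: round finalized round=r1 participants=1' \
  'INFO zipnet_node::committee: committee: round finalized round=r2 participants=0' \
  | count_finalized   # → 1
```

On the host, pipe `cat ~/zipnet/logs/server-*.log` through it instead of the samples; a result of zero after a few round periods means something upstream is stuck.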
## Cleanup
```sh
# on the host
for s in zipnet-s1 zipnet-s2 zipnet-s3 zipnet-agg zipnet-c1; do
  tmux kill-session -t $s 2>/dev/null || true
done
```
Each TDX guest emits a departure announcement over gossip on
SIGTERM, and Raft keeps operating as long as a majority of the
committee remains; `kill-session` sends SIGTERM to the foreground
QEMU process, which in turn signals the guest.

If a guest is wedged, `pkill -f zipnet-server-run-qemu.sh` is safe
— all in-memory state is disposable in v1.
## What to change for a real deployment
This example collapses a three-node committee onto one host to keep the runbook short. To roll the same shape into production:
- Replace `ubuntu@tdx-host` with three separate TDX hosts `ubuntu@tdx-1`, `ubuntu@tdx-2`, `ubuntu@tdx-3` run by three independent operators (or at minimum, with three independent blast radii). Geographic separation is the point.
- Run the aggregator on a fourth, non-TDX but well-connected host. Clients will often use it as a bootstrap; pick something with a stable address.
- Swap `tmux` for systemd unit files — one per role — so crash recovery is automatic. See Running a committee server for the full production env matrix.
- Bump `ZIPNET_MIN_PARTICIPANTS` to at least `2`. A single client produces no anonymity.
- Publish the instance name, universe `NetworkId`, and the two MR_TDs (`$SERVER_MRTD`, `$CLIENT_MRTD`) to your users through release notes or a signed announcement. That is the entire onboarding handoff; see What you need from the operator for the matching reader side.
## See also
- Quickstart — stand up an instance — the conceptual walk-through this page makes concrete.
- Running a committee server — every env var and metric for the server role.
- Running the aggregator — capacity planning and the single-aggregator caveat.
- Running a client — the reference client you ship to publishers.
- Rotations and upgrades — rolling a new MR_TD, rotating the committee secret, retiring an instance.
- Monitoring and alerts — what to watch once the topology above is in production.