Quickstart — stand up an instance
audience: operators
This page walks you from a fresh checkout to a live zipnet instance that external publishers can reach with one line of code. Read Deployment overview first for the architectural background; this page assumes it.
Who runs a zipnet instance
Typical deployments:
- A rollup or app offering an encrypted mempool. The team runs the committee; user wallets publish sealed transactions; the sequencer or builder reads them ordered and opaque-to-sender, and decrypts at block-build time via whatever mechanism they prefer (threshold decryption, TEE unsealing).
- An MEV auction team hosting a permissioned order-flow channel. The team runs the committee; whitelisted searchers publish intents; every connected builder reads the same ordered log.
- A governance coalition running anonymous signalling. The coalition runs the committee; delegated wallets signal anonymously; anyone can tally.
What’s common: you want a bounded participant set — which you authenticate via TEE attestation and a ticket class — to publish messages without any single party (yourself included) being able to link message to sender. You run the committee and the aggregator. Participants bring their own TEE-attested client software, typically from a TDX image you also publish.
One-paragraph mental model
Zipnet runs as one service among many on a shared mosaik universe — a single NetworkId that hosts zipnet alongside other mosaik services (signers, storage, oracles). Your job as an operator is to stand up an instance of zipnet under a name you pick (e.g. acme.mainnet) and keep it running. External agents bind to your instance with Zipnet::bind(&network, "acme.mainnet") — they compile the name in from their side, so there is no registry to publish to and nothing to advertise. Your servers simply need to be reachable.
What you’re running
A minimum instance is:
| Role | Count | Hosted where |
|---|---|---|
| Committee server | 3 or more (odd) | TDX-enabled hosts you operate |
| Aggregator | 1 (v1) | Any host with outbound UDP |
| (optional) Your own publishing clients | any | TDX-enabled if the instance is gated |
All of these join the same shared mosaik universe. The committee and aggregator advertise on the shared peer catalog; external publishers reach them through mosaik’s discovery without any further config from you.
What defines your instance
Your instance is fully identified by three pieces of configuration:
| # | Field | Notes |
|---|---|---|
| 1 | instance name | Short, stable, namespaced string (e.g. acme.mainnet). Folds into the committee GroupId, submit StreamId, and broadcasts StoreId. |
| 2 | universe NetworkId | Almost always zipnet::UNIVERSE. Override only if you run an isolated federation. |
| 3 | ticket class | What publishers must present: TDX MR_TD, JWT issuer, or both. Also folds into GroupId. |
Round parameters (num_slots, slot_bytes, round_period, round_deadline) are configured per-instance via env vars and published at runtime in the LiveRoundCell collection that publishers read. They are immutable for the instance’s lifetime — bumping any of them requires a new instance name.
Items 1 and 3 fold into the instance’s derived IDs. Change either and the instance’s identity changes, meaning publishers compiled against the old values can no longer bond. See Designing coexisting systems on mosaik for the derivation.
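As a toy illustration of why items 1 and 3 are identity-defining, here is a sketch that folds a name and a ticket class into per-role IDs. This is not mosaik's actual derivation (see Designing coexisting systems on mosaik for the real scheme); the hasher, field order, and role labels are invented for the example:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for mosaik's derived IDs. Only the idea matters:
/// the instance name and ticket class both fold into every ID.
fn derive(instance: &str, ticket_class: &str, role: &str) -> u64 {
    let mut h = DefaultHasher::new();
    instance.hash(&mut h);     // item 1: the instance name
    ticket_class.hash(&mut h); // item 3: the ticket class
    role.hash(&mut h);         // which ID: GroupId, StreamId, StoreId
    h.finish()
}

fn main() {
    let group = derive("acme.mainnet", "tdx:MR_TD", "committee-group");
    let stream = derive("acme.mainnet", "tdx:MR_TD", "submit-stream");
    // A publisher compiled against a different name lands on different
    // IDs and can no longer bond:
    assert_ne!(group, derive("acme.testnet", "tdx:MR_TD", "committee-group"));
    assert_ne!(group, stream);
    println!("group={group:x} stream={stream:x}");
}
```

Changing the ticket class has the same effect as renaming: every derived ID moves, so treat both fields as frozen once publishers ship.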
Minimal smoke test
Before you touch hardware, confirm the pipeline works end-to-end on your laptop. The deterministic check is the integration test that exercises three committee servers + one aggregator + two clients over real mosaik transports in one tokio runtime:
cargo test -p zipnet-node --test e2e one_round_end_to_end
A green run in roughly 10 seconds tells you the crypto, consensus, round lifecycle, and mosaik transport are all healthy in your checkout. If it fails, nothing else on this page is going to work — investigate before touching hardware.
Exercising the binaries directly (optional)
If you want to watch the three role binaries run as separate processes — useful for shaking out systemd units, env vars, or firewall rules — bootstrap them by hand on one host. Localhost discovery over fresh iroh relays is slow, so give the first round up to a minute to land.
# terminal 1 — seed committee server; grab its peer= line from stdout
ZIPNET_INSTANCE="dev.local" \
ZIPNET_COMMITTEE_SECRET="dev-committee-secret" \
ZIPNET_SECRET="seed-1" \
./target/debug/zipnet-server
# terminals 2+3 — remaining committee servers, bootstrapped off #1
ZIPNET_INSTANCE="dev.local" \
ZIPNET_COMMITTEE_SECRET="dev-committee-secret" \
ZIPNET_SECRET="seed-2" \
ZIPNET_BOOTSTRAP=<peer_id_from_terminal_1> \
./target/debug/zipnet-server
# terminal 4 — aggregator
ZIPNET_INSTANCE="dev.local" \
ZIPNET_BOOTSTRAP=<peer_id_from_terminal_1> \
./target/debug/zipnet-aggregator
# terminal 5 — reference publisher
ZIPNET_INSTANCE="dev.local" \
ZIPNET_BOOTSTRAP=<peer_id_from_terminal_1> \
ZIPNET_MESSAGE="hello from the smoke test" \
./target/debug/zipnet-client
A healthy run prints round finalized on the committee servers within a minute and the client’s payload echoes back on the subscriber side. TDX is off in this mode — production instances re-enable it (see below).
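The ZIPNET_* variables above are the entire configuration surface of the binaries. As a sketch of how a process might collect them — the Config struct, field split, and error message are illustrative, not zipnet's real code; only the variable names come from the smoke test:

```rust
use std::env;

/// Illustrative config struct. Variable names match the smoke test;
/// everything else here is invented for the example.
#[derive(Debug)]
struct Config {
    instance: String,                 // ZIPNET_INSTANCE, required
    committee_secret: Option<String>, // ZIPNET_COMMITTEE_SECRET (servers only)
    secret: Option<String>,           // ZIPNET_SECRET, per-server seed
    bootstrap: Option<String>,        // ZIPNET_BOOTSTRAP, seed server's peer id
}

fn load() -> Result<Config, String> {
    Ok(Config {
        instance: env::var("ZIPNET_INSTANCE")
            .map_err(|_| "ZIPNET_INSTANCE is required".to_string())?,
        committee_secret: env::var("ZIPNET_COMMITTEE_SECRET").ok(),
        secret: env::var("ZIPNET_SECRET").ok(),
        bootstrap: env::var("ZIPNET_BOOTSTRAP").ok(),
    })
}

fn main() {
    // Simulate terminal 1: the seed server sets no ZIPNET_BOOTSTRAP.
    env::set_var("ZIPNET_INSTANCE", "dev.local");
    env::set_var("ZIPNET_SECRET", "seed-1");
    let cfg = load().expect("config");
    assert_eq!(cfg.instance, "dev.local");
    assert!(cfg.bootstrap.is_none());
    println!("{cfg:?}");
}
```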
What every server process does for you
When zipnet-server starts it:
- Joins the shared universe network (zipnet::UNIVERSE, or whatever you set ZIPNET_UNIVERSE to).
- Derives every instance-local id from ZIPNET_INSTANCE — committee GroupId, the submit stream, the broadcasts collection, the registries.
- Bonds with its peers using the committee secret and TDX measurement.
- Advertises itself on the shared peer catalog via mosaik’s standard /mosaik/announce gossip. Publishers that compile in the same instance name reach the same GroupId and bond automatically.
- Accepts rounds from the aggregator and replicates broadcasts through the committee Raft group.
You do not configure streams, collections, or group ids by hand, and you do not publish an announcement anywhere. The instance name is the only piece of identity you manage; everything else is either derived or taken care of by mosaik.
Building a TDX image (production path)
For production, every committee server and every publishing client runs inside a TDX guest. Mosaik ships the image builder — you do not compose QEMU, OVMF, kernels, and initramfs yourself, and you do not compute MR_TD by hand.
In the committee server crate’s build.rs:
// crates/zipnet-server/build.rs
fn main() {
mosaik::tee::tdx::build::ubuntu()
.with_default_memory_size("4G")
.build();
}
Add to Cargo.toml:
[dependencies]
mosaik = { version = "0.3", features = ["tdx"] }
[build-dependencies]
mosaik = { version = "0.3", features = ["tdx-builder-ubuntu"] }
After cargo build --release you get, in target/release/tdx-artifacts/zipnet-server/ubuntu/:
| Artifact | What it’s for |
|---|---|
| zipnet-server-run-qemu.sh | Self-extracting launcher. This is what you invoke on a TDX host. |
| zipnet-server-mrtd.hex | The 48-byte measurement. Publishers pin against this. |
| zipnet-server-vmlinuz | Raw kernel, in case you repackage. |
| zipnet-server-initramfs.cpio.gz | Raw initramfs. |
| zipnet-server-ovmf.fd | Raw OVMF firmware. |
Mosaik computes MR_TD at build time by parsing the OVMF, the kernel, and the initramfs according to the TDX spec — the same value the TDX hardware will report at runtime. You publish this hex string alongside your instance name; a client whose own image does not measure to the same MR_TD cannot join the instance. See users/handshake-with-operator for the matching client-side flow.
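On the client side, a pin check against zipnet-server-mrtd.hex reduces to decoding 96 hex characters and comparing 48 bytes. The helpers below are our own sketch, not a mosaik API; in a real client the reported value would come out of TDX quote verification:

```rust
/// Decode a hex string into bytes; MR_TD is 48 bytes (96 hex chars).
fn decode_hex(s: &str) -> Option<Vec<u8>> {
    let s = s.trim();
    if s.len() % 2 != 0 {
        return None;
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect()
}

/// Compare the operator-published pin against the measurement the
/// TDX quote reports at runtime.
fn mrtd_matches(published_hex: &str, reported: &[u8]) -> bool {
    match decode_hex(published_hex) {
        Some(pin) => pin.len() == 48 && pin == reported,
        None => false,
    }
}

fn main() {
    // Stand-ins: the pin ships in your release notes, the reported
    // value comes from quote verification.
    let pin = "ab".repeat(48);
    assert!(mrtd_matches(&pin, &vec![0xabu8; 48]));
    assert!(!mrtd_matches(&pin, &vec![0u8; 48]));
    println!("pin ok");
}
```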
The Alpine variant (mosaik::tee::tdx::build::alpine(), feature tdx-builder-alpine) produces a ~5 MB image versus Ubuntu’s ~25 MB, at the cost of musl. Use Alpine for publishers where image size matters; keep Ubuntu for committee servers unless you have a specific reason otherwise.
Instance naming and your users’ handshake
Publishers bond to your instance by knowing three things: the universe NetworkId, the instance name, and (if TDX-gated) the MR_TD of your committee image. That is the complete handoff — no registry, no dynamic lookup, no on-network advertisement.

Publish these via whatever channel suits your users: release notes, a docs page, direct handoff in a setup email. Users bake the instance name (or its derived UniqueId) into their code at compile time.
Instance names share a flat namespace per universe. Two operators picking the same name collide in the committee group and neither works correctly — mosaik has no mechanism to prevent this and no way to tell you it happened. Namespace aggressively: <org>.<purpose>.<env>, for example acme.mixer.mainnet. If in doubt, include an irrevocable random suffix once and forget about it (acme.mixer.mainnet.8f3c1a).
Retiring an instance is just stopping every server under that name. Publishers still trying to bond will see ConnectTimeout; they update their code to the new name and rebuild.
Going live
Once the smoke test passes on staging hardware:
- Build your production TDX images (committee + client). Publish the two mrtd.hex values to whatever channel your users consume (docs site, release notes, signed announcement).
- Stand up three TDX committee servers on geographically separate hosts, with the production ZIPNET_INSTANCE and ZIPNET_COMMITTEE_SECRET.
- Stand up the aggregator on a non-TDX but well-connected host.
- Verify the committee has elected a leader and the aggregator is bonded to the submit stream. Your own aggregator metrics are the easiest check; on the committee side, exactly one server should report mosaik_groups_leader_is_local = 1.
- Hand publishers your instance name, one universe bootstrap PeerId, and (if TDX-gated) your committee MR_TD. That is the entirety of their onboarding.
Running many instances side by side
Operators routinely run several instances — production, a public testnet, internal dev — on the same universe. Each has its own instance name, its own committee, its own MR_TD pin, its own ACL. Hosts can host one or many; the binary multiplexes them:
systemctl start zipnet-server@acme-mainnet
systemctl start zipnet-server@preview.alpha
systemctl start zipnet-server@dev.ops
Each unit sets a different ZIPNET_INSTANCE; they share the universe and the discovery layer, and appear to publishers as three distinct Zipnet::bind targets.
Next reading
- Running a committee server — every environment variable and what it does.
- Running the aggregator — the untrusted-but-load-bearing node.
- Rotations and upgrades — retiring an instance, rebuilding TDX images, rotating committee secrets.
- Monitoring and alerts — the metrics that matter in production.
- Incident response — stuck rounds, split brain, expired MR_TDs.
- Security posture checklist — what committee operators must protect.
- Designing coexisting systems on mosaik — the shared-universe model in full, for operators who want to understand why the instance is the unit of identity.