Quickstart — stand up an instance

audience: operators

This page walks you from a fresh checkout to a live zipnet instance that external publishers can reach with one line of code. Read Deployment overview first for the architectural background; this page assumes it.

Who runs a zipnet instance

Typical deployments:

  • A rollup or app offering an encrypted mempool. The team runs the committee; user wallets publish sealed transactions; the sequencer or builder reads them ordered and opaque-to-sender, and decrypts at block-build time via whatever mechanism they prefer (threshold decryption, TEE unsealing).
  • An MEV auction team hosting a permissioned order-flow channel. The team runs the committee; whitelisted searchers publish intents; every connected builder reads the same ordered log.
  • A governance coalition running anonymous signalling. The coalition runs the committee; delegated wallets signal anonymously; anyone can tally.

What’s common: you want a bounded participant set — which you authenticate via TEE attestation and a ticket class — to publish messages without any single party (yourself included) being able to link message to sender. You run the committee and the aggregator. Participants bring their own TEE-attested client software, typically from a TDX image you also publish.

One-paragraph mental model

Zipnet runs as one service among many on a shared mosaik universe — a single NetworkId that hosts zipnet alongside other mosaik services (signers, storage, oracles). Your job as an operator is to stand up an instance of zipnet under a name you pick (e.g. acme.mainnet) and keep it running. External agents bind to your instance with Zipnet::bind(&network, "acme.mainnet") — they compile the name in from their side, so there is no registry to publish to and nothing to advertise. Your servers simply need to be reachable.
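To make the compile-time nature of the name concrete, here is a stub sketch of the publisher side. The types below are illustrative stand-ins, not the real mosaik/zipnet API: the point is only that `bind` takes a plain string baked into the publisher's binary, with no registry lookup or advertisement involved.

```rust
// Illustrative stubs only -- the real mosaik/zipnet types and signatures differ.
// The point: the instance name is a plain compile-time string on the
// publisher's side; binding involves no registry lookup or advertisement.

struct Network; // stands in for a joined mosaik universe

struct Zipnet {
    instance: String,
}

impl Zipnet {
    fn bind(_network: &Network, instance: &str) -> Zipnet {
        // In the real system this derives the instance's GroupId / StreamId
        // from the name and bonds to the committee; here we just record it.
        Zipnet {
            instance: instance.to_string(),
        }
    }
}

fn main() {
    let network = Network;
    let zipnet = Zipnet::bind(&network, "acme.mainnet");
    println!("bound to {}", zipnet.instance);
}
```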

What you’re running

A minimum instance is:

| Role | Count | Hosted where |
|------|-------|--------------|
| Committee server | 3 or more (odd) | TDX-enabled hosts you operate |
| Aggregator | 1 (v1) | Any host with outbound UDP |
| (optional) Your own publishing clients | any | TDX-enabled if the instance is gated |

All of these join the same shared mosaik universe. The committee and aggregator advertise on the shared peer catalog; external publishers reach them through mosaik’s discovery without any further config from you.

What defines your instance

Your instance is fully identified by three pieces of configuration:

| # | Field | Notes |
|---|-------|-------|
| 1 | instance name | Short, stable, namespaced string (e.g. acme.mainnet). Folds into the committee GroupId, submit StreamId, and broadcasts StoreId. |
| 2 | universe NetworkId | Almost always zipnet::UNIVERSE. Override only if you run an isolated federation. |
| 3 | ticket class | What publishers must present: TDX MR_TD, JWT issuer, or both. Also folds into GroupId. |

Round parameters (num_slots, slot_bytes, round_period, round_deadline) are configured per-instance via env vars and published at runtime in the LiveRoundCell collection that publishers read. They are immutable for the instance’s lifetime — bumping any of them requires a new instance name.
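The round parameters can be pictured as a struct populated once at startup and never reread. The env var names and defaults below are hypothetical placeholders (this page does not name them); only the shape of the config surface is taken from the text.

```rust
use std::env;

// Hypothetical sketch -- these env var names and defaults are placeholders,
// not the real zipnet configuration surface.
#[derive(Debug, PartialEq)]
struct RoundParams {
    num_slots: u32,
    slot_bytes: u32,
    round_period_ms: u64,
    round_deadline_ms: u64,
}

impl RoundParams {
    // Read once at startup. The page notes these are immutable for the
    // instance's lifetime, so nothing should re-read them afterwards.
    fn from_lookup(get: impl Fn(&str) -> Option<String>) -> RoundParams {
        fn parsed<T: std::str::FromStr>(v: Option<String>, default: T) -> T {
            v.and_then(|s| s.parse().ok()).unwrap_or(default)
        }
        RoundParams {
            num_slots: parsed(get("ZIPNET_NUM_SLOTS"), 64),
            slot_bytes: parsed(get("ZIPNET_SLOT_BYTES"), 512),
            round_period_ms: parsed(get("ZIPNET_ROUND_PERIOD_MS"), 2_000),
            round_deadline_ms: parsed(get("ZIPNET_ROUND_DEADLINE_MS"), 1_500),
        }
    }

    fn from_env() -> RoundParams {
        Self::from_lookup(|k| env::var(k).ok())
    }
}

fn main() {
    // Simulate an operator overriding one parameter.
    let params = RoundParams::from_lookup(|k| {
        (k == "ZIPNET_NUM_SLOTS").then(|| "128".to_string())
    });
    assert_eq!(params.num_slots, 128);
    assert_eq!(params.slot_bytes, 512);
    let _live = RoundParams::from_env();
    println!("{params:?}");
}
```

Taking a lookup closure instead of reading the process environment directly keeps the parsing testable; the published LiveRoundCell values would come from whatever this struct was initialized with.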

Items 1 and 3 fold into the instance’s derived IDs. Change either and the instance’s identity changes, meaning publishers compiled against the old values can no longer bond. See Designing coexisting systems on mosaik for the derivation.
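The real derivation lives in mosaik and is not reproduced on this page; as a toy illustration of why changing either field changes the instance's identity, here is a sketch using a std hasher. The actual scheme is certainly different; only the property matters: identity is a pure function of its inputs.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy derivation only: the real GroupId / StreamId scheme is mosaik's, not
// this. The point is that identity is a pure function of the instance name
// and ticket class, so changing either yields a different derived id.
fn derive_id(domain: &str, instance: &str, ticket_class: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (domain, instance, ticket_class).hash(&mut h);
    h.finish()
}

fn main() {
    let a = derive_id("group", "acme.mainnet", "tdx:MR_TD");
    let b = derive_id("group", "acme.mainnet", "jwt:issuer");
    // Same name, different ticket class => different committee GroupId,
    // so publishers compiled against the old class can no longer bond.
    assert_ne!(a, b);
    println!("{a:x} vs {b:x}");
}
```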

Minimal smoke test

Before you touch hardware, confirm the pipeline works end-to-end on your laptop. The deterministic check is the integration test that exercises three committee servers + one aggregator + two clients over real mosaik transports in one tokio runtime:

cargo test -p zipnet-node --test e2e one_round_end_to_end

A green run in roughly 10 seconds tells you the crypto, consensus, round lifecycle, and mosaik transport are all healthy in your checkout. If it fails, nothing else on this page is going to work — investigate before touching hardware.

Exercising the binaries directly (optional)

If you want to watch the three role binaries run as separate processes — useful for shaking out systemd units, env vars, or firewall rules — bootstrap them by hand on one host. Localhost discovery over fresh iroh relays is slow, so give the first round up to a minute to land.

# terminal 1 — seed committee server; grab its peer= line from stdout
ZIPNET_INSTANCE="dev.local" \
ZIPNET_COMMITTEE_SECRET="dev-committee-secret" \
ZIPNET_SECRET="seed-1" \
./target/debug/zipnet-server

# terminals 2+3 — remaining committee servers, bootstrapped off #1
ZIPNET_INSTANCE="dev.local" \
ZIPNET_COMMITTEE_SECRET="dev-committee-secret" \
ZIPNET_SECRET="seed-2" \
ZIPNET_BOOTSTRAP=<peer_id_from_terminal_1> \
./target/debug/zipnet-server

# terminal 4 — aggregator
ZIPNET_INSTANCE="dev.local" \
ZIPNET_BOOTSTRAP=<peer_id_from_terminal_1> \
./target/debug/zipnet-aggregator

# terminal 5 — reference publisher
ZIPNET_INSTANCE="dev.local" \
ZIPNET_BOOTSTRAP=<peer_id_from_terminal_1> \
ZIPNET_MESSAGE="hello from the smoke test" \
./target/debug/zipnet-client

A healthy run prints round finalized on the committee servers within a minute and the client’s payload echoes back on the subscriber side. TDX is off in this mode — production instances re-enable it (see below).

What every server process does for you

When zipnet-server starts it:

  1. Joins the shared universe network (zipnet::UNIVERSE, or whatever you set ZIPNET_UNIVERSE to).
  2. Derives every instance-local id from ZIPNET_INSTANCE — committee GroupId, the submit stream, the broadcasts collection, the registries.
  3. Bonds with its peers using the committee secret and TDX measurement.
  4. Advertises itself on the shared peer catalog via mosaik’s standard /mosaik/announce gossip. Publishers that compile in the same instance name reach the same GroupId and bond automatically.
  5. Accepts rounds from the aggregator and replicates broadcasts through the committee Raft group.

You do not configure streams, collections, or group ids by hand, and you do not publish an announcement anywhere. The instance name is the only piece of identity you manage; everything else is either derived or taken care of by mosaik.

Building a TDX image (production path)

For production, every committee server and every publishing client runs inside a TDX guest. Mosaik ships the image builder — you do not compose QEMU, OVMF, kernels, and initramfs yourself, and you do not compute MR_TD by hand.

In the committee server crate’s build.rs:

// crates/zipnet-server/build.rs
fn main() {
    mosaik::tee::tdx::build::ubuntu()
        .with_default_memory_size("4G")
        .build();
}

Add to Cargo.toml:

[dependencies]
mosaik = { version = "0.3", features = ["tdx"] }

[build-dependencies]
mosaik = { version = "0.3", features = ["tdx-builder-ubuntu"] }

After cargo build --release you get, in target/release/tdx-artifacts/zipnet-server/ubuntu/:

| Artifact | What it's for |
|----------|---------------|
| zipnet-server-run-qemu.sh | Self-extracting launcher. This is what you invoke on a TDX host. |
| zipnet-server-mrtd.hex | The 48-byte measurement. Publishers pin against this. |
| zipnet-server-vmlinuz | Raw kernel, in case you repackage. |
| zipnet-server-initramfs.cpio.gz | Raw initramfs. |
| zipnet-server-ovmf.fd | Raw OVMF firmware. |

Mosaik computes MR_TD at build time by parsing the OVMF, the kernel and the initramfs according to the TDX spec — the same value the TDX hardware will report at runtime. You ship this hex string alongside your announcement; a client whose own image does not measure to the same MR_TD cannot join the instance. See users/handshake-with-operator for the matching client-side flow.
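On the client side, the pin ultimately reduces to comparing the reported 48-byte measurement against the shipped hex string. A minimal sketch of that comparison, assuming nothing about mosaik's actual handshake code:

```rust
// Minimal sketch of a publisher-side MR_TD pin check. The real client flow
// (see users/handshake-with-operator) runs inside the TDX attestation
// handshake; this only shows the shape of the final comparison.
fn mrtd_matches(pinned_hex: &str, reported: &[u8]) -> bool {
    // MR_TD is a 48-byte (SHA-384-sized) measurement, i.e. 96 hex chars.
    if reported.len() != 48 || pinned_hex.len() != 96 {
        return false;
    }
    let reported_hex: String = reported.iter().map(|b| format!("{b:02x}")).collect();
    reported_hex.eq_ignore_ascii_case(pinned_hex)
}

fn main() {
    let measurement = [0xabu8; 48];
    let pinned = "ab".repeat(48); // contents of zipnet-server-mrtd.hex
    assert!(mrtd_matches(&pinned, &measurement));
    assert!(!mrtd_matches(&pinned, &[0u8; 48]));
    println!("pin check ok");
}
```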

The Alpine variant (mosaik::tee::tdx::build::alpine(), feature tdx-builder-alpine) produces a ~5 MB image versus Ubuntu's ~25 MB, at the cost of linking against musl. Use Alpine for publishers where image size matters; keep Ubuntu for committee servers unless you have a specific reason otherwise.

Instance naming and your users’ handshake

Publishers bond to your instance by knowing three things: the universe NetworkId, the instance name, and (if TDX-gated) the MR_TD of your committee image. That is the complete handoff — no registry, no dynamic lookup, no on-network advertisement.

Publish these via whatever channel suits your users: release notes, a docs page, direct handoff in a setup email. Users bake the instance name (or its derived UniqueId) into their code at compile time.

Instance names share a flat namespace per universe. Two operators picking the same name collide in the committee group and neither works correctly — mosaik has no mechanism to prevent this and no way to tell you it happened. Namespace aggressively: <org>.<purpose>.<env>, for example acme.mixer.mainnet. If in doubt, append a random suffix chosen once and never changed (acme.mixer.mainnet.8f3c1a).
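Since mosaik enforces no format, the naming convention above is easy to lint operator-side at deploy time. A sketch (the format rule itself is just this page's suggestion):

```rust
// Deploy-time lint for the <org>.<purpose>.<env> convention suggested above.
// Mosaik itself enforces no format -- collisions simply break both instances --
// so this check is purely operator-side hygiene.
fn looks_namespaced(name: &str) -> bool {
    let parts: Vec<&str> = name.split('.').collect();
    parts.len() >= 3
        && parts.iter().all(|p| {
            !p.is_empty() && p.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
        })
}

fn main() {
    assert!(looks_namespaced("acme.mixer.mainnet"));
    assert!(looks_namespaced("acme.mixer.mainnet.8f3c1a")); // random suffix ok
    assert!(!looks_namespaced("mainnet")); // too easy to collide on
    println!("naming lint ok");
}
```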

Retiring an instance is just stopping every server under that name. Publishers still trying to bond will see ConnectTimeout; they update their code to the new name and rebuild.

Going live

Once the smoke test passes on staging hardware:

  1. Build your production TDX images (committee + client). Publish the two mrtd.hex values to whatever channel your users consume (docs site, release notes, signed announcement).
  2. Stand up three TDX committee servers on geographically separate hosts, with the production ZIPNET_INSTANCE and ZIPNET_COMMITTEE_SECRET.
  3. Stand up the aggregator on a non-TDX but well-connected host.
  4. Verify the committee has elected a leader and the aggregator is bonded to the submit stream. Your own aggregator metrics are the easiest check; on the committee side, exactly one server should report mosaik_groups_leader_is_local = 1.
  5. Hand publishers your instance name, one universe bootstrap PeerId, and (if TDX-gated) your committee MR_TD. That is the entirety of their onboarding.

Running many instances side by side

Operators routinely run several instances — production, a public testnet, internal dev — on the same universe. Each has its own instance name, its own committee, its own MR_TD pin, its own ACL. A host can run one instance or several; each runs as a separate unit of the same binary:

systemctl start zipnet-server@acme-mainnet
systemctl start zipnet-server@preview.alpha
systemctl start zipnet-server@dev.ops

Each unit sets a different ZIPNET_INSTANCE; they share the universe and the discovery layer, and appear to publishers as three distinct Zipnet::bind targets.

Next reading