Deployment overview
audience: operators
A zipnet deployment runs as one service among many on a shared
mosaik universe — a single NetworkId that hosts zipnet alongside
other mosaik services. What you stand up is an instance of zipnet
under a short, namespaced name you pick (e.g. acme.mainnet).
Multiple instances coexist on the same universe concurrently, each
with its own committee, ACL, round parameters, and committee MR_TD.
If you haven’t yet, read the Quickstart — it walks you end-to-end from a fresh checkout to a live instance. This page gives the architectural background the runbooks later in this section refer back to.
The shared universe model
- The universe constant is `zipnet::UNIVERSE = unique_id!("mosaik.universe")`. Override it via `ZIPNET_UNIVERSE` only for an isolated federation; in the common case, leave it alone.
- All your nodes — committee servers, aggregator, clients — join that same universe. Mosaik’s standard peer discovery (`/mosaik/announce` gossip plus the Mainline DHT bootstrap) handles reachability. You don’t configure streams, groups, or IDs by hand.
- The instance is identified by `ZIPNET_INSTANCE` (e.g. `acme.mainnet`). Every sub-ID — the committee `GroupId`, the submit `StreamId`, the broadcasts `StoreId` — is derived from that name, so typos surface as a `ConnectTimeout` rather than a config error.
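The derivation step in the last bullet can be sketched as follows. This is an illustration of the pattern, not zipnet’s actual scheme: the label strings are made up, and Python’s `blake2b` stands in for whatever hash mosaik really uses.

```python
import hashlib

def derive_sub_id(instance: str, label: str) -> bytes:
    """Deterministically derive a 32-byte sub-ID from the instance name.

    Every node computes the same IDs from ZIPNET_INSTANCE alone, which is
    why a typo'd instance name yields IDs nobody is listening on (hence a
    ConnectTimeout) rather than a config error.
    """
    # Hypothetical scheme: hash "<instance>/<label>". blake2b stands in
    # for the real hash function.
    return hashlib.blake2b(f"{instance}/{label}".encode(), digest_size=32).digest()

group_id = derive_sub_id("acme.mainnet", "committee-group")
store_id = derive_sub_id("acme.mainnet", "broadcasts-store")

# Distinct labels, and distinct instance names, never collide:
assert group_id != store_id
assert derive_sub_id("acme.mainnet", "committee-group") != derive_sub_id(
    "acme.testnet", "committee-group"
)
```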
Publishers bond to your instance knowing only three things: the
universe NetworkId, the instance name, and (for TDX-gated
deployments) your committee MR_TD. You hand those out in release
notes or docs; there is no on-network registry to publish to and
nothing to advertise.
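To make the hand-off concrete, here is what a publisher might export before starting their client. Only `ZIPNET_INSTANCE` (and the optional `ZIPNET_UNIVERSE` override) are variable names this page documents; `ZIPNET_COMMITTEE_MRTD` is a hypothetical stand-in for however your client accepts the MR_TD pin.

```shell
# Everything a publisher needs to bond to the instance.
export ZIPNET_INSTANCE=acme.mainnet
# ZIPNET_UNIVERSE stays unset: the default zipnet::UNIVERSE applies.
# HYPOTHETICAL variable name; the MR_TD value comes from the operator's
# release notes or docs.
export ZIPNET_COMMITTEE_MRTD="mr-td-from-release-notes"
```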
Three node roles
A zipnet deployment has three kinds of nodes. You — the operator — will run at least the first two. The third is optional (most publishers are external users running their own clients).
| Role | Count | Trust status | Resource profile |
|---|---|---|---|
| Committee server | 3 or more (odd) | any-trust: at least one must be honest for anonymity; all must be up for liveness in v1 | low CPU, modest RAM, stable identity, low churn |
| Aggregator | 1 (v1) | untrusted for anonymity, trusted for liveness | higher CPU + bandwidth, can churn |
| Publishing client | many | TDX-attested in production; untrusted for liveness | ephemeral; any churn is tolerated |
What every node needs
- Outbound UDP to the internet (iroh / QUIC transport) and to mosaik relays.
- A few MB of RAM; committee servers need more during large-round replay.
- A clock within a few seconds of the rest of the universe (Raft tolerates skew but not arbitrary drift).
- `ZIPNET_INSTANCE=<name>` set to the same instance name on every node in that deployment.
What only committee servers need
- A stable `PeerId` across restarts. Set `ZIPNET_SECRET` to any string — it is hashed with blake3 to derive the node’s long-term iroh identity. Rotating it invalidates every bond.
- Access to the shared committee secret, passed as `ZIPNET_COMMITTEE_SECRET`. This gates admission to the Raft group. Distribute it out of band (vault, secrets manager, k8s secret). Anyone holding it can join the committee — treat it like a root credential.
- In production, a TDX host. Mosaik ships the TDX image builder; you call `mosaik::tee::tdx::build::ubuntu()` from your `build.rs` and get a launch script, initramfs, OVMF, and a precomputed MR_TD at build time. See the Quickstart’s TDX section.
- Durable storage is not required in v1 (state is in memory). A restarted server rejoins and catches up by snapshot.
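The `ZIPNET_SECRET` bullet follows a common derive-identity-from-secret pattern, sketched below. This is not zipnet’s code: `blake2b` stands in for blake3 (which is not in Python’s standard library), and the function name is made up.

```python
import hashlib

def node_identity_seed(zipnet_secret: str) -> bytes:
    """Hash the operator-chosen secret into a fixed 32-byte seed from
    which the node's long-term iroh keypair would be derived."""
    # blake2b as a stand-in for blake3; both give a 32-byte digest here.
    return hashlib.blake2b(zipnet_secret.encode(), digest_size=32).digest()

# The same secret always yields the same identity across restarts...
assert node_identity_seed("s3kr1t") == node_identity_seed("s3kr1t")
# ...and a rotated secret yields a different identity, which is why
# rotation invalidates every existing bond.
assert node_identity_seed("s3kr1t") != node_identity_seed("rotated")
```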
What only aggregators need
- More network bandwidth than committee servers. The aggregator receives every client envelope and emits a single aggregate per round.
- A stable `PeerId` is strongly recommended — clients often use the aggregator as a discovery bootstrap.
- The aggregator does not need the committee secret. It is untrusted for anonymity.
What only clients need
- The universe `NetworkId`, the instance name, and (for TDX-gated instances) your committee MR_TD. That is the whole handshake.
- A TDX host if the instance is TDX-gated. See the Security posture checklist.
How the three talk
clients ── ClientEnvelope stream ─────► aggregator
                                            │
                             AggregateEnvelope stream
                                            │
                                            ▼
                                    committee servers
                                            │
                                  Raft-replicated apply
                                            │
                                            ▼
                       Broadcasts collection (readable by anyone)
Clients and the aggregator are not members of the committee’s Raft group; they observe the final broadcasts through a replicated collection.
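As a toy model of that flow (the data shapes are invented and the combining step is a placeholder; the real aggregation is cryptographic):

```python
from dataclasses import dataclass, field

@dataclass
class ClientEnvelope:
    blob: bytes  # opaque to everyone but the committee

@dataclass
class Aggregator:
    pending: list = field(default_factory=list)

    def submit(self, env: ClientEnvelope) -> None:
        self.pending.append(env)

    def close_round(self) -> bytes:
        """Emit one aggregate per round (placeholder: concatenation)."""
        agg = b"".join(e.blob for e in self.pending)
        self.pending.clear()
        return agg

@dataclass
class Committee:
    broadcasts: list = field(default_factory=list)

    def apply(self, aggregate: bytes) -> None:
        # Stands in for the Raft-replicated apply; the result lands in
        # the publicly readable broadcasts collection.
        self.broadcasts.append(aggregate)

agg, committee = Aggregator(), Committee()
agg.submit(ClientEnvelope(b"a"))
agg.submit(ClientEnvelope(b"b"))
committee.apply(agg.close_round())
assert committee.broadcasts == [b"ab"]
```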
Minimum viable deployment
Three committee servers + one aggregator + a handful of clients is the smallest deployment where anonymity holds meaningfully. Two committee servers will technically run, but any one of them can deanonymize the set — stick to three or more.
  TDX host A             TDX host B             TDX host C
┌─────────────┐        ┌─────────────┐        ┌─────────────┐
│   zipnet-   │        │   zipnet-   │        │   zipnet-   │
│  server #1  │        │  server #2  │        │  server #3  │
└──────┬──────┘        └──────┬──────┘        └──────┬──────┘
       └──────────────────────┼──────────────────────┘
                              │  Raft / mosaik group
                              ▼
                    ┌───────────────────┐
                    │ zipnet-aggregator │  (non-TDX host, well-connected)
                    └─────────┬─────────┘
                              │
                              ▼
                     external publishers
                   (TDX where gated, else
                   operator-trusted hosts)
Each box runs ZIPNET_INSTANCE=acme.mainnet and joins
zipnet::UNIVERSE over iroh; mosaik discovery wires the rest.
Running many instances side by side
Operators routinely run several instances — production, a public testnet, internal dev — on the same universe. Each has its own instance name, committee, MR_TD pin, and ACL. A host can serve one instance or many; run a separate unit per instance:
systemctl start zipnet-server@acme.mainnet
systemctl start zipnet-server@preview.alpha
systemctl start zipnet-server@dev.ops
Each unit sets a different `ZIPNET_INSTANCE`; they share the universe and the discovery layer, and appear to publishers as three distinct `Zipnet::bind` targets.
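A template unit along the following lines is one way to back the `zipnet-server@<instance>` pattern above; the paths, binary name, and unit body are illustrative assumptions, not shipped files.

```ini
# /etc/systemd/system/zipnet-server@.service  (sketch)
[Unit]
Description=zipnet committee server (%i)
After=network-online.target
Wants=network-online.target

[Service]
# %i is whatever followed the @ in `systemctl start zipnet-server@<name>`.
Environment=ZIPNET_INSTANCE=%i
# Keep ZIPNET_SECRET and ZIPNET_COMMITTEE_SECRET out of the unit file.
EnvironmentFile=/etc/zipnet/%i.env
ExecStart=/usr/local/bin/zipnet-server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```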
See also
- Running a committee server
- Running the aggregator
- Running a client
- Rotations and upgrades
- Designing coexisting systems on mosaik — full rationale for the shared-universe model, for operators who want to understand why the instance is the unit of identity.