Architecture
audience: contributors
This chapter is the concrete instantiation of the pattern described in Designing coexisting systems on mosaik for zipnet v1. It maps the paper’s three-part architecture (§2) onto mosaik primitives and identifies which of those primitives form the public surface on the shared universe versus the private plumbing that may live on a derived sub-network.
The reader is assumed to have read the ZIPNet paper, the mosaik book, and design-intro.
Deployment model recap
Zipnet runs as one service among many on the shared mosaik universe,
zipnet::UNIVERSE = unique_id!("mosaik.universe"). A deployment is
a single zipnet instance: one committee, one ACL, one set of round
parameters, one operator. Many instances coexist on the universe.
An instance is identified by a short operator-chosen name
(e.g. acme.mainnet). Every public id in the instance descends from the
instance salt:
INSTANCE   = blake3("zipnet." + instance_name)   // root UniqueId
COMMITTEE  = INSTANCE.derive("committee")        // Group<M> key material
SUBMIT     = INSTANCE.derive("submit")           // ClientToAggregator StreamId
REGISTER   = INSTANCE.derive("register")         // ClientRegistrationStream StreamId
BROADCASTS = INSTANCE.derive("broadcasts")       // Vec<BroadcastRecord> StoreId
LIVE       = INSTANCE.derive("live-round")       // Cell<LiveRound> StoreId
CLIENT_REG = INSTANCE.derive("client-registry")  // Map StoreId
SERVER_REG = INSTANCE.derive("server-registry")  // Map StoreId
Consumers recompute the same derivations from the same name; no on-wire registry is involved. See design-intro — Instance-salt discipline.
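The recompute-don't-register property can be sketched as a self-contained toy. Here UniqueId is a u64 stand-in for the 32-byte digest and std's DefaultHasher replaces blake3, so the helper names and the digests themselves are illustrative only, not the mosaik API:

```rust
// Sketch of the instance-salt discipline. UniqueId, root, and derive are
// hypothetical simplifications; DefaultHasher stands in for blake3.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct UniqueId(u64); // stand-in for a 32-byte blake3 digest

impl UniqueId {
    fn root(instance_name: &str) -> Self {
        Self::hash_of(&format!("zipnet.{instance_name}"))
    }
    fn derive(&self, salt: &str) -> Self {
        Self::hash_of(&format!("{}:{}", self.0, salt))
    }
    fn hash_of(s: &str) -> Self {
        let mut h = DefaultHasher::new(); // deterministic default keys
        s.hash(&mut h);
        UniqueId(h.finish())
    }
}

fn main() {
    // Two independent consumers recompute identical ids from the name alone.
    let a = UniqueId::root("acme.mainnet");
    let b = UniqueId::root("acme.mainnet");
    assert_eq!(a.derive("submit"), b.derive("submit"));
    assert_eq!(a.derive("broadcasts"), b.derive("broadcasts"));
    // Different instance names yield disjoint id trees.
    assert_ne!(a.derive("submit"), UniqueId::root("acme.testnet").derive("submit"));
    println!("derivations agree");
}
```

Because both sides run the same pure function over the same name, no on-wire registry or discovery step is needed.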
Public surface (what lives on UNIVERSE)
The instance’s outward-facing primitives decompose into two functional roles:
- write-side — ClientRegistrationStream + ClientToAggregator. Ticket-gated, consumed by the aggregator. External TEE clients use these to join a round and submit sealed envelopes.
- read-side — LiveRoundCell + Broadcasts + ClientRegistry + ServerRegistry. Read-only ambient round state every external agent needs in order to seal envelopes and interpret finalized rounds.
Integrators bind via the facade:
let network = Arc::new(Network::new(zipnet::UNIVERSE).await?);
let zipnet = Zipnet::bind(&network, "acme.mainnet").await?;
let receipt = zipnet.publish(b"hello").await?;
let mut log = zipnet.subscribe().await?;
The facade hides StreamId / StoreId / GroupId entirely; they
never cross the zipnet crate boundary.
Internal plumbing (optional derived private network)
Everything that is not part of the advertised surface is deployment-
internal. In v1 it all runs on UNIVERSE alongside the public surface;
this is the simplest place to start. A future deployment topology may
move the high-churn channels onto a derived private Network keyed
off INSTANCE.derive("private"):
- AggregateToServers — aggregator → committee fan-out
- any footprint-scheduling gossip
- round-scheduler chatter
The committee Group<CommitteeMachine> itself stays on UNIVERSE
because LiveRoundCell / Broadcasts / the two registries are
backed by it; bridging collections across networks is worse than the
extra catalog noise. See
design-intro — Narrow public surface.
Data flow
shared universe (public surface)
+--------+ ClientToAggregator +-------------+ AggregateToServers +-------------+
| Client | (stream) | Aggregator | (stream) [*] | Committee |
| TEE | --------------------> | role | -------------------> | Group<M> |
+--------+ +-------------+ +-------------+
| | |
| ClientRegistrationStream | |
+----------------------------------->| |
| |
+-------------------+---------------------+--------------+
| |
ClientRegistry (Map<ClientId, ClientBundle>) ServerRegistry (Map<ServerId, ServerBundle>)
| |
+-------------------------+------------------------------+
|
LiveRoundCell (Cell<LiveRound>)
|
Broadcasts (Vec<BroadcastRecord>)
[*] may migrate to a derived private network in a future topology.
All four collections are declare::collection!-declared with intent-
addressed StoreIds. The three streams are declare::stream!-declared
the same way. In v1 every derived id salt is a literal string; a
forthcoming Deployment-shaped convention (see
design-intro §The three conventions)
will replace the literal strings with chained .derive() calls off
INSTANCE.
Pipeline per round
t₀ t₁ t₂ t₃
| | | |
leader: ──── OpenRound ─── committed ─── LiveRoundCell mirrored ─── Broadcasts appended
│ (to followers) (on finalize)
▼
clients: read LiveRoundCell, seal envelope, send on ClientToAggregator
│
┌─────────────────────────────────────┘
▼
aggregator: fold envelopes until fold_deadline, send AggregateEnvelope
│
┌─────────────────────────────────────┘
▼
any committee server: receive, group.execute(SubmitAggregate)
│
▼
every committee server: see committed aggregate, compute its partial,
group.execute(SubmitPartial)
│
▼
state machine: all N_S partials gathered → finalize() → apply() pushes
BroadcastRecord
│
▼
apply-watcher on each server: mirror to LiveRoundCell / Broadcasts
Round latency is dominated by fold_deadline + one Raft commit round
trip per SubmitAggregate and one per SubmitPartial.
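As a back-of-the-envelope check of that claim, with assumed figures (the fold window and commit round trip below are illustrative, not measurements from zipnet):

```rust
fn main() {
    // Assumed, illustrative figures -- not measured zipnet numbers.
    let fold_deadline_ms: u64 = 2_000; // aggregator fold window
    let raft_commit_ms: u64 = 50;      // one Raft commit round trip

    // One commit for SubmitAggregate, one for SubmitPartial.
    let round_latency_ms = fold_deadline_ms + 2 * raft_commit_ms;
    assert_eq!(round_latency_ms, 2_100);
    println!("~{round_latency_ms} ms per round");
}
```

With numbers in this regime the fold window dominates, so tuning fold_deadline is the main latency lever.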
Participant roles
Clients
Implemented in zipnet_node::roles::client. Each client is an
Arc<Network> bound to UNIVERSE, tagged zipnet.client, carrying a
zipnet.bundle.client ticket on its PeerEntry. Event loop:
loop {
    live.when().updated().await;
    let header = live.get();
    if header.round == last { continue; }
    last = header.round;
    if !header.clients.contains(&self.id) {
        retry_registration().await; // pseudocode: re-join the roster, then wait
        continue;
    }
    let bundles = servers.get_all_in(header.servers);
    let sealed = zipnet_core::client::seal(
        self.id, &self.dh, msg, header.round, &bundles, params,
    )?;
    envelopes.send(sealed.envelope).await?;
}
Aggregator
Implemented in zipnet_node::roles::aggregator. ClientRegistry
writer. ClientToAggregator consumer. AggregateToServers producer.
Does not join the committee group.
loop {
live.when().updated().await;
let header = live.get();
let mut fold = RoundFold::new(header.round, params);
let close = tokio::time::sleep(fold_deadline);
tokio::pin!(close); // sleep must be pinned to poll it by &mut in select!
loop {
tokio::select! {
_ = &mut close => break,
Some(env) = envelopes.next() => {
if env.round != header.round
|| !header.clients.contains(&env.client) {
continue;
}
fold.absorb(&env)?;
}
}
}
if let Ok(agg) = fold.finish() {
aggregates.send(agg).await?;
}
}
Committee servers
Implemented in zipnet_node::roles::server. Joins
Group<CommitteeMachine> as a Writer of ServerRegistry,
LiveRoundCell, and Broadcasts; reads ClientRegistry. Single
tokio::select! over three sources:
- group.when().committed().advanced() — drives the apply-watcher.
- AggregateToServers::consumer — feeds inbound aggregates via execute(SubmitAggregate).
- A periodic tick — leader-only round driver that opens new rounds via execute(OpenRound).
Why a dedicated Group<CommitteeMachine> and not just collections
The collections are each backed by their own internal Raft group. In
principle all round orchestration could be pushed into a bespoke
collection. We use a dedicated StateMachine because:
- Round orchestration needs domain transitions (Open → Aggregate → Partials → Finalize). These are hostile to Map / Vec / Cell CAS operations.
- Apply-time validation (e.g. rejecting aggregates that name non-roster clients) reads more clearly in apply(Command) than spread across collection CAS sequences.
- signature() is a clean place to pin wire / parameter version so incompatible nodes never form the same group.
The collections still pull their weight: they are the public-facing state external agents read without joining the committee group.
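The phase-transition argument can be made concrete with a minimal sketch: each command is legal in exactly one phase, which is awkward to encode as Map / Vec / Cell CAS but trivial in an apply function. The Phase and Command names follow this section; the fields and counting logic are hypothetical simplifications of CommitteeMachine, not its real implementation:

```rust
// Toy version of the round orchestration state machine. Illegal commands
// are rejected at apply time instead of via collection CAS sequences.
#[derive(Debug, PartialEq)]
enum Phase {
    Open,
    Aggregate,
    Partials { received: usize, needed: usize },
    Finalized,
}

enum Command {
    OpenRound,
    SubmitAggregate,
    SubmitPartial,
}

fn apply(phase: Phase, cmd: Command, n_servers: usize) -> Result<Phase, &'static str> {
    use {Command::*, Phase::*};
    match (phase, cmd) {
        (Finalized, OpenRound) => Ok(Open),
        (Open, SubmitAggregate) => Ok(Aggregate),
        (Aggregate, SubmitPartial) => Ok(Partials { received: 1, needed: n_servers }),
        (Partials { received, needed }, SubmitPartial) => {
            let received = received + 1;
            // All N_S partials gathered -> finalize.
            if received == needed { Ok(Finalized) } else { Ok(Partials { received, needed }) }
        }
        // Apply-time validation: the command is not valid in this phase.
        _ => Err("command not valid in this phase"),
    }
}

fn main() {
    let mut p = Phase::Open;
    let cmds = [Command::SubmitAggregate, Command::SubmitPartial,
                Command::SubmitPartial, Command::SubmitPartial];
    for cmd in cmds {
        p = apply(p, cmd, 3).unwrap();
    }
    assert_eq!(p, Phase::Finalized); // 3 of 3 partials gathered
    println!("round finalized");
}
```

The real machine also validates payloads (roster membership, round numbers) in the same place, which is exactly what the bullet list above claims is hard to express as collection CAS.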
Identity universe
All IDs are 32-byte blake3 digests, via mosaik’s UniqueId. The
aliases used in v1:
| Alias | Derivation | Scope |
|---|---|---|
| NetworkId | zipnet::UNIVERSE = unique_id!("mosaik.universe") | shared universe |
| INSTANCE | blake3("zipnet." + instance_name) | one per deployment |
| GroupId | mosaik-derived from GroupKey(INSTANCE.derive("committee")) + ConsensusConfig + signature() + validators | one per deployment’s committee |
| StreamId / StoreId | INSTANCE.derive("submit"), INSTANCE.derive("broadcasts"), etc. in the target layout | one per public primitive |
| ClientId | blake3_keyed("zipnet:client:id-v1", dh_pub) | stable across runs iff dh_pub is persisted |
| ServerId | blake3_keyed("zipnet:server:id-v1", dh_pub) | same |
| PeerId | iroh’s ed25519 public key | one per running Network |
ClientId / ServerId are not iroh PeerIds. They’re stable
across restarts iff the X25519 secret is persisted. In v1 (mock TEE
default) every client run generates a fresh identity; in the TDX
path the secret is sealed and ClientId becomes a long-lived
pseudonym.
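A small sketch of that identity rule, with std's DefaultHasher standing in for blake3_keyed (so the digests are illustrative, not wire-compatible; the domain label is the one from the table above):

```rust
// ClientId = keyed hash of the X25519 public key under a domain label.
// DefaultHasher is a deterministic stand-in for blake3_keyed here.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn client_id(dh_pub: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    "zipnet:client:id-v1".hash(&mut h); // domain separation, per the table
    dh_pub.hash(&mut h);
    h.finish()
}

fn main() {
    let persisted = [7u8; 32]; // dummy bytes for a persisted X25519 public key
    // Same persisted key across restarts -> same long-lived pseudonym.
    assert_eq!(client_id(&persisted), client_id(&persisted));
    // A fresh key (the mock-TEE default in v1) -> a fresh identity.
    assert_ne!(client_id(&persisted), client_id(&[8u8; 32]));
    println!("ClientId stable iff key persisted");
}
```

This is why the mock-TEE default yields throwaway identities: nothing persists dh_pub, so every run derives a new ClientId.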
Current-state caveat: ZIPNET_SHARD
The v1 binaries (zipnet-server, zipnet-aggregator,
zipnet-client) still take a ZIPNET_SHARD flag and derive a fresh
NetworkId from unique_id!("zipnet.v1").derive(shard). This
predates the UNIVERSE + instance-salt design and will be retired as
the binaries migrate to Zipnet::bind on UNIVERSE. Treat it as a
pre-migration artifact; new code should not replicate the pattern.
The e2e integration test exercises this path today.
Boundary between zipnet-proto / zipnet-core / zipnet-node
- zipnet-proto — wire types, crypto primitives, XOR. No mosaik types, no async, no I/O. Anything that could be reused by an alternative transport lives here.
- zipnet-core — Algorithm 1/2/3 as pure functions. Depends on proto; no async, no I/O. The pure-DC-net round-trip test lives here.
- zipnet-node — mosaik integration. Owns CommitteeMachine, all declare! items, all role loops. Everything async, everything I/O.
- zipnet — SDK facade. Wraps zipnet-node behind Zipnet::bind(&network, "instance_name"); hides mosaik types from consumers.
See Crate map for the full workspace layout and design-intro — Narrow public surface for the rationale behind the facade boundary.
Cross-references
- Design intro — the generalised pattern this page instantiates.
- Committee state machine — commands, queries, signature() versioning.
- Mosaik integration notes — the specific 0.3.17 footguns this architecture bumps into.
- Threat model — anonymity and integrity claims anchored to the state-machine guarantees above.