
Architecture

audience: contributors

This chapter is the concrete instantiation of the pattern described in Designing coexisting systems on mosaik for zipnet v1. It maps the paper’s three-part architecture (§2) onto mosaik primitives and identifies which of those primitives form the public surface on the shared universe versus the private plumbing that may live on a derived sub-network.

The reader is assumed to have read the ZIPNet paper, the mosaik book, and design-intro.

Deployment model recap

Zipnet runs as one service among many on the shared mosaik universe zipnet::UNIVERSE = unique_id!("mosaik.universe"). A deployment is a single zipnet instance: one committee, one ACL, one set of round parameters, one operator. Many deployments coexist on the universe.

A deployment is identified by a content- and intent-addressed UniqueId derived from a Config (operator-chosen name + shuffle window + 32-byte init salt) and the datum type’s TYPE_TAG / WIRE_SIZE. Every public id in the deployment descends from that root:

  DEPLOYMENT   = blake3("zipnet|" || name || "|type=" || TYPE_TAG ||
                        "|size=" || WIRE_SIZE || "|window=" || window ||
                        "|init=" || init)                 // root UniqueId
  COMMITTEE    = DEPLOYMENT.derive("committee")           // Group<M> key material
  SUBMIT       = DEPLOYMENT.derive("submit")              // ClientToAggregator StreamId
  REGISTER     = DEPLOYMENT.derive("register")            // ClientRegistrationStream StreamId
  BROADCASTS   = DEPLOYMENT.derive("broadcasts")          // Vec<BroadcastRecord> StoreId
  LIVE         = DEPLOYMENT.derive("live-round")          // Cell<LiveRound> StoreId
  CLIENT_REG   = DEPLOYMENT.derive("client-registry")     // Map StoreId
  SERVER_REG   = DEPLOYMENT.derive("server-registry")     // Map StoreId

Consumers recompute the same derivations from the same Config + datum schema; no on-wire registry is involved. See design-intro — The underlying principle.
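The local recomputation can be sketched as a deterministic derivation chain. The sketch below uses std's DefaultHasher and a u64 as a stand-in for 32-byte blake3 UniqueIds, and the constructor signature is hypothetical; only the chaining shape mirrors the layout above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for a 32-byte blake3 UniqueId; a u64 keeps the sketch std-only.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct UniqueId(u64);

impl UniqueId {
    /// Root id from the deployment fingerprint fields (hypothetical signature).
    fn deployment(name: &str, type_tag: &str, wire_size: u32, window: u32, init: &[u8; 32]) -> Self {
        let mut h = DefaultHasher::new();
        "zipnet|".hash(&mut h);
        name.hash(&mut h);
        type_tag.hash(&mut h);
        wire_size.hash(&mut h);
        window.hash(&mut h);
        init.hash(&mut h);
        UniqueId(h.finish())
    }

    /// Child id: hash of the parent id plus a literal salt, as in DEPLOYMENT.derive("submit").
    fn derive(&self, salt: &str) -> Self {
        let mut h = DefaultHasher::new();
        self.0.hash(&mut h);
        salt.hash(&mut h);
        UniqueId(h.finish())
    }
}
```

Two parties holding the same Config and datum schema land on identical ids without ever exchanging them; a different salt or fingerprint field yields a disjoint id tree.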

Public surface (what lives on UNIVERSE)

The instance’s outward-facing primitives decompose into two functional roles:

  • write-side: ClientRegistrationStream + ClientToAggregator. Ticket-gated, consumed by the aggregator. External TEE clients use these to join a round and submit sealed envelopes.
  • read-side: LiveRoundCell + Broadcasts + ClientRegistry + ServerRegistry. Read-only ambient round state every external agent needs in order to seal envelopes and interpret finalized rounds.

Integrators bind via the facade:

const ACME_MAINNET: zipnet::Config = zipnet::Config::new("acme.mainnet")
    .with_window(zipnet::ShuffleWindow::interactive())
    .with_init([0u8; 32]);

let network    = Arc::new(Network::new(zipnet::UNIVERSE).await?);
let tx         = zipnet::Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;
let mut rx     = zipnet::Zipnet::<Note>::receipts(&network, &ACME_MAINNET).await?;
let mut reader = zipnet::Zipnet::<Note>::read(&network, &ACME_MAINNET).await?;

The facade hides StreamId / StoreId / GroupId entirely; they never cross the zipnet crate boundary.

Internal plumbing (optional derived private network)

Everything that is not part of the advertised surface is deployment-internal. In v1 it all runs on UNIVERSE alongside the public surface; this is the simplest place to start. A future deployment topology may move the high-churn channels onto a derived private Network keyed off DEPLOYMENT.derive("private"):

  • AggregateToServers — aggregator → committee fan-out
  • any footprint-scheduling gossip
  • round-scheduler chatter

The committee Group<CommitteeMachine> itself stays on UNIVERSE because LiveRoundCell / Broadcasts / the two registries are backed by it; bridging collections across networks is worse than the extra catalog noise. See design-intro — Narrow public surface.

Data flow

                    shared universe (public surface)
  +--------+  ClientToAggregator   +-------------+  AggregateToServers  +-------------+
  | Client |  (stream)             |  Aggregator |  (stream) [*]        |  Committee  |
  |  TEE   | --------------------> |   role      | -------------------> |  Group<M>   |
  +--------+                       +-------------+                      +-------------+
       |                                    |                                    |
       |  ClientRegistrationStream          |                                    |
       +----------------------------------->|                                    |
                                            |                                    |
                        +-------------------+---------------------+--------------+
                        |                                                        |
                 ClientRegistry (Map<ClientId, ClientBundle>)    ServerRegistry (Map<ServerId, ServerBundle>)
                        |                                                        |
                        +-------------------------+------------------------------+
                                                  |
                                        LiveRoundCell (Cell<LiveRound>)
                                                  |
                                        Broadcasts (Vec<BroadcastRecord>)

  [*] may migrate to a derived private network in a future topology.

All four collections are declare::collection!-declared with intent-addressed StoreIds. The three streams are declare::stream!-declared the same way. In v1 every derived id salt is a literal string; a forthcoming Deployment-shaped convention (see design-intro §The three conventions) will replace the literal strings with chained .derive() calls off DEPLOYMENT.

Pipeline per round

                t₀         t₁               t₂                    t₃
                 |          |                |                     |
  leader: ──── OpenRound ─── committed ─── LiveRoundCell mirrored  ─── Broadcasts appended
                 │          (to followers)                              (on finalize)
                 ▼
clients:    read LiveRoundCell,  seal envelope,  send on ClientToAggregator
                                                       │
                 ┌─────────────────────────────────────┘
                 ▼
aggregator: fold envelopes until fold_deadline,  send AggregateEnvelope
                                                       │
                 ┌─────────────────────────────────────┘
                 ▼
any committee server: receive,  group.execute(SubmitAggregate)
                                                       │
                                                       ▼
every committee server: see committed aggregate,  compute its partial,
                        group.execute(SubmitPartial)
                                                       │
                                                       ▼
state machine: all N_S partials gathered → finalize()  → apply() pushes
                                                           BroadcastRecord
                                                       │
                                                       ▼
apply-watcher on each server: mirror to LiveRoundCell / Broadcasts

Round latency is dominated by fold_deadline + one Raft commit round trip per SubmitAggregate and one per SubmitPartial.
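That latency budget can be written down directly. A minimal sketch, assuming a measured per-commit Raft latency (`raft_commit` is a hypothetical input, not a zipnet API):

```rust
use std::time::Duration;

/// Back-of-envelope round latency: the fold window plus one Raft commit
/// round trip for SubmitAggregate and one for SubmitPartial.
fn estimated_round_latency(fold_deadline: Duration, raft_commit: Duration) -> Duration {
    fold_deadline + 2 * raft_commit
}
```

With a 500 ms fold window and 20 ms commits, a round costs roughly 540 ms; the fold window dominates unless commits are unusually slow.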

Participant roles

Clients

Implemented in zipnet_node::roles::client. Each client is an Arc<Network> bonded to UNIVERSE, tagged zipnet.client, carrying a zipnet.bundle.client ticket on its PeerEntry. Event loop:

loop {
    live.when().updated().await;
    let header = live.get();
    if header.round == last { continue; }
    if !header.clients.contains(&self.id) {
        // not on this round's roster: retry registration, wait for the next round
        continue;
    }
    last = header.round;
    let bundles = servers.get_all_in(header.servers);
    let sealed  = zipnet_core::client::seal(
        self.id, &self.dh, msg, header.round, &bundles, params,
    )?;
    envelopes.send(sealed.envelope).await?;
}

Aggregator

Implemented in zipnet_node::roles::aggregator. ClientRegistry writer. ClientToAggregator consumer. AggregateToServers producer. Does not join the committee group.

loop {
    live.when().updated().await;
    let header = live.get();
    let mut fold = RoundFold::new(header.round, params);
    let close = tokio::time::sleep(fold_deadline);
    tokio::pin!(close); // Sleep is !Unpin; pin it so `&mut close` works in select!
    loop {
        tokio::select! {
            _ = &mut close => break,
            Some(env) = envelopes.next() => {
                if env.round != header.round
                    || !header.clients.contains(&env.client) {
                    continue;
                }
                fold.absorb(&env)?;
            }
        }
    }
    if let Ok(agg) = fold.finish() {
        aggregates.send(agg).await?;
    }
}
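The fold itself is the DC-net combine step. RoundFold's internals are not shown in this chapter; the toy below (hypothetical `XorFold`) captures only the XOR-accumulation shape over fixed-size payloads, omitting the round/client metadata and authentication a real envelope carries.

```rust
/// Toy stand-in for RoundFold: XOR-combines fixed-size envelope payloads,
/// as in a DC-net aggregate.
struct XorFold {
    acc: Vec<u8>,
    count: usize,
}

impl XorFold {
    fn new(wire_size: usize) -> Self {
        XorFold { acc: vec![0u8; wire_size], count: 0 }
    }

    /// Absorb one payload; reject anything that does not match WIRE_SIZE.
    fn absorb(&mut self, payload: &[u8]) -> Result<(), &'static str> {
        if payload.len() != self.acc.len() {
            return Err("payload does not match WIRE_SIZE");
        }
        for (a, b) in self.acc.iter_mut().zip(payload) {
            *a ^= b;
        }
        self.count += 1;
        Ok(())
    }

    /// Yield the aggregate, failing on an empty round.
    fn finish(self) -> Result<Vec<u8>, &'static str> {
        if self.count == 0 { Err("no envelopes absorbed") } else { Ok(self.acc) }
    }
}
```

XOR makes absorption order-independent, which is why the aggregator can fold envelopes as they arrive instead of buffering the whole round.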

Committee servers

Implemented in zipnet_node::roles::server. Joins Group<CommitteeMachine> as a Writer of ServerRegistry, LiveRoundCell, and Broadcasts; reads ClientRegistry. Single tokio::select! over three sources:

  1. group.when().committed().advanced() — drives the apply-watcher.
  2. AggregateToServers::consumer — feeds inbound aggregates via execute(SubmitAggregate).
  3. A periodic tick — leader-only round driver that opens new rounds via execute(OpenRound).

Why a dedicated Group<CommitteeMachine> and not just collections

The collections are each backed by their own internal Raft group. In principle all round orchestration could be pushed into a bespoke collection. We use a dedicated StateMachine because:

  1. Round orchestration needs domain transitions (Open → Aggregate → Partials → Finalize). These are hostile to Map / Vec / Cell CAS operations.
  2. Apply-time validation (e.g. rejecting aggregates that name non-roster clients) reads more clearly in apply(Command) than spread across collection CAS sequences.
  3. signature() is a clean place to pin wire / parameter version so incompatible nodes never form the same group.
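The domain transitions in point 1 can be sketched as an enum whose apply-time checks reject out-of-phase commands. The phase and method names below are hypothetical; CommitteeMachine's actual apply(Command) shape is defined in zipnet-node.

```rust
/// Hypothetical round phases mirroring Open → Aggregate → Partials → Finalize.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Phase {
    Open,
    Aggregate { needed: usize },
    Partials { received: usize, needed: usize },
    Finalized,
}

impl Phase {
    /// SubmitAggregate is only legal while the round is open.
    fn on_submit_aggregate(self, needed: usize) -> Result<Phase, &'static str> {
        match self {
            Phase::Open => Ok(Phase::Aggregate { needed }),
            _ => Err("aggregate outside open phase"),
        }
    }

    /// SubmitPartial accumulates until all N_S partials arrive, then finalizes.
    fn on_submit_partial(self) -> Result<Phase, &'static str> {
        match self {
            Phase::Aggregate { needed } if needed <= 1 => Ok(Phase::Finalized),
            Phase::Aggregate { needed } => Ok(Phase::Partials { received: 1, needed }),
            Phase::Partials { received, needed } if received + 1 >= needed => Ok(Phase::Finalized),
            Phase::Partials { received, needed } => Ok(Phase::Partials { received: received + 1, needed }),
            _ => Err("partial outside aggregate/partials phase"),
        }
    }
}
```

Expressing the same guard rails as CAS sequences over Map / Vec / Cell would scatter each "only legal in phase X" check across several collections; the enum keeps them in one match.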

The collections still pull their weight: they are the public-facing state external agents read without joining the committee group.

Identity universe

All IDs are 32-byte blake3 digests, via mosaik’s UniqueId. The aliases used in v1:

  Alias                Derivation                                                      Scope
  NetworkId            zipnet::UNIVERSE = unique_id!("mosaik.universe")                shared universe
  DEPLOYMENT           blake3("zipnet|" + name + "|type=" + TYPE_TAG +                 one per deployment
                       "|size=" + WIRE_SIZE + "|window=" + window + "|init=" + init)
  GroupId              mosaik-derived from GroupKey(DEPLOYMENT.derive("committee"))    one per deployment’s committee
                       + ConsensusConfig + signature() + validators
  StreamId / StoreId   DEPLOYMENT.derive("submit"), DEPLOYMENT.derive("broadcasts"),   one per public primitive
                       etc. in the target layout
  ClientId             blake3_keyed("zipnet:client:id-v1", dh_pub)                     stable across runs iff dh_pub is persisted
  ServerId             blake3_keyed("zipnet:server:id-v1", dh_pub)                     same
  PeerId               iroh’s ed25519 public key                                       one per running Network

ClientId / ServerId are not iroh PeerIds. They’re stable across restarts iff the X25519 secret is persisted. In v1 (mock TEE default) every client run generates a fresh identity; in the TDX path the secret is sealed and ClientId becomes a long-lived pseudonym.
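The stability property follows directly from the derivation: the id is a pure function of the public key, so persisting the X25519 secret (and thus dh_pub) fixes the id across restarts. A std-only sketch (DefaultHasher and u64 as stand-ins for keyed blake3 and 32 bytes; `client_id` is a hypothetical helper):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for blake3_keyed("zipnet:client:id-v1", dh_pub): a keyed hash of
/// the X25519 public key. Same key in, same id out, across any number of runs.
fn client_id(dh_pub: &[u8; 32]) -> u64 {
    let mut h = DefaultHasher::new();
    "zipnet:client:id-v1".hash(&mut h);
    dh_pub.hash(&mut h);
    h.finish()
}
```

A fresh keypair per run (the mock-TEE default) therefore yields a fresh ClientId every time, while the TDX path's sealed secret turns the same function into a long-lived pseudonym.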

Current-state caveat: ZIPNET_SHARD

The v1 binaries (zipnet-server, zipnet-aggregator, zipnet-client) still take a ZIPNET_SHARD flag and derive a fresh NetworkId from unique_id!("zipnet.v1").derive(shard). This predates the UNIVERSE + deployment-fingerprint design and will be retired as the binaries migrate to the Zipnet::<D>::* constructors on UNIVERSE. Treat it as a pre-migration artifact; new code should not replicate the pattern. The e2e integration test exercises this path today.

Boundary between zipnet-proto / zipnet-core / zipnet-node

  • zipnet-proto — wire types, crypto primitives, XOR. No mosaik types, no async, no I/O. Anything that could be reused by an alternative transport lives here.
  • zipnet-core — Algorithm 1/2/3 as pure functions. Depends on proto; no async, no I/O. The pure-DC-net round-trip test lives here.
  • zipnet-node — mosaik integration. Owns CommitteeMachine, all declare! items, all role loops. Everything async, everything I/O.
  • zipnet — SDK facade. Wraps zipnet-node behind the typed Zipnet::<D>::{submit, receipts, read}(&network, &Config) constructors; hides mosaik types from consumers.

See Crate map for the full workspace layout and design-intro — Narrow public surface for the rationale behind the facade boundary.

Cross-references