Introduction
audience: all
Zipnet is an anonymous broadcast channel for bounded sets of authenticated participants. A group of clients publish messages onto a shared log; nobody — not even the operators of the infrastructure, acting individually — can tell which client authored which message.
This book documents a working prototype of zipnet built as a mosaik-native application. The protocol follows Rosenberg, Shih, Zhao, Wang, Miers, and Zhang (2024), with a small, grep-able set of v1 simplifications tracked in Roadmap to v2.
What zipnet is for
The canonical motivating case is an encrypted mempool: TEE-attested wallets seal transactions and publish them through zipnet; builders read an ordered log of sealed transactions; no party — not even a compromised builder — can link a transaction back to its author until on-chain execution reveals whatever the transaction itself reveals. The encryption layer (threshold decryption, TEE unsealing, plaintext-if-you-want) sits on top; zipnet supplies the anonymous, ordered, sybil-resistant publish channel underneath.
Other deployments in the same shape:
- Permissioned order-flow auctions. Whitelisted searchers publish intents; builders bid without knowing which searcher sent what.
- Anonymous governance signalling. Token-holder wallets cast signals a delegate can tally without learning which wallet sent any given one.
- Private sealed-bid auctions. Bidders publish; outcomes are public; bid-to-bidder linkage is cryptographic.
What zipnet uniquely provides across these:
- Sender anonymity within an attested set. A compromised reader cannot tie a message back to its author unless every committee operator colludes (any-trust).
- Shared ordered view. Every subscriber sees the same log in the same order.
- Sybil resistance. Only TEE-attested clients can publish.
- Censorship resistance at the publish layer. Readers cannot drop messages from specific authors because authorship is unlinkable.
The deployment model in one paragraph
Zipnet runs as one service among many on a shared mosaik universe — a single NetworkId (zipnet::UNIVERSE) that hosts zipnet alongside other mosaik services (signers, storage, oracles). An operator stands up an instance under a short, namespaced string (e.g. acme.mainnet); multiple instances coexist on the same universe, each with its own committee, ACL, and round parameters. Consumers bind to an instance by name with one line of Rust: Zipnet::bind(&network, "acme.mainnet"). There is no on-network registry; the operator publishes the instance name (and, if TDX-gated, the committee MR_TD) via release notes or docs, and consumers compile it in.
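The bind-by-name pattern can be sketched against stub types. Everything here — the `Network` handle, the `Zipnet` struct, and its `bind` signature — is a placeholder standing in for the real SDK facade; only the shape of the call mirrors the text above:

```rust
// Hypothetical stubs standing in for the real mosaik / zipnet SDK types.
struct Network; // stand-in for a handle to the shared universe (zipnet::UNIVERSE)

struct Zipnet {
    instance: String, // e.g. "acme.mainnet", published out-of-band by the operator
}

impl Zipnet {
    /// Bind to a named instance on an existing universe. There is no
    /// on-network registry: the consumer compiles the name in.
    fn bind(_network: &Network, instance: &str) -> Zipnet {
        Zipnet { instance: instance.to_string() }
    }
}

fn main() {
    let network = Network;
    let zipnet = Zipnet::bind(&network, "acme.mainnet");
    assert_eq!(zipnet.instance, "acme.mainnet");
}
```

The point of the stub is the deployment shape, not the API surface: instance selection is a compile-time string, so two consumers binding the same name on the same universe reach the same committee.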
The full rationale is in Designing coexisting systems on mosaik.
Three audiences, three entry points
This book is written for three distinct readers. Every page declares its audience on the first line and respects that audience’s tone. Pick the one that matches you:
- Users — Rust developers building agents that publish into, or read from, a zipnet instance somebody else operates. Start at Quickstart — publish and read.
- Operators — devops staff deploying and maintaining instances. Not expected to read Rust. Start at Deployment overview then Quickstart — stand up an instance.
- Contributors — senior Rust engineers with distsys and crypto background, extending the protocol or the code. Start at Designing coexisting systems on mosaik then Architecture.
See Who this book is for for the tone conventions each audience is held to.
What this prototype is
- A permissioned, any-trust broadcast system: anonymity is preserved as long as at least one committee server is honest; liveness requires every committee server to be honest (in v1).
- Real cryptography — X25519 Diffie–Hellman, HKDF-SHA256, AES-128-CTR pad generation, blake3 falsification tags, ed25519 peer signatures (via iroh).
- Real consensus — the committee runs a modified Raft through mosaik’s Group<CommitteeMachine>.
- Real networking — the aggregator and the committee communicate through mosaik typed streams; discovery is gossip + pkarr + mDNS; transport is iroh / QUIC.
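The any-trust guarantee rests on XOR pads: a client blinds its message slot with one pseudorandom pad per committee server, and the slot only becomes readable once every server has contributed its pad back — so a single honest server suffices for anonymity. A toy, stdlib-only sketch of that reconstruction (in zipnet the pads are AES-128-CTR keystream under X25519/HKDF-derived keys; the tiny deterministic generator here is a stand-in):

```rust
// Toy stand-in for a per-server pad stream; NOT the real keystream,
// which is AES-128-CTR under an X25519 + HKDF-SHA256 derived key.
fn pad(seed: u8, len: usize) -> Vec<u8> {
    (0..len)
        .map(|i| seed.wrapping_mul(31).wrapping_add((i as u8).wrapping_mul(7)))
        .collect()
}

// XOR `other` into `acc` byte by byte.
fn xor_in_place(acc: &mut [u8], other: &[u8]) {
    for (a, b) in acc.iter_mut().zip(other) {
        *a ^= b;
    }
}

fn main() {
    let msg = b"hello zipnet".to_vec();
    let seeds = [3u8, 5, 9]; // one shared secret per committee server (toy)

    // Client side: blind the message with one pad per server.
    let mut blinded = msg.clone();
    for &s in &seeds {
        xor_in_place(&mut blinded, &pad(s, msg.len()));
    }

    // With any strict subset of pads the slot stays masked...
    let mut partial = blinded.clone();
    for &s in &seeds[..2] {
        xor_in_place(&mut partial, &pad(s, msg.len()));
    }
    assert_ne!(partial, msg);

    // ...only when every server contributes does the message reappear.
    let mut recovered = blinded;
    for &s in &seeds {
        xor_in_place(&mut recovered, &pad(s, msg.len()));
    }
    assert_eq!(recovered, msg);
}
```

This is why anonymity holds against all-but-one colluding operators while v1 liveness needs all of them: a missing pad leaves every slot masked.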
What this prototype is not
- A production anonymous broadcast system. Ratcheting, footprint scheduling, cover traffic, multi-tier aggregators, and TDX-only builds are all deferred; each is tracked in Roadmap to v2.
- Byzantine fault tolerant. Mosaik is explicit about this; zipnet inherits the assumption. See Threat model for the precise statement.
Layout of the source tree
crates/
zipnet SDK facade (Zipnet::bind, UNIVERSE, instance_id!)
zipnet-proto wire types, crypto, XOR
zipnet-core Algorithms 1/2/3 as pure functions
zipnet-node mosaik integration
zipnet-client TEE client binary
zipnet-aggregator aggregator binary
zipnet-server committee server binary
book/ this book
See Crate map for the dependency graph and purity boundaries.