Quickstart — publish and read
audience: users
You bring a mosaik::Network; the SDK layers zipnet on top of it as
one service among many on a shared mosaik universe. Every deployment
is a typed Zipnet<D> shuffler identified by a Config you compile
in.
Why you might want this
You’re building something where a bounded, authenticated set of participants needs to publish messages without revealing which participant sent which. The canonical case is an encrypted mempool: TDX-attested wallets seal transactions and publish them through zipnet; builders read an ordered broadcast log of sealed transactions; nobody — not even a compromised builder — can link a transaction to its sender until on-chain execution reveals whatever the transaction itself reveals. The encryption layer (threshold decryption, TEE unsealing, or none) sits on top; zipnet supplies the anonymous, ordered, sybil-resistant publish channel underneath.
Other deployments in the same shape:
- Permissioned order-flow auctions. Whitelisted searchers publish intents; builders bid without knowing which searcher sent what.
- Anonymous governance signalling. Token-holder wallets cast signals a delegate can tally without learning which wallet sent any given one.
- Private sealed-bid auctions. Bidders publish; outcome is public; bid-to-bidder linkage is cryptographic.
What zipnet uniquely provides across these:
- Sender anonymity within an attested set. A compromised reader cannot tie a message back to its author unless every committee operator colludes (any-trust).
- Shared ordered view. Every subscriber sees the same log in the same order. No relay-race asymmetry between readers.
- Sybil resistance. Only TDX-attested clients can publish.
- Censorship resistance at the publish layer. Readers can’t drop messages from specific authors because authorship is unlinkable.
If you’re the operator standing up the deployment rather than using one, read the operator quickstart instead.
The one-paragraph mental model
A mosaik universe is a single shared NetworkId. Many services —
zipnet, multisig signers, secure storage, oracles — live on it
simultaneously. An operator stands up a zipnet deployment by
publishing a Config (instance name + shuffle window + per-deployment
init salt) plus a ShuffleDatum impl describing the message type.
You compile both into your code. Three associated constructors —
Zipnet::<D>::submit, ::receipts, ::read — give you a typed
write-side, an encrypted-receipt watcher, and a typed read-side
against that deployment. The same Arc<Network> handle can bind to
many deployments and to unrelated services on the same universe; one
endpoint, many bindings.
Cargo.toml
[dependencies]
zipnet = "0.1"
mosaik = "=0.3.17"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
anyhow = "1"
zipnet re-exports mosaik::{Tag, UniqueId, unique_id!} so you
rarely reach for mosaik directly in small agents, but you’ll usually
keep mosaik as a direct dep since you’re the one owning the
Network.
Define your datum type
Every shuffler is parameterised by the type of message it shuffles.
Implement ShuffleDatum for that type:
use zipnet::{DecodeError, ShuffleDatum, UniqueId, unique_id};

pub struct Note(pub [u8; 240]);

impl ShuffleDatum for Note {
    const TYPE_TAG: UniqueId = unique_id!("acme.note-v1");
    const WIRE_SIZE: usize = 240;

    fn encode(&self) -> Vec<u8> {
        self.0.to_vec()
    }

    fn decode(bytes: &[u8]) -> Result<Self, DecodeError> {
        <[u8; 240]>::try_from(bytes)
            .map(Self)
            .map_err(|e| DecodeError(e.to_string()))
    }
}
TYPE_TAG is the schema fingerprint — change it whenever the
encoding changes, even if WIRE_SIZE stays the same. WIRE_SIZE is
the exact bytes-on-the-wire size; every value MUST encode to exactly
this length. The SDK rejects mismatched-size encodings.
Constant size is non-negotiable: variable on-wire payload sizes leak
sender identity through traffic analysis. Pad variable application
data to a fixed WIRE_SIZE ceiling at the application layer before
wrapping into your datum type.
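Padding lives entirely in your application code. A minimal sketch, assuming a 2-byte big-endian length prefix inside the fixed 240-byte envelope (the helper names and the prefix scheme are illustrative, not SDK API):

```rust
// Pad variable-length application data into a fixed 240-byte wire
// shape: 2-byte big-endian length prefix, payload, then zero fill.
const NOTE_SIZE: usize = 240;
const MAX_PAYLOAD: usize = NOTE_SIZE - 2;

fn pad_to_note(payload: &[u8]) -> Option<[u8; NOTE_SIZE]> {
    if payload.len() > MAX_PAYLOAD {
        return None; // caller must chunk or reject oversized data
    }
    let mut buf = [0u8; NOTE_SIZE];
    buf[..2].copy_from_slice(&(payload.len() as u16).to_be_bytes());
    buf[2..2 + payload.len()].copy_from_slice(payload);
    Some(buf)
}

fn unpad_note(buf: &[u8; NOTE_SIZE]) -> Option<&[u8]> {
    let len = u16::from_be_bytes([buf[0], buf[1]]) as usize;
    if len > MAX_PAYLOAD {
        return None; // corrupt length prefix
    }
    Some(&buf[2..2 + len])
}
```

Every padded note is exactly 240 bytes on the wire regardless of payload length, so message size leaks nothing about the sender.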
Pin the deployment in a const
Every signature-altering input lives in a Config that the operator
publishes and you compile in:
use zipnet::{Config, ShuffleWindow};

const ACME_MAINNET: Config = Config::new("acme.mainnet")
    .with_window(ShuffleWindow::interactive())
    .with_init([
        // 32 bytes the operator generated once at deploy time and
        // published in their release notes.
        0x7f, 0x3a, 0x9b, 0x1c, /* ... */ 0x00,
    ]);
ShuffleWindow::interactive() is a latency-optimised preset (1s
rounds, up to 64 participants). archival() is the
anonymity-optimised one (30s rounds, up to 1024 participants). For
custom timings, use ShuffleWindow::custom(...).
The whole Config plus your D::TYPE_TAG and D::WIRE_SIZE folds
into the deployment’s on-wire identity. Mismatched configs between
you and the operator give different GroupIds — you silently don’t
bond, and submit / receipts / read return ConnectTimeout.
Publisher
use std::sync::Arc;

use mosaik::Network;
use zipnet::{UNIVERSE, Zipnet};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let network = Arc::new(Network::new(UNIVERSE).await?);
    let tx = Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;
    let id = tx.send(Note([0u8; 240])).await?;
    println!("submitted {id:?} — watch the receipts stream for the outcome");
    Ok(())
}
Submitter::send returns immediately with a SubmissionId. The
actual landing outcome (Landed / Collided / Dropped) arrives later
on the receipts stream, correlated by the same SubmissionId.
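A common consumer-side pattern is a pending map keyed by SubmissionId, so each receipt either resolves the message it covers or hands it back for resubmission. A self-contained sketch; SubmissionId and Outcome are redeclared locally here to mirror the shapes shown under "What you get back":

```rust
use std::collections::HashMap;

// Local stand-ins mirroring the SDK's receipt types so this sketch
// compiles alone; in real code these come from `zipnet`.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SubmissionId(u64);
#[derive(Debug, PartialEq)]
enum Outcome { Landed, Collided, Dropped }

struct Pending {
    in_flight: HashMap<SubmissionId, Vec<u8>>, // id -> original payload
}

impl Pending {
    fn new() -> Self {
        Self { in_flight: HashMap::new() }
    }

    // Record the payload when `send` returns its SubmissionId.
    fn track(&mut self, id: SubmissionId, payload: Vec<u8>) {
        self.in_flight.insert(id, payload);
    }

    // On a receipt: Landed resolves the entry; Collided / Dropped
    // returns the payload so the caller can resubmit it.
    fn settle(&mut self, id: SubmissionId, outcome: Outcome) -> Option<Vec<u8>> {
        let payload = self.in_flight.remove(&id)?;
        match outcome {
            Outcome::Landed => None,
            Outcome::Collided | Outcome::Dropped => Some(payload),
        }
    }
}
```

Running the submitter and the receipts watcher in separate tasks with a shared `Pending` (behind a mutex) is the usual arrangement.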
Receipts
Watch the outcome of your own submissions independently of submitting:
use futures::StreamExt;
use zipnet::Zipnet;

let mut rx = Zipnet::<Note>::receipts(&network, &ACME_MAINNET).await?;
while let Some(receipt) = rx.next().await {
    println!(
        "submission {:?} -> round {} slot {} {:?}",
        receipt.submission_id,
        receipt.round,
        receipt.slot,
        receipt.outcome,
    );
}
On the wire, every receipt is ECIES-encrypted to your long-term X25519 pubkey. The stream yields only receipts that decrypt and authenticate against your key — other publishers’ receipts arrive on the same wire stream but are silently filtered. A passive observer sees indistinguishable ciphertexts and learns nothing about who submitted what.
Reader
Subscribe to the shuffled output:
use futures::StreamExt;
use zipnet::Zipnet;

let mut rd = Zipnet::<Note>::read(&network, &ACME_MAINNET).await?;
while let Some(note) = rd.next().await {
    // note: D fully decoded; tag failures, collisions, and decode
    // errors are filtered silently inside the SDK.
    println!("got {} bytes", note.0.len());
}
Reader<D> yields fully decoded values. There is no Round /
Message wrapper to pick apart — round-level metadata is an
implementation detail of the protocol, not a consumer concern.
Sharing one Network across deployments
Because all three constructors only take &Arc<Network>, one network
handle can serve many deployments and many services concurrently:
use std::sync::Arc;

use mosaik::Network;
use zipnet::{UNIVERSE, Zipnet};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let network = Arc::new(Network::new(UNIVERSE).await?);

    // Two zipnet deployments side by side, possibly with different D.
    let prod_tx = Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;
    let testnet_tx = Zipnet::<Note>::submit(&network, &PREVIEW_ALPHA).await?;

    // …and unrelated services on the same network.
    // let multisig = Multisig::<Vote>::sign(&network, &TREASURY).await?;
    // let storage = Storage::<Blob>::write(&network, &ARCHIVE).await?;

    let _ = prod_tx.send(Note([1u8; 240])).await?;
    let _ = testnet_tx.send(Note([2u8; 240])).await?;
    Ok(())
}
Every deployment derives its identifiers disjointly from its own
Config + datum type, so they coexist on the shared peer catalog
without collision. You pay for one mosaik endpoint, one DHT record,
one gossip loop — not one per deployment.
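To see why the identifiers are disjoint by construction, here is a conceptual sketch of domain-separated derivation. The SDK's real GroupId computation is internal and certainly not std's DefaultHasher; the point is only the shape: every signature-altering input feeds the hash, so any difference in Config or datum type yields an unrelated identifier.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Conceptual only: fold every signature-altering input into one id.
// The SDK's actual GroupId derivation is internal, not this hasher.
fn group_id(
    instance: &str,     // Config instance name
    window_ms: u64,     // shuffle window timing
    init: &[u8; 32],    // per-deployment init salt
    type_tag: &str,     // D::TYPE_TAG
    wire_size: usize,   // D::WIRE_SIZE
) -> u64 {
    let mut h = DefaultHasher::new();
    instance.hash(&mut h);
    window_ms.hash(&mut h);
    init.hash(&mut h);
    type_tag.hash(&mut h);
    wire_size.hash(&mut h);
    h.finish()
}
```

Same inputs, same id; any differing input, a different id — which is also why a config mismatch with the operator means you silently derive a group nobody else is in.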
Bring-your-own-config
You keep full control of the mosaik builder; the SDK never constructs
the Network for you:
use std::{net::SocketAddr, sync::Arc};

use mosaik::{Network, discovery};
use zipnet::{UNIVERSE, Zipnet};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let network = Arc::new(
        Network::builder(UNIVERSE)
            .with_mdns_discovery(true)
            .with_discovery(discovery::Config::builder().with_bootstrap(universe_bootstrap_peers()))
            .with_prometheus_addr("127.0.0.1:9100".parse::<SocketAddr>()?)
            .build()
            .await?,
    );

    let tx = Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;
    let _ = tx.send(Note([0u8; 240])).await?;
    Ok(())
}

fn universe_bootstrap_peers() -> Vec<mosaik::PeerId> { vec![] }
Bootstrap peers are universe-level, not zipnet-specific. Any reachable peer on the shared network — a friendly operator’s aggregator, your own relay — works as a starting point. Once you’re bonded, the SDK locates the specific deployment’s committee and aggregator via the shared peer catalog.
What you get back
pub struct SubmissionId(pub u64);

pub struct Receipt {
    pub submission_id: SubmissionId,
    pub round: zipnet::RoundId,
    pub slot: usize,
    pub outcome: Outcome,
}

pub enum Outcome { Landed, Collided, Dropped }
Submitter::send returns SubmissionId; Reader yields your D
directly; Receipts yields Receipt. No wrappers around round- or
slot-level metadata you’d then have to unpack.
Error model
pub enum Error {
    WrongUniverse { expected: mosaik::NetworkId, actual: mosaik::NetworkId },
    ConnectTimeout,
    Attestation(String),
    Shutdown,
    Protocol(String),
}
ConnectTimeout is the one you’ll hit in development — usually a
mismatched Config (different window or init than the operator), an
unreachable bootstrap peer, or an operator whose committee isn’t up
yet. WrongUniverse shows up if your Network was built against a
different universe NetworkId than the SDK expects.
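When wiring retry logic around the constructors, it helps to separate transient failures from mistakes that never self-heal. A sketch with the Error enum mirrored locally so it stands alone (should_retry is an illustrative policy, not an SDK API, and NetworkId is simplified to u64 here):

```rust
// Local mirror of the SDK's error enum, simplified for the sketch.
#[derive(Debug)]
enum Error {
    WrongUniverse { expected: u64, actual: u64 },
    ConnectTimeout,
    Attestation(String),
    Shutdown,
    Protocol(String),
}

// ConnectTimeout can mean the operator's committee isn't up yet, so
// retrying with backoff is reasonable. But it can also mean a
// mismatched Config, which never heals — cap the retries. Everything
// else signals a bug, a bad build, or deliberate shutdown.
fn should_retry(err: &Error) -> bool {
    matches!(err, Error::ConnectTimeout)
}
```

In practice you would pair this with exponential backoff and a hard attempt limit, surfacing a config-mismatch hint once the limit is hit.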
Cover traffic is on by default
An idle Submitter<D> sends a cover envelope each round to widen
the anonymity set. See Publishing messages for how to tune or
disable it.
Shutdown
drop(tx); // close the writer; receipts/reader stay open
drop(rx); // close the receipts watcher
drop(rd); // close the reader
Handles are independent. Dropping one closes that handle; the others
stay live as long as you (or other tasks) hold them. The Network
stays up while any handle holds it.
Next reading
- What you need from the operator — the fact sheet the operator gives you before writing code.
- Publishing messages — fire-and-forget, cover traffic, retry policy.
- Reading the broadcast log — replay, gap detection, filtering.
- Client identity and registration — stable vs ephemeral peer identities.
- TEE-gated deployments — TDX builds, measurement rollouts.
- Designing coexisting systems on mosaik — the shared-universe / typed-shuffler / content+intent addressing model in full.
- API reference — full type list.