Quickstart — publish and read
audience: users
You bring a mosaik::Network; the SDK layers ZIPNet on top of it as
one service among many on a shared mosaik universe. Every deployment
is identified by an instance name. You bind to the one you want
with Zipnet::bind(&network, instance_name).
Why you might want this
You’re building something where a bounded, authenticated set of participants needs to publish messages without revealing which participant sent which. The canonical case is an encrypted mempool: TDX-attested wallets seal transactions and publish them through zipnet; builders read an ordered broadcast log of sealed transactions; nobody — not even a compromised builder — can link a transaction to its sender until on-chain execution reveals whatever the transaction itself reveals. The encryption layer (threshold decryption, TEE unsealing, or none) sits on top; zipnet supplies the anonymous, ordered, sybil-resistant publish channel underneath.
Other deployments in the same shape:
- Permissioned order-flow auctions. Whitelisted searchers publish intents; builders bid without knowing which searcher sent what.
- Anonymous governance signalling. Token-holder wallets cast signals a delegate can tally without learning which wallet sent any given one.
- Private sealed-bid auctions. Bidders publish; outcome is public; bid-to-bidder linkage is cryptographic.
What zipnet uniquely provides across these:
- Sender anonymity within an attested set. A compromised reader cannot tie a message back to its author unless every committee operator colludes (any-trust).
- Shared ordered view. Every subscriber sees the same log in the same order. No relay-race asymmetry between readers.
- Sybil resistance. Only TDX-attested clients can publish.
- Censorship resistance at the publish layer. Readers can’t drop messages from specific authors because authorship is unlinkable.
If you’re the operator standing up the deployment rather than using one, read the operator quickstart instead.
The one-paragraph mental model
A mosaik universe is a single shared NetworkId. Many services — zipnet,
multisig signers, secure storage, oracles — live on it simultaneously.
An operator can run any number of instances of zipnet (“mainnet”,
“preview.alpha”, “acme-corp”) concurrently on the same universe; each
instance has its own committee, its own ACL, its own round parameters,
and its own ticket class. You pick the one you want by name — the
operator tells you which name to use, and your code bakes it in. No
registry lookup, no runtime discovery of “what instances exist”. The
same Arc<Network> handle can also bind to other services without
needing a second network.
Cargo.toml
[dependencies]
zipnet = "0.1"
mosaik = "=0.3.17"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
anyhow = "1"
zipnet re-exports mosaik::{Tag, unique_id!} so you rarely reach
for mosaik directly in small agents, but you’ll usually keep mosaik
as a direct dep since you’re the one owning the Network.
Publisher
use std::sync::Arc;
use mosaik::Network;
use zipnet::{Zipnet, UNIVERSE};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
let zipnet = Zipnet::bind(&network, "mainnet").await?;
let receipt = zipnet.publish(b"hello from my agent").await?;
println!("landed in round {} slot {}", receipt.round, receipt.slot);
Ok(())
}
Three lines inside main:
- Create a mosaik network on the shared universe NetworkId.
- Bind to the mainnet zipnet instance. The SDK resolves the instance salt to concrete stream, collection, and group IDs, installs the client identity, attaches the bundle ticket, and waits until you are in a live round’s roster.
- publish resolves after the broadcast finalizes.
UNIVERSE is the shared NetworkId that hosts the deployment. Zipnet
exports this constant today; when mosaik ships a canonical universe
constant, this value will be re-exported verbatim. See
Designing coexisting systems on mosaik
for the full rationale.
Subscriber
use std::sync::Arc;
use futures::StreamExt;
use mosaik::Network;
use zipnet::{Zipnet, UNIVERSE};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
let zipnet = Zipnet::bind(&network, "mainnet").await?;
let mut rounds = zipnet.subscribe().await?;
while let Some(round) = rounds.next().await {
for msg in round.messages() {
println!("round {}: {:?}", round.id(), msg.bytes());
}
}
Ok(())
}
round.messages() yields only payloads that decoded cleanly —
falsification-tag verification and collision filtering happen inside
the SDK. Reach for round.raw() if you need the BroadcastRecord.
Binding to a testnet, devnet, or tenant instance
Instance names are free-form strings; well-known names are
conventions, not types. An operator running a testnet gives you its
instance name (e.g. preview.alpha) along with the universe-level
bootstrap peers and any required TDX measurement.
use std::sync::Arc;
use mosaik::Network;
use zipnet::{Zipnet, UNIVERSE};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
let zipnet = Zipnet::bind(&network, "preview.alpha").await?;
let _ = zipnet.publish(b"hi from testnet").await?;
Ok(())
}
| Instance name | What operators commonly use it for |
|---|---|
| mainnet | Production deployment, long-lived committee |
| preview.<tag> | Long-lived testnet on a per-tag TDX image |
| dev.<tag> | Per-developer or per-CI-job ephemeral instance |
| anything else | Whatever the operator tells you |
The SDK itself does not dispatch on the name — TDX attestation is
controlled by the tee-tdx Cargo feature on the zipnet crate, not
by the instance name you pick. The table above is naming convention,
not policy.
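Since names are conventions, a common pattern is to let deployment configuration pick the instance at startup. The sketch below reads the ZIPNET_INSTANCE environment variable mentioned later in this page; the "dev.local" fallback name is an illustrative choice, not an SDK default.

```rust
/// Resolve the instance name from an optional environment value.
/// The "dev.local" fallback is a made-up per-developer convention.
fn instance_name(env_value: Option<&str>) -> String {
    env_value.unwrap_or("dev.local").to_string()
}

fn main() {
    // In a real agent you would pass this string to Zipnet::bind.
    let from_env = std::env::var("ZIPNET_INSTANCE").ok();
    println!("binding to instance {}", instance_name(from_env.as_deref()));
}
```

Keeping the lookup in one small function makes it easy to swap the env var for a CLI flag or config file later.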
The instance name is the only piece of zipnet-specific identity the
SDK needs. It fully determines the committee GroupId, the submit
StreamId, the broadcasts StoreId, and the ticket class — all
derived from one salt (see
Designing coexisting systems on mosaik).
A typo in the instance name fails silently: your code derives different
IDs than the operator's, no peer is serving them, and bind returns
ConnectTimeout after the bond window elapses. For production,
consider pinning the instance as a compile-time UniqueId constant
using the instance_id! macro, so a typo
is caught at build time:
use zipnet::{Zipnet, UniqueId, UNIVERSE};
const ACME_MAINNET: UniqueId = zipnet::instance_id!("acme.mainnet");
let zipnet = Zipnet::bind_by_id(&network, ACME_MAINNET).await?;
The instance_id! macro and the runtime instance_id function
produce identical bytes for the same name, so the operator’s
ZIPNET_INSTANCE=acme.mainnet env var and your compile-time constant
land on the same UniqueId.
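The compile-time/runtime agreement property can be illustrated with any pure derivation function. The real derivation inside zipnet is not specified on this page; the FNV-1a hash below is a stand-in that shows the shape: the same const fn evaluated at compile time and at runtime yields the same bytes, and a typo yields different ones.

```rust
/// Illustrative stand-in for the instance-id derivation: FNV-1a over the
/// name. Pure byte arithmetic, so it works in const contexts.
const fn derive_id(name: &str) -> u64 {
    let bytes = name.as_bytes();
    let mut hash: u64 = 0xcbf29ce484222325;
    let mut i = 0;
    while i < bytes.len() {
        hash ^= bytes[i] as u64;
        hash = hash.wrapping_mul(0x100000001b3);
        i += 1;
    }
    hash
}

// Compile-time constant, analogous to zipnet::instance_id!("acme.mainnet")...
const ACME_MAINNET: u64 = derive_id("acme.mainnet");

fn main() {
    // ...and the runtime path, analogous to reading ZIPNET_INSTANCE.
    assert_eq!(ACME_MAINNET, derive_id("acme.mainnet"));
    // A typo derives different bytes, so nothing on the network answers.
    assert_ne!(ACME_MAINNET, derive_id("acme.mainet"));
    println!("derivations agree");
}
```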
Sharing one Network across services and instances
Because Zipnet::bind only takes &Arc<Network>, one network handle
can simultaneously serve many services and many instances of the same
service:
use std::sync::Arc;
use mosaik::Network;
use zipnet::{Zipnet, UNIVERSE};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
// two zipnet instances side by side
let prod = Zipnet::bind(&network, "mainnet").await?;
let testnet = Zipnet::bind(&network, "preview.alpha").await?;
// …and unrelated services on the same network
// let multisig = Multisig::bind(&network, "treasury").await?;
// let storage = Storage::bind(&network, "archive").await?;
let _ = prod.publish(b"production message").await?;
let _ = testnet.publish(b"dry-run message").await?;
Ok(())
}
Every instance and every service derives its own IDs from its own salt, so they coexist on the shared catalog without collision. You pay for one mosaik endpoint, one DHT record, one gossip loop — not one per service.
Bring-your-own-config
You keep full control of the mosaik builder; the SDK never constructs
the Network for you:
use std::{net::SocketAddr, sync::Arc};
use mosaik::{Network, discovery};
use zipnet::{Zipnet, UNIVERSE};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(
Network::builder(UNIVERSE)
.with_mdns_discovery(true)
.with_discovery(discovery::Config::builder().with_bootstrap(universe_bootstrap_peers()))
.with_prometheus_addr("127.0.0.1:9100".parse::<SocketAddr>()?)
.build()
.await?,
);
let zipnet = Zipnet::bind(&network, "mainnet").await?;
let _ = zipnet.publish(b"hi").await?;
Ok(())
}
fn universe_bootstrap_peers() -> Vec<mosaik::PeerId> { vec![] }
Bootstrap peers are universe-level, not zipnet-specific. Any
reachable peer on the shared network — a mosaik registry node, a
friendly operator’s aggregator, your own relay — works as a starting
point. Once you’re bonded, Zipnet::bind locates the specific
instance’s committee and aggregator via the shared peer catalog.
What you get back
pub struct Receipt {
pub round: zipnet::RoundId,
pub slot: usize,
pub outcome: zipnet::Outcome,
}
pub enum Outcome { Landed, Collided, Dropped }
pub struct Round { /* opaque */ }
impl Round {
pub fn id(&self) -> zipnet::RoundId;
pub fn messages(&self) -> impl Iterator<Item = zipnet::Message>;
pub fn raw(&self) -> &zipnet::BroadcastRecord;
}
pub struct Message { /* opaque */ }
impl Message {
pub fn bytes(&self) -> &[u8];
pub fn slot(&self) -> usize;
}
Almost every application uses Receipt::outcome and
Message::bytes() and ignores the rest.
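A typical publisher branches on Receipt::outcome. The sketch below mirrors the Receipt and Outcome shapes declared above so it compiles standalone (in real code they come from the zipnet crate), and the retry-on-Collided policy is illustrative, not something the SDK mandates.

```rust
/// Local mirrors of the zipnet types shown above, for a standalone sketch.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Outcome { Landed, Collided, Dropped }

struct Receipt { round: u64, slot: usize, outcome: Outcome }

/// Illustrative policy: resubmit anything that did not finalize.
fn should_retry(receipt: &Receipt) -> bool {
    match receipt.outcome {
        Outcome::Landed => false,   // broadcast finalized, nothing to do
        Outcome::Collided => true,  // slot clash: resubmit in a later round
        Outcome::Dropped => true,   // never made it into the round
    }
}

fn main() {
    let r = Receipt { round: 42, slot: 3, outcome: Outcome::Landed };
    println!("round {} slot {} retry={}", r.round, r.slot, should_retry(&r));
}
```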
Error model
pub enum Error {
WrongUniverse { expected: mosaik::NetworkId, actual: mosaik::NetworkId },
ConnectTimeout,
Attestation(String),
Shutdown,
Protocol(String),
}
ConnectTimeout is the one you’ll hit in development — usually a
typo in the instance name (you’re deriving a GroupId nobody is
serving), an unreachable bootstrap peer, or an operator whose
committee isn’t up yet. WrongUniverse shows up if your Network
was built against a different universe NetworkId than the SDK
expects.
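An exhaustive match over the error enum keeps the troubleshooting advice above next to the code that needs it. The sketch mirrors the Error variants declared above so it stands alone (real code matches on zipnet::Error directly, and NetworkId replaces the u64 placeholder); the hint strings are illustrative.

```rust
/// Local mirror of the zipnet Error enum shown above; u64 stands in
/// for mosaik::NetworkId so the sketch compiles standalone.
#[derive(Debug)]
enum Error {
    WrongUniverse { expected: u64, actual: u64 },
    ConnectTimeout,
    Attestation(String),
    Shutdown,
    Protocol(String),
}

/// Map each variant to a short operator-facing hint, echoing the
/// troubleshooting notes in the prose above.
fn hint(err: &Error) -> &'static str {
    match err {
        Error::ConnectTimeout => "check the instance name and bootstrap peers",
        Error::WrongUniverse { .. } => "Network built against a different universe NetworkId",
        Error::Attestation(_) => "attestation rejected; check the TDX measurement",
        Error::Shutdown => "binding was shut down; rebind if needed",
        Error::Protocol(_) => "wire-level mismatch; check SDK versions",
    }
}

fn main() {
    println!("{}", hint(&Error::ConnectTimeout));
}
```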
Cover traffic is on by default
An idle Zipnet handle sends a cover envelope each round to widen
the anonymity set. See
Publishing messages for
how to tune or disable it.
Shutdown
drop(zipnet); // fine — the driver task exits cleanly
// or, to flush pending publishes first:
zipnet.shutdown().await?;
Dropping one Zipnet handle only shuts that binding down; the
Network stays up as long as other handles (or you) hold it. This is
the intended pattern when one process talks to several services or
several instances.
Next reading
- What you need from the operator — the fact sheet the operator gives you before writing code.
- Publishing messages — fire-and-forget, cover traffic, retry policy.
- Reading the broadcast log — replay, gap detection, filtering.
- Client identity and registration — stable vs ephemeral ClientId.
- TEE-gated deployments — TDX builds, measurement rollouts.
- Designing coexisting systems on mosaik — the shared-universe / instance-name model in full.
- API reference — full type list.