Mosaik integration notes
audience: contributors
Drop-in advice, footguns, and places where the prototype bumped into the mosaik 0.3.17 API. This is a grab-bag — sorted roughly by how likely a contributor is to trip over each item. For the higher-level deployment conventions that sit above mosaik, see design-intro.
Deployment-fingerprint derivation
Every public id in a zipnet deployment descends from a single content- and intent-addressed root. The SDK hashes the operator's `Config` together with the datum's `TYPE_TAG` and `WIRE_SIZE`:
```rust
use mosaik::UniqueId;
use zipnet::{Config, ShuffleWindow, Zipnet};

const ACME_MAINNET: Config = Config::new("acme.mainnet")
    .with_window(ShuffleWindow::interactive())
    .with_init([0u8; 32]);

// Pure fn; no I/O. Prints the same 32 bytes on both sides of the
// handshake when Config + datum schema agree.
let deployment: UniqueId = Zipnet::<Note>::deployment_id(&ACME_MAINNET);

// Sub-ids chain with .derive() for structural clarity.
let committee_key = deployment.derive("committee");    // GroupKey material
let submit_stream = deployment.derive("submit");       // StreamId
let broadcasts_store = deployment.derive("broadcasts"); // StoreId
```
The canonical encoding is `blake3("zipnet|" || name || "|type=" || TYPE_TAG || "|size=" || WIRE_SIZE || "|window=" || window || "|init=" || init)`; every field folds in, so a typo in any of them produces a disjoint id and the two sides cannot bond. Never expose raw `StreamId` / `StoreId` / `GroupId` values across the zipnet crate boundary — the `Zipnet::<D>::{submit, receipts, read}` constructors are the only supported path.
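The field-folding behavior can be sketched without the real SDK. A minimal std-only sketch, with illustrative field values (`"Note"`, `96`, `"interactive"` are assumptions, not the real datum schema) and the final blake3 step left out:

```rust
// Byte-for-byte sketch of the fingerprint preimage; the real SDK feeds
// these bytes to blake3. The point: every field folds into the hash input.
fn fingerprint_preimage(
    name: &str,
    type_tag: &str,
    wire_size: usize,
    window: &str,
    init: &[u8; 32],
) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(b"zipnet|");
    out.extend_from_slice(name.as_bytes());
    out.extend_from_slice(b"|type=");
    out.extend_from_slice(type_tag.as_bytes());
    out.extend_from_slice(b"|size=");
    out.extend_from_slice(wire_size.to_string().as_bytes());
    out.extend_from_slice(b"|window=");
    out.extend_from_slice(window.as_bytes());
    out.extend_from_slice(b"|init=");
    out.extend_from_slice(init);
    out
}

fn main() {
    let a = fingerprint_preimage("acme.mainnet", "Note", 96, "interactive", &[0u8; 32]);
    let b = fingerprint_preimage("acme.mainet", "Note", 96, "interactive", &[0u8; 32]);
    // One dropped character yields a disjoint preimage, hence a disjoint id.
    assert_ne!(a, b);
}
```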
The declare::stream! predicate direction
Reading the macro source (mosaik-macros/src/stream.rs in the mosaik
repo) reveals the following:
“For `require` and `require_ticket`, the side prefix describes who must satisfy the requirement, not who performs the check. `consumer require_ticket: V` means consumers need a valid ticket, so the producer runs the validator — route to the opposite side.”
So in our ClientToAggregator stream:
```rust
declare::stream!(
    pub ClientToAggregator = ClientEnvelope,
    "zipnet.stream.client-to-aggregator",
    producer require: |p| p.tags().contains(&CLIENT_TAG),
    consumer require: |p| p.tags().contains(&AGGREGATOR_TAG),
    producer online_when: |c| c.minimum_of(1).with_tags("zipnet.aggregator"),
);
```
- `producer require: |p| p.tags().contains(&CLIENT_TAG)` → "the producer must have the `zipnet.client` tag" → enforced on the consumer side (the aggregator subscribes only to peers tagged `zipnet.client`).
- `consumer require: |p| p.tags().contains(&AGGREGATOR_TAG)` → "the consumer must have the `zipnet.aggregator` tag" → enforced on the producer side (the client accepts subscribers only if they're tagged `zipnet.aggregator`).
Getting this inverted produces symptoms like `rejected consumer connection: unauthorized` in the producer logs, with consumer `PeerEntry` tag counts of 1 that don't match the expected role. The clue is that the producer is the one rejecting: consumer-side requires are enforced by the producer. Without both clauses, any peer on the network could subscribe to your client's envelope stream, defeating the point. The ticket-based analog is `require_ticket`, which is what you want in the TDX-enabled path.
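The opposite-side routing rule can be captured in a toy model (the `Side` enum and `enforcing_side` function are illustrative, not part of the macro's real expansion):

```rust
// Toy model of the opposite-side routing rule for `require` clauses.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Side {
    Producer,
    Consumer,
}

/// The side that runs the check for a `require` declared on `declared_for`.
fn enforcing_side(declared_for: Side) -> Side {
    match declared_for {
        // `producer require`: the consumer vets the producer's tags.
        Side::Producer => Side::Consumer,
        // `consumer require`: the producer vets its would-be subscribers.
        Side::Consumer => Side::Producer,
    }
}

fn main() {
    assert_eq!(enforcing_side(Side::Producer), Side::Consumer);
    assert_eq!(enforcing_side(Side::Consumer), Side::Producer);
}
```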
Group<M>, Map<K,V>, Network are not Clone
All three hold an `Arc` internally but don't derive or implement `Clone`. When you need to share them across spawned tasks, wrap them in a fresh `Arc`:
```rust
let network = Arc::new(builder.build().await?);
let group = Arc::new(network.groups()...join());

tokio::spawn({
    let group = Arc::clone(&group);
    async move { ... group.execute(...).await ... }
});
```
`Group::execute`, `Group::query`, and `Group::feed` return futures that are `'static` — they take ownership of the arguments they need at the moment of call, so passing `Arc<Group>` + `Arc::clone()` into each task is the straightforward pattern.
The server role deliberately keeps the `Group` inside a single `tokio::select!` rather than spawning task-per-responsibility, so we avoid the `Arc` noise. The integration test in `zipnet-node/tests/e2e.rs` does the same.
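The same Arc-per-task pattern, sketched with std threads standing in for tokio tasks and a dummy non-`Clone` `Handle` standing in for `Group` (both are illustrative):

```rust
use std::sync::Arc;
use std::thread;

// `Handle` stands in for Group/Map/Network: it holds shared state but is
// deliberately not Clone, so sharing goes through an explicit outer Arc.
struct Handle {
    name: &'static str,
}

impl Handle {
    fn execute(&self) -> String {
        format!("executed on {}", self.name)
    }
}

fn main() {
    let group = Arc::new(Handle { name: "committee" });

    let mut tasks = Vec::new();
    for _ in 0..2 {
        let group = Arc::clone(&group); // fresh Arc per spawned task
        tasks.push(thread::spawn(move || group.execute()));
    }
    for task in tasks {
        assert_eq!(task.join().unwrap(), "executed on committee");
    }
}
```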
QueryResultAt<M> doesn’t pattern-match directly
`group.query(...).await?` returns `Result<QueryResultAt<M>, QueryError<M>>`, where `QueryResultAt<M>` is `#[derive(Deref)]` with `Target = M::QueryResult`. You cannot pattern-match `QueryResultAt` against variants of your `QueryResult`. The canonical destructure:
```rust
let qr = group.query(Query::LiveRound, Consistency::Weak).await?;
let QueryResult::LiveRound(live) = qr.into() else { return Ok(()) };
```

`QueryResultAt::into` is an inherent method (not a `From` impl) and returns the `M::QueryResult` by value.
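A toy model of the wrapper, assuming only what the note above states (Deref to the inner result, an inherent `into`); the concrete types here are illustrative:

```rust
use std::ops::Deref;

// Toy model: Deref to the inner result, plus an inherent `into` that
// returns it by value.
#[derive(Debug, PartialEq)]
enum QueryResult {
    LiveRound(u64),
    Idle,
}

struct QueryResultAt {
    inner: QueryResult,
}

impl Deref for QueryResultAt {
    type Target = QueryResult;
    fn deref(&self) -> &QueryResult {
        &self.inner
    }
}

impl QueryResultAt {
    // Inherent method, not a `From` impl: inherent methods win over trait
    // methods in resolution, so `qr.into()` calls this.
    fn into(self) -> QueryResult {
        self.inner
    }
}

fn main() {
    let qr = QueryResultAt { inner: QueryResult::LiveRound(7) };
    // Matching the wrapper directly doesn't compile; destructure the value
    // that `.into()` returns instead.
    let QueryResult::LiveRound(live) = qr.into() else { return };
    assert_eq!(live, 7);
}
```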
Cell write / clear
```rust
let cell = LiveRoundCell::writer(&network);
cell.set(header).await?; // atomic replace
cell.clear().await?;     // empty
```
There is no `unset` — the method is `clear`. `Cell` already has `Option`-like emptiness semantics, so `Cell<T>` gives you the "sometimes present" store you'd expect; there is no need for `Cell<Option<T>>`.
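A toy `Cell` with the same Option-like emptiness semantics makes the point concrete; mosaik's real `Cell` is a replicated collection, this is only the local shape:

```rust
// Toy Cell: set replaces, clear empties, so wrapping the value in another
// Option would be redundant.
struct Cell<T> {
    slot: Option<T>,
}

impl<T> Cell<T> {
    fn new() -> Self {
        Cell { slot: None }
    }
    fn set(&mut self, value: T) {
        self.slot = Some(value); // atomic replace
    }
    fn clear(&mut self) {
        self.slot = None; // empty; there is no `unset`
    }
    fn get(&self) -> Option<&T> {
        self.slot.as_ref()
    }
}

fn main() {
    let mut live_round = Cell::new();
    live_round.set("round-1");
    assert_eq!(live_round.get(), Some(&"round-1"));
    live_round.clear();
    assert_eq!(live_round.get(), None);
}
```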
StateMachine::apply can’t be async
`apply` is synchronous by contract. Side effects that need async (e.g. writing to a collection, sending on a stream, issuing another command) must happen in a separate task that watches the commit cursor and reads the state machine via queries:
```rust
loop {
    tokio::select! {
        _ = group.when().committed().advanced() => reconcile().await?,
        Some(msg) = stream.next() => forward(msg).await?,
        _ = period.tick() => maybe_open_round().await?,
    }
}
```
The apply-watcher in `zipnet-node/src/roles/server.rs::reconcile_state` is the canonical implementation in our prototype.
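The split can be sketched with std threads and channels standing in for tokio (all names here are illustrative): `apply` stays synchronous, and a separate watcher reacts to the advancing commit cursor.

```rust
use std::sync::mpsc;
use std::thread;

// `apply` only mutates state and signals; the watcher does the follow-up
// work that would be async in the real node.
struct Machine {
    committed: u64,
    tx: mpsc::Sender<u64>, // commit-cursor notifications to the watcher
}

impl Machine {
    // Synchronous by contract: no awaiting, just state mutation plus a
    // non-blocking send signalling that the cursor advanced.
    fn apply(&mut self) {
        self.committed += 1;
        let _ = self.tx.send(self.committed);
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let watcher = thread::spawn(move || {
        let mut reconciled = 0;
        while let Ok(cursor) = rx.recv() {
            reconciled = cursor; // stand-in for reconcile().await
        }
        reconciled
    });

    let mut machine = Machine { committed: 0, tx };
    for _ in 0..3 {
        machine.apply();
    }
    drop(machine); // closes the channel; the watcher drains and exits

    assert_eq!(watcher.join().unwrap(), 3);
}
```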
InvalidTicket is a unit struct
`mosaik::tickets::InvalidTicket` doesn't have a `::new`; it's a bare `struct InvalidTicket;`. Return it as:

```rust
return Err(InvalidTicket);
```
Context goes into the tracing log, not into the error, because the
error is opaque at the protocol level.
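A minimal sketch of the unit-struct error shape; `validate` and its log line are illustrative, not mosaik API:

```rust
// Unit-struct error: no fields, no `::new`.
#[derive(Debug, PartialEq)]
struct InvalidTicket;

fn validate(ticket_len: usize) -> Result<(), InvalidTicket> {
    if ticket_len == 0 {
        // Context lives in the log (tracing in the real code), not in the
        // error, which stays opaque at the protocol level.
        eprintln!("rejecting ticket: empty payload");
        return Err(InvalidTicket);
    }
    Ok(())
}

fn main() {
    assert_eq!(validate(0), Err(InvalidTicket));
    assert!(validate(32).is_ok());
}
```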
GroupKey::from(Digest)
`GroupKey: From<Secret>` where `Secret = Digest`. The ergonomic constructor from a caller-provided string:

```rust
let key = GroupKey::from(mosaik::Digest::from("my-committee-secret"));
```

`GroupKey::from_secret(impl Into<Secret>)` does the same thing; either works. `GroupKey::random()` exists but is not what you want in production, because every committee member must converge on the same value.
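The convergence point can be illustrated with std's `DefaultHasher` standing in for `Digest` (the function name is illustrative): a key derived from a shared secret string converges across members, while a random key on each member would not.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Deterministic derivation: same secret in, same key out.
fn key_from_secret(secret: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    secret.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Two committee members configured with the same secret converge.
    assert_eq!(
        key_from_secret("my-committee-secret"),
        key_from_secret("my-committee-secret")
    );
    // A typo'd secret lands on a different key; the members cannot bond.
    assert_ne!(
        key_from_secret("my-committee-secret"),
        key_from_secret("my-comittee-secret")
    );
}
```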
Discovery on localhost
iroh's pkarr/Mainline DHT bootstrap is unreliable for same-box tests. For integration tests, cross-call `sync_with` between every pair of networks (same pattern as mosaik's `examples/orderbook::discover_all`):
```rust
async fn cross_sync(nets: &[&Arc<Network>]) -> anyhow::Result<()> {
    for (i, a) in nets.iter().enumerate() {
        for (j, b) in nets.iter().enumerate() {
            if i != j {
                a.discovery().sync_with(b.local().addr()).await?;
            }
        }
    }
    Ok(())
}
```
For out-of-process binaries, pass an explicit `--bootstrap <peer_id>` pointing at a well-known node.
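One thing the all-pairs shape implies: the number of directed sync calls grows quadratically. A std-only sketch of the same iteration (`all_ordered_pairs` is illustrative, not mosaik API):

```rust
// The iteration cross_sync performs, with indices in place of networks:
// n nodes produce n * (n - 1) directed sync calls.
fn all_ordered_pairs(n: usize) -> Vec<(usize, usize)> {
    let mut pairs = Vec::new();
    for i in 0..n {
        for j in 0..n {
            if i != j {
                pairs.push((i, j));
            }
        }
    }
    pairs
}

fn main() {
    // 3 in-process networks mean 6 sync_with calls; 10 networks mean 90.
    assert_eq!(all_ordered_pairs(3).len(), 6);
    assert_eq!(all_ordered_pairs(10).len(), 90);
}
```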
Tag = UniqueId, no tag! macro
Book examples show `tag!("...")`, but 0.3.17 exports no such macro. `Tag` is an alias for `UniqueId`, so use `unique_id!("...")` for compile-time construction:

```rust
pub const CLIENT_TAG: Tag = unique_id!("zipnet.client");
```

Runtime construction is `Tag::from("...")` via the `From<&str>` impl on `UniqueId`.
Declaring collections that don’t exist at use time
The `declare::collection!` macro refers to its value type by path, so you can declare a collection over a type defined later in the same crate:

```rust
// src/protocol.rs
use crate::committee::LiveRound;

declare::collection!(
    pub LiveRoundCell = mosaik::collections::Cell<LiveRound>,
    "zipnet.collection.live-round",
);
```

`LiveRound` is defined in `src/committee.rs`; the macro's expansion resolves the path at compile time in the usual way.
Network::builder(...).with_mdns_discovery(true)
mDNS is off by default in 0.3.17. For single-box testing and for clusters on the same LAN, turning it on collapses discovery latency from minutes (DHT bootstrap) to sub-seconds. Costs nothing on WAN deployments where it silently no-ops.
```rust
Network::builder(network_id)
    .with_mdns_discovery(true)
    .with_discovery(discovery::Config::builder().with_tags(tags))
    .build()
    .await?;
```
We enable it unconditionally in `NetworkBoot::boot`.
TDX gating: install own ticket, require others’
Mosaik’s TDX support composes on both sides of the peer-entry dance. The idiomatic zipnet committee setup:
```rust
// On boot, if built with the tee-tdx feature:
network.tdx().install_own_ticket()?; // attach our quote to our PeerEntry

// When joining the committee or a public collection, require peers
// to present a matching TDX quote:
use mosaik::tickets::Tdx;

let tdx_validator = Tdx::new().require_mrtd(expected_mrtd);

// Stack with BundleValidator via multiple require_ticket calls:
group_builder
    .require_ticket(BundleValidator::<ServerBundleKind>::new())
    .require_ticket(tdx_validator);
```
`expected_mrtd` comes from the reproducible committee-image build and is published alongside the `Config` (see design-intro — A fingerprint convention, not a registry).

In v1, `BundleValidator` is the only admission check in the non-TDX path; TDX critical-path enforcement lands in v2 (Roadmap).