Designing coexisting systems on mosaik
audience: contributors
Mosaik composes primitives — Stream, Group, Collection,
TicketValidator. It does not prescribe how a whole service — a
deployment with its own operator, its own ACL, its own lifecycle — is
shipped onto a network and made available to third-party agents. That
convention lives one layer above mosaik and has to be invented per
service family.
This page describes the convention zipnet uses, why it was picked, and what a contributor building the next service on mosaik (multisig signer, secure storage, attested oracle, …) should reuse. It is a mental model, not an API reference: the concrete instantiation is in Architecture.
The problem
A mosaik network is a universe where any number of services run concurrently. Each service:
- is operated by an identifiable organisation (or coalition) and has its own ACL
- ships as a bundle of internally-coupled primitives — usually a committee Group, one or more collections backed by that group, and one or more streams feeding it
- must be addressable and discoverable by external agents who do not operate it
- co-exists with many other instances of itself (testnet, staging, per-tenant deployments) and with unrelated services on the same wire
The canonical shape zipnet itself was built for is an encrypted mempool — a bounded set of TEE-attested wallets publishing sealed transactions for an unbounded set of builders to read, ordered and unlinkable to sender. Other services built on this pattern (signers, storage, oracles) have the same structural properties.
Nothing about these requirements is in mosaik itself. The library will
happily let you stand up ten Groups and thirty Streams on one
Network; it says nothing about which of them constitute “one zipnet”
versus “one multisig”.
Two axes of choice
Every design in this space picks a point on two axes.
- Network topology. Does a deployment live on its own NetworkId, or on a shared universe with peers of every other service?
- Discovery. How does an agent go from “I want zipnet-acme” to bonded-and-consuming without hardcoded bootstraps or out-of-band config?
Four shapes fall out:
| Shape | Topology | When to pick |
|---|---|---|
| A. Service-per-network | One NetworkId per deployment; agents multiplex many Network handles | Strong isolation, per-service attestation scope, no cross-service state |
| B. Shared meta-network | One universe NetworkId; deployments are overlays of Groups/Streams | Many services per agent, cheap composition, narrow public surface required to tame noise |
| C. Derived sub-networks | ROOT.derive(service).derive(instance) hybrids | Isolation with structured discovery, still multi-network per agent |
| D. Service manifest | Orthogonal: a rendezvous record naming all deployment IDs | Composable with A/B/C; required for discoverable-without-out-of-band-config |
Zipnet picks B for topology, with optional derived private networks for high-volume internal plumbing, and compile-time instance-salt derivation for discovery — no on-network registry required. The rest of this page unpacks why and how.
Narrow public surface
The single most important discipline in this model is that a deployment exposes a small, named, finite set of primitives to the shared network. The ideal is one or two — a stream plus a collection, two streams, a state machine plus a collection, and so on. Everything else is private to the bundle and wired up by the deployment author, who is free to hardcode internal dependencies as aggressively as they like.
Zipnet’s outward surface decomposes cleanly into two functional roles,
even though it carries several declare! types:
- write-side: ClientRegistrationStream and ClientToAggregator — ticket-gated, predicate-gated, used by external TEE clients to join a round and submit sealed envelopes.
- read-side: LiveRoundCell and Broadcasts, plus the two registries — read-only ambient round state that external agents need in order to seal envelopes and interpret finalized rounds.
An integrator’s mental model is “a way to write, a way to read”. They do not need to know the committee exists, how many aggregators there are, or how DH shuffles are scheduled. Internally the bundle looks like this:
shared network (public surface)
─────────────────────────────────────────────────────────────────
  write: ClientRegistrationStream, ClientToAggregator
  read:  LiveRoundCell, Broadcasts, ClientRegistry, ServerRegistry
─────────────────────────────────────────────────────────────────
private plumbing, still on the shared network:
  Committee Group<CommitteeState>
  AggregateToServers stream
  BroadcastsStore (backs Broadcasts)
derived private network (optional):
  Aggregator fan-in / DH-shuffle gossip
  Round-scheduler chatter
The committee Group stays on the shared network because the
public-read collections are backed by it and bridging collections
across networks is worse than the catalog noise. Only the
genuinely high-churn channels belong on a derived private network.
The three conventions
Three things make this pattern work. A contributor starting a new service should reproduce all three.
1. Instance-salt discipline
Every public ID in a deployment descends from one root:
INSTANCE = blake3("zipnet." + instance_name) // compile- or run-time
SUBMIT = INSTANCE.derive("submit") // StreamId
BROADCASTS = INSTANCE.derive("broadcasts") // StoreId
COMMITTEE = INSTANCE.derive("committee") // GroupKey material
...
The top-level instance salt is a flat-string hash: the compile-time form zipnet::instance_id!("acme.mainnet") (which expands to mosaik::unique_id!("zipnet.acme.mainnet")) and the run-time form zipnet::instance_id("acme.mainnet") produce the same 32 bytes.
Sub-IDs within the instance chain off it with .derive() for
structural clarity.
An agent that knows instance_name can reconstruct every public ID
from a shared declare! module. The consumer-side API is:
let zipnet = Zipnet::bind(&network, "acme.mainnet").await?;
let receipt = zipnet.publish(b"hello").await?;
let mut log = zipnet.subscribe().await?;
Zipnet::bind is a thin constructor that derives the instance-local
IDs and returns a handle wired to them. Raw
StreamId/StoreId/GroupId values are never exposed across the
crate boundary.
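A toy illustration of this discipline, with std's DefaultHasher standing in for blake3 and u64 for the real 32-byte IDs (the instance_id and derive names mirror the convention above; nothing here is the actual mosaik API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for mosaik's 32-byte UniqueId.
type ToyId = u64;

fn hash_label(parent: Option<ToyId>, label: &str) -> ToyId {
    let mut h = DefaultHasher::new();
    parent.hash(&mut h);
    label.hash(&mut h);
    h.finish()
}

/// Top-level instance salt: a flat-string hash of "zipnet.<instance>".
fn instance_id(instance_name: &str) -> ToyId {
    hash_label(None, &format!("zipnet.{instance_name}"))
}

/// Sub-IDs within the instance chain off the salt.
fn derive(parent: ToyId, label: &str) -> ToyId {
    hash_label(Some(parent), label)
}

fn main() {
    let instance = instance_id("acme.mainnet");
    let submit = derive(instance, "submit");
    let broadcasts = derive(instance, "broadcasts");

    // Deterministic: any agent that knows the instance name recomputes
    // exactly the same public IDs, with no registry in the loop.
    assert_eq!(submit, derive(instance_id("acme.mainnet"), "submit"));

    // Disjoint: sibling IDs and other instances do not collide.
    assert_ne!(submit, broadcasts);
    assert_ne!(submit, derive(instance_id("acme.testnet"), "submit"));
}
```

The property the sketch demonstrates is the load-bearing one: the entire public ID tree is a pure function of the instance name, which is what lets bind reconstruct it on the consumer side.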
2. A Deployment-shaped convention
Authors should declare a deployment’s public surface once, in one
place, so consumers can bind without reassembling ID derivations by
hand. Whether this is a literal declare::deployment! macro or a
hand-written impl Deployment is ergonomics; the constraint is that
the public surface is a declared, named, finite set of primitives —
not “whatever the bundle happens to put on the network today”.
Every deployment crate should export:
- the public declare::stream!/declare::collection! types for its surface, colocated in a single protocol module
- a bind(&Network, instance_name) -> TypedHandles function
- the intended TicketValidator composition for each public primitive
A service that exposes eight unrelated collections has probably not thought hard enough about its interface.
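A minimal sketch of the constraint, under stand-in types throughout (Network, TypedHandles, and the Deployment trait are illustrative shapes, not the mosaik API, and the hashing is a placeholder for the real derivation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-ins for mosaik types; only the shape of the contract matters.
struct Network;
struct TypedHandles {
    submit: u64,     // would be a typed stream handle
    broadcasts: u64, // would be a typed collection handle
}

/// The declared, named, finite public surface, expressed as a trait so
/// every deployment crate exports the same shape of entry point.
trait Deployment {
    fn bind(network: &Network, instance_name: &str) -> TypedHandles;
}

struct Zipnet;

impl Deployment for Zipnet {
    fn bind(_network: &Network, instance_name: &str) -> TypedHandles {
        // Raw IDs are derived here and never cross the crate boundary.
        TypedHandles {
            submit: derive(instance_name, "submit"),
            broadcasts: derive(instance_name, "broadcasts"),
        }
    }
}

fn derive(instance_name: &str, label: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (instance_name, label).hash(&mut h);
    h.finish()
}

fn main() {
    let net = Network;
    let a = Zipnet::bind(&net, "acme.mainnet");
    let b = Zipnet::bind(&net, "acme.mainnet");
    // bind is a pure function of the instance name: two independent
    // consumers always agree on the surface they bound to.
    assert_eq!(a.submit, b.submit);
    assert_ne!(a.submit, a.broadcasts);
}
```

Whether the real crate uses a trait, a macro, or a free function is the ergonomics question left open above; the sketch only pins down that the surface is enumerated in one place and reached through one constructor.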
3. A naming convention, not a registry
Derivation from (service, instance_name) is enough for a consumer
who knows the instance name to bond to the deployment: both sides
compute the same GroupId, StreamIds, and StoreIds, and mosaik’s
discovery layer does the rest. No on-network advertisement is
required — the service does not need to advertise its own existence.
A consumer typically pins the instance as a compile-time constant:
const ACME_ZIPNET: UniqueId = zipnet::instance_id!("acme.mainnet");
let zipnet = Zipnet::bind_by_id(&network, ACME_ZIPNET).await?;
…or by string when convenient:
let zipnet = Zipnet::bind(&network, "acme.mainnet").await?;
The operator’s complete public contract is three items: the universe
NetworkId, the instance name, and (if the instance is TDX-gated)
the MR_TD of the committee image. These travel via release notes,
docs, or direct handoff. Nothing about the binding path touches a
registry.
A directory may exist — a shared collection listing known instances — but it is a devops convenience for humans enumerating deployments, not part of the consumer binding path. Build it if you need it; nothing about the pattern requires it.
What this buys you
- A third-party agent’s mental model collapses to: “one Network, many services, each bound by instance name.”
- Multiple instances of the same service coexist trivially — each derives disjoint IDs from its salt.
- ACL is per-instance, enforced at the edge via require_ticket on the public primitives; no second ACL layer is needed inside the bundle.
- Internal plumbing can move to a derived private network without changing the public surface.
- Private-side schema changes (StateMachine::signature() bumps) are absorbed behind the instance identity, as long as operators and consumers cut releases against the same version of the deployment crate.
Where the pattern strains
Three things are not free under this convention. Every new service author should be honest about them up front.
Cross-service atomicity is out of scope
There is no way to execute “mix a message AND rotate a multisig
signer” in one consensus transaction. They are different Groups
with different GroupIds, possibly with disjoint membership. If a
service genuinely needs that — rare, but real for some
coordination-heavy cases — the right answer is a fourth primitive
that is itself a deployment providing atomic composition across
services, not an ad-hoc cross-group protocol.
Versioning under stable instance names
If StateMachine::signature() changes, GroupId changes, and
consumers compiled against the old code silently split-brain. Under
multi-instance, the expectation is that “zipnet-acme” is an
operator-level identity that outlives schema changes. Two ways to
reconcile:
- Let the instance salt carry a version (zipnet-acme-v2), and treat version bumps as retiring the old instance. Clean, but forces consumers to re-pin and release a new build on every upgrade.
- Keep the instance name stable across versions and require operators and consumers to cut releases in lockstep against a shared deployment crate version. Avoids churn in instance IDs, at the cost of tighter coupling between operator and consumer release cadences.
Zipnet v1 does not need to resolve this. V2 must.
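A toy sketch of how the two options differ at the ID level, with std hashing standing in for the real 32-byte derivation and all names hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the real flat-string instance-salt hash.
fn instance_id(name: &str) -> u64 {
    let mut h = DefaultHasher::new();
    format!("zipnet.{name}").hash(&mut h);
    h.finish()
}

fn main() {
    // Option one: the version rides inside the instance salt, so a
    // bump retires the old instance outright. Every public ID rooted
    // at v1 is disjoint from the v2 tree, which is exactly why old
    // consumers must re-pin: they cannot even see the new deployment.
    let v1 = instance_id("acme.v1");
    let v2 = instance_id("acme.v2");
    assert_ne!(v1, v2);

    // Option two: the salt never moves; the burden shifts entirely to
    // lockstep operator/consumer releases, because a schema change
    // behind a stable salt is invisible at the ID level.
    assert_eq!(instance_id("acme"), instance_id("acme"));
}
```

The asymmetry is the point: option one makes incompatibility loud (disjoint IDs), option two makes it silent (same IDs, possibly split-brained state machines).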
Noisy neighbours on the shared network
A shared NetworkId means every service’s peers appear in every
agent’s catalog. Discovery gossip, DHT slots, and bond maintenance
scale with the universe, not with the services an agent cares about.
The escape hatch is the derived private network for internal chatter;
the residual cost — peer-catalog size and /mosaik/announce volume —
is paid by everyone. If a service’s traffic would dominate the
shared network (high-frequency metric streams, bulk replication) it
belongs behind its own NetworkId, not on the shared one. Shape A
is the correct call when the narrow-interface argument no longer
outweighs the noise argument.
Checklist for a new service
When adding a service to a shared mosaik universe, use this list:
- Identify the one or two public primitives. If you cannot, the interface is not yet designed.
- Pick a service root: unique_id!("your-service").
- Define instance-salt conventions: what instance_name means, who picks it, whether it carries a version.
- Write a bind(&Network, instance) -> TypedHandles that every consumer uses. Never export raw StreamId/StoreId/GroupId values across the crate boundary.
- Decide which internal channels, if any, move to a derived private Network. Default: only the high-churn ones.
- Specify TicketValidator composition on the public primitives. ACL lives here.
- Document your instance-name convention in release notes or docs. Consumers compile it in; you are on the hook for keeping the name stable and the code release version-matched.
- Call out your versioning story before shipping. If you cannot answer “what happens when StateMachine::signature() bumps?”, you will regret it.
Cross-references
- Architecture — the concrete instantiation of this pattern for zipnet v1.
- Mosaik integration notes — gotchas and idioms specific to the primitives referenced here.
- Roadmap to v2 — where versioning-under-stable-names and cross-service composition work live.