Extending zipnet

audience: contributors

This chapter covers two kinds of extension:

  1. Extending zipnet itself — new commands, collections, streams, ticket classes, or round-parameter knobs within a zipnet deployment.
  2. Building an adjacent service on the shared universe — a new mosaik-native service (multisig signer, secure storage, attested oracle, …) that coexists with zipnet on zipnet::UNIVERSE and reuses the content + intent addressed fingerprint pattern.

The second is the generalisation of the first. The “checklist for a new service” at the end of design-intro is the canonical reference for the second kind; this chapter links to it and concentrates on the concrete how-tos.

Extending zipnet itself

Adding a new command to the committee state machine

  1. Add a variant to Command in crates/zipnet-node/src/committee.rs.
  2. Handle it in apply(). Deterministic only — no I/O, no randomness that isn’t derived from ApplyContext (see Committee state machine — Apply-context usage).
  3. Bump the version tag in CommitteeMachine::signature() (v1 → v2). This re-scopes the GroupId so mismatched nodes cannot bond. This is a breaking change.
  4. Add a Query variant if the new state needs external read access.
  5. Decide who issues the command. If a non-server peer needs to trigger it, add a declare::stream! channel and a side-task in roles::server that feeds it into group.execute.
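
The deterministic-apply discipline in step 2 can be sketched with stand-in types. Everything here (the variant names, ApplyContext's fields, the Machine struct) is illustrative, not zipnet's actual definitions — only the rule it demonstrates is from the chapter: apply() is a pure function of state, command, and context.

```rust
// Illustrative stubs, not zipnet's real types.
#[derive(Debug)]
enum Command {
    RotateEpoch,
    // step 1: the new variant
    EvictClient { client_id: u64 },
}

/// Stand-in for zipnet's ApplyContext: all nondeterminism the machine
/// is allowed to see is funneled through fields like this seed.
struct ApplyContext {
    round_seed: u64,
}

#[derive(Default)]
struct Machine {
    epoch: u64,
    clients: Vec<u64>,
}

impl Machine {
    // step 2: handle the variant in apply() — no I/O, no thread-local
    // RNG, so every node computes the same successor state.
    fn apply(&mut self, cmd: Command, ctx: &ApplyContext) {
        match cmd {
            Command::RotateEpoch => self.epoch = self.epoch.wrapping_add(1),
            Command::EvictClient { client_id } => {
                // any "random" tie-breaking must derive from ctx:
                let _tiebreak = ctx.round_seed ^ client_id;
                self.clients.retain(|c| *c != client_id);
            }
        }
    }
}
```

Because apply() only reads its arguments, replaying the same command log against the same ApplyContext values reproduces the same state on every committee member.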

Adding a new collection

  1. Declare in crates/zipnet-node/src/protocol.rs:

    declare::collection!(
        pub MyMap = mosaik::collections::Map<K, V>,
        "zipnet.collection.my-map",
    );
  2. Decide writer and reader roles. Writers join the collection’s internal Raft group and bear the leadership election cost.

  3. For TDX-gated collections, compose Tdx::new().require_mrtd(...) onto the collection’s require_ticket alongside the existing BundleValidator — see Mosaik integration — TDX gating.

  4. If the new collection is part of the public surface, think twice. Zipnet’s declared public surface is small (write-side + read-side, see Architecture). A new public collection widens the consumer contract; prefer surfacing via the existing Zipnet::<D>::* constructors instead of growing raw declarations.

  5. Once the target per-deployment layout lands, the literal string will be replaced by DEPLOYMENT.derive("my-map"); structure the name so the migration is a pure rename.

Adding a new typed stream

  1. Declare in protocol.rs. Prefix predicates with producer / consumer per the direction semantics (Mosaik integration — predicate direction).
  2. Use in a role module: MyStream::producer(&network) / MyStream::consumer(&network) returns concrete typed handles.
  3. If this is a high-churn internal channel (aggregator fan-in, DH gossip), it’s a candidate to live on a derived private network rather than the shared universe — see Architecture — Internal plumbing.
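
The producer/consumer split in step 2 can be illustrated with a std mpsc channel standing in for the network transport. The point is the shape — a declared stream type hands each side a concrete typed handle — not the mechanics; mosaik's real handles are network-backed and these names are invented for the sketch.

```rust
use std::sync::mpsc;

// Illustrative only: MyStream hands out typed handles, echoing the
// MyStream::producer(&network) / MyStream::consumer(&network) pattern.
struct MyStream;

impl MyStream {
    fn channel<T>() -> (Producer<T>, Consumer<T>) {
        let (tx, rx) = mpsc::channel();
        (Producer(tx), Consumer(rx))
    }
}

/// Typed write half — the only way to put a T on the stream.
struct Producer<T>(mpsc::Sender<T>);
/// Typed read half — consumers get T back out, no re-deserialization.
struct Consumer<T>(mpsc::Receiver<T>);

impl<T> Producer<T> {
    fn send(&self, item: T) {
        let _ = self.0.send(item);
    }
}

impl<T> Consumer<T> {
    fn recv(&self) -> Option<T> {
        self.0.recv().ok()
    }
}
```

Because the element type is fixed at declaration, a role module that holds a Producer<T> cannot accidentally feed the stream a different shape.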

Adding a new TicketValidator

  1. Implement mosaik::tickets::TicketValidator on a fresh type. BundleValidator<K> in crates/zipnet-node/src/tickets.rs is the reference shape.

  2. Pick a TicketClass constant. Keep it human-readable ("zipnet.bundle.server", etc.) — ticket classes are intent-addressed and the string is the intent.

  3. Fold a version tag into signature() the same way BundleValidator does:

    fn signature(&self) -> UniqueId {
        K::CLASS.derive("zipnet.my-validator.v1")
    }

    Bumping v1 → v2 re-scopes the GroupId of every group that stacks this validator. Treat it as a breaking change.

  4. Compose with existing validators by stacking multiple require_ticket calls in mosaik — see Mosaik integration — TDX gating for the stacking pattern.
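
The re-scoping property from step 3 can be demonstrated with a stand-in id type. The real derivation is blake3-based per the fingerprint convention; DefaultHasher here is only a placeholder to show that a version-tag bump changes every derived id.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for mosaik's UniqueId; derive() chains a label onto an
// existing id, so everything downstream inherits the change.
#[derive(Clone, Copy, Debug, PartialEq)]
struct UniqueId(u64);

impl UniqueId {
    fn derive(self, tag: &str) -> UniqueId {
        let mut h = DefaultHasher::new();
        self.0.hash(&mut h);
        tag.hash(&mut h);
        UniqueId(h.finish())
    }
}
```

Deriving "zipnet.my-validator.v1" and "zipnet.my-validator.v2" from the same class constant yields distinct ids, which is exactly why bumping the tag prevents old and new groups from bonding.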

Changing RoundParams

  1. Edit RoundParams::default_v1() in crates/zipnet-proto/src/params.rs.
  2. Bump WIRE_VERSION if the change is semantically meaningful (any client/server disagreement on shape would garble pads otherwise).
  3. CommitteeMachine::signature() already mixes in params fields; every member rederives GroupId and old + new do not bond.
  4. Deploy-time coordination: same procedure as rotating the committee secret.

Adding a TDX attestation requirement

  1. Turn on the tee-tdx feature on zipnet-node, zipnet-server, zipnet-client.

  2. In the deployment-specific main, pre-compute (or hardcode) the expected MR_TD.

  3. Build a validator:

    use mosaik::tickets::Tdx;
    let validator = Tdx::new().require_mrtd(expected_mrtd);
  4. Plumb validator into the server’s run path by stacking it on the committee GroupBuilder::require_ticket and on each collection / stream whose producer you want to TDX-gate.

Swapping the slot assignment function

  1. The slot is picked by zipnet_core::slot::slot_for(client, round, params). Change the body; the caller contract is -> usize.
  2. If you want the footprint scheduling variant, you’ll also want a per-round side channel — see Roadmap — Footprint scheduling.
  3. Deterministic and agreed upon by all nodes. Bump the protocol version tags accordingly.
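
A minimal sketch of the caller contract, under stated assumptions: ClientId and RoundParams are hypothetical shapes standing in for zipnet_core's real types; only the -> usize contract and the determinism requirement come from the chapter.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct ClientId(u64);

struct RoundParams {
    num_slots: usize,
}

/// Deterministic slot pick: the same (client, round, params) inputs
/// yield the same slot on every node — no I/O, no ambient randomness.
fn slot_for(client: &ClientId, round: u64, params: &RoundParams) -> usize {
    let mut h = DefaultHasher::new();
    client.0.hash(&mut h);
    round.hash(&mut h);
    (h.finish() as usize) % params.num_slots
}
```

Any replacement body must preserve these two properties: the result is in range for the round's slot count, and it is a pure function of the arguments.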

Running the integration test under heavier parameters

crates/zipnet-node/tests/e2e.rs uses RoundParams::default_v1() and a hardcoded 3-server / 2-client topology. Modify directly; the helpers (cross_sync, run_server, run_client, run_aggregator) are scoped to the test so no cross-cutting refactor is needed.

RUST_LOG=info,zipnet_node=debug cargo test -p zipnet-node --test e2e -- --nocapture

A successful run ends with

zipnet e2e: round r1 finalized with 2/2 messages recovered

Where to put a new role

If you introduce a fourth participant type (say, an “auditor” that archives Broadcasts to cold storage), the idiomatic placement is a new module in crates/zipnet-node/src/roles/ and a sibling crate under crates/zipnet-auditor/ that delegates to it. Follow the zipnet-aggregator binary layout.

Measuring something

Mosaik’s Prometheus metrics are auto-wired; add your own via the metrics crate:

use metrics::{counter, gauge};

counter!("zipnet_rounds_opened_total").increment(1);
gauge!("zipnet_client_registry_size").set(registry.len() as f64);

They will appear at the configured ZIPNET_METRICS endpoint without any scraper-side changes.

Building an adjacent service on the shared universe

Zipnet’s deployment model is a reusable pattern — the full rationale is in design-intro. Any service that wants to coexist on zipnet::UNIVERSE alongside zipnet should reproduce the three conventions:

  1. Content + intent addressed fingerprint. Every public id descends from a single blake3 hash over the operator’s intent (name), the signature-altering content (schema version, wire sizes, consensus config, init salt), and the ACL composition. Expose the fingerprint inputs as a const-constructible Config struct.
  2. A Deployment-shaped convention. Declare the public surface (one or two primitives, ideally) in a single protocol module; export typed Zipnet::<D>::*-style constructors that derive the ids internally.
  3. A fingerprint convention, not a registry. Operator → consumer handshake is universe NetworkId + Config + datum schema + (if TDX-gated) MR_TD. No on-network advertisement required — mosaik’s standard discovery bonds the sides.
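
Convention 1 can be sketched as a const-constructible Config whose hash covers intent, signature-altering content, and ACL composition. The field names and values are invented for illustration, and the std hasher is a placeholder for the blake3 derivation the convention actually specifies.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical config for an adjacent service; every field that alters
// the signature lives here, so the fingerprint inputs are auditable.
#[derive(Hash)]
struct Config {
    intent: &'static str,              // operator's intent (name)
    schema_version: u32,               // signature-altering content
    wire_size: usize,
    init_salt: [u8; 8],
    acl_tags: &'static [&'static str], // ACL composition
}

impl Config {
    const fn new() -> Self {
        Config {
            intent: "example.signer",
            schema_version: 1,
            wire_size: 1024,
            init_salt: [0; 8],
            acl_tags: &["example.ticket.operator"],
        }
    }

    /// Every public id the service exposes descends from this value.
    fn fingerprint(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.hash(&mut h);
        h.finish()
    }
}
```

Two operators holding the same Config compute the same fingerprint with no on-network advertisement, while any change to a signature-altering field moves every derived id at once.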

Walk the checklist for a new service end-to-end before writing any code. The most common mistake is not answering “what happens when StateMachine::signature() bumps?” before shipping.

When Shape B is the wrong call

A service whose traffic would dominate catalog gossip on the shared universe (high-frequency metric streams, bulk replication) belongs behind its own NetworkId — Shape A in design-intro — Two axes of choice. The narrow-public-surface discipline does not rescue a service whose steady-state traffic is inherently loud; at that point the noise cost dominates the composition benefit.

Optional directory collection

If your operator community wants a human-browsable list of known deployments, ship a sibling Map<InstanceName, InstanceCard> as a devops convenience, not as part of the consumer binding path. See Roadmap — Optional directory collection for the discipline.