Connecting to the universe

audience: users

This page covers the nuts and bolts of building the Arc<Network> that the Zipnet::<D>::* constructors attach to. The zipnet SDK never constructs the network for you; this is intentional. One network can host zipnet alongside other mosaik services on the shared universe, and you own its lifetime.

The minimum

use std::sync::Arc;
use mosaik::Network;
use zipnet::{Config, ShuffleWindow, UNIVERSE, Zipnet};
use zipnet::{DecodeError, ShuffleDatum, UniqueId, unique_id};
pub struct Note(pub [u8; 240]);
impl ShuffleDatum for Note {
    const TYPE_TAG: UniqueId = unique_id!("acme.note-v1");
    const WIRE_SIZE: usize = 240;
    fn encode(&self) -> Vec<u8> { self.0.to_vec() }
    fn decode(b: &[u8]) -> Result<Self, DecodeError> {
        <[u8; 240]>::try_from(b).map(Self).map_err(|e| DecodeError(e.to_string()))
    }
}

const ACME_MAINNET: Config = Config::new("acme.mainnet")
    .with_window(ShuffleWindow::interactive())
    .with_init([0u8; 32]); // operator-published in real deployments

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let network = Arc::new(Network::new(UNIVERSE).await?);
    let tx      = Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;

    let _ = tx.send(Note([0u8; 240])).await?;
    Ok(())
}
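The ShuffleDatum impl above has one invariant worth checking before you go on the wire: decode(encode(x)) must reproduce x exactly, and decode must reject input whose length differs from WIRE_SIZE. A minimal, self-contained sketch of that property, with Note and DecodeError redeclared locally so it runs without the zipnet crate:

```rust
// Local stand-ins for the zipnet types, so this check runs standalone.
#[derive(Debug, PartialEq)]
pub struct Note(pub [u8; 240]);

#[derive(Debug)]
pub struct DecodeError(pub String);

impl Note {
    fn encode(&self) -> Vec<u8> { self.0.to_vec() }
    fn decode(b: &[u8]) -> Result<Self, DecodeError> {
        <[u8; 240]>::try_from(b).map(Self).map_err(|e| DecodeError(e.to_string()))
    }
}

fn main() {
    let note = Note([7u8; 240]);
    // Round-trip: decode(encode(x)) == x.
    let back = Note::decode(&note.encode()).unwrap();
    assert_eq!(back, note);
    // Wrong length is rejected, not truncated or padded.
    assert!(Note::decode(&[0u8; 239]).is_err());
}
```

The same check is cheap to keep as a unit test next to any ShuffleDatum impl you write.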

Network::new(UNIVERSE) produces a network with default mosaik settings — random SecretKey, mDNS off, no bootstrap peers, no prometheus endpoint. Enough for local integration tests; rarely enough for a real deployment.

Bring your own builder

For anything beyond a local experiment, use Network::builder:

use std::{net::SocketAddr, sync::Arc};
use mosaik::{Network, discovery};
use zipnet::{Config, ShuffleWindow, UNIVERSE, Zipnet};
use zipnet::{DecodeError, ShuffleDatum, UniqueId, unique_id};
pub struct Note(pub [u8; 240]);
impl ShuffleDatum for Note {
    const TYPE_TAG: UniqueId = unique_id!("acme.note-v1");
    const WIRE_SIZE: usize = 240;
    fn encode(&self) -> Vec<u8> { self.0.to_vec() }
    fn decode(b: &[u8]) -> Result<Self, DecodeError> {
        <[u8; 240]>::try_from(b).map(Self).map_err(|e| DecodeError(e.to_string()))
    }
}

const ACME_MAINNET: Config = Config::new("acme.mainnet")
    .with_window(ShuffleWindow::interactive())
    .with_init([0u8; 32]);

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let network = Arc::new(
        Network::builder(UNIVERSE)
            .with_mdns_discovery(true)
            .with_discovery(
                discovery::Config::builder()
                    .with_bootstrap(universe_bootstrap_peers()),
            )
            .with_prometheus_addr("127.0.0.1:9100".parse::<SocketAddr>()?)
            .build()
            .await?,
    );

    let tx = Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;
    let _  = tx.send(Note([0u8; 240])).await?;
    Ok(())
}

fn universe_bootstrap_peers() -> Vec<mosaik::PeerId> { vec![] }

Every argument above is a mosaik concern, not a zipnet one. The full builder reference lives in the mosaik book. The rest of this page covers the fields that matter most for a zipnet user.

UNIVERSE

zipnet::UNIVERSE is the shared NetworkId every zipnet deployment lives on. Today it is mosaik::unique_id!("mosaik.universe"). When mosaik ships its own canonical universe constant, this value will be re-exported verbatim.

If your Network is on a different NetworkId, every Zipnet::<D>::* constructor rejects it with Error::WrongUniverse { expected, actual } before any I/O happens. There is no way to tunnel zipnet over a non-universe network; the SDK hard-checks this.
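Because the check happens before any I/O, it is cheap to turn into a friendly startup message. The sketch below mocks the error type locally; the real zipnet::Error shape and field names are assumptions, so treat this as a pattern, not the SDK's API:

```rust
// Hypothetical stand-in for zipnet::Error; variant fields are assumptions.
#[derive(Debug)]
enum Error {
    WrongUniverse { expected: String, actual: String },
    ConnectTimeout,
}

fn explain(err: &Error) -> String {
    match err {
        Error::WrongUniverse { expected, actual } => format!(
            "network built on {actual}, but zipnet requires {expected}; \
             rebuild the Network with zipnet::UNIVERSE"
        ),
        Error::ConnectTimeout => "no committee reachable; check bootstrap peers".into(),
    }
}

fn main() {
    let err = Error::WrongUniverse {
        expected: "mosaik.universe".into(),
        actual: "acme.private".into(),
    };
    eprintln!("{}", explain(&err));
}
```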

Bootstrap peers

Universe-level, not zipnet-specific. Any reachable peer on the shared universe works as a bootstrap — a mosaik registry node, a friendly operator’s aggregator, your own persistent relay. The operator does not typically hand out zipnet-deployment-specific bootstrap peers; they publish one set of universe bootstraps that their zipnet deployment (and any other services they host) joins through.

Once your network is bonded to the universe, the Zipnet::<D>::* constructors find the specific deployment’s committee through the shared peer catalog — you do not need to know anything zipnet-specific at network-builder time.

use mosaik::discovery;
use zipnet::UNIVERSE;

let network = mosaik::Network::builder(UNIVERSE)
    .with_discovery(
        discovery::Config::builder()
            .with_bootstrap(vec![
                // universe-level bootstrap peer IDs, operator-supplied
            ]),
    )
    .build()
    .await?;

On first connect with no bootstrap peers you fall back to the DHT. That works, but it is slow (tens of seconds on a cold start). At least one bootstrap peer is a practical requirement for anything beyond local tests.

mDNS

.with_mdns_discovery(true) collapses discovery latency from minutes to seconds on a shared LAN and is harmless elsewhere. Turn it off only if your security posture forbids advertising peers over mDNS.

Secret key

Omit .with_secret_key(...) for a fresh iroh identity per run. Set a stable SecretKey if you want a predictable PeerId across restarts. See Client identity for when each is appropriate.
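One common way to get a stable identity is to persist 32 key bytes next to your agent's state and feed them to .with_secret_key(...) on startup. A std-only sketch of the load-or-create step; the SecretKey constructor you would hand the bytes to is mosaik's concern and is not shown:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Load 32 key bytes from `path`, creating them on first run.
/// A real agent would feed these bytes to mosaik's SecretKey
/// constructor and then call .with_secret_key(...) on the builder.
fn load_or_create_key(path: &Path) -> io::Result<[u8; 32]> {
    match fs::read(path) {
        Ok(bytes) => bytes
            .try_into()
            .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "key file is not 32 bytes")),
        Err(e) if e.kind() == io::ErrorKind::NotFound => {
            // Demo-only key material; use a real CSPRNG in production.
            let mut key = [0u8; 32];
            for (i, b) in key.iter_mut().enumerate() {
                *b = i as u8; // placeholder, NOT random
            }
            fs::write(path, key)?;
            Ok(key)
        }
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("agent-secret.key");
    let first = load_or_create_key(&path)?;
    let second = load_or_create_key(&path)?;
    assert_eq!(first, second, "identity must be stable across restarts");
    Ok(())
}
```

Remember to keep the key file out of version control and readable only by the agent's user.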

Reaching the universe from behind NAT

iroh handles NAT traversal through its relay infrastructure. Most residential and office setups need no extra configuration. Things that help when they don’t:

  • Outbound UDP must be allowed. iroh carries all of its traffic as QUIC over UDP.
  • Full-cone NAT or better connects directly. Symmetric NAT falls back to the relay, which still works but adds latency.
  • UDP-terminating proxies break iroh. Run the agent from a host with raw outbound UDP.
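A quick way to rule out the most basic failure is a loopback UDP smoke test in a few lines of std. This only proves the host's own stack passes datagrams; it says nothing about your NAT or firewall path to the relay:

```rust
use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // If even loopback UDP fails, something on the host itself is
    // filtering UDP, and iroh's QUIC traffic cannot work at all.
    let rx = UdpSocket::bind("127.0.0.1:0")?;
    let tx = UdpSocket::bind("127.0.0.1:0")?;
    rx.set_read_timeout(Some(Duration::from_secs(1)))?;
    tx.send_to(b"ping", rx.local_addr()?)?;
    let mut buf = [0u8; 16];
    let (n, _from) = rx.recv_from(&mut buf)?;
    assert_eq!(&buf[..n], b"ping");
    println!("local UDP ok");
    Ok(())
}
```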

At startup the network logs its relay choice:

relay-actor: home is now relay https://euc1-1.relay.n0.iroh-canary.iroh.link./

Repeated “Failed to connect to relay server” warnings mean your outbound path is broken; discovery mostly still works via DHT, just slow.

Observability for your own agent

use std::{net::SocketAddr, sync::Arc};
use mosaik::Network;
use zipnet::UNIVERSE;

let network = Arc::new(
    Network::builder(UNIVERSE)
        .with_prometheus_addr("127.0.0.1:9100".parse::<SocketAddr>()?)
        .build()
        .await?,
);

Then scrape http://127.0.0.1:9100/metrics — you’ll get mosaik’s metrics plus whatever you emit with the metrics crate. The zipnet SDK does not expose its own top-level metrics endpoint; observability is the network’s job.

One network, many services and deployments

Because the Zipnet::<D>::* constructors only borrow &Arc<Network>, you pay for one mosaik endpoint across every service and deployment you bind:

use std::sync::Arc;
use mosaik::Network;
use zipnet::{Config, ShuffleWindow, UNIVERSE, Zipnet};
use zipnet::{DecodeError, ShuffleDatum, UniqueId, unique_id};
pub struct Note(pub [u8; 240]);
impl ShuffleDatum for Note {
    const TYPE_TAG: UniqueId = unique_id!("acme.note-v1");
    const WIRE_SIZE: usize = 240;
    fn encode(&self) -> Vec<u8> { self.0.to_vec() }
    fn decode(b: &[u8]) -> Result<Self, DecodeError> {
        <[u8; 240]>::try_from(b).map(Self).map_err(|e| DecodeError(e.to_string()))
    }
}

const ACME_MAINNET: Config = Config::new("acme.mainnet")
    .with_window(ShuffleWindow::interactive())
    .with_init([0u8; 32]);

const PREVIEW_ALPHA: Config = Config::new("preview.alpha")
    .with_window(ShuffleWindow::interactive())
    .with_init([0u8; 32]);

async fn run() -> anyhow::Result<()> {
    let network = Arc::new(Network::new(UNIVERSE).await?);

    let prod    = Zipnet::<Note>::submit(&network, &ACME_MAINNET).await?;
    let testnet = Zipnet::<Note>::submit(&network, &PREVIEW_ALPHA).await?;
    // Unrelated services on the same universe would bind similarly:
    // let multisig = Multisig::<Vote>::sign(&network, &TREASURY).await?;
    // let storage  = Storage::<Blob>::write(&network, &ARCHIVE).await?;
    let _ = (prod, testnet);
    Ok(())
}

Each deployment derives its own IDs from its own Config + datum type, so they coexist on the shared peer catalog without collision. One UDP socket, one DHT record, one gossip loop.
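The collision-freedom follows from the identity derivation: everything that defines a deployment (the Config's name and init, plus D::TYPE_TAG and D::WIRE_SIZE; the window folds in too, omitted here for brevity) is hashed into one id. The sketch below only illustrates the idea with std's DefaultHasher; the real derivation is zipnet-internal and uses a cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative only: fold every identity-relevant field into one value.
fn deployment_id(name: &str, init: [u8; 32], type_tag: &str, wire_size: usize) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    init.hash(&mut h);
    type_tag.hash(&mut h);
    wire_size.hash(&mut h);
    h.finish()
}

fn main() {
    let prod = deployment_id("acme.mainnet", [0u8; 32], "acme.note-v1", 240);
    let test = deployment_id("preview.alpha", [0u8; 32], "acme.note-v1", 240);
    // Different names: different deployments on the same catalog.
    assert_ne!(prod, test);
    // Same name but a schema bump (WIRE_SIZE change): also a new id,
    // which is why a schema mismatch surfaces as ConnectTimeout.
    let bumped = deployment_id("acme.mainnet", [0u8; 32], "acme.note-v1", 256);
    assert_ne!(prod, bumped);
}
```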

Graceful shutdown

drop(network);

drop cancels everything — open streams, collection readers, bonds. Mosaik emits a gossip departure so the operator’s logs show you leaving cleanly. Dropping individual handles (Submitter / Reader) closes that handle only; other tasks holding clones keep running, and the Network stays up. See Publishing — Shutdown.
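Because the constructors only borrow &Arc<Network>, the network's lifetime is plain Arc semantics: dropping one clone decrements the count, and teardown happens only when the last owner goes. A sketch with a stand-in Network type (the Arc behaviour is the real mechanism; the struct is a mock):

```rust
use std::sync::Arc;

struct Network; // stand-in for mosaik::Network

fn main() {
    let network = Arc::new(Network);
    // A Submitter or Reader holding a clone keeps the network alive...
    let handle = Arc::clone(&network);
    drop(handle); // ...and dropping that handle closes only the handle.
    assert_eq!(Arc::strong_count(&network), 1);
    drop(network); // last owner: streams, readers, and bonds are cancelled
}
```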

Cold-start checklist

If your agent starts but one of the Zipnet::<D>::* constructors returns ConnectTimeout:

  1. The Arc<Network> is on UNIVERSE. If you see WrongUniverse instead, the network was built against a different NetworkId. Switch back to UNIVERSE.
  2. The Config matches the operator’s exactly. Name, window, and init all fold into the deployment’s on-wire identity; any mismatch surfaces as ConnectTimeout, not a structured error. Print Zipnet::<D>::deployment_id(&CONFIG) on both sides to confirm they agree.
  3. D::TYPE_TAG and D::WIRE_SIZE match the operator’s schema. These also fold into the identity — a schema bump produces a different deployment id even at the same name.
  4. Bootstrap PeerIds are reachable. nc -zv <their_host> or whatever the operator tells you to test.
  5. Outbound UDP is allowed. iperf over UDP to a public host.
  6. Your mosaik version matches (=0.3.17). Any minor-version drift changes wire formats.

If none of these resolves it, see Troubleshooting.