TEE-gated deployments

audience: users

Some zipnet deployments require every participant — committee members and publishing clients — to run inside a TDX enclave whose measurement matches the operator’s expected MR_TD. This chapter covers the user side of that setup.

Is the deployment TEE-gated?

Ask the operator. Specifically:

  • Does the committee stack a Tdx validator on its admission tickets?
  • If so, what MR_TD must your client image report?

If the answer to the first question is no, skip this chapter — the rest of the user guide applies unchanged.

How the SDK decides whether to attest

TDX is a Cargo feature on the zipnet crate, not a function of the instance name:

  • tee-tdx disabled (default). The SDK runs a mocked attestation path. Your PeerEntry does not carry a TDX quote. A TDX-gated operator’s committee rejects you at bond time — you see Error::ConnectTimeout (the rejection is silent at the discovery layer) or Error::Attestation if the operator has enabled a stricter surfacing mode.
  • tee-tdx enabled. The Zipnet::<D>::* constructors use mosaik’s real TDX path to generate a quote bound to your current PeerId and attach it to your discovery entry. The committee validates the quote before admitting you.

# Cargo.toml for a user-side agent that must attest.
[dependencies]
zipnet = { version = "0.1", features = ["tee-tdx"] }

With the feature on, your binary only runs correctly inside a real TDX guest. The TDX hardware refuses to quote from a non-TDX machine, so the Zipnet::<D>::* constructors surface that as Error::Attestation("…").

Build-time: produce a TDX image

Add mosaik’s TDX builder to your crate:

[build-dependencies]
mosaik = { version = "=0.3.17", features = ["tdx-builder-alpine"] }
# or: features = ["tdx-builder-ubuntu"]

build.rs:

fn main() {
    mosaik::tee::tdx::build::alpine().build();
}

This produces a bootable TDX guest image at target/<profile>/tdx-artifacts/<crate>/alpine/ plus a precomputed <crate>-mrtd.hex. The operator either uses your MR_TD as their expected value, or — if they pin a specific image — hands you theirs and you rebuild to match.
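Before deploying, it is worth sanity-checking the emitted `<crate>-mrtd.hex` against the operator's expected value. A minimal std-only sketch (file paths and measurement values are placeholders, not SDK API):

```rust
use std::fs;
use std::path::Path;

/// Read a precomputed `<crate>-mrtd.hex` file and normalize it
/// (strip surrounding whitespace, lowercase) for comparison.
fn read_mrtd(path: &Path) -> std::io::Result<String> {
    Ok(fs::read_to_string(path)?.trim().to_ascii_lowercase())
}

/// Compare the local measurement against the operator-published value.
fn mrtd_matches(local: &str, expected: &str) -> bool {
    local.trim().eq_ignore_ascii_case(expected.trim())
}

fn main() -> std::io::Result<()> {
    // Demo with a stand-in file; in practice point this at
    // target/<profile>/tdx-artifacts/<crate>/alpine/<crate>-mrtd.hex.
    let path = std::env::temp_dir().join("demo-mrtd.hex");
    fs::write(&path, "AB12CD34\n")?;
    let local = read_mrtd(&path)?;
    // Expected value as published in the operator's release notes.
    assert!(mrtd_matches(&local, "ab12cd34"));
    println!("MR_TD matches the operator's expected value");
    Ok(())
}
```

Normalizing case and whitespace on both sides avoids false mismatches between tooling that emits uppercase hex and release notes that quote lowercase.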

The mosaik TDX reference covers Alpine vs Ubuntu trade-offs, SSH and kernel customization, and environment-variable overrides.

The operator → user handshake for TDX

A TDX-gated deployment adds one item to the three-item handshake in What you need from the operator:

  • Committee MR_TD: the 48-byte hex measurement the operator’s committee images use.

The operator hands this out via their release notes, not over the wire. The zipnet SDK does not bake in per-instance MR_TD mappings: there is no table of “acme.mainnet requires MR_TD abc…” inside the crate. Publishing that mapping, out of band, is the operator’s responsibility.

When the operator rotates the image, your old quote stops validating; the fix is to rebuild with the new MR_TD and redeploy. There is no auto-discovery of acceptable measurements on the wire.

Multi-variant deployments

During a rollout, an operator may accept multiple client MR_TDs simultaneously — usually the old and the new during a staged migration. You only need to match one of them. The precomputed hex files in target/<profile>/tdx-artifacts/<crate>/.../ tell you what your image reports; compare against the list the operator publishes.
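The comparison against a published list is a simple membership check. A sketch (the measurement values below are shortened placeholders, not real 96-character MR_TDs):

```rust
/// True if the local MR_TD appears in the operator's published accept-list
/// (typically the old and the new measurement during a staged migration).
fn accepted(local: &str, published: &[&str]) -> bool {
    published.iter().any(|m| m.eq_ignore_ascii_case(local.trim()))
}

fn main() {
    // Shortened placeholders; real MR_TD values are 96 hex characters.
    let published = ["aa11", "bb22"];
    assert!(accepted("AA11", &published));
    assert!(!accepted("cc33", &published));
    println!("matching any one published measurement is enough");
}
```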

Sealing secrets inside the enclave

Zipnet’s current SDK does not expose a sealed-storage helper — each Zipnet::<D>::submit generates a fresh per-handle DH identity in process memory. That is fine for the default anonymous usage model, where identity is meant to rotate.

If you need stable identity across enclave reboots for a reputation use case, you will need to persist state to TDX sealed storage yourself today. That is out of scope for the SDK and likely to land as a mosaik primitive rather than a zipnet feature; watch the mosaik release notes.
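If you do roll your own persistence today, be aware that a plain file write is not sealing: it survives reboots only if the backing disk does, and it binds nothing to the enclave measurement. A deliberately minimal sketch of the persistence half, with hypothetical names:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// NOT real sealing: writes raw identity bytes to a path you are trusting
/// to live on an encrypted guest disk. True TDX sealed storage would bind
/// the blob to the enclave measurement; the SDK offers no helper for that.
fn store_identity(path: &Path, key_bytes: &[u8]) -> io::Result<()> {
    fs::write(path, key_bytes)
}

fn load_identity(path: &Path) -> io::Result<Vec<u8>> {
    fs::read(path)
}

fn main() -> io::Result<()> {
    let path: PathBuf = std::env::temp_dir().join("agent-identity.bin");
    store_identity(&path, b"demo-identity-bytes")?;
    assert_eq!(load_identity(&path)?, b"demo-identity-bytes".to_vec());
    Ok(())
}
```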

Falling back to non-TDX for development

If you’re writing integration tests and don’t want a TDX VM in the loop, build without the tee-tdx feature and use a deployment whose operator has disabled TDX gating. Typical arrangement:

  • Production and staging: tee-tdx on both sides.
  • Local dev / CI: tee-tdx off on both sides.

The operator runs the dev instance without the Tdx validator on committee admissions; you build your client without the tee-tdx feature. Both sides’ mocks line up.
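One way to keep both arrangements in a single crate is to re-export the SDK feature behind a feature of your own. A Cargo.toml sketch (the `prod` feature name is our invention, not an SDK convention):

```toml
[dependencies]
zipnet = "0.1"              # tee-tdx stays off by default

[features]
default = []                # local dev / CI: mocked attestation
prod = ["zipnet/tee-tdx"]   # build with: cargo build --features prod
```

Staging and production pipelines pass `--features prod`; dev and CI builds use the default, so the client mock matches the operator's ungated dev instance.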

Failure modes

The error the SDK surfaces when TDX is involved is Error::Attestation(String). Common causes:

  • You built with tee-tdx but aren’t running inside a TDX guest (hardware refuses to quote).
  • Your MR_TD differs from the operator’s. Rebuild with their image.
  • The operator rotated MR_TD and you haven’t. Rebuild.

ConnectTimeout can also stem from TDX mismatches on deployments that surface attestation failures silently at the bond layer; see Troubleshooting.
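The decision tree above can be sketched in code. The enum below is a local stand-in mirroring only the two documented variants, not the SDK's real error type:

```rust
/// Local stand-in for the SDK's error surface; the real zipnet::Error
/// has more variants than the two documented in this chapter.
#[derive(Debug)]
enum Error {
    Attestation(String),
    ConnectTimeout,
}

/// Map a failure onto the remediation this chapter recommends.
fn diagnose(err: &Error) -> &'static str {
    match err {
        Error::Attestation(_) => {
            "check that you are inside a real TDX guest, then rebuild against the operator's current MR_TD"
        }
        Error::ConnectTimeout => {
            "possibly a silent TDX rejection at the bond layer; verify MR_TD before debugging the network"
        }
    }
}

fn main() {
    let err = Error::Attestation("quote generation failed".into());
    println!("{:?}: {}", err, diagnose(&err));
}
```

Treating ConnectTimeout as a possible attestation failure first, rather than a network problem, saves time on TDX-gated deployments that reject silently.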