EasyEnclave Mini

A confidential computing manifesto · MMXXVI

Synthwave horizon: a striped sun setting behind mountains over a neon perspective grid, with an OUTATIME license plate

Where we're going, we don't need commands.

EasyEnclave Mini is a ~50 MB Intel TDX runtime. No general-purpose distro. No container runtime. No SSH. No HTTP. No package manager. Boot. Attest. Run.

Open Mini on GitHub · How we got here

Six things we removed

00 — No general-purpose distro

Stock distributions ship hundreds of packages no one would consciously add to a trusted computing base if asked one at a time. Mini is a small Rust PID 1, a Linux kernel, and your workload binary — and that's the list.

If we can't justify it for the attestation report, it isn't in the image.
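The supervision half of that PID 1's job can be sketched in a few lines of Rust. This is an illustration, not Mini's actual code: the restart policy and back-off are hypothetical, and a real init would also reap orphaned processes via `waitpid(-1)`, which `Command::status` alone does not do.

```rust
use std::process::Command;
use std::thread;
use std::time::Duration;

/// Start the workload and restart it if it exits abnormally, up to
/// `max_restarts` extra attempts. Returns how many times it ran.
fn supervise(program: &str, args: &[&str], max_restarts: u32) -> u32 {
    let mut runs = 0;
    loop {
        runs += 1;
        match Command::new(program).args(args).status() {
            // Clean exit: the workload is done, so the supervisor is too.
            Ok(status) if status.success() => break,
            // Crash loop guard: stop retrying past the budget.
            _ if runs > max_restarts => break,
            // Abnormal exit: back off briefly, then restart.
            _ => thread::sleep(Duration::from_millis(100)),
        }
    }
    runs
}
```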

01 — No container runtime

The enclave already isolates the workload from the host. Stacking containerd inside the VM rebuilds namespaces and cgroups at the layer where you most need to read every line. Workloads run as plain processes.

Isolation primitives belong outside what we're trying to attest.

02 — No package manager

Every byte was decided at build time. The image you boot is the image you measure. There's nothing to apt-get inside a running enclave because the enclave isn't a place to install software — it's a place to run a workload you already shipped.

03 — No SSH daemon

You don't ssh into a confidential VM. The point of one is that the operator cannot see in. Control happens through one local unix socket inside the guest, gated by a boot-time token that nobody outside the VM ever sees.

If your runbook ends with "ssh in and check," you've already failed the threat model.

04 — No HTTP server

The control plane is newline-delimited JSON over a unix socket. There is no public HTTP surface to scan, fingerprint, or 0-day. Anything that talks out, talks out from the workload itself, on its terms.
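A client for that kind of control plane fits in a page. The sketch below is a minimal illustration: the socket path, the `token`/`cmd` field names, and the `status` command are assumptions for the example, not Mini's documented wire schema, and the framing assumes the token and command need no JSON escaping.

```rust
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;

/// Frame one control-plane request as a single JSON line.
/// Field names are illustrative; assumes inputs need no JSON escaping.
fn frame_request(token: &str, cmd: &str) -> String {
    // Newline-delimited JSON: exactly one request per line.
    format!("{{\"token\":\"{}\",\"cmd\":\"{}\"}}\n", token, cmd)
}

/// Send one request over the local socket and read one reply line.
fn call(socket: &str, token: &str, cmd: &str) -> std::io::Result<String> {
    let mut stream = UnixStream::connect(socket)?;
    stream.write_all(frame_request(token, cmd).as_bytes())?;
    let mut reply = String::new();
    BufReader::new(stream).read_line(&mut reply)?;
    Ok(reply)
}
```

Because every request and reply is one line, the whole protocol can be exercised with a pipe and read with your eyes — no TLS termination, no routing, no content negotiation.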

05 — No commands to memorize

Boot. Attest. Run a workload binary. That's the entire posture. Everything else was a build-time decision baked into a measured, reproducible image — readable end-to-end.

Where we're going, we don't need a runbook.

1.21 megabytes is plenty.
— flux capacitor, probably
DevOps Defender: a pixel-art armored guardian holding a shield bearing a TDX lock, surrounded by circuit traces
DevOps Defender · stands at the edge of the enclave so you don't have to.

What's actually inside

Subtraction is the loud half of the story. Here's the quiet half: the small set of pieces that survived the cut. Each one is here because it earns its place in the attestation report.

Rust PID 1
Mounts filesystems, loads boot-time config, reaps processes, starts and supervises the workload. That's the job description.
Linux kernel
A pinned kernel with the modules required to talk to TDX hardware and bring up the network. Nothing extra is compiled in.
TDX attestation
Quotes are produced through Linux configfs-tsm, with caller-supplied report-data so verifiers can bind quotes to whatever they're checking.
Unix socket API
One local socket, newline-delimited JSON, boot-token gated. The full control plane fits on a single page.
Your workload
A static binary you built, pulled in at image-build time. It runs as the workload — no shim, no init wrapper, no daemon.
Image targets
Differences between cloud and local boot live in the image profile: gcp, azure, local-tdx-qcow2. The runtime stays the same.
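The attestation item above can be sketched as a quote fetch through configfs-tsm. This is a minimal illustration under stated assumptions: it requires a TDX guest kernel exposing the tsm report interface, the entry name `mini0` is arbitrary, and error handling and cleanup of the report entry are omitted.

```rust
use std::fs;
use std::path::Path;

/// TDX report-data is exactly 64 bytes; shorter caller input is
/// zero-padded, so a verifier can bind e.g. a 32-byte hash to the quote.
fn pad_report_data(data: &[u8]) -> [u8; 64] {
    let mut blob = [0u8; 64];
    let n = data.len().min(64);
    blob[..n].copy_from_slice(&data[..n]);
    blob
}

/// Fetch a quote via configfs-tsm (TDX guest kernel required).
/// The entry name "mini0" is arbitrary for this sketch.
fn get_quote(report_data: &[u8]) -> std::io::Result<Vec<u8>> {
    let dir = Path::new("/sys/kernel/config/tsm/report/mini0");
    fs::create_dir(dir)?; // creating the directory allocates a report slot
    fs::write(dir.join("inblob"), pad_report_data(report_data))?;
    fs::read(dir.join("outblob")) // reading outblob yields the quote
}
```

The caller-supplied report-data is what makes the quote useful: put a hash of your session key or request in it, and the verifier checks that binding along with the measurements.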

Read more

There is a longer version of this story: why the project split, why a 7 GB CUDA image got parked, why the active branch is ~50 MB and CPU-only.

How we got here · Read the source