Etcd's problem space, attacked with Rust.
Mango is not "etcd, rewritten." It is etcd's problem space attacked with a language whose primitives lift specific etcd footguns out of existence at compile time.
Etcd is the reference implementation we study; we are not bound by its Go-isms. Rust's primitives carry the weight:

- No GC: tail latency is bounded by what we do in our own code, never by a runtime collector.
- Memory safety without a runtime: use-after-free, double-free, and data races on shared memory are not possible in safe Rust. CVEs in this class become impossible-by-construction, not impossible-after-careful-review.
- Fearless concurrency via Send and Sync: "shared mutable state across threads" turns from a Friday-night page into a compile error.
- Explicit failure as a value via Result<T, E>: every fallible operation is visible at the call site.
- Cargo-native supply chain hygiene: cargo-deny, cargo-audit, cargo-vet, and SBOM via CycloneDX.
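To make the Result<T, E> point concrete, here is a minimal std-only sketch; the `Store` and `KvError` names are hypothetical illustrations, not mango's actual API. The caller cannot touch the value without first acknowledging, at the call site, that the lookup may fail.

```rust
use std::collections::HashMap;

// Hypothetical error type: failure is a value, not a sentinel or a panic.
#[derive(Debug, PartialEq)]
enum KvError {
    NotFound,
}

struct Store {
    map: HashMap<String, Vec<u8>>,
}

impl Store {
    // The signature itself advertises the failure mode.
    fn get(&self, key: &str) -> Result<&[u8], KvError> {
        self.map
            .get(key)
            .map(|v| v.as_slice())
            .ok_or(KvError::NotFound)
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert("a".to_string(), b"1".to_vec());
    let store = Store { map };

    // The caller must handle both arms; ignoring the Err is a compile warning.
    match store.get("missing") {
        Ok(v) => println!("value: {:?}", v),
        Err(KvError::NotFound) => println!("not found"),
    }
    assert_eq!(store.get("a").unwrap(), b"1");
}
```

Contrast with Go, where a `(value, err)` pair can be silently dropped; here the type system forces the branch.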
These are the mechanisms. The ten bars below are the measurement. Every PR is judged against them. If a change merely matches etcd, that is a regression relative to the goal — find the win, or find the lever.
Ten measurable axes.
Each bar has a comparison oracle (the pinned etcd v3.5.x binary at benches/oracles/etcd/), a hardware signature, and a named test that gates merge. Full bar definitions in the roadmap.
Is mango the right tool for me?
Distributed KV stores are not interchangeable. Pick the one whose consistency model and scale ceiling match the problem you have.
| | Mango | etcd | FoundationDB | DynamoDB |
|---|---|---|---|---|
| Consistency | Linearizable | Linearizable | Strict serializable | Eventual; strong opt-in (2× cost) |
| Replication | Raft, single cluster | Raft, single cluster | Multi-version, multi-shard | Hash-sharded, multi-region async |
| Writes / cluster | ≥ 1.5× etcd target · bar #1 | ~50-200K/sec | ~10M/sec (mixed) | ~10-100M/sec global (mixed) |
| Linearizable reads | ~600K/sec (Tier 2b) target | ~50-150K/sec (ReadIndex) | (see above) | Strong-reads-only mode (2× cost) |
| Stale reads | ~1M/sec (Tier 2a) target | ~500K-1M/sec (serializable) | (see above) | Default mode |
| Deployment | Self-host, OSS | Self-host, OSS | Self-host, OSS | AWS-only, hosted |
| Primary use case | Cluster metadata, coordination, config, leader election | Same as mango | Application data, ACID at scale | Application data CRUD at hyperscale |
| Operational profile | Single-binary, deterministic latency (no GC) | Single-binary, Go GC | Multi-process; coordinators, storage, log | Fully managed |
Shipped, in flight, planned.
The roadmap progresses phase by phase, expert-gated PR by expert-gated PR. ROADMAP.md is the source of truth; everything below is a snapshot.
Shipped
- Phase 0: Governance, CI gates, lints, supply-chain tooling.
- Phase 0.5: Foundation tooling (nextest, loom, madsim, miri, semver-checks, cargo-deny).
- Phase 1: Single-node storage layer on redb plus tikv/raft-engine, behind a swappable Backend trait.
- Phase 2: MVCC: Revision, sharded KeyIndex, snapshots via arc_swap, compaction with physical removal, fuzz target on key encoding.
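The shape of the Phase 2 MVCC layer can be pictured with a std-only sketch, with a `BTreeMap` standing in for redb and none of the sharded KeyIndex, arc_swap snapshots, or compaction; all names here are illustrative, not mango's types. Every write is stamped with a monotonically increasing revision, and a read at revision R sees the newest version at or below R.

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

/// One entry per (key, revision) pair; None marks a tombstone from a delete.
struct Mvcc {
    // Keyed by (key, revision) so all versions of a key sort adjacently.
    versions: BTreeMap<(String, u64), Option<Vec<u8>>>,
    rev: u64,
}

impl Mvcc {
    fn new() -> Self {
        Mvcc { versions: BTreeMap::new(), rev: 0 }
    }

    /// Every write gets the next monotonically increasing revision.
    fn put(&mut self, key: &str, value: &[u8]) -> u64 {
        self.rev += 1;
        self.versions
            .insert((key.to_string(), self.rev), Some(value.to_vec()));
        self.rev
    }

    fn delete(&mut self, key: &str) -> u64 {
        self.rev += 1;
        self.versions.insert((key.to_string(), self.rev), None);
        self.rev
    }

    /// Snapshot read: the newest version of `key` with revision <= `at`.
    fn get_at(&self, key: &str, at: u64) -> Option<&[u8]> {
        let lo = Bound::Included((key.to_string(), 0));
        let hi = Bound::Included((key.to_string(), at));
        self.versions
            .range((lo, hi))
            .next_back()
            .and_then(|(_, v)| v.as_deref())
    }
}

fn main() {
    let mut s = Mvcc::new();
    let r1 = s.put("k", b"v1");
    let r2 = s.put("k", b"v2");
    // Older snapshots stay readable after newer writes.
    assert_eq!(s.get_at("k", r1), Some(&b"v1"[..]));
    assert_eq!(s.get_at("k", r2), Some(&b"v2"[..]));
    let r3 = s.delete("k");
    assert_eq!(s.get_at("k", r3), None);
}
```

Compaction then amounts to physically removing, for each key, every version superseded before the compaction revision.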
Planned
- Phase 3: Watch: streaming change notifications, sync & unsync watcher groups, progress notifies.
- Phase 4: Lease: TTL keys, keep-alive, expiry-driven atomic delete.
- Phase 5: Raft consensus on tikv/raft-rs; pipelined replication, ReadIndex, deterministic simulation.
- Phase 6: gRPC server: KV, Watch, Lease services, plus the node binary.
- Phase 7: mangoctl, the etcdctl-equivalent CLI.
- Phase 13: Robustness: public Jepsen run, deterministic simulator regression suite.
- Phase 14: Performance push against the pinned etcd v3.5.x oracle.
- Phase 14.5: Tier 2 read-scale-out via learner replicas.
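The watch mechanism in Phase 3 can be approximated with a std-only sketch: mpsc channels stand in for gRPC watch streams, there is no sync/unsync watcher split and no progress notifies, and every name here is an illustrative assumption rather than mango's design. A put fans the change event out to each watcher registered on that key.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

#[derive(Debug, Clone, PartialEq)]
struct Event {
    key: String,
    value: Vec<u8>,
    revision: u64,
}

struct WatchableStore {
    data: HashMap<String, Vec<u8>>,
    // Per-key list of subscriber channels.
    watchers: HashMap<String, Vec<Sender<Event>>>,
    rev: u64,
}

impl WatchableStore {
    fn new() -> Self {
        WatchableStore {
            data: HashMap::new(),
            watchers: HashMap::new(),
            rev: 0,
        }
    }

    /// Register a watcher on `key`; change events arrive on the receiver.
    fn watch(&mut self, key: &str) -> Receiver<Event> {
        let (tx, rx) = channel();
        self.watchers.entry(key.to_string()).or_default().push(tx);
        rx
    }

    fn put(&mut self, key: &str, value: &[u8]) -> u64 {
        self.rev += 1;
        let rev = self.rev;
        self.data.insert(key.to_string(), value.to_vec());
        if let Some(subs) = self.watchers.get_mut(key) {
            // Fan out; drop watchers whose receiver side has gone away.
            subs.retain(|tx| {
                tx.send(Event {
                    key: key.to_string(),
                    value: value.to_vec(),
                    revision: rev,
                })
                .is_ok()
            });
        }
        rev
    }
}

fn main() {
    let mut store = WatchableStore::new();
    let rx = store.watch("config");
    store.put("config", b"v1");
    let ev = rx.recv().unwrap();
    assert_eq!(ev.key, "config");
    assert_eq!(ev.value, b"v1");
    assert_eq!(ev.revision, 1);
}
```

The real service replaces the in-process channels with server-streaming RPCs and replays history from the MVCC revision log for watchers that start behind the current revision.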