Distributed Filesystem Research, Built To Run

Storage that keeps working when the network does not.

NexusFS combines local-first storage, signed operations, deterministic replication, and proof-ready verification in a single executable. It is designed for edge devices, offline-first workflows, and systems that cannot afford blind trust.

Current Baseline

Local core + admin + verification scaffolding

Canonical object hashing, chunked CAS writes, persistent heads, and idempotent oplog application are already wired into the workspace.

  • Binary shape: Single executable
  • Replication model: Oplog-first
  • Verification: Signatures + proofs
  • Target: Edge + offline-first
  • Workspace crates: 13
  • Core principle: Verifier-first
  • Local storage: CAS + KV
  • Proof path: Transparent to ZK

Why NexusFS

Practical systems engineering with research headroom

Offline-first by default

The system is built to survive partitions, weak connectivity, and delayed synchronization without requiring an always-on control plane.

Signed, replay-safe operations

Every mutation is represented as a signed filesystem operation with a stable identifier, allowing idempotent application across peers.

Proof-ready architecture

Transparent verification is the immediate baseline, but the object and protocol design reserve a clean path toward ZK commitments.

Energy-aware evolution

Background work is intended to respect battery, temperature, and link cost rather than pretending all nodes are always-on servers.

System Shape

Layered for integrity, not ceremony

NexusFS separates immutable content, mutable namespace state, transport, and optional facades. That keeps the local state machine deterministic while letting the replication layer and future proof systems evolve independently.

  • Core: object formats, chunking, state transitions
  • Storage: blob and KV backends with clean traits
  • Protocol: shared operation and network message types
  • Net: oplog-first synchronization and blob fetch
  • Facades: admin, S3-like API, and future POSIX mount
See the architecture page
Diagram: NexusFS architecture layers, from clients and peers through core, storage, crypto, and proof systems.

Build Surface

Designed to ship as a project, not just a paper

Research Tracks

Grounded implementation, ambitious horizon

Quick Start

Run the local core in minutes

```shell
cargo build -p nexusfs
cargo run -p nexusfs -- daemon --config examples/nexusfs.toml
cargo run -p nexusfs -- status --config examples/nexusfs.toml
```

The current implementation already boots local storage, persists device identity, creates a repository head on first launch, and serves the admin surface from the embedded HTTP layer.

Documentation

Public docs and engineering specs, both included

The repo now includes a dedicated `documentation/` folder for clean onboarding and a deeper internal `docs/` set for protocol and research detail.