Shardy: Decentralized Physical Infrastructure Network
Technical Architecture Specification
Version: 2.1 (Updated)
Target Architecture: Browser-based worker nodes + orchestrator consensus mesh
1. Product Definition
Shardy is a decentralized compute network that turns browser nodes into verifiable workers. The system separates:
- Control Plane (Orchestrator): Task creation, dispatch, verification, consensus, state persistence.
- Compute Plane (Worker Nodes): WebGPU/WASM execution, telemetry, ZK proof generation.
- P2P Plane (libp2p): Gossip for presence, transactions, and block announcements.
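A minimal TypeScript sketch of how these planes could be expressed as interfaces. All type, interface, and method names below are illustrative assumptions, not the actual Shardy module API:

```ts
// Illustrative plane boundaries only; names are assumptions, not the real API.
type TaskId = string;
type WorkerId = string;

interface ControlPlane {
  createTask(payload: Uint8Array, redundancy: number): Promise<TaskId>;
  dispatch(taskId: TaskId, workerId: WorkerId): Promise<void>;
  verify(taskId: TaskId): Promise<"task_verified" | "consensus_mismatch">;
}

interface ComputePlane {
  execute(input: ArrayBuffer, seed: bigint): Promise<{ checksum: string; proof: Uint8Array }>;
  reportTelemetry(sample: Record<string, number>): void;
}

interface P2PPlane {
  gossipPresence(signedPresence: Uint8Array): void;
  broadcastTransaction(tx: Uint8Array): void;
  announceBlock(blockHeader: Uint8Array): void;
}
```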
2. Current Runtime Components (Updated Structure)
| Component | Location | Responsibility |
|---|---|---|
| Orchestrator Service | docs/orchestrator/src | Bun + Elysia API, WebSocket worker gateway, consensus engine, state machine, task dispatcher. |
| Worker Node App | shardy-monorepo/apps/shardy | Browser node runtime, WebGPU compute worker, ZK proof generation, libp2p presence. |
| Docs App | shardy-monorepo/apps/docs | Technical and product documentation. |
| ZK Artifacts (Client) | shardy-monorepo/apps/shardy/public/snark | Groth16 WASM + zkey + verification key + manifest. |
| WASM Preprocessor | shardy-monorepo/apps/shardy/public/wasm | preprocess_node_engine.wasm for deterministic tensor preprocessing. |
3. Technology Stack (Actual Runtime)
| Execution Domain | Tooling Implemented | Architectural Responsibility |
|---|---|---|
| API + Orchestrator | Bun + Elysia | REST/WS gateway, worker admission, task lifecycle, telemetry, campaigns. |
| Consensus & State | libp2p + custom consensus engine | Block proposal/voting/commit, state root computation, snapshot/block sync. |
| Persistence | RocksDB (default), SQLite (dev-only) | Tasks, deliveries, workers, task events, dead letters, balances, blocks. |
| P2P Mesh | libp2p + GossipSub | Presence gossip, transaction broadcast, block announcements. |
| Compute Engine | WebGPU + TypeGPU | GPU execution in compute.worker.ts. |
| CPU Preprocessing | Rust -> WASM | Deterministic preprocessing and clamping, with a JS fallback. |
| ZK Proofs | Circom + SnarkJS (Groth16) | Proof generation on node, verification on orchestrator. |
4. High-Level Flow: End-to-End Pipeline (Overview)
5. Core Mechanics (Reality-Based)
A. Worker Admission & Tiering
Workers run a local benchmark and send profile_v2. The orchestrator then:
- Assigns Tier 1 / Tier 2 / Tier 3 based on GFLOPS and memory stability.
- Admits a worker only if it meets the minimum GFLOPS and VRAM-stability thresholds (see the sketch below).
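A minimal sketch of an admission and tiering function in TypeScript. The thresholds and the profile_v2 field names below are assumptions chosen for illustration; the spec only fixes the inputs (GFLOPS, VRAM stability) and the three-tier outcome:

```ts
// Hypothetical tiering heuristic; thresholds and field names are assumptions.
interface ProfileV2 {
  gflops: number;          // throughput measured by the local benchmark
  memStabilityPct: number; // share of benchmark runs with stable VRAM, 0-100
}

type Tier = 1 | 2 | 3;

function admitAndTier(p: ProfileV2): Tier | null {
  // Admission gate: minimum GFLOPS and VRAM stability are required at all.
  if (p.gflops < 50 || p.memStabilityPct < 90) return null;

  // Tier by raw throughput, demoting one tier when memory is only marginally stable.
  let tier: Tier = p.gflops >= 1000 ? 1 : p.gflops >= 300 ? 2 : 3;
  if (p.memStabilityPct < 97 && tier < 3) tier = (tier + 1) as Tier;
  return tier;
}
```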
B. Task Dispatch & Framing
Each assignment ships two frames:
- Meta frame: protobuf payload containing taskId, deliveryId, seed, complexity, verifierVersion.
- Binary frame: raw input payload for GPU/WASM.
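A minimal sketch of the two-frame send in TypeScript, using the standard WebSocket API as a stand-in for the orchestrator's WS gateway. The real meta frame is protobuf-encoded; JSON is used here only to keep the sketch self-contained, and any field beyond those listed above is an assumption:

```ts
// Sketch only: production encodes the meta frame with protobuf, not JSON.
interface MetaFrame {
  taskId: string;
  deliveryId: string;
  seed: string;            // serialized as a string to keep JSON round-trippable
  complexity: number;
  verifierVersion: string;
}

function sendAssignment(ws: WebSocket, meta: MetaFrame, input: ArrayBuffer): void {
  // Frame 1: assignment metadata.
  ws.send(new TextEncoder().encode(JSON.stringify(meta)));
  // Frame 2: raw binary payload consumed directly by the GPU/WASM pipeline.
  ws.send(input);
}
```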
C. Verification & Quorum
Tasks are assigned with redundancy. For verified tasks:
- Each worker produces a checksum and a Groth16 proof.
- Orchestrator verifies proof and cross-checks checksum.
- Matching results trigger task_verified; a mismatch produces consensus_mismatch and slashing.
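A minimal sketch of the orchestrator-side check in TypeScript using SnarkJS's Groth16 verifier. How the checksum is bound into the proof's public signals is not specified here and is treated as an assumption:

```ts
import { groth16 } from "snarkjs";

// Sketch only; the public-signal layout and the slashing hook are assumptions.
interface ProofSubmission {
  workerId: string;
  checksum: string;        // deterministic checksum of the worker's output
  proof: object;           // Groth16 proof object produced on the node
  publicSignals: string[]; // public inputs, assumed to commit to the checksum
}

async function verifyDelivery(
  verificationKey: object,
  submissions: ProofSubmission[],
): Promise<"task_verified" | "consensus_mismatch"> {
  // Every redundant worker must present a valid Groth16 proof.
  for (const s of submissions) {
    const ok = await groth16.verify(verificationKey, s.publicSignals, s.proof);
    if (!ok) return "consensus_mismatch";
  }
  // Cross-check: all checksums must agree before the task is marked verified.
  const checksums = new Set(submissions.map((s) => s.checksum));
  return checksums.size === 1 ? "task_verified" : "consensus_mismatch";
}
```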
D. State Safety
The orchestrator computes a deterministic state root from persisted data. Consensus nodes:
- Commit blocks only if state roots match.
- Sync via block streams or snapshots for fast catch-up.
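A minimal sketch of a deterministic state root and the commit rule in TypeScript. The actual record layout and hashing scheme used by the orchestrator are not specified above and are assumptions here:

```ts
import { createHash } from "node:crypto";

// Sketch only; the real state-root construction is an assumption.
function computeStateRoot(records: Map<string, Uint8Array>): string {
  const hash = createHash("sha256");
  // Iterate keys in sorted order so every node derives the same root from the
  // same persisted data, regardless of insertion order.
  for (const key of [...records.keys()].sort()) {
    hash.update(key);
    hash.update(records.get(key)!);
  }
  return hash.digest("hex");
}

// Commit rule: a block is committed only when the locally computed root matches
// the root carried by the proposed block; otherwise consensus halts.
function canCommit(localRoot: string, proposedRoot: string): boolean {
  return localRoot === proposedRoot;
}
```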
6. Security Guarantees (Updated)
- Signed Identity: Worker hello and presence messages are ECDSA P-256 signed (see the verification sketch below).
- Replay Guard: Proof submissions include replay guards in the state store.
- Timeout + Reassignment: A watchdog triggers retries, reassignments, or dead-letter routing.
- State Root Halting: Consensus halts on state root mismatches.
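A minimal sketch of verifying a signed hello/presence message with the WebCrypto API. The canonical byte layout of the signed payload and the key transport format are assumptions:

```ts
// Sketch only; payload canonicalization and key format are assumptions.
async function verifyWorkerSignature(
  publicKeyRaw: ArrayBuffer, // worker's ECDSA P-256 public key, raw format
  payload: ArrayBuffer,      // canonical bytes of the hello/presence message
  signature: ArrayBuffer,
): Promise<boolean> {
  const key = await crypto.subtle.importKey(
    "raw",
    publicKeyRaw,
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"],
  );
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    key,
    signature,
    payload,
  );
}
```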