Shardy: Decentralized Physical Infrastructure Network

Technical Architecture Specification

Version: 2.1 (Updated)
Target Architecture: Browser-based worker nodes + orchestrator consensus mesh


1. Product Definition

Shardy is a decentralized compute network that turns web browsers into verifiable worker nodes. The system is split into three planes:

  • Control Plane (Orchestrator): Task creation, dispatch, verification, consensus, state persistence.
  • Compute Plane (Worker Nodes): WebGPU/WASM execution, telemetry, ZK proof generation.
  • P2P Plane (libp2p): Gossip for presence, transactions, and block announcements.
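
The split can be read as three narrow interfaces between the planes. The sketch below is illustrative only; the interface and method names are not the actual Shardy exports.

```ts
// Illustrative view of the three-plane split; names are not real Shardy exports.

interface ControlPlane {
  createTask(spec: { complexity: number; redundancy: number }): Promise<string>; // returns a taskId
  dispatch(taskId: string, workerId: string): Promise<void>;
  verifyDelivery(deliveryId: string): Promise<"task_verified" | "consensus_mismatch">;
}

interface ComputePlane {
  runBenchmark(): Promise<{ gflops: number; vramStable: boolean }>;
  execute(input: Uint8Array, seed: number): Promise<{ checksum: string; proof: unknown }>;
}

interface P2PPlane {
  gossipPresence(signedHello: Uint8Array): void;
  broadcastTransaction(tx: Uint8Array): void;
  announceBlock(height: number, stateRoot: string): void;
}
```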

2. Current Runtime Components (Updated Structure)

| Component | Location | Responsibility |
| --- | --- | --- |
| Orchestrator Service | docs/orchestrator/src | Bun + Elysia API, WebSocket worker gateway, consensus engine, state machine, task dispatcher. |
| Worker Node App | shardy-monorepo/apps/shardy | Browser node runtime, WebGPU compute worker, ZK proof generation, libp2p presence. |
| Docs App | shardy-monorepo/apps/docs | Technical and product documentation. |
| ZK Artifacts (Client) | shardy-monorepo/apps/shardy/public/snark | Groth16 WASM + zkey + verification key + manifest. |
| WASM Preprocessor | shardy-monorepo/apps/shardy/public/wasm | preprocess_node_engine.wasm for deterministic tensor preprocessing. |

3. Technology Stack (Actual Runtime)

| Execution Domain | Tooling Implemented | Architectural Responsibility |
| --- | --- | --- |
| API + Orchestrator | Bun + Elysia | REST/WS gateway, worker admission, task lifecycle, telemetry, campaigns. |
| Consensus & State | libp2p + custom consensus engine | Block proposal/voting/commit, state root computation, snapshot/block sync. |
| Persistence | RocksDB (default), SQLite (dev-only) | Tasks, deliveries, workers, task events, dead letters, balances, blocks. |
| P2P Mesh | libp2p + GossipSub | Presence gossip, transaction broadcast, block announcements. |
| Compute Engine | WebGPU + TypeGPU | GPU execution in compute.worker.ts. |
| CPU Preprocessing | Rust -> WASM | Deterministic preprocessing and clamping, fallback to JS. |
| ZK Proofs | Circom + SnarkJS (Groth16) | Proof generation on node, verification on orchestrator. |
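
The WASM-first, JS-fallback behavior noted for CPU preprocessing follows a common pattern: probe the module, fall back on any failure. A minimal sketch, assuming the module is served from the public wasm directory and exports a preprocess function (both assumptions for illustration):

```ts
// Minimal sketch of the "WASM if available, JS otherwise" selection for the
// deterministic CPU preprocessing step. Path and export name are assumptions.

type Engine = "wasm" | "js";

async function pickPreprocessEngine(
  wasmUrl = "/wasm/preprocess_node_engine.wasm",
): Promise<Engine> {
  if (typeof WebAssembly === "undefined") return "js";
  try {
    const bytes = await (await fetch(wasmUrl)).arrayBuffer();
    const module = await WebAssembly.compile(bytes);
    // Only commit to the WASM path if the expected export is present.
    const hasPreprocess = WebAssembly.Module.exports(module).some(
      (e) => e.kind === "function" && e.name === "preprocess",
    );
    return hasPreprocess ? "wasm" : "js";
  } catch {
    // Network, compile, or unsupported-runtime failures all fall back to JS.
    return "js";
  }
}

// Deterministic JS fallback: clamp to a fixed range so results match the WASM path.
function clampJS(input: Float32Array, lo = -1, hi = 1): Float32Array {
  return Float32Array.from(input, (v) => Math.min(hi, Math.max(lo, v)));
}
```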

4. High-Level Flow (Updated)


5. End-to-End Pipeline (Overview)


6. Core Mechanics (As Implemented)

A. Worker Admission & Tiering

Workers run a local benchmark and send a profile_v2 payload. The orchestrator then assigns:

  • Tier 1 / Tier 2 / Tier 3 based on GFLOPS and memory stability.
  • Admission requires minimum GFLOPS and VRAM stability.
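
A minimal sketch of that admission and tiering decision; the thresholds and profile_v2 field names below are placeholders, not the production values:

```ts
// Illustrative tier assignment from a worker's benchmark profile.

interface ProfileV2 {
  gflops: number;      // measured in the local benchmark
  vramStable: boolean; // memory stability check result
}

type Admission =
  | { admitted: false; reason: string }
  | { admitted: true; tier: 1 | 2 | 3 };

const MIN_GFLOPS = 50; // hypothetical admission floor

function admitWorker(p: ProfileV2): Admission {
  if (!p.vramStable) return { admitted: false, reason: "unstable VRAM" };
  if (p.gflops < MIN_GFLOPS) return { admitted: false, reason: "below GFLOPS floor" };
  if (p.gflops >= 500) return { admitted: true, tier: 1 };
  if (p.gflops >= 150) return { admitted: true, tier: 2 };
  return { admitted: true, tier: 3 };
}
```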

B. Task Dispatch & Framing

Each assignment ships two frames:

  • Meta frame: protobuf payload containing taskId, deliveryId, seed, complexity, verifierVersion.
  • Binary frame: raw input payload for GPU/WASM.
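
A sketch of the two-frame send from the gateway's perspective. The real meta frame is protobuf; a JSON encoder stands in here so the example is self-contained, and the WebSocket handle is the standard browser/Bun type rather than the actual Elysia socket:

```ts
// Assignment metadata; field names follow the spec above.
interface AssignmentMeta {
  taskId: string;
  deliveryId: string;
  seed: number;
  complexity: number;
  verifierVersion: string;
}

// Stand-in encoder (JSON) so the sketch runs; the real gateway encodes protobuf.
function encodeAssignmentMeta(meta: AssignmentMeta): Uint8Array {
  return new TextEncoder().encode(JSON.stringify(meta));
}

function sendAssignment(ws: WebSocket, meta: AssignmentMeta, input: Uint8Array): void {
  // Frame 1: metadata the worker needs to schedule and later prove the work.
  ws.send(encodeAssignmentMeta(meta));
  // Frame 2: raw input payload handed directly to the GPU/WASM path.
  ws.send(input);
}
```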

C. Verification & Quorum

Tasks are assigned with redundancy. For verified tasks:

  • Each worker produces a checksum and a Groth16 proof.
  • Orchestrator verifies proof and cross-checks checksum.
  • Matching results trigger task_verified; mismatch produces consensus_mismatch and slashing.
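
On the orchestrator side, the check reduces to verifying each Groth16 proof and then comparing the redundant checksums. The sketch below uses the public snarkjs groth16.verify API; the Delivery shape and the single-function layout are illustrative:

```ts
import { groth16 } from "snarkjs";

interface Delivery {
  workerId: string;
  checksum: string;
  proof: unknown;          // Groth16 proof object produced on the node
  publicSignals: string[]; // public inputs bound to the proof
}

async function judgeTask(
  verificationKey: unknown,
  deliveries: Delivery[],
): Promise<"task_verified" | "consensus_mismatch"> {
  for (const d of deliveries) {
    const ok = await groth16.verify(verificationKey, d.publicSignals, d.proof);
    if (!ok) return "consensus_mismatch"; // invalid proof feeds the slashing path
  }
  // All proofs valid: require the redundant checksums to agree.
  const allMatch = deliveries.every((d) => d.checksum === deliveries[0].checksum);
  return allMatch ? "task_verified" : "consensus_mismatch";
}
```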

D. State Safety

The orchestrator computes a deterministic state root from persisted data. Consensus nodes:

  • Commit blocks only if state roots match.
  • Sync via block streams or snapshots for fast catch-up.
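
The property that matters is determinism: every node must derive the same root from the same persisted data. A minimal sketch, assuming records are hashed in canonical key order (the real engine's encoding and column layout will differ):

```ts
import { createHash } from "node:crypto";

interface StateRecord {
  key: string;   // e.g. "task:123", "balance:0xabc" (illustrative keys)
  value: string; // canonical serialization of the row
}

function computeStateRoot(records: StateRecord[]): string {
  const hash = createHash("sha256");
  // Sort by key so storage insertion order cannot change the root.
  for (const r of [...records].sort((a, b) => a.key.localeCompare(b.key))) {
    hash.update(r.key);
    hash.update("\0");
    hash.update(r.value);
    hash.update("\0");
  }
  return hash.digest("hex");
}

// Consensus rule from the section above: commit only when roots agree.
function canCommit(localRoot: string, proposedRoot: string): boolean {
  return localRoot === proposedRoot;
}
```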

7. Security Guarantees (Updated)

  • Signed Identity: Worker hello and presence are ECDSA P-256 signed.
  • Replay Guard: Proof submissions are checked against replay guards recorded in the state store.
  • Timeout + Reassignment: A watchdog triggers retries, reassignments, or dead letters.
  • State Root Halting: Consensus halts on state root mismatches.
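
For reference, ECDSA P-256 signing of a worker hello maps directly onto WebCrypto, which is what a browser node has available. The message layout below is illustrative; only the curve and signature scheme come from the guarantee above:

```ts
async function makeIdentity(): Promise<CryptoKeyPair> {
  return crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // private key stays non-extractable in the browser
    ["sign", "verify"],
  );
}

async function signHello(keys: CryptoKeyPair, hello: object): Promise<Uint8Array> {
  const payload = new TextEncoder().encode(JSON.stringify(hello));
  const sig = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keys.privateKey,
    payload,
  );
  return new Uint8Array(sig);
}

async function verifyHello(
  publicKey: CryptoKey,
  hello: object,
  signature: Uint8Array,
): Promise<boolean> {
  const payload = new TextEncoder().encode(JSON.stringify(hello));
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publicKey,
    signature,
    payload,
  );
}
```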