# Makechain

A specialized protocol built for making things.

Makechain is a protocol for cryptographically signed, permissionless version control — consensus, state storage, and an entire blockchain built for making things.

# Acknowledgements

Makechain builds on the work of many open-source projects and teams. This page recognizes the most significant contributions.

## Snapchain & Merkle Manufactory

Makechain's architecture draws heavily from [Snapchain](https://github.com/farcasterxyz/snapchain) and the broader protocol work by Merkle Manufactory. The approach to consensus-ordered messaging, hub-based architecture, and cryptographic identity informed many of Makechain's core design decisions.

## Commonware

The [Commonware Library](https://commonware.xyz) provides the foundational primitives that Makechain runs on:

| Primitive | Usage |
|-----------|-------|
| `commonware-consensus` | Simplex BFT consensus engine |
| `commonware-broadcast` | Block relay broadcast |
| `commonware-p2p` | Authenticated peer connections |
| `commonware-parallel` | Execution strategies |
| `commonware-storage` | QMDB merkleized key-value store |
| `commonware-runtime` | Async task execution |
| `commonware-cryptography` | Ed25519 signing, BLAKE3 digests |
| `commonware-codec` | Binary serialization |

## Tempo

[Tempo](https://tempo.xyz) provides the settlement chain used by Makechain for custody-backed typed-data signatures, ERC-1271 validation checkpoints, and storage settlement receipts.
## Cryptography

* [ed25519-dalek](https://github.com/dalek-cryptography/curve25519-dalek) — Ed25519 signatures
* [BLAKE3](https://github.com/BLAKE3-team/BLAKE3) — Hashing for message digests, state roots, and content addressing
* [k256](https://github.com/RustCrypto/elliptic-curves) — Secp256k1 for EIP-712 signature recovery
* [@noble/ed25519](https://github.com/paulmillr/noble-ed25519) and [@noble/hashes](https://github.com/paulmillr/noble-hashes) — Pure JS cryptography for the client SDK

## Smart Contracts

* [OpenZeppelin Contracts](https://github.com/OpenZeppelin/openzeppelin-contracts) — Battle-tested Solidity primitives (`Ownable2Step`, `Pausable`, `EIP712`, `Nonces`)
* [Foundry](https://github.com/foundry-rs/foundry) — Solidity development, testing, and deployment

## Infrastructure

* [Alloy](https://github.com/alloy-rs/alloy) — Ethereum RPC and ABI integration for settlement-chain access and contract tooling
* [Tonic](https://github.com/hyperium/tonic) — gRPC framework powering the node API
* [Tokio](https://github.com/tokio-rs/tokio) — Async runtime
* [Zstd](https://github.com/gyscos/zstd-rs) — Compression for snapshots

## Web & Tooling

* [Vocs](https://github.com/wevm/vocs) — Documentation framework
* [Hono](https://github.com/honojs/hono) — REST gateway on Cloudflare Workers
* [Viem](https://github.com/wevm/viem) and [Wagmi](https://github.com/wevm/wagmi) — Web3 client libraries for the demo applications
* [Cloudflare Workers](https://workers.cloudflare.com) — Edge compute for the gateway, MCP server, and container hosting

# Architecture

Makechain uses a layered architecture with single-chain Simplex BFT consensus and serial per-project execution.

## System overview

*Makechain architecture: clients connect via gRPC to the validator node containing API, mempool, Simplex BFT consensus, execution engine, and QMDB.*

## Layers

### Message Layer

Every message is a self-authenticating envelope containing a BLAKE3 hash, Ed25519 signature, and the signer's public key.
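A minimal sketch of this envelope shape and its self-authentication check, using Node's built-in Ed25519 support. The field names mirror the description above but the types are illustrative, and SHA-256 stands in for BLAKE3 (which is not in Node's stdlib):

```typescript
import { generateKeyPairSync, sign, verify, createHash, KeyObject } from "node:crypto";

// Illustrative envelope shape; not the protocol's actual wire types.
interface Envelope {
  data: Buffer;      // canonical-encoded message payload
  hash: Buffer;      // digest of `data` (Makechain uses BLAKE3; SHA-256 here)
  signature: Buffer; // Ed25519 signature
  signer: Buffer;    // signer's Ed25519 public key (DER-encoded here)
}

// SHA-256 stand-in for BLAKE3.
const digest = (data: Buffer) => createHash("sha256").update(data).digest();

function sealEnvelope(data: Buffer, privateKey: KeyObject, publicKey: KeyObject): Envelope {
  const hash = digest(data);
  return {
    data,
    hash,
    signature: sign(null, hash, privateKey), // Ed25519: algorithm must be null
    signer: publicKey.export({ type: "spki", format: "der" }) as Buffer,
  };
}

// Self-authenticating: recompute the digest, then verify the signature —
// no external lookup is needed to validate the envelope.
function checkEnvelope(env: Envelope, publicKey: KeyObject): boolean {
  return digest(env.data).equals(env.hash) && verify(null, env.hash, publicKey, env.signature);
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const env = sealEnvelope(Buffer.from("PROJECT_CREATE payload"), privateKey, publicKey);
```

Because validation needs only the envelope's own fields, structural checks like this can run before a message ever reaches the mempool.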
Messages are structurally validated before entering the mempool.

### Consensus Layer

A single Simplex BFT consensus chain orders all messages. The leader proposes blocks by draining the mempool, and the execution engine processes them in two phases.

*Two-phase execution: serial account pre-pass, then serial per-project execution via BatchStore, producing a BLAKE3 state root.*

The two phases are:

1. **Account pre-pass** — `SIGNER_ADD`, `SIGNER_REMOVE`, `ACCOUNT_DATA`, `VERIFICATION_ADD` / `VERIFICATION_REMOVE`, `LINK_ADD` / `LINK_REMOVE`, `REACTION_ADD` / `REACTION_REMOVE`, `PROJECT_CREATE`, `PROJECT_REMOVE`, `STORAGE_CLAIM`, and `FORK` are applied serially because they touch shared `owner_address`-scoped account state
2. **Serial project execution** — Remaining project-scoped messages are grouped by `project_id` and executed serially per group against a shared `BatchStore` overlay on QMDB

This single-chain model achieves high throughput through consensus pipelining (multiple blocks in flight) without the complexity of cross-shard coordination.

Non-proposer validators can optionally broadcast **subblock payloads** — signed mempool snapshots — to help the proposer build fuller blocks (see [Consensus](/protocol/consensus#subblock-architecture)).
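The two-phase split can be sketched as a simple partition step (a minimal illustration; `Msg` and `planExecution` are hypothetical names, not the node's actual types):

```typescript
// Message types handled in the serial account pre-pass, per the list above.
const ACCOUNT_PREPASS = new Set([
  "SIGNER_ADD", "SIGNER_REMOVE", "ACCOUNT_DATA",
  "VERIFICATION_ADD", "VERIFICATION_REMOVE",
  "LINK_ADD", "LINK_REMOVE", "REACTION_ADD", "REACTION_REMOVE",
  "PROJECT_CREATE", "PROJECT_REMOVE", "STORAGE_CLAIM", "FORK",
]);

interface Msg { type: string; projectId?: string }

// Split a block's messages into the two phases: one strictly serial
// pre-pass list, then per-project groups that each execute serially.
function planExecution(block: Msg[]) {
  const prePass: Msg[] = [];
  const byProject = new Map<string, Msg[]>();
  for (const msg of block) {
    if (ACCOUNT_PREPASS.has(msg.type)) {
      prePass.push(msg); // phase 1: touches shared account state
    } else {
      const id = msg.projectId ?? "";
      const group = byProject.get(id) ?? [];
      group.push(msg);
      byProject.set(id, group); // phase 2: grouped by project_id
    }
  }
  return { prePass, byProject };
}
```

Within each group, block order is preserved, so per-project execution remains deterministic even though groups are independent of one another.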
### State Layer State is stored in a prefix-namespaced key-value store with lexicographic ordering for range scans: | Prefix | Namespace | |--------|-----------| | `0x02` | Blocks | | `0x03` | Tombstones | | `0x04` | Account state | | `0x05` | Account metadata | | `0x06` | Key entries | | `0x07` | Key reverse index (pubkey → owner\_address) | | `0x09` | Verifications | | `0x0A` | Project state | | `0x0B` | Project metadata | | `0x0C` | Project name index | | `0x0D` | Refs | | `0x0E` | Commits | | `0x0F` | Collaborators | | `0x10` | Links | | `0x11` | Link reverse index | | `0x12` | Reactions | | `0x13` | Reaction reverse index | | `0x14` | Counters | | `0x15` | Prune markers | | `0x16` | Storage grants | | `0x17` | Storage claim settlement markers | | `0x18` | Finalized messages | | `0x19` | Replay metadata | QMDB is the single source of truth. During block execution, a `BatchStore` creates a local mutations overlay on QMDB, then merkleizes and applies the changeset atomically on commit. API queries use a `QmdbReadStore` for lock-free reads. The `StateStore` trait keeps the storage backend pluggable. ### Content Storage The consensus layer stores only message metadata (~100-500 bytes). File content (blobs, trees) lives in external storage, referenced by optional `content_digest` (integrity hash) and `url` (locator) in commit bundles. These fields are self-attested — validators do not fetch or verify content. 
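A sketch of how the prefix-byte namespacing above enables lexicographic range scans. Prefix values come from the table; the helper names are hypothetical, not the node's actual key builders:

```typescript
// A subset of the prefix bytes from the table above.
const PREFIX = { accountState: 0x04, projectState: 0x0a, refs: 0x0d } as const;

// Build a namespaced key: one prefix byte followed by the entity's ID bytes.
function nsKey(prefix: number, ...parts: Buffer[]): Buffer {
  return Buffer.concat([Buffer.from([prefix]), ...parts]);
}

// Because keys sort lexicographically, scanning one namespace is simply
// the half-open byte range [prefix, prefix + 1).
function namespaceRange(prefix: number): { start: Buffer; end: Buffer } {
  return { start: Buffer.from([prefix]), end: Buffer.from([prefix + 1]) };
}

// A hypothetical ref key: 0x0D prefix, project ID bytes, then the ref name.
const key = nsKey(PREFIX.refs, Buffer.from("deadbeef", "hex"), Buffer.from("refs/heads/main"));
const { start, end } = namespaceRange(PREFIX.refs);
// `key` sorts inside [start, end), so a Refs range scan will visit it.
```

The same range trick applies to any prefix in the table, which is why listing endpoints (refs, commits, collaborators) map directly onto single-namespace scans.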
## Commonware Primitives Makechain builds on the [Commonware Library](https://commonware.xyz): | Primitive | Usage | |-----------|-------| | `commonware-consensus` | Simplex BFT consensus engine | | `commonware-broadcast` | Block relay broadcast (buffered per-peer caching) | | `commonware-p2p` | Authenticated peer connections | | `commonware-parallel` | Execution strategies (Sequential) | | `commonware-runtime` | Async task execution (tokio backend) | | `commonware-cryptography` | Ed25519 signing, BLAKE3 digests | | `commonware-storage` | QMDB merkleized key-value store (source of truth) | | `commonware-codec` | Binary serialization | ## Indexer and event processor The **indexer** (`cargo build --bin indexer --features indexer`) streams finalized blocks from a node via gRPC, decodes messages, and writes them into Postgres. It publishes events to Redis streams for downstream consumers. The **event processor** (`cargo build --bin event-processor --features event-processor`) consumes Redis streams published by the indexer and maintains denormalized query tables in Postgres — projects, accounts, collaborators, links, and reactions. Processors are idempotent and checkpoint-based, allowing safe restarts without data loss. Both binaries are feature-gated and require external Postgres and Redis instances. ## gRPC API The node exposes a gRPC service on port 50051 (configurable) with: * **grpc-web support** — browser clients via HTTP/1.1 (tonic-web) * **CORS** — configured for cross-origin grpc-web requests * **Server reflection** — runtime service discovery (grpc reflection v1) * **Message streaming** — `SubscribeMessages` with type and project\_id filters # Building with AI Makechain provides multiple ways for AI assistants to access protocol documentation, search source code, and query live chain data. The MCP server is the most powerful option — it gives assistants direct access to the full codebase and protocol API. 
## MCP server The Makechain MCP server at `mcp.makechain.net` exposes 16 tools for searching the codebase, searching MIPs, reading documentation, and querying the live protocol API. It follows the [Model Context Protocol](https://modelcontextprotocol.io/) standard and works with any MCP-compatible client. Under [MIP-3](https://github.com/officialunofficial/makechain/blob/main/protocol/mips/MIP-0003-address-native-identity.md), accounts are identified by `owner_address` (20-byte EVM address). Protocol tools use that identifier on the address-native variant. ### Tools | Tool | Description | |------|-------------| | `get_file` | Retrieve a source file by path at any git ref (branch, tag, or SHA) | | `search_code` | Search the codebase via GitHub code search; supports directory and extension filters | | `list_files` | Browse the directory tree at a given path and ref | | `get_overview` | Repository overview — README and CLAUDE.md | | `get_spec` | Protocol specification — full or by section | | `get_docs` | Documentation pages from the docs site | | `search_mips` | Search Makechain Improvement Proposals by number or keyword | | `semantic_search_mips` | Concept search over MIPs using hybrid retrieval | | `get_project` | Project details by ID or by owner address and name | | `list_projects` | List projects, optionally filtered by owner address | | `get_account` | Account details by owner address or public key | | `get_ref` | Get a specific ref from a project | | `list_refs` | List all refs for a project | | `get_commit` | Commit details from a project | | `get_chain_stats` | Chain statistics — block height, message counts | | `get_block` | Block details by block number | Codebase tools (`get_file`, `search_code`, `list_files`, `get_overview`, `get_spec`, `get_docs`) call GitHub directly with a 5-minute edge cache — there is no internal index to fall behind. Pass `ref` to read at a tag or commit SHA; defaults to `main`. 
MIP tools use Cloudflare AI Search against `mips.makechain.net`. Protocol tools proxy through the REST gateway at `api.makechain.net`.

### Claude Code

Add the MCP server to [Claude Code](https://claude.ai/claude-code):

```bash
claude mcp add --transport sse makechain https://mcp.makechain.net/sse
```

Claude Code can now search the Makechain codebase, read the protocol spec, and query live chain data directly.

### Claude Desktop

Add to your **claude\_desktop\_config.json**:

```json
{
  "mcpServers": {
    "makechain": {
      "url": "https://mcp.makechain.net/sse"
    }
  }
}
```

### Cursor

Add to your **.cursor/mcp.json**:

```json
{
  "mcpServers": {
    "makechain": {
      "url": "https://mcp.makechain.net/sse"
    }
  }
}
```

### Any MCP client

Connect to the SSE endpoint at `https://mcp.makechain.net/sse`. The server returns a session endpoint on connection. Send JSON-RPC messages to the session endpoint to call tools.

## llms.txt

The docs automatically generate [`llms.txt`](https://llmstxt.org/) files for LLM consumption:

* [`/llms.txt`](https://docs.makechain.net/llms.txt) — a concise index of all pages with titles and descriptions
* [`/llms-full.txt`](https://docs.makechain.net/llms-full.txt) — complete documentation content in a single file

Use `llms-full.txt` to give an AI assistant full context on the Makechain protocol in one shot.

## Ask AI

Every documentation page includes an "Ask in ChatGPT" button that opens the current page context in ChatGPT with a Makechain-aware prompt. The dropdown also lets you copy the raw page content for pasting into any AI assistant. Use ⌘ + I (macOS) or Ctrl + I (Windows/Linux) to quickly access the AI menu.

### Using with Claude

Copy the page content (or the full `llms-full.txt` URL) and paste it into [Claude](https://claude.ai):

```
Read the Makechain protocol documentation at https://docs.makechain.net/llms-full.txt and answer my questions about it.
``` ### Using with ChatGPT Click the "Ask in ChatGPT" button on any page, or open ChatGPT and provide the docs URL: ``` Research this page: https://docs.makechain.net/protocol/overview and help me understand the message semantics. ``` ## Markdown access Any documentation page can be accessed as raw Markdown by appending `.md` to the URL: ``` https://docs.makechain.net/protocol/messages.md ``` This provides better token efficiency and easier parsing for LLMs compared to HTML. ## Example prompts Here are effective prompts for working with Makechain using AI assistants: ### Protocol understanding ``` What are the differences between 1P and 2P message semantics in Makechain? How does the compare-and-swap mechanism work for REF_UPDATE? ``` ### Building on Makechain ``` Show me how to construct and sign a PROJECT_CREATE message using Ed25519. How do I verify an Ethereum address claim signature? ``` ### Architecture questions ``` How does the two-phase execution model work? Why are some messages processed serially in the account pre-pass? ``` ### Debugging ``` I'm getting a StorageLimitExceeded error when creating a project. What are the storage limits and how do I check my capacity? ``` ## Protocol specification The complete protocol specification is available in the repository at [`protocol/SPECIFICATION.md`](https://github.com/officialunofficial/protocol/blob/main/SPECIFICATION.md). This is the canonical reference — the documentation site is derived from it. For AI assistants working with the codebase, the [`CLAUDE.md`](https://github.com/officialunofficial/makechain/blob/main/CLAUDE.md) file provides build commands, architecture overview, module descriptions, and conventions. # Changelog Notable changes to the Makechain protocol, node, and documentation. Organized by month and grouped by category. 
## April 2026 ### Features * **Commonware v2026.4 upgrade** — upgraded all commonware dependencies from v2026.3 to v2026.4, including consensus, p2p, storage, and cryptography crates. * **SilentLeader forwarding** — consensus relay now uses `ForwardingPolicy::SilentLeader`, enabling targeted block re-sends to peers that missed a block instead of re-broadcasting to all peers. * **Secondary peers** — new `secondary_peers` config (CLI: `--secondary-peers`, env: `MAKECHAIN_SECONDARY_PEERS`) for registering known follower nodes that receive broadcasts but do not participate in consensus. Secondary peers must dial in and are not actively discovered by the P2P network. * **Protocol submodule** — `protocol/` directory replaced with a git submodule pointing to [officialunofficial/protocol](https://github.com/officialunofficial/protocol), enabling independent versioning of the specification. ### Fixes * **Empty reader block recovery** — fix reader/follower recovery for empty blocks without falling back to full replay. * **PROOF\_VERSION stability** — keep `PROOF_VERSION` at 1; no existing wire formats require a version bump. ## March 2026 ### Features * **Subblock architecture** — non-proposer validators broadcast signed mempool snapshots (subblock payloads) to help the proposer build fuller blocks. Configurable via `subblock_enabled`, `subblock_interval_ms`, and `subblock_max_messages`. Broadcasts on P2P channel 5 with Ed25519 signature verification, validator membership checks, and staleness detection. * **TUI command center** — interactive terminal dashboard (`cargo run --bin tui`) with embedded devnet node. Tabs for Dashboard, Blocks, Messages, Projects, Accounts, Mempool, and a Submit tab for composing messages directly from the terminal. 
* **MIP-3 identity cutover** — documentation and client surfaces now treat `owner_address` as the sole canonical account identifier, remove legacy account-ID examples, and describe `STORAGE_CLAIM` as the only active Tempo-backed storage ingress path. * **Client library extraction** — shared `src/client/` module extracted from CLI with reusable transport, keystore, formatting, signer, and wallet utilities for building custom clients. ## February 2026 ### Features * **Idle consensus throttling** — drop the oneshot sender in `propose()` when the mempool is empty, using Simplex BFT's nullification mechanism to throttle idle rounds from ~100+/sec to ~5/sec * **QMDB runtime storage directory** — configure commonware runtime's `storage_directory` for QMDB partition placement, replacing manual directory management * **Cold-start retry** — gateway retries on empty gRPC response bodies during container cold start, ensuring reliable responses even when the node is booting * **Subscriber robustness** — signal lagged subscribers and enforce per-connection subscription limits to prevent resource exhaustion * **Per-page markdown generation** — enable AI agents to fetch individual documentation pages as markdown via `.md` URLs * **Ask AI dropdown** — custom AI integration menu in docs with ChatGPT, Claude, copy-as-markdown, and llms.txt links * **Interactive demos** — six interactive demo pages (register account, create project, push commits, verify identity, fork project, manage access) with a reusable Demo component system * **Design system** — 55 shape SVGs, brand guidelines, color system, typography scale, component library, and writing guide * **Health endpoints** — `/healthz` and `/readyz` HTTP endpoints on the metrics server for load balancer integration * **Prometheus gossip metrics** — P2P monitoring with broadcast/receive counters by outcome * **Consensus event metrics** — track proposal, verification, and commit events * **Validator key file** — `--validator-key-file` 
flag for production key loading (alternative to `--seed`) * **Structured startup logging** — startup completion log with timing breakdown * **AddVerification CLI** — `add-verification` command for linking external addresses * **ListKeys RPC** — paginated key listing by account * **Multi-validator flags** — `--bootstrapper` and multi-participant CLI flags for the node binary ### Fixes * **Empty block disk exhaustion** — stop infinite empty block production that caused disk exhaustion from snapshots; re-enable container snapshots with the default interval * **Snapshot restore logging** — log the state root hash when restoring from snapshots instead of discarding it; remove redundant `entries().count()` call * **QMDB empty diff skip** — skip QMDB persistence for blocks with no state changes as defense-in-depth * **NOT\_FOUND for missing resources** — return proper gRPC `NOT_FOUND` status for missing accounts and commits instead of empty responses * **Cursor pagination** — fix 0xFF byte boundary bug in cursor-based pagination * **Search pagination** — fix `search_projects` pagination and use saturating arithmetic for stats counters * **Verification count gate** — enforce verification limit before processing claim * **Empty block liveness** — ensure empty blocks still advance the chain * **Lazy account init** — initialize account state on first key registration instead of eagerly * **Nonce overflow** — use saturating addition for ref nonces to prevent overflow * **Commit count inflation** — fix double-counting in commit statistics * **Name collision on restore** — check name uniqueness when restoring a removed project * **Ref type immutability** — prevent changing a ref's type (branch vs tag) after creation * **Owner-as-collaborator guard** — reject `COLLABORATOR_ADD`/`REMOVE` targeting the project owner * **Reverse index corruption** — fix key reverse index cleanup on key removal * **Chain stats accuracy** — correct `state_entries` undercount and `total_accounts` 
source * **Missing metrics tracking** — add `track_request` calls to 17 gRPC endpoints * **Gossip replay protection** — reject already-committed messages in the gossip receiver * **State root snapshot** — use actual state root in shutdown snapshot instead of zero hash * **Network flag validation** — fail fast on invalid `--network` flag instead of silent devnet default * **Block hash verification** — verify block hash integrity before storing in `commit_block` * **DA reference logging** — log warning on malformed DA reference decode instead of silent skip * **Solana verification safety** — replace `unwrap()` with error handling in `verify_sol_claim` ### Refactoring * **Hex encoding optimization** — use pre-allocated buffer instead of per-byte `format!()` calls * **`as_str()` methods** — add `as_str()` to `MessageType` and other enums to eliminate `format!("{:?}")` allocations * **Reporter double-lock fix** — return block response from `commit_block` to avoid double-locking in reporter * **`lock_state()` helper** — extract shared state lock helper in gRPC service for consistent error handling * **Shared hex module** — consolidate duplicate hex encoding functions into **src/hex.rs** * **Block build simplification** — simplify `build_block` panic path in `commit_block` * **Debug derives** — add `Debug` derives to public consensus and API structs * **Mempool optimization** — eliminate message cloning in mempool drain and harden decode paths * **Reverse pubkey index** — O(1) account-by-key lookups via `0x0B` prefix index ### Documentation * **RPC reference** — comprehensive reference for all 32 gRPC methods with request/response schemas * **Protocol docs** — scope requirements, storage limits, execution phase corrections * **Design system pages** — brand, colors, typography, components, writing guide, shapes * **Key schema docs** — updated with new prefixes (`0x0A`, `0x0B`) and current test counts # Contracts Overview Makechain V2 does not use validator-relayed 
host-chain identity or signer events. The active protocol contract surface is much smaller than the legacy relay-era stack. ## What remains relevant * `STORAGE_CLAIM` settlement verification * historical ERC-1271 validation pinned to a finalized Tempo `blockHash` * the `host_chain_id(network)` mapping used by V2 typed data Those are message-local checks. They do not create Makechain accounts, inject system messages, or mutate consensus state through relayed events. ## What was removed from active protocol semantics These are not part of `V2AddressNative` behavior: * registry-driven account creation * validator relaying of Tempo events into blocks * relay-derived signer authorization * protocol-level ownership transfer * protocol-level recovery That means the old identity-contract stack is not an active ingress path for V2 consensus. ## Storage claims `STORAGE_CLAIM` is the only Tempo-backed storage ingress path. The claim carries settlement evidence directly in the message body: * `settlement_tx_hash` * `settlement_chain_id` * `settlement_log_index` * `actor` Appendix A now assigns both `DEVNET` and `TESTNET` to the Tempo Moderato `StorageRelay` proxy `0x930dc180AaD00fc9302278d502Ff8b52bB0a0F79`. `MAINNET` still uses the zero-address fail-closed sentinel until its canonical `StorageRelay` deployment is published. ## ERC-1271 ERC-1271 is supported for: * custody signatures * app-attribution signatures * ETH verification claims Validation is performed only on the Tempo settlement chain selected by `host_chain_id(network)` and only at the exact finalized `blockHash` bound into the signed payload. # Contributing Development setup, test workflow, and contribution guidelines. 
## Prerequisites

| Tool | Version | Purpose |
|------|---------|---------|
| Rust | Stable 1.95+ | Workspace toolchain |
| protoc | 3.x+ | `tonic-build` codegen |
| Bun | 1.x+ | Docs site (Vocs) |

Install the pinned Rust toolchain:

```bash
rustup toolchain install 1.95.0
```

Install protoc:

```bash
# macOS
brew install protobuf

# Ubuntu / Debian
sudo apt install protobuf-compiler
```

***

## Repository structure

| Path | Description |
|------|-------------|
| **crates/** | Rust workspace crates (proto, crypto, state, consensus, api, sync, client) |
| **src/bin/node.rs** | Full validator node binary |
| **src/bin/cli.rs** | CLI client for interacting with a node |
| **src/bin/tui/** | Interactive terminal UI (dashboard, submit, wallet) |
| **src/bin/indexer.rs** | gRPC chain indexer writing to Postgres (feature: `indexer`) |
| **src/bin/event\_processor.rs** | Event-driven query table builder consuming Redis streams (feature: `event-processor`) |
| **proto/makechain.proto** | Protobuf service and message definitions |
| **tests/** | Integration and unit test suites |
| **apps/docs/** | Vocs documentation site |
| **contracts/** | Foundry project for the `StorageRelay` settlement contract and deployment artifacts |
| **protocol/** | Protocol specification git submodule ([officialunofficial/protocol](https://github.com/officialunofficial/protocol)) |

***

## Build and test

### Build

```bash
cargo build            # Build library + node binary (runs tonic codegen via build.rs)
cargo run --bin node   # Start node: gRPC :50051, p2p :50052, Simplex consensus
cargo run --bin cli    # CLI client
cargo run --bin tui    # Interactive terminal UI (--dev for embedded devnet)

# Feature-gated binaries (require Postgres / Redis)
cargo build --bin indexer --features indexer                   # Chain indexer
cargo build --bin event-processor --features event-processor   # Query table event processor
```

### Run tests

```bash
cargo test              # Run all tests
cargo test <test_name>  # Run a single test by name
```

### Test distribution

| File | Tests | Coverage |
|------|-------|----------|
| **crates/makechain-state/tests/state\_test.rs** | 226 | State transitions, authorization, 2P semantics, CAS, LWW, archive, fork, storage limits |
| **tests/integration\_test.rs** | 219 | End-to-end gRPC submit, mempool, propose, commit, query, streaming, multi-account |
| **tests/consensus\_e2e\_test.rs** | 19 | In-process multi-validator proposer/verifier harness |
| **tests/conformance\_test.rs** | 20 | Commonware-conformance encoding stability tests |
| **crates/makechain-state/tests/validation\_test.rs** | 181 | Structural validation for every message type |
| **crates/makechain-api/tests/api\_test.rs** | 86 | API query layer: get/list for all resources, pagination, verifications, links, reactions |
| **crates/makechain-crypto/tests/message\_test.rs** | 47 | Signing and verification round-trips, envelope validation |
| **tests/event\_processor\_test.rs** | 17 | Event processor logic, event routing, idempotent semantics (feature: `event-processor`) |
| Unit tests (inline) | ~1000 | State, consensus, sync, API, client, and node binary internals |

***

## Conventions

### TDD workflow

Write tests first, then implement.

### Error assertions

Use `matches!()` for asserting error variants in tests:

```rust
let result = apply_message(&mut store, &msg);
assert!(matches!(result, Err(StateError::ProjectNotFound(_))));
```

### Proto changes

When adding new message types or RPCs:

1. Modify **proto/makechain.proto**
2. Run `cargo build` — Rust types appear automatically via `tonic-build`
3. Add structural validation in **crates/makechain-state/src/validation.rs**
4. Add state handlers in **crates/makechain-state/src/handlers/**
5. Add API query functions in **crates/makechain-api/src/query.rs** if needed
6.
Write tests covering the new type ### Code style * Rust edition 2024, stable toolchain 1.95+ * All hash fields are 32 bytes, all signatures are 64 bytes * Proto enum variants use `SCREAMING_SNAKE_CASE` with a type prefix * State keys use prefix-byte namespacing (see **crates/makechain-state/src/keys.rs**) *** ## Pull requests 1. Create a feature branch from `main` 2. Write tests for your changes 3. Run `cargo test` and ensure all tests pass 4. Run `cargo build` to verify compilation 5. Open a pull request with a clear description of the change *** ## Docs contribution The documentation site uses [Vocs](https://vocs.dev) with Bun: ```bash cd apps/docs bun install # Install dependencies (first time) bun run dev # Dev server at http://localhost:5173 bun run build # Build static site to apps/docs/dist/ ``` Pages are MDX files in **apps/docs/src/pages/**. Follow the [writing guide](/design/writing-guide) for conventions. Deployed to Cloudflare Pages. # FAQ ## Identity ### What identifies an account in V2? `owner_address` is the sole canonical identity. ### How do keys work? Delegated protocol keys are Ed25519-only and are managed through `SIGNER_ADD` and `SIGNER_REMOVE`. ### How does verification work? `VERIFICATION_ADD` links ETH or SOL identities to `owner_address` using canonical V2 proofs. ## Storage ### How does storage expansion work? Through `STORAGE_CLAIM` only. Relay-era `STORAGE_RENT` is removed from V2 semantics. ## Consensus ### Do validators inject Tempo events into blocks? No. V2 blocks contain only user-submitted protocol messages and the exact committed execution payload. ## Forks and refs ### Do CAS semantics change in V2? No. `REF_UPDATE` still uses `old_hash` and `nonce` to prevent silent branch races. # Quickstart This quickstart follows the V2 address-native model from `protocol/SPECIFICATION.md`. 
## Prerequisites * **Rust stable 1.95+** * **protoc** on your PATH ## Build and run ```bash git clone https://github.com/officialunofficial/makechain.git cd makechain cargo build --workspace cargo run --bin node -- --grpc-addr 127.0.0.1:50051 --p2p-addr 127.0.0.1:50052 ``` ## Identity model Makechain V2 is address-native. * `owner_address` is the sole canonical identity. * Account setup uses address-native signer management. * Delegated protocol keys are Ed25519-only. * `SIGNER_ADD` and `SIGNER_REMOVE` are the only signer-management paths. ## Typical first actions 1. generate an Ed25519 keypair 2. authorize it with `SIGNER_ADD` 3. submit `PROJECT_CREATE` 4. submit `COMMIT_BUNDLE` and `REF_UPDATE` The Rust CLI and some generated clients are still catching up to the spec, so use `protocol/SPECIFICATION.md` as the normative field and payload reference. ## Next steps * [Protocol overview](/protocol/overview) * [Protocol identity](/protocol/identity) * [Create a project](/guides/create-project) * [API overview](/api/overview) # Glossary ## Protocol * **Message** — signed operation envelope containing `data`, `hash`, `signature`, and Ed25519 `signer`. * **MessageData** — canonical address-native payload carrying `type`, `timestamp`, `network`, `owner_address`, and exactly one body variant. * **owner\_address** — the sole canonical account identifier in V2. Raw 20-byte EVM-style address. * **1P** — one-phase state change with no paired remove message. * **2P** — add/remove set semantics where remove wins on a tie. * **CAS** — compare-and-swap semantics used by refs. * **ExecutionPayload** — canonical committed execution payload paired with each persisted block in V2. ## Identity * **Delegated key** — Ed25519 protocol signing key registered to an `owner_address`. * **Scope** — delegated-key privilege level: `OWNER`, `SIGNING`, or `AGENT`. * **allowed\_projects** — optional project allowlist bound into `SIGNER_ADD` for `AGENT` keys. 
* **custody signature** — EIP-712 signature from `owner_address` authorizing `SIGNER_ADD` or `SIGNER_REMOVE`. * **request\_owner\_address** — external wallet address that requested a signer addition. Used for app attribution. * **request\_signature** — EIP-712 app-attribution signature proving the requesting app wallet participated in `SIGNER_ADD`. * **custody\_nonce** — monotonic per-account replay counter for signer-management messages. * **claim signature** — proof used by `VERIFICATION_ADD` to attest an ETH or SOL identity. ## Messages * **SIGNER\_ADD** — custody-backed add operation for an Ed25519 delegated key. * **SIGNER\_REMOVE** — custody-backed remove operation for an Ed25519 delegated key. * **STORAGE\_CLAIM** — the only Tempo-backed storage ingress path in V2. * **target\_owner\_address** — address-native collaborator or follow target. * **author\_address** — self-attested commit author identity in `CommitMeta`. * **claim\_key\_type** — wallet signature family used by an ETH verification claim. * **claim\_block\_hash** — historical Tempo block hash bound into an ERC-1271 ETH verification claim. ## Merge requests * **Merge request** — a cross-project contribution proposal from a fork descendant to an upstream project. Recorded onchain as `MERGE_REQUEST_ADD` and closed by `MERGE_REQUEST_REMOVE`. * **request\_id** — content-addressed merge request identity: `H(canonical_encode(MessageData))`, the `MERGE_REQUEST_ADD` message hash. Different timestamps produce different IDs, so the same requester can open multiple requests against the same project. * **Fork lineage** — the retained parent-chain under prefix `0x1A` that links a fork descendant to its source project. Bounded to `MAX_FORK_LINEAGE_DEPTH = 256` hops. Persists across project removal and pruning. * **Dual authorization** — `MERGE_REQUEST_REMOVE` authorization model: the original requester may withdraw, or a target project owner/collaborator with `WRITE+` may close. 
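The `request_id` derivation above can be sketched as follows. SHA-256 stands in for the protocol's BLAKE3 `H`, and the encoding is an illustrative stand-in for `canonical_encode`, which in the real protocol is a deterministic binary codec:

```typescript
import { createHash } from "node:crypto";

// Trimmed-down stand-in for MessageData; the real message carries more fields.
interface MergeRequestData {
  type: string;
  timestamp: number;
  projectId: string;
}

// Illustrative canonical encoding: a fixed field order joined into bytes.
function canonicalEncode(d: MergeRequestData): Buffer {
  return Buffer.from(`${d.type}|${d.timestamp}|${d.projectId}`);
}

// request_id = H(canonical_encode(MessageData)); SHA-256 stands in for BLAKE3.
function requestId(d: MergeRequestData): string {
  return createHash("sha256").update(canonicalEncode(d)).digest("hex");
}

const base = { type: "MERGE_REQUEST_ADD", timestamp: 1_700_000_000, projectId: "ab12" };
const idA = requestId(base);
// A different timestamp changes the digest, so the same requester can open
// multiple requests against the same project.
const idB = requestId({ ...base, timestamp: 1_700_000_001 });
```

Because the ID is content-addressed, no separate counter or registry is needed: any node can recompute a merge request's identity from the message alone.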
## Identity selectors * **owner\_address** — canonical account selector in V2. * **KEY\_ADD** — removed relay-era message family. * **OWNERSHIP\_TRANSFER** — removed relay-era message family. * **STORAGE\_RENT** — removed relay-era message family. * **RELAY\_SIGNER\_ADD** — removed relay-era message family. * **RELAY\_SIGNER\_REMOVE** — removed relay-era message family. * **Registry-backed account creation** — removed from V2 semantics. * **Protocol-level recovery** — removed from V2 semantics. # Guides Step-by-step walkthroughs for every core operation on Makechain. Each guide shows the full message lifecycle — from construction to consensus finality. In V2, account identity is `owner_address`. Start by authorizing an Ed25519 protocol key with `SIGNER_ADD`, then move into project, collaboration, and social flows. ## Identity & Accounts ## Projects & Code ## Collaboration ## Social # Troubleshooting ## Build issues ### `protoc` not found Install protobuf tooling and ensure `protoc` is on PATH. ### Rust version mismatch Use Rust stable 1.95 or newer for the main Rust workspace. ## Message validation ### Missing `owner_address` V2 messages require a canonical 20-byte `owner_address`. ### Signer not authorized Ordinary protocol messages require a delegated Ed25519 key registered to the acting `owner_address` with enough scope. ### Verification claim rejected Check that: * ETH uses the correct `VerificationClaim(owner, ethAddress, chainId, verificationType, network)` payload * SOL uses `makechain:verify::` * `claim_key_type` and `claim_block_hash` are set correctly for ERC-1271 flows ## Read-model mismatch If a deployed surface still disagrees with the spec, prefer `protocol/SPECIFICATION.md` and the V2 field mappings documented in [RPC reference](/api/rpc-reference). # Examples These examples follow the V2 address-native model. 
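Most examples assume you already hold an Ed25519 delegated key. A minimal generation sketch using Node's built-in `crypto` module follows; the module choice is an assumption of this sketch, since any Ed25519 implementation that yields a 32-byte seed works.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Generate an Ed25519 keypair; the key objects can sign and verify directly.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The raw 32-byte seed sits in the last 32 bytes of the PKCS#8 DER export.
// Hex-encoded with a 0x prefix, this is the shape of private key the
// signer helpers in the examples below expect.
const seedHex =
  "0x" + privateKey.export({ format: "der", type: "pkcs8" }).subarray(-32).toString("hex");

// Ed25519 signs the message bytes directly (no separate digest step) and
// produces a 64-byte detached signature.
const message = Buffer.from("example payload");
const signature = sign(null, message, privateKey);

console.log(signature.length); // 64
console.log(verify(null, message, publicKey, signature)); // true
```

Authorize the generated key with `SIGNER_ADD` before using it for protocol writes.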
## Protocol writes with `@makechain/viem` ```ts import { createPublicClient, http } from "viem"; import { createEd25519Signer, makechainActions } from "@makechain/viem"; const signer = createEd25519Signer("0x<32-byte-ed25519-private-key>"); const client = createPublicClient({ transport: http() }).extend( makechainActions({ gatewayUrl: "https://api.makechain.net", network: "testnet", ownerAddress: "0x0000000000000000000000000000000000000001", signer, }), ); await client.projects.create({ ownerAddress: "0x0000000000000000000000000000000000000001", signer, name: "my-project", visibility: "public", }); ``` ## Verification example ```ts await client.verification.add({ ownerAddress: "0x0000000000000000000000000000000000000001", signer, verificationType: "eth_address", address: "0x0000000000000000000000000000000000000001", claimSignature: "0x", }); ``` ## Lifted registration flow ```ts const rent = await client.identity.rentStorage({ ownerAddress: "0x0000000000000000000000000000000000000001", walletClient, publicClient, units: 1, }); console.log(rent.claimStatus); // "pending_relayer" const registration = await client.identity.waitForStorageClaim({ ownerAddress: "0x0000000000000000000000000000000000000001", timeoutMs: 120_000, }); if (registration.stage === "ready_for_username") { await client.identity.usernameCreate({ ownerAddress: "0x0000000000000000000000000000000000000001", signer, username: "alice", }); } ``` ## Commit and ref example ```ts await client.commits.submitBundle({ ownerAddress: "0x0000000000000000000000000000000000000001", signer, projectId: "<64-char-project-id-hex>", commits: [ { hash: "<64-char-commit-hash>", treeRoot: "<64-char-tree-root>", authorAddress: "0x0000000000000000000000000000000000000001", authorTimestamp: Math.floor(Date.now() / 1000), title: "feat: first commit", messageHash: "<64-char-message-hash>", }, ], }); await client.commits.updateRef({ ownerAddress: "0x0000000000000000000000000000000000000001", signer, projectId: 
"<64-char-project-id-hex>", refName: "refs/heads/main", newHash: "<64-char-commit-hash>", nonce: 1n, }); ``` ## Read-side note Gateway read routes in the active V2 stack are owner-address-native. Use [RPC reference](/api/rpc-reference) for the canonical request and response field names. ## Merge request example Merge request reads use the viem client (REST gateway), while writes use the SDK (message construction and signing). Read merge requests with the viem client: ```ts // List merge requests targeting a project const mrs = await client.mergeRequests.list({ projectId: "<64-char-project-id-hex>", }); // Get a specific merge request const mr = await client.mergeRequests.get({ projectId: "<64-char-project-id-hex>", requestId: "<64-char-request-id-hex>", }); // List merge requests opened by an account const myMrs = await client.mergeRequests.listByRequester({ ownerAddress: "0x0000000000000000000000000000000000000001", }); ``` Write merge requests with the SDK: ```ts import { createClient } from "@makechain/sdk"; const client = createClient({ endpoint: "https://api.makechain.net", network: "testnet", }); // Open a merge request from a fork await client.mergeRequestAdd(keypair, ownerAddress, { projectId: "<64-char-project-id-hex>", sourceProjectId: "<64-char-source-project-id-hex>", sourceRef: "refs/heads/feature", sourceCommitHash: "<64-char-commit-hash>", targetRef: "refs/heads/main", title: "Add new feature", }); // Close a merge request (requester withdrawal or maintainer closure) await client.mergeRequestRemove(keypair, ownerAddress, { projectId: "<64-char-project-id-hex>", requestId: "<64-char-request-id-hex>", }); ``` # API Reference Makechain exposes a single gRPC service (`MakechainService`) for reading and writing state. The service supports grpc-web for browser clients and server reflection for runtime discovery. ## Write Operations Submit signed messages for inclusion in the consensus pipeline. 
| RPC | Description | |-----|-------------| | `SubmitMessage` | Submit a single signed message (verify, validate, mempool) | | `BatchSubmitMessages` | Submit multiple signed messages with per-message acceptance results | | `DryRunMessage` | Validate a message against current state without submitting | ## Read Operations Query the current state of projects, accounts, refs, and commits. All list operations support cursor-based pagination (max 200 items per page). ### Projects | RPC | Description | |-----|-------------| | `GetProject` | Get project metadata and status by project ID | | `GetProjectByName` | Look up a project by `owner_address` and project name | | `SearchProjects` | Search projects by name prefix with pagination | | `ListProjects` | List projects with optional owner filter and pagination | | `GetProjectActivity` | Recent messages for a specific project | | `GetMergeRequest` | Get an active merge request by target project and request ID | | `ListMergeRequests` | List active merge requests targeting a project | | `ListMergeRequestsByRequester` | List active merge requests opened by a specific `owner_address` | ### Git Objects | RPC | Description | |-----|-------------| | `GetRef` | Get a single ref by project ID and ref name | | `ListRefs` | List all refs in a project with pagination | | `GetRefLog` | Get the update history of a ref | | `GetCommit` | Get commit metadata by project ID and commit hash | | `ListCommits` | List commits in a project with pagination | | `GetCommitAncestors` | Walk the commit graph and return ancestor chain | | `ListCollaborators` | List project collaborators with pagination | ### Accounts | RPC | Description | |-----|-------------| | `GetAccount` | Get account metadata, keys, storage units, project count, verifications, and link count by `owner_address` | | `GetAccountActivity` | Recent messages for a specific `owner_address` | | `GetKey` | Inspect a single key entry for an `owner_address` (scope, allowed projects, added\_at) 
| | `ListKeys` | List all keys registered to an `owner_address` with pagination | | `ListVerifications` | List verified external addresses for an `owner_address` | | `ListLinks` | List links for an `owner_address` (follows, stars) | | `ListLinksByTarget` | List accounts that link to a target `owner_address` or project | | `ListReactions` | List reactions from an `owner_address` (likes on commits) | | `ListReactionsByTarget` | List `owner_address` values that reacted to a target commit | ### Blocks & Messages | RPC | Description | |-----|-------------| | `GetBlock` | Get a committed block by block number (includes transaction chunks) | | `ListBlocks` | List recent committed blocks (newest first) | | `GetMessage` | Look up a committed message by its BLAKE3 hash | | `ListMessages` | List committed messages across a range of blocks | ### Proofs | RPC | Description | |-----|-------------| | `GetOperationProof` | Get a QMDB operation (inclusion) proof for a public state key | | `GetExclusionProof` | Get a QMDB exclusion proof for a public state key | | `VerifyOperationProof` | Verify a previously obtained operation proof against the current committed root | | `GetStorageQuotaProof` | Get a QMDB proof bundle for an account's active storage grants | ## Node Operations | RPC | Description | |-----|-------------| | `GetNodeStatus` | Current block height, mempool size, pending blocks, network, version, and uptime | | `GetHealth` | Liveness and readiness probe for load balancers | | `GetChainStats` | Cumulative chain analytics (total messages, projects, accounts, blocks) | | `GetSnapshotInfo` | Current snapshot status (block number, entry count, state root) | | `GetMempoolInfo` | Mempool size and per-type message counts | ## Streaming | RPC | Description | |-----|-------------| | `SubscribeMessages` | Server-streaming RPC for live message updates | | `SubscribeBlocks` | Server-streaming RPC for live block updates | `SubscribeMessages` supports filtering by: * **`project_id`** 
— only receive messages for a specific project * **`types`** — only receive specific message types (e.g., only `COMMIT_BUNDLE`) ## Connection The default gRPC endpoint is `localhost:50051`. Use `--grpc-addr` to configure. ```bash # gRPC (native clients) grpcurl -plaintext localhost:50051 list # CLI client cargo run --bin cli -- --endpoint http://localhost:50051 account get --owner-address 0x0000000000000000000000000000000000000001 ``` ## REST Gateway A Cloudflare Workers gateway translates HTTP REST requests into gRPC calls. This is the recommended way for browser clients, mobile apps, and any HTTP-native integration to interact with Makechain. * All endpoints return JSON * Input validated with Zod schemas * SSE streaming for real-time message and block updates * gRPC-web passthrough for clients that prefer raw protobuf Merge-request REST surfaces: * `GET /v1/projects/{projectId}/merge-requests` * `GET /v1/projects/{projectId}/merge-requests/{requestId}` * `GET /v1/accounts/{ownerAddress}/merge-requests` On the REST surface, merge-request `source_ref` and `target_ref` are exposed as hex-encoded bytes so arbitrary non-UTF-8 ref names round-trip without loss. See the [REST API reference](/api/rest) for complete endpoint documentation, or try the [interactive API explorer](https://api.makechain.net/reference) to call endpoints directly from your browser. ## grpc-web Browser clients can also connect via grpc-web (HTTP/1.1) directly. The node accepts HTTP/1.1 requests and translates them to gRPC internally via `tonic-web`. CORS headers are configured to allow cross-origin requests. # REST API This page documents the intended V2 REST contract from `protocol/SPECIFICATION.md`. Some deployed gateway routes are still catching up, but the spec is authoritative. ## Address-native selectors V2 REST surfaces are keyed by `owner_address`. 
Examples of normative request/response changes: * project owner filters use `owner_address` * account lookups use `owner_address` * link reverse lookups use `target_owner_address` * commit and ref history entries expose `owner_address` or `author_address` ## Disabled relay-era surfaces V2 REST documentation must not model these as active message families: * `KEY_ADD` * `OWNERSHIP_TRANSFER` * `STORAGE_RENT` * `RELAY_SIGNER_ADD` * `RELAY_SIGNER_REMOVE` ## Write endpoints The write path still revolves around signed `Message` submission: * submit one message * dry-run one message * batch submit multiple messages ## Read model highlights * projects expose `owner_address` * account summaries are address-native * collaborator entries are address-native * commit metadata uses `author_address` * ref log entries use `owner_address` * merge request summaries include `requester_owner_address`, `source_project_id`, `source_ref`, and `target_ref` ### Merge request endpoints * `GET /v1/projects/{projectId}/merge-requests` — list active merge requests targeting a project * `GET /v1/projects/{projectId}/merge-requests/{requestId}` — get a specific merge request * `GET /v1/accounts/{ownerAddress}/merge-requests` — list active merge requests opened by an account On the REST surface, `source_ref` and `target_ref` are exposed as hex-encoded bytes so non-UTF-8 ref names round-trip without loss. See [RPC reference](/api/rpc-reference) for field details. # RPC Reference This page follows `protocol/SPECIFICATION.md` as the normative source of truth. This page documents the canonical V2 RPC contract from `protocol/SPECIFICATION.md`. ## Write operations ### `SubmitMessage` Accepts a fully signed `Message`. In V2, all committed block messages are user-submitted and carry Ed25519 envelopes. Disabled relay-era message families must be rejected. ### `BatchSubmitMessages` Accepts a bounded list of signed `Message` values and returns per-message results. 
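Because every `Message` is a self-authenticating envelope (`data`, `hash`, `signature`, `signer`), a node can validate a submission without any external lookup. The sketch below illustrates that property; SHA-256 from Node's `crypto` stands in for BLAKE3, and signing the digest rather than the raw data bytes is also an assumption of this sketch, not the normative rule.

```typescript
import { createHash, generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Envelope fields from the spec: data, hash, signature, signer.
// SHA-256 stands in for BLAKE3 here, and signing the digest (rather than
// the raw data bytes) is an assumption of this sketch.
interface Envelope {
  data: Buffer;      // canonical MessageData bytes
  hash: Buffer;      // digest of data
  signature: Buffer; // Ed25519 signature
  signer: KeyObject; // signer's Ed25519 public key
}

function seal(data: Buffer, privateKey: KeyObject, publicKey: KeyObject): Envelope {
  const hash = createHash("sha256").update(data).digest();
  return { data, hash, signature: sign(null, hash, privateKey), signer: publicKey };
}

// Verification needs nothing outside the envelope: recompute the digest,
// then check the signature against the embedded signer key.
function selfAuthenticate(e: Envelope): boolean {
  const recomputed = createHash("sha256").update(e.data).digest();
  return recomputed.equals(e.hash) && verify(null, e.hash, e.signer, e.signature);
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const envelope = seal(Buffer.from("MessageData bytes"), privateKey, publicKey);
console.log(selfAuthenticate(envelope)); // true

// Any mutation of the payload breaks the hash check.
const tampered: Envelope = { ...envelope, data: Buffer.from("tampered") };
console.log(selfAuthenticate(tampered)); // false
```

`DryRunMessage` below runs the same envelope checks plus state validation, without admitting the message to the mempool.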
### `DryRunMessage` Validates a signed `Message` against current state without mempool admission. ## Merge-request reads ### `GetMergeRequest` Returns one active merge request identified by `(project_id, request_id)`. | Field | Type | Description | |-------|------|-------------| | `project_id` | bytes (32) | Target project ID | | `request_id` | bytes (32) | Content-addressed merge request ID | Returns `NOT_FOUND` if the merge request is absent, removed, or pruned. ### `ListMergeRequests` Returns active merge requests under a target project prefix with cursor-based pagination and an optional `requester_owner_address` filter. | Field | Type | Description | |-------|------|-------------| | `project_id` | bytes (32) | Target project ID | | `limit` | uint32 | Page size (max 200) | | `cursor` | bytes | Opaque pagination cursor | | `requester_owner_address` | bytes (20) | Optional requester filter | Results are ordered by forward-key lexicographic order (equivalent to `request_id` order within a project). ### `ListMergeRequestsByRequester` Returns active merge requests opened by a specific `owner_address`, ordered by reverse-key lexicographic order. 
| Field | Type | Description | |-------|------|-------------| | `owner_address` | bytes (20) | Requester owner address | | `limit` | uint32 | Page size (max 200) | | `cursor` | bytes | Opaque pagination cursor | ### `MergeRequestSummary` All three merge-request RPCs return `MergeRequestSummary` entries with the following fields: | Field | Type | Description | |-------|------|-------------| | `request_id` | bytes (32) | Content-addressed merge request ID | | `project_id` | bytes (32) | Target (upstream) project ID | | `requester_owner_address` | bytes (20) | Original requester's owner address | | `source_project_id` | bytes (32) | Source fork-descendant project ID | | `source_ref` | bytes | Branch or ref name in the source project | | `source_commit_hash` | bytes (32) | Head commit of proposed changes | | `target_ref` | bytes | Suggested target ref in upstream | | `title` | string | Short description | | `added_at` | uint32 | Timestamp from the `MERGE_REQUEST_ADD` message | Closure attribution is not available from canonical state. To determine whether a merge request was withdrawn by the requester or closed by a maintainer, inspect finalized `MERGE_REQUEST_REMOVE` messages for the same `(project_id, request_id)` pair. ## Project and account selectors V2 selectors are address-native. | Message | V2 field | | --- | --- | | `GetProjectResponse` | `owner_address` | | `GetProjectByNameRequest` | `owner_address` | | `SearchProjectsRequest` | `owner_address` | | `ListProjectsRequest` | `owner_address` | | `GetAccountRequest` | `owner_address` | | `GetAccountResponse` | `owner_address`, `username` | | `GetKeyRequest` | `owner_address` | | `ListKeysRequest` | `owner_address` | | `GetAccountActivityRequest` | `owner_address` | | `ListVerificationsRequest` | `owner_address` | `GetAccount(owner_address)` returns a default-zero account view even when no account row has been materialized. 
`GetAccountResponse.username` is the canonical username when the account has effective active storage and an active reservation. It is the empty string when the account has no effective active username. ## Links, reactions, and history These transport fields are also address-native in V2: | Message | V2 field | | --- | --- | | `ListLinksRequest` | `owner_address` | | `LinkEntry` | `target_owner_address`, `source_owner_address` | | `ListLinksByTargetRequest` | `target_owner_address` | | `ListReactionsRequest` | `owner_address` | | `ReactionEntry` | `source_owner_address` | | `RefLogEntry` | `owner_address` | | `GetStorageQuotaProofRequest` | `owner_address` | | `GetStorageQuotaProofResponse` | `owner_address` | ## Commit metadata `CommitMeta` uses `author_address`. ## Merge request transport Merge-request transport fields `source_ref` and `target_ref` remain raw protobuf `bytes` fields. REST/OpenAPI surfaces expose them as hex-encoded byte strings so non-UTF-8 ref names are lossless. ## Verification transport Any V2 transport surface that returns verification records must expose enough proof metadata to round-trip canonical claim semantics. For ETH verifications that means: * `verification_type` * `address` * `chain_id` * `claim_key_type` * `claim_block_hash` For SOL verifications: * `claim_key_type` is zero or omitted * `claim_block_hash` is empty ## Block and sync transport The canonical persisted and streamed verification unit is the pair: * `Block` * `ExecutionPayload` Relevant V2 responses include the exact committed `ExecutionPayload`: * `GetBlockResponse` * `GetSyncTargetResponse` * `SyncBlocksResponse` * `SubscribeBlocks` V2 transports must preserve canonical account-message order directly in the committed payloads. # Brand ## Wordmark The Makechain wordmark uses Inter SemiBold at tight letter-spacing. ### Dark background
### Light background
## Logo The full logo combines the wordmark with the brand shapes arranged below. ### Dark background
### Light background
### Clear space Maintain at least 1x the height of the shapes row as clear space around the logo.
## Brand Shapes The five primary brand shapes appear in the logo and serve as visual anchors throughout documentation.
Square #00EEBE
Circle #7A3BF7
Triangle #FA7CFA
Star #FAD030
Heart #FE0302
## Usage in Headings Shapes are placed inline to the left of section headings using an inline shape component in MDX: ```mdx ## Section Title ``` Cycle through the five brand shapes per page. Assign shapes consistently within a page but vary the assignment across pages. ## Principles
Monochrome base
Black and white only. No grays in primary surfaces. The absence of color makes the shapes hit harder.
Vibrant accents
Color only comes from the shapes. Every accent is saturated and distinct — no pastels, no gradients.
Geometric precision
Clean edges, integer coordinates, no anti-aliasing artifacts. Shapes are math, not illustration.
Tight spacing
Dense information, minimal whitespace. Every pixel earns its place. Content over chrome.
# Colors ## Theme The base theme is pure monochrome. Background and text use black/white with graduated neutral layers for depth. ### Backgrounds
Background #000000
Background Dark #0a0a0a
Background 2 #111111
Background 3 #191919
Background 4 #1e1e1e
Background 5 #252525
### Text
Text #ffffff
Text 2 #cccccc
Text 3 #999999
Text 4 #666666
### Borders
Border #252525
Border 2 #404040
*** ## Accent Palette All color in the system comes from the shape accents. No color is used for text, backgrounds, or UI chrome — only for these geometric marks. ### Brand (primary 5)
#00EEBE
#7A3BF7
#FA7CFA
#FAD030
#FE0302
### Extended
#FF6B35
#FF3366
#EC4899
#F59E0B
#84CC16
#22C55E
#14B8A6
#06B6D4
#0096FF
#3B82F6
#6366F1
#8B5CF6
#A855F7
*** ## Contrast All accent colors are tested against the `#000000` background. | Color | Hex | Ratio | WCAG AA | |-------|-----|-------|---------| | Green | `#00EEBE` | 12.8:1 | Pass | | Purple | `#7A3BF7` | 4.0:1 | Pass (large) | | Pink | `#FA7CFA` | 8.0:1 | Pass | | Yellow | `#FAD030` | 11.4:1 | Pass | | Red | `#FE0302` | 4.6:1 | Pass (large) | | Orange | `#FF6B35` | 6.5:1 | Pass | | Blue | `#3B82F6` | 5.3:1 | Pass | | Cyan | `#06B6D4` | 8.1:1 | Pass | | Emerald | `#22C55E` | 8.3:1 | Pass | *** ## Light Mode The system inverts cleanly. All theme tokens have light-mode counterparts: | Token | Dark | Light | |-------|------|-------| | Background | `#000000` | `#ffffff` | | Background 2 | `#111111` | `#f5f5f5` | | Background 3 | `#191919` | `#eeeeee` | | Text | `#ffffff` | `#000000` | | Text 2 | `#cccccc` | `#333333` | | Text 3 | `#999999` | `#666666` | | Border | `#252525` | `#e0e0e0` | | Border 2 | `#404040` | `#cccccc` | Accent colors are identical in both modes — they're vivid enough to work on black or white. # Components Patterns for composing content elements across the docs. ## Section Headings Every H2 gets a shape prefix. The shape is an inline `` at 14px, vertically centered.
Getting Started
Key Features
Architecture
Configuration
Community
*** ## Feature Cards Grid of cards with shape accent, title, and description. Used for overviews and principle lists.
Fast Finality
Sub-second block finality via Simplex BFT. No waiting for confirmations.
Cryptographic Auth
Every message is self-authenticating with Ed25519 signatures and BLAKE3 hashes.
Per-Project Execution
Messages grouped by project and executed serially per group within each block.
*** ## Stat Blocks Horizontal row of key metrics. Shape serves as a bullet marker.
~200ms
Block time
~300ms
Finality
10k+
Messages/sec
32 bytes
Content-addressed IDs
*** ## Status Row Inline shapes as status indicators.
Consensus — operational
gRPC API — operational
Sharding — planned
*** ## Code Blocks Fenced code blocks use the `#111111` background with monospace font. ```rust // Content-addressed project ID let project_id = blake3::hash(&message_bytes); ``` ```bash cargo run --bin node -- --grpc-addr 127.0.0.1:50051 --p2p-addr 127.0.0.1:50052 ``` ``` Global State Root ├── Project A Root (BLAKE3 of sorted key-value diffs) ├── Project B Root └── ... ``` *** ## Tables Standard markdown tables for structured data. Borders and backgrounds come from theme tokens. | Message Type | Phase | Scope | |-------------|-------|-------| | `PROJECT_CREATE` | Serial | SIGNING | | `SIGNER_ADD` | Serial | Custody-backed | | `SIGNER_REMOVE` | Serial | Custody-backed | | `COMMIT_BUNDLE` | Project | AGENT | | `REF_UPDATE` | Project | AGENT | | `COLLABORATOR_ADD` | Project | SIGNING | *** ## Lists with Shapes Use shapes as custom bullet markers for feature lists.
Permissionless — anyone can create projects and push code without gatekeepers
Content-addressed — project IDs are BLAKE3 hashes of creation messages
CRDT semantics — deterministic conflict resolution with LWW, remove-wins, and CAS
Merkle-authenticated — every state entry is provable via per-project roots
*** ## Callout Boxes Bordered containers for important information, keyed by shape.
Note
The consensus layer stores only message metadata (~100-500 bytes). File content lives in external storage, referenced by content digest.
Important
REF\_UPDATE uses compare-and-swap. If the ref has moved since your read, the update is rejected.
Tip
Use cargo test test\_name to run a single test by name for fast iteration.
*** ## Architecture Diagrams ASCII diagrams in fenced code blocks, referenced by surrounding shapes. ``` ┌─────────────────────────┐ │ Clients │ grpc-web / gRPC │ (Browser, CLI, SDK) │ └───────────┬─────────────┘ │ ┌───────────▼─────────────┐ │ Validator Node │ │ ┌────────┐ ┌─────────┐ │ │ │ gRPC │→│ Mempool │ │ │ └────────┘ └────┬────┘ │ │ ┌───▼────┐ │ │ │Simplex │ │ │ │BFT │ │ │ └───┬────┘ │ │ ┌───▼────┐ │ │ │ State │ │ │ └────────┘ │ └──────────────────────────┘ ``` *** ## Shape Pairing Guide When writing docs pages, assign shapes to H2s consistently within a page. The recommended cycle: | Position | Shape | Color | Typical meaning | |----------|-------|-------|----------------| | 1st H2 | square | `#00EEBE` | Primary / main concept | | 2nd H2 | circle | `#7A3BF7` | Secondary / supporting | | 3rd H2 | triangle | `#FA7CFA` | Technical detail | | 4th H2 | star | `#FAD030` | Configuration / options | | 5th H2 | heart | `#FE0302` | Community / coda | For pages with more than 5 sections, pull from the extended shape set: diamond, hexagon, bolt, shield, sparkle, leaf, flame, etc. # Shapes 52 vector shapes for use as visual anchors throughout the docs. Use inline in MDX headings: ```mdx ## Section Title ``` *** ## Geometric
square
rounded-square
circle
oval
triangle
caret
diamond
pentagon
hexagon
heptagon
octagon
capsule
semicircle
parallelogram
trapezoid
## Symbols
star
sparkle
starburst
heart
cross
x-mark
asterisk
bolt
shield
target
eye
infinity
hourglass
ribbon
## Nature
sun
moon
crescent
leaf
flower
wave
## 3D
cube
pyramid
## Outlines & Rings
ring
donut
spiral
arc
## Directional
arrow-right
arrow-up
chevron-right
chevron-down
## Decorative
grid
dots
dash
slash
zigzag
stripe
bracket
## Colors | Swatch | Hex | Used by | |--------|-----|---------| | | `#00EEBE` | square, stripe | | | `#7A3BF7` | circle | | | `#FA7CFA` | triangle, flower | | | `#FAD030` | star, sparkle, bolt, sun | | | `#FE0302` | heart, x-mark, target | | | `#FF6B35` | diamond, semicircle, starburst, pyramid | | | `#0096FF` | pentagon, parallelogram, rounded-square, wave | | | `#06B6D4` | hexagon, eye, slash | | | `#14B8A6` | octagon, cube, bracket | | | `#3B82F6` | cross, grid | | | `#84CC16` | arrow-right, heptagon, dash | | | `#F59E0B` | arrow-up, crescent, hourglass | | | `#EC4899` | trapezoid, spiral, ribbon | | | `#8B5CF6` | chevron-down, moon | | | `#FF3366` | ring, asterisk, zigzag, caret | | | `#6366F1` | donut, shield | | | `#A855F7` | oval, infinity, arc | | | `#22C55E` | capsule, leaf | | | `#EC4899` | chevron-right, flower | # Typography ## Typeface The system uses the default Vocs font stack — system sans-serif for body text and monospace for code.
BODY
Inter, -apple-system, BlinkMacSystemFont, "Segoe UI", Helvetica, Arial, sans-serif
CODE
ui-monospace, SFMono-Regular, "SF Mono", Menlo, Consolas, monospace
*** ## Scale
H1
Page Title
H2
Section Heading
H3
Subsection Heading
BODY
Every operation is a cryptographically signed, self-authenticating message — verifiable without external lookups. Messages are ordered by Simplex BFT consensus with sub-second finality.
SMALL / CAPTION
Supplementary text for labels, metadata, and annotations.
CODE
shard\_index = project\_id\[0..4] as u32 % num\_shards
*** ## Hierarchy Rules 1. **One H1 per page** — the page title. No shape prefix. 2. **H2 with shape** — major sections. Every H2 gets a shape to its left at `width="14"`. 3. **H3 plain** — subsections within an H2. No shape, no decoration. 4. **Body at 0.85 opacity** — slightly softened white for comfortable reading on black. 5. **Code in monospace** — inline `code` and fenced blocks use the monospace stack on `#111111` background. *** ## Inline Code Use backtick-wrapped `inline code` for: * Field names: `project_id`, `old_hash`, `content_digest` * Message types: `COMMIT_BUNDLE`, `REF_UPDATE` * Hex values: `0x01`, `0x0A` * CLI commands: `cargo test`, `bun run build` *** ## Tables Tables use the default Vocs styling — borders from the theme, alternating row contrast via background layers. | Weight | Usage | Example | |--------|-------|---------| | 700 | H1 page title | `font-weight: bold` | | 600 | H2, H3 headings | `font-weight: semibold` | | 400 | Body text | `font-weight: normal` | | 400 | Code blocks | Monospace at normal weight | # Writing Guide The Makechain documentation is the canonical reference for the protocol, its APIs, and tooling. This guide provides editorial standards for writing clear, consistent, and accurate documentation. This page covers: * [Writing general documentation](#general-documentation) * [Writing protocol documentation](#protocol-documentation) * [Writing API documentation](#api-documentation) *** ## General Documentation ### Voice and tone Write in a technical, direct voice. Assume the reader is a developer who understands cryptography, distributed systems, and version control. Do not over-explain fundamentals — link to external references when background is needed. **Be precise, not verbose.** Every sentence should convey information. Cut filler words, hedging phrases ("it should be noted that"), and unnecessary qualifiers. * Correct: "Messages are ordered by Simplex BFT consensus with sub-second finality." 
* Incorrect: "It's worth noting that messages are typically ordered by what we call Simplex BFT consensus, which generally provides sub-second finality." ### Second person Write in the second person. Use "you" when addressing the reader directly. * Correct: "You submit messages via the gRPC `SubmitMessage` endpoint." * Incorrect: "We submit messages via the gRPC `SubmitMessage` endpoint." Reserve "we" for statements where the Makechain team is the explicit subject: "We plan to add P-256/WebAuthn as a secondary signature scheme." ### Present tense Use present tense to describe how the system works. Use future tense only for features that do not exist yet. * Correct: "The execution engine processes messages in two phases." * Incorrect: "The execution engine will process messages in two phases." ### Active voice Use active voice. Passive voice obscures the subject and adds unnecessary words. * Correct: "The leader proposes blocks by draining the mempool." * Incorrect: "Blocks are proposed by the leader by draining the mempool." ### Short sentences One idea per sentence. If a sentence has more than one comma, split it. Follow a long sentence with a short one. * Correct: "Each project group operates on its own copy-on-write overlay store. This ensures isolation between projects." * Incorrect: "Each project group operates on its own copy-on-write overlay store that can read the base state plus any account changes from Phase 1 via the batch store, which ensures isolation between projects." ### Gender-neutral language Use "they" as a singular pronoun. Address groups as "developers," "users," or "validators." ### No emojis Do not use emojis in documentation. Color and visual interest come from the [shape system](/design/shapes), not emoji. *** ## Spelling and Terminology ### Makechain-specific terms Use these terms consistently: | Term | Usage | Not | |------|-------|----| | owner address | The canonical account identifier. 
Write as `owner_address` in protocol contexts and "owner address" in prose. | legacy account ID terms | | message | Lowercase when referring to the concept. | Message (unless starting a sentence) | | message type | Refer to specific types in `SCREAMING_SNAKE_CASE` with backticks: `PROJECT_CREATE` | ProjectCreate, project\_create | | project ID | Lowercase "ID." Always note it is content-addressed (BLAKE3 hash of the creation message). | Project Id, projectId | | ref | A branch or tag pointer. Plural: "refs." | reference, branch (unless clarifying) | | scope | Key permission level. Three scopes: OWNER, SIGNING, AGENT. Show in ALL CAPS without backticks when used as a label. | scope level, permission | | state root | The BLAKE3 merkle root of all state. No hyphen. | stateroot, state-root | | mempool | One word, lowercase. | mem-pool, memory pool | | consensus | Lowercase unless starting a sentence. Refer to the specific algorithm as "Simplex BFT." | Consensus | ### External product casing Match the canonical casing of external tools and protocols: * Ed25519 (not ed25519 or ED25519) * BLAKE3 (not blake3 or Blake3) * gRPC (not GRPC or Grpc) * grpc-web (lowercase with hyphen) * protobuf (lowercase) * Rust (capitalized) * Cloudflare (capitalized) * Ethereum (capitalized), but `ETH_ADDRESS` in code * Solana (capitalized), but `SOL_ADDRESS` in code ### Abbreviations Spell out abbreviations on first use per page, followed by the abbreviation in parentheses: * "Byzantine Fault Tolerant (BFT) consensus" * "compare-and-swap (CAS)" These abbreviations are acceptable without expansion: HTTP, gRPC, URL, API, CLI, SDK, CI/CD, hex. Do not use Latin abbreviations. Write "for example" instead of "e.g." and "that is" instead of "i.e." 
### Numbers and units

- Byte counts are explicit: "32 bytes," "64 bytes"
- Hash sizes: "BLAKE3 (32 bytes)" on first mention per page
- Time: use "ms" for milliseconds, "s" for seconds — "~200ms block time"
- Throughput: "10,000+ messages per second"
- Storage: use "GB" for gigabytes, "KB" for kilobytes
- Hex values: lowercase, no `0x` prefix unless referencing a state key prefix — "prefix `0x01`"

***

## Formatting

### Headings

One H1 per page — the page title. No shape prefix on H1. All H2 headings get a shape prefix using an inline shape component:

```mdx
## Section Title
```

H3 headings are plain text — no shape, no decoration. Do not skip heading levels (H2 → H4). Use sentence case for all headings:

- Correct: `## State root computation`
- Incorrect: `## State Root Computation`

Exception: capitalize product names in headings — "Creating your first EAS build," "Configuring Simplex BFT."

### Shape assignment

Cycle through the five brand shapes for H2s within a page:

1. square (`#00EEBE`) — primary concept
2. circle (`#7A3BF7`) — secondary / supporting
3. triangle (`#FA7CFA`) — technical detail
4. star (`#FAD030`) — configuration / options
5. heart (`#FE0302`) — supplementary / coda

For pages with more than 5 sections, pull from the [extended shape set](/design/shapes): diamond, hexagon, bolt, shield, sparkle, leaf, flame.
### Inline code Use backticks for: * Message types: `PROJECT_CREATE`, `COMMIT_BUNDLE` * Field names: `project_id`, `old_hash`, `content_digest` * Hex prefixes: `0x01`, `0x0A` * CLI commands: `cargo test`, `bun run build` * RPC methods: `SubmitMessage`, `GetProject` * Rust types and crate names: `BatchStore`, `commonware-consensus` Do not use backticks for: * Product names: Makechain, Simplex BFT, Commonware * Scope labels: OWNER, SIGNING, AGENT (use ALL CAPS plain text) * File names and directories — use **bold** instead: **app.json**, **src/state/** ### File and directory names Use **bold** for file names, directory names, and file extensions in prose: * Correct: "Your protocol buffer definition is in **proto/makechain.proto**." * Incorrect: "Your protocol buffer definition is in `proto/makechain.proto`." ### Code blocks Always specify the language for fenced code blocks: ```` ```rust let project_id = blake3::hash(&message_bytes); ``` ```` Use `bash` for shell commands, `rust` for Rust code, `json` for JSON, and plain triple backticks (no language) for ASCII diagrams and pseudocode. ### Tables Use markdown tables for structured reference data. Tables are the primary format for: * Message type lists with descriptions and scopes * Configuration parameters with defaults * State key prefixes and namespaces * Storage limits * Error types with triggers Always include a header row with separator: ```markdown | Type | Description | Scope | |------|-------------|-------| | `PROJECT_CREATE` | Create a new project | SIGNING | ``` ### Lists Use dashes (`-`) for unordered lists, not asterisks. Start numbered lists at `1`. Use **bold** for the lead term in definition-style lists: ```markdown - **Permissionless** — anyone can create projects and push code - **Content-addressed** — project IDs are BLAKE3 hashes of creation messages ``` Use em dashes (—) to separate the term from its definition, not colons or hyphens. 
### Links Link descriptive text, not "here" or "this page": * Correct: "See the [storage limits](/protocol/storage-limits) for per-account capacity." * Incorrect: "See storage limits [here](/protocol/storage-limits)." Use relative paths for internal links: `/protocol/overview`, not `https://makechain.pages.dev/protocol/overview`. ### ASCII diagrams Use box-drawing characters for architecture diagrams in plain fenced code blocks: ``` ┌─────────────┐ │ Component │ └──────┬──────┘ │ ┌──────▼──────┐ │ Next Layer │ └─────────────┘ ``` Diagrams should be self-contained and readable without surrounding text. *** ## Protocol Documentation Protocol pages document the specification. They are reference material — precise, complete, and authoritative. ### Describe behavior, not implementation Protocol docs describe what the system does, not how the Rust code implements it. Reference implementation details (crate names, function names) belong in code comments and CLAUDE.md, not in user-facing docs. * Correct: "Account-level messages are applied serially because they modify shared account state." * Incorrect: "Account-level messages are applied serially using the `apply_account_messages` function in `execution.rs`." ### Document the envelope When introducing a message type, always specify: 1. The message type name in `SCREAMING_SNAKE_CASE` 2. The required key scope (OWNER, SIGNING, or AGENT) 3. The conflict key or ordering mechanism (CAS, LWW, append-only) 4. 
The semantics category (1P or 2P) ### Show the state change For each message type, describe: * **Preconditions** — what must be true for the message to be accepted * **Effect** — what state changes when the message is applied * **Failure modes** — what errors are returned and when ### Use tables for message type reference The canonical format for listing message types: ```markdown | Type | Description | Required Scope | |------|-------------|---------------| | `PROJECT_CREATE` | Create a new project with name and visibility | SIGNING | | `PROJECT_REMOVE` | Remove a project (hides refs, commits, collaborators) | OWNER | ``` ### Conflict resolution rules Always state the conflict resolution rule explicitly: * "On a timestamp tie, remove wins." * "Last-write-wins per conflict key `(project_id, field)`." * "Compare-and-swap: includes expected current hash. If the ref has moved, the update is rejected." *** ## API Documentation API pages document the gRPC service. They are functional reference — developers look things up here while coding. ### RPC method format Document each RPC with: 1. Method name in backticks: `GetProject` 2. Request fields as a table 3. Response fields as a table 4. A curl/grpcurl example when useful 5. Error conditions ### Field descriptions Write useful descriptions. Teach the developer something beyond what the type signature shows: * Correct: "`project_id` — the BLAKE3 hash of the original `PROJECT_CREATE` message (32 bytes, hex-encoded)" * Incorrect: "`project_id` — the project ID" ### Pagination All list endpoints use cursor-based pagination. Document the pattern once and reference it: * `cursor` — opaque string from a previous response. Omit for the first page. * `limit` — maximum items to return. Default 50, maximum 200. 
### Streaming endpoints For streaming RPCs (`SubscribeMessages`, `SubscribeBlocks`), document: * The filter parameters * What triggers a message on the stream * Whether the stream replays historical data or is live-only *** ## Page Structure Every documentation page follows this structure: ``` # Page Title ← H1, no shape Introductory paragraph. ← 1-2 sentences establishing context ## First Section ← H2 with shape Content... ### Subsection ← H3, plain Content... ## Second Section ← H2 with shape Content... ``` ### Opening paragraph Start every page with 1-2 sentences that tell the reader what this page covers and why it matters. No preamble, no "In this section we will discuss..." * Correct: "Makechain enforces per-account storage limits to prevent unbounded state growth." * Incorrect: "This page describes the storage limits system. Storage limits are an important part of the protocol." ### One concept per page Each page covers one topic. If you find yourself writing "see also" to another section on the same page, consider whether the content should be its own page. ### End with edges Close pages with edge cases, error types, or future considerations. The reader who reaches the bottom is looking for details. *** ## Punctuation ### Oxford commas Use Oxford commas: "projects, commits, and refs" — not "projects, commits and refs." ### Em dashes Use em dashes (—) to set off parenthetical clauses. No spaces around em dashes: * Correct: "Every operation is a cryptographically signed message — verifiable without external lookups." * Incorrect: "Every operation is a cryptographically signed message - verifiable without external lookups." In MDX, write `—` directly (Unicode em dash). The `—` entity also works. ### Double quotes Use double quotes in prose. Reserve single quotes for nested quotation or code contexts: * Correct: Set the field named "id" to your project's ID. * Incorrect: Set the field named 'id' to your project's ID. 
### Possessives Singular possessive: add **'s** regardless of final consonant — "BLAKE3's digest," "the process's state." Plural possessive ending in **s**: add just the apostrophe — "the validators' signatures." ### Slashes No spaces around slashes: "client/server," "Android/iOS." *** ## Glossary Core terms used throughout Makechain documentation. ### Protocol | Term | Definition | |------|-----------| | Message | A signed, self-authenticating operation envelope containing a BLAKE3 hash, Ed25519 signature, signer public key, and operation payload | | Message type | The specific operation: `PROJECT_CREATE`, `COMMIT_BUNDLE`, `REF_UPDATE`, etc. | | 1P (one-phase) | Unilateral state change with no paired undo message. Categories: Singleton, LWW Register, Append-only, State transition | | 2P (two-phase) | Add/Remove pairs operating on a set. Remove wins on timestamp tie | | CAS | Compare-and-swap — optimistic locking where an update includes the expected current value | | LWW | Last-write-wins — the most recent message by consensus order overwrites prior state | | Remove-wins | On a timestamp tie between add and remove, the remove takes precedence | | Conflict key | The tuple that identifies which state slot a message targets, for example `(project_id, field)` | ### Identity | Term | Definition | |------|-----------| | owner address | Canonical 20-byte account identifier in V2. It is written as `owner_address` in protocol fields | | Scope | Permission level for a registered key: OWNER (full control), SIGNING (push, manage), AGENT (automated actions) | | Claim signature | Cryptographic proof linking an external address to an `owner_address`. SOL challenge format: `makechain:verify::` | ### Consensus | Term | Definition | |------|-----------| | Simplex BFT | Single-chain Byzantine Fault Tolerant consensus protocol from the Commonware library | | Block | A batch of messages ordered by consensus. 
~200ms block time | | Finality | A block reaches finality in a single voting round. ~300ms target | | Notarization | A 2/3+ validator vote to accept a proposed block | | Mempool | Queue of validated messages waiting to be included in a block | ### Execution | Term | Definition | |------|-----------| | Account pre-pass | Phase 1: serial execution of account-level messages that modify shared state | | Project execution | Phase 2: serial per-project execution of project-scoped messages grouped by `project_id` | | Batch store | QMDB-backed store for block execution with a local mutations overlay | | Overlay store | Copy-on-write state store used for dry-run validation and unit tests | | State root | BLAKE3 merkle root produced by QMDB merkleization | ### Storage | Term | Definition | |------|-----------| | Storage unit | Yearly capacity allocation for an `owner_address`. Capacity enters through `STORAGE_CLAIM`, and missing state means zero active units | | Content storage | External storage for file content (blobs, trees), referenced by optional `content_digest` and `url` in commit bundles | | Pruning | Automatic removal of oldest unprotected commit metadata when a project exceeds its limit. Commits referenced by active refs are never pruned | | Ref | A named pointer (branch or tag) to a commit hash | | Fast-forward | A ref update where the new commit is a descendant of the current ref target | ### Infrastructure | Term | Definition | |------|-----------| | Commonware | The library of distributed systems primitives that Makechain builds on | | tonic | Rust gRPC framework used for the API layer | | QMDB | Queryable Merkle Database — sole persistent state store for all protocol state | # Claim your username In MIP 4, canonical username is not profile metadata. Username is claimed through a dedicated `USERNAME_CREATE` message and replaced through `USERNAME_UPDATE`. `STORAGE_CLAIM` funds raw storage only — it does not assign a username. 
To try this interactively, follow the [account registration guide](/guides/register-account), which includes the full wallet-connected flow with username selection.

## How username is assigned

Username assignment uses two dedicated message types:

1. `USERNAME_CREATE` — claims the first canonical username for an account. Requires active storage and that no username is currently assigned.
2. `USERNAME_UPDATE` — replaces the existing canonical username with a new one. Requires an active username and active storage.

Both messages require a delegated Ed25519 key on `owner_address` with scope `SIGNING`. In the recommended client flow, do not manually submit `STORAGE_CLAIM` first. Instead:

1. rent storage with `rentStorage()`
2. wait until raw storage is visible with `waitForStorageClaim()` or inspect snapshot state with `getRegistrationStatus()`
3. submit `USERNAME_CREATE` or `USERNAME_UPDATE`

Username-bearing quota requires both active storage **and** an active username. Storage alone is not sufficient.

## Canonical username format

The canonical protocol form is:

- lowercase ASCII only
- length `3` through `32`
- characters limited to `a-z`, `0-9`, `-`
- first and last characters must be alphanumeric

Canonical regex:

```text
^[a-z0-9][a-z0-9-]{1,30}[a-z0-9]$
```

Clients may accept mixed-case input for UX, but they must lowercase and validate it before signing or submission. Validators reject non-canonical usernames on the wire.
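The lowercase-then-validate step clients must perform before signing can be sketched as follows. `canonicalizeUsername` is an illustrative helper, not part of the SDK:

```typescript
// Canonical username rule: lowercase ASCII, 3-32 characters drawn from
// a-z / 0-9 / '-', with alphanumeric first and last characters.
const CANONICAL_USERNAME = /^[a-z0-9][a-z0-9-]{1,30}[a-z0-9]$/;

// Lowercase mixed-case UX input, then validate against the canonical
// form. Throws on input that validators would reject on the wire.
function canonicalizeUsername(input: string): string {
  const lowered = input.toLowerCase();
  if (!CANONICAL_USERNAME.test(lowered)) {
    throw new Error(`non-canonical username: ${input}`);
  }
  return lowered;
}
```

Running the check client-side avoids submitting a message that consensus is guaranteed to reject.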
## Username lifetime * username is active while the account has both active storage and an active username assignment * username is released when effective active storage drops to zero * `USERNAME_UPDATE` replaces the current username with a new one while storage remains active ## Read surfaces `GetAccountResponse.username` returns: * the canonical username when both effective active storage and an active username are present * the empty string when the account has no active username ## Profile metadata stays separate `ACCOUNT_DATA(DISPLAY_NAME)` is still mutable profile metadata. It is distinct from canonical username because display name is: * not globally unique * not storage-backed * not released on storage expiry * still updated through ordinary LWW `ACCOUNT_DATA` semantics ## Learn more * [Guide: Set up your profile](/guides/set-profile) — edit display name, bio, avatar, and website * [Protocol: Messages](/protocol/messages) * [Protocol: Submit pipeline](/protocol/submit-pipeline) # Create a project `PROJECT_CREATE` creates a new content-addressed project owned by `owner_address`. ## Create a project ## How it works 1. choose the wallet address that will be the project's `owner_address` 2. authorize an Ed25519 protocol key with `SIGNER_ADD` 3. submit a signed `PROJECT_CREATE` message 4. 
use the resulting message hash as `project_id` ## V2 semantics * project ownership is address-native * there is no registry-relayed `KEY_ADD` bootstrap step * the project ID is still derived from the canonical message payload ## Message body `PROJECT_CREATE` carries: * `name` * `visibility` * optional `description` * optional `license` ## Learn more * [Protocol overview](/protocol/overview) * [Protocol messages](/protocol/messages) * [Protocol state model](/protocol/state-model) * [Push commits](/guides/push-commits) ## Next steps * [Contribute to a project](/guides/push-commits) — submit a commit bundle and move a ref * [Manage access](/guides/manage-access) — add collaborators to the project you just created # Follow, star, and react V2 supports three social primitives: * `FOLLOW` via `LINK_ADD` and `LINK_REMOVE` * `STAR` via `LINK_ADD` and `LINK_REMOVE` * `LIKE` via `REACTION_ADD` and `REACTION_REMOVE` ## Target model * follow targets use `target_owner_address` * star targets use `target_project_id` * reaction targets use `target_project_id` and `target_commit_hash` ## V2 semantics * links and reactions remain 2P sets * reverse indexes are still maintained * `FOLLOW` accepts any valid 20-byte target address * `STAR` and `LIKE` still require valid project and commit targets where applicable ## Follow an account ## Star a project ## Like a commit ## Learn more * [Protocol messages](/protocol/messages) * [Protocol storage limits](/protocol/storage-limits) * [Manage access](/guides/manage-access) ## Next steps * [Fork a project](/guides/fork-project) — build on an existing project lineage * [Verify your identity](/guides/verify-identity) — link an external address to the same owner address # Fork a project `FORK` creates a new project whose lineage points at a source project and a specific `source_commit_hash` anchor. 
## Fork a project ## Validation Forking checks: * the source project exists * the source commit exists * the acting `owner_address` has access to the source project when it is private ## Result * the forked project gets a fresh content-addressed `project_id` * its `fork_source` records the source project lineage * refs and commits start empty in the new project state ## Learn more * [Protocol messages](/protocol/messages) * [Protocol state model](/protocol/state-model) * [Open a merge request](/guides/open-merge-request) * [Contribute to a project](/guides/push-commits) ## Next steps * [Open a merge request](/guides/open-merge-request) — use your fork descendant as the source of a proposal * [Contribute to a project](/guides/push-commits) — push more commits to the fork before proposing changes upstream # Manage access Collaborator management is address-native in V2. ## Add a collaborator ## Message types * `COLLABORATOR_ADD` * `COLLABORATOR_REMOVE` Both target `target_owner_address`. ## Permission model * `READ` * `WRITE` * `ADMIN` * `OWNER` Only the canonical project owner may grant or modify `OWNER` access. ## V2 semantics * collaborator targets are 20-byte addresses * the target does not need a preexisting account row * collaborator state remains a 2P set with remove-wins semantics ## Learn more * [Protocol identity](/protocol/identity) * [Protocol messages](/protocol/messages) * [Follow, star, and react](/guides/follow-star-react) ## Next steps * [Follow, star, and react](/guides/follow-star-react) — explore the social and engagement message types * [Fork a project](/guides/fork-project) — create a descendant project from an existing source # Open a merge request Merge requests let you propose changes from a fork descendant back to an upstream project without write access on the target. 
## Build the prerequisites interactively

Use the live docs flows to produce the source state that a merge request depends on:

This page documents the merge-request flow itself. The interactive fork and ref-update steps above produce the concrete source state that `MERGE_REQUEST_ADD` needs.

## How it works

The full lifecycle is:

1. fork the upstream project with `FORK`
2. push commits and update refs on your fork
3. open a merge request with `MERGE_REQUEST_ADD`, targeting the upstream project
4. the upstream maintainer reviews the proposal
5. the maintainer merges by submitting `COMMIT_BUNDLE` and `REF_UPDATE` on the target project, then closes the request with `MERGE_REQUEST_REMOVE`

You can also withdraw your own merge request at any time with `MERGE_REQUEST_REMOVE`. The protocol records the proposal — merge resolution is application-layer. The protocol does not define a "merged" or "rejected" state.

## Message body

### `MERGE_REQUEST_ADD`

| Field | Constraint |
|-------|-----------|
| `project_id` | Target (upstream) project ID, 32 bytes |
| `source_project_id` | Source fork-descendant project ID, 32 bytes |
| `source_ref` | Branch or ref name in the fork, 1–254 bytes, no null bytes |
| `source_commit_hash` | Head commit of proposed changes, 32 bytes |
| `target_ref` | Suggested target ref in upstream, 1–254 bytes, no null bytes |
| `title` | Short description, 1–200 UTF-8 bytes |

`source_project_id` must be in the target project's retained fork lineage within `MAX_FORK_LINEAGE_DEPTH = 256` hops. `source_project_id` must not equal `project_id` — you cannot open a merge request from a project to itself. `source_ref` must resolve exactly to `source_commit_hash` at execution time — the hash is authoritative, the ref name is advisory for humans. `target_ref` is advisory only and does not constrain the maintainer performing the merge.

The merge request identity is content-addressed: `request_id = Message.hash`.
Two messages with different timestamps produce different `request_id` values, so the same requester can open multiple merge requests against the same project. ### `MERGE_REQUEST_REMOVE` | Field | Constraint | |-------|-----------| | `project_id` | Target project ID, 32 bytes | | `request_id` | Content-addressed MR ID (original `MERGE_REQUEST_ADD` message hash), 32 bytes | ## Authorization ### Opening a merge request `MERGE_REQUEST_ADD` requires `SIGNING` scope. The target project must be `Active` (not `Archived` or `Removed`). The requester does **not** need any permission on the target project if it is public. Private targets require `READ+` access. The source project must not be `Removed`, and the requester must be the source owner or have `READ+` access on a private source project. ### Closing a merge request `MERGE_REQUEST_REMOVE` has dual authorization — the first message type with this pattern: * **Requester withdrawal** — the original requester can close without any target-project membership * **Maintainer closure** — the target project owner or any collaborator with `WRITE+` permission can close The target project must not be `Removed`. Closure is allowed on `Archived` projects so maintainers can clean up. 
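The structural constraints from the field table above can be sketched as a client-side pre-check. Field names and error strings here are illustrative; validators enforce the authoritative rules (including lineage depth and ref resolution, which need state access and are omitted):

```typescript
interface MergeRequestAdd {
  projectId: Uint8Array;        // target (upstream) project ID, 32 bytes
  sourceProjectId: Uint8Array;  // source fork-descendant project ID, 32 bytes
  sourceRef: string;            // 1-254 bytes, no null bytes
  sourceCommitHash: Uint8Array; // head commit of the proposed changes, 32 bytes
  targetRef: string;            // advisory, same byte constraints as sourceRef
  title: string;                // 1-200 UTF-8 bytes
}

// Ref names are validated by UTF-8 byte length, not character count.
function isValidRefName(ref: string): boolean {
  const bytes = new TextEncoder().encode(ref);
  return bytes.length >= 1 && bytes.length <= 254 && !bytes.includes(0);
}

function sameBytes(a: Uint8Array, b: Uint8Array): boolean {
  return a.length === b.length && a.every((byte, i) => byte === b[i]);
}

// Returns an error string for the first failed structural check, or null.
function precheckMergeRequestAdd(msg: MergeRequestAdd): string | null {
  if (msg.projectId.length !== 32 || msg.sourceProjectId.length !== 32)
    return "project IDs must be 32 bytes";
  if (sameBytes(msg.projectId, msg.sourceProjectId))
    return "cannot open a merge request from a project to itself";
  if (msg.sourceCommitHash.length !== 32)
    return "source_commit_hash must be 32 bytes";
  if (!isValidRefName(msg.sourceRef) || !isValidRefName(msg.targetRef))
    return "ref names must be 1-254 bytes with no null bytes";
  const titleBytes = new TextEncoder().encode(msg.title).length;
  if (titleBytes < 1 || titleBytes > 200)
    return "title must be 1-200 UTF-8 bytes";
  return null;
}
```
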
## V2 semantics * merge requests are a tombstone-backed 2P set — same pattern as collaborators, links, and reactions * merge request state is stored under the target project's namespace at prefix `0x1B`, with a requester reverse index at `0x1C` * `ProjectState.merge_request_count` tracks the number of active merge requests per project * the target project owner funds merge-request capacity: `20 + storage_units × 20` per project * both active entries and tombstones count toward quota; oldest entries are pruned first * closed merge requests return `NOT_FOUND` — closure attribution is available only from finalized `MERGE_REQUEST_REMOVE` message history, not from canonical state * `MERGE_REQUEST_ADD` and `MERGE_REQUEST_REMOVE` are storage-sensitive message types using tombstone-backed 2P-set semantics: remove wins on add/remove timestamp ties, and equal-timestamp add/add remains last-inclusion-wins ## Learn more * [Protocol messages](/protocol/messages) * [Protocol state model](/protocol/state-model) * [Protocol storage limits](/protocol/storage-limits) * [Protocol submit pipeline](/protocol/submit-pipeline) * [API examples](/api/examples) * [Fork a project](/guides/fork-project) ## Next steps * [Fork a project](/guides/fork-project) — create the source fork lineage * [Contribute to a project](/guides/push-commits) — push the commits referenced by the merge request # Contribute to a project Makechain separates commit metadata from ref movement. ## Push a commit ## Two-message flow 1. submit `COMMIT_BUNDLE` 2. submit `REF_UPDATE` Both messages are scoped by `project_id`, while the acting principal remains `owner_address` through the envelope and delegated-key checks. ## V2 details * `CommitMeta` uses `author_address` * refs keep CAS semantics with `old_hash` and `nonce` * `AGENT` keys may write only to `allowed_projects` * no relay-era signer or identity flow is involved ## Content storage The protocol stores commit metadata only. 
Optional `content_digest` and `url` point to external content storage.

## Learn more

- [Protocol messages](/protocol/messages)
- [Protocol submit pipeline](/protocol/submit-pipeline)
- [Work with branches](/guides/work-with-branches)

## Next steps

- [Manage access](/guides/manage-access) — add collaborators to the project
- [Work with branches](/guides/work-with-branches) — learn the branch and ref semantics behind the demo

# Register an account

Makechain V2 is address-native: your account is identified by its address, and signer management is submitted directly to consensus.

## Try it interactively

The generated Ed25519 key stays active in this browser and becomes the Makechain signer used by later write guides.

## What an account means in V2

Your account identity is your `owner_address`.

- It is the 20-byte address of your wallet or passkey-derived account.
- It is valid even before any state row exists.
- Missing account state means default-zero bookkeeping.

## Basic setup flow

1. Connect the wallet or passkey that will be your `owner_address`.
2. Generate an Ed25519 protocol keypair for fast Makechain message signing.
3. Authorize that same key with `SIGNER_ADD`.
4. Rent storage on Tempo.
5. Wait for the relayer-backed `STORAGE_CLAIM`.
6. Claim your canonical username with `USERNAME_CREATE`.

## What `SIGNER_ADD` does

`SIGNER_ADD` is the first key-management step in V2.

- The message envelope is signed by an Ed25519 key for transport integrity.
- Authorization comes from an EIP-712 custody signature from `owner_address`.
- The message also includes app attribution through `request_owner_address` and `request_signature`.

There is no relay-era fallback. `SIGNER_ADD` and `SIGNER_REMOVE` are the only signer-management paths.

## Account lifecycle notes

- `ACCOUNT_DATA` writes profile fields such as display name, avatar, bio, and website.
- `VERIFICATION_ADD` links external ETH or SOL identities to the same `owner_address`.
- Storage expansion happens only through `STORAGE_CLAIM` settlement verification.
- `STORAGE_CLAIM` funds raw storage only. Claim the canonical username separately with `USERNAME_CREATE`, and replace it later with `USERNAME_UPDATE`.

The first successful `STORAGE_CLAIM` remains settlement-verified and does not require delegated-key authorization. Delegated-key scope `SIGNING` is required for username messages and other ordinary user messages.

## Recommended app flow

In app clients, the preferred registration sequence sits one level above the raw protocol messages:

1. connect the wallet or passkey that will own the account
2. generate an Ed25519 keypair and authorize it with `SIGNER_ADD`
3. rent storage on Tempo with `rentStorage()`
4. wait for Makechain to show the relayer-backed claim with `waitForStorageClaim()`
5. claim the canonical username with `usernameCreate()`

This keeps `STORAGE_CLAIM` as the protocol primitive while avoiding direct claim submission in the normal user flow.

## CLI shape

The exact Rust CLI migration is still in progress, but the normative V2 model is:

1. generate an Ed25519 keypair
2. sign a `SIGNER_ADD` typed payload with the custody wallet
3. submit the signed Makechain message
4. rent storage and claim the username

Use **protocol/SPECIFICATION.md** as the source of truth for field names and signing payloads.

## Learn more

- [Protocol: Identity](/protocol/identity)
- [Protocol: Messages](/protocol/messages)
- [Protocol: Storage limits](/protocol/storage-limits)

## Next steps

- [Create a project](/guides/create-project) — submit your first project-scoped Makechain message
- [Set up your profile](/guides/set-profile) — add display name, avatar, bio, and website metadata
- [Verify your identity](/guides/verify-identity) — link an Ethereum or Solana address to the same owner address

# Set up your profile

The canonical username is not profile metadata in MIP 4.
Usernames are claimed through `USERNAME_CREATE` and replaced through `USERNAME_UPDATE` — separate from storage. `STORAGE_CLAIM` funds raw storage only. This guide covers only mutable profile metadata: display name, bio, avatar, and website. Each field is stored as an independent last-write-wins register via `ACCOUNT_DATA` messages. ## Set your profile data ## What happened Each `ACCOUNT_DATA` message sets a single field such as display name, avatar, bio, or website. Fields are independent — updating your bio does not affect your display name. Profile fields use LWW semantics. If two messages update the same field, the one with the later consensus timestamp wins. Stale writes (older than the current value) are rejected. Each field is stored under an `(owner_address, field)` composite key with a `(timestamp, value)` tuple enabling the LWW comparison.
| Field | Constraint |
|-------|-----------|
| `name` | Display name, max 32 bytes |
| `avatar` | URL to a profile image, max 500 characters |
| `bio` | Free-text description, max 500 characters |
| `website` | URL to your website, max 500 characters |
Requires **SIGNING** key scope. Processed in the account pre-pass (Phase 1, serial execution). Canonical username is separate state. Use [`USERNAME_CREATE`](/guides/claim-username) to claim your first username after activating storage.
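The per-field LWW comparison can be sketched as follows. The store shape is illustrative; this sketch applies writes in consensus order and rejects strictly older timestamps as stale, which is an assumption about how equal timestamps resolve:

```typescript
// Per-field LWW register keyed by (owner_address, field), storing a
// (timestamp, value) tuple per slot for the last-write-wins comparison.
type LwwSlot = { timestamp: number; value: string };

class ProfileStore {
  private slots = new Map<string, LwwSlot>();

  // Apply one ACCOUNT_DATA write. Returns false when the write is stale
  // (older than the current value) and therefore rejected.
  apply(owner: string, field: string, timestamp: number, value: string): boolean {
    const key = `${owner}:${field}`;
    const current = this.slots.get(key);
    if (current && timestamp < current.timestamp) return false; // stale write
    this.slots.set(key, { timestamp, value });
    return true;
  }

  get(owner: string, field: string): string | undefined {
    return this.slots.get(`${owner}:${field}`)?.value;
  }
}
```

Because each slot is keyed by the `(owner_address, field)` pair, updating one field never touches another.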
```bash # Set display name makechain account set-data --secret --owner-address 0x... --field display_name --value "Alice Smith" # Set bio makechain account set-data --secret --owner-address 0x... --field bio --value "Building the future..." # Set avatar URL makechain account set-data --secret --owner-address 0x... --field avatar --value "https://example.com/avatar.png" # Set website makechain account set-data --secret --owner-address 0x... --field website --value "https://example.com" ``` **Read profile data** — the normative V2 account lookup is address-native and keyed by `owner_address`. **Write profile data** — `POST /v1/write` with a signed `ACCOUNT_DATA` message. The gateway proxies to the node's gRPC submit endpoint. **Read canonical username** — `GetAccountResponse.username` returns the active username only when the account has effective active storage and an active username reservation. Storage-funded accounts without a username still return the empty string.
## Learn more

- [Protocol: Identity](/protocol/identity) — accounts, keys, and ownership model
- [Guide: Claim your username](/guides/claim-username) — claim the canonical username after activating storage
- [Protocol: Messages](/protocol/messages) — message types and LWW semantics
- [API: Accounts](/api/overview#accounts) — programmatic account access

## Next steps

- [Claim your username](/guides/claim-username) — claim your canonical username after activating storage
- [Verify your identity](/guides/verify-identity) — link an Ethereum or Solana address to your owner address
- [Create a project](/guides/create-project) — start building on Makechain

# Verify your identity

Link an external address (Ethereum or Solana) to your `owner_address` by signing the canonical V2 verification payload. The claim is verified cryptographically in consensus and stored in state. A verification proves that you control the external address without revealing your private key.

## Link an Ethereum address

## What happened

You sign a `VerificationClaim(owner, ethAddress, chainId, verificationType, network)` typed struct using EIP-712 `signTypedData_v4`. For EOAs, the validator recovers the secp256k1 signer address. For passkey wallets, it validates the WebAuthn assertion, recovers the P-256 public key, and derives the Ethereum-style address. In both cases, the recovered address must match the `address` field and the typed `owner` must match `MessageData.owner_address`.

For Solana, you sign the challenge `makechain:verify::` with your Solana keypair. The validator verifies the Ed25519 signature directly — the Solana address is the public key. If the signature is valid, the verification is accepted.
1. Sign the challenge with your Solana wallet
2. Submit and finalize
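The Solana-side check can be sketched with Node's built-in Ed25519 support. The challenge bytes below are a placeholder; the canonical challenge format is defined by the protocol specification, not by this sketch:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Placeholder challenge bytes — substitute the canonical
// makechain:verify challenge from the spec.
const challenge = Buffer.from("makechain:verify:<placeholder>");

// This generated keypair stands in for the user's Solana wallet key;
// the Solana address is the Ed25519 public key itself.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Ed25519 signs the message bytes directly, so the digest argument is null.
const signature = sign(null, challenge, privateKey);

// The validator verifies the signature against the claimed public key.
const accepted = verify(null, challenge, publicKey, signature);

// A tampered challenge must not verify.
const rejected = verify(null, Buffer.from("tampered"), publicKey, signature);
```
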
Verifications are a 2P set. Submit `VERIFICATION_REMOVE` to unlink an address. On a timestamp tie between add and remove, remove wins.
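The remove-wins rule can be sketched as a small 2P set. The in-memory shape is illustrative — canonical state uses tombstone entries, but the tie semantics are the same:

```typescript
// 2P set with remove-wins semantics: an entry is active only when its
// add timestamp is strictly later than any removal tombstone.
type Entry = { addedAt: number; removedAt?: number };

class TwoPhaseSet {
  private entries = new Map<string, Entry>();

  add(key: string, ts: number): void {
    const e = this.entries.get(key);
    if (!e || ts > e.addedAt) {
      this.entries.set(key, { addedAt: ts, removedAt: e?.removedAt });
    }
  }

  remove(key: string, ts: number): void {
    const e = this.entries.get(key) ?? { addedAt: 0 };
    if (e.removedAt === undefined || ts > e.removedAt) {
      this.entries.set(key, { ...e, removedAt: ts }); // tombstone
    }
  }

  isActive(key: string): boolean {
    const e = this.entries.get(key);
    if (!e) return false;
    if (e.removedAt === undefined) return true;
    // Remove wins on an add/remove timestamp tie.
    return e.addedAt > e.removedAt;
  }
}
```

The same pattern backs links, reactions, collaborators, and merge requests.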
## Learn more * [Protocol: Identity](/protocol/identity) — accounts, keys, and verifications * [Protocol: Security](/protocol/security) — cryptographic verification details * [API: Accounts](/api/overview#accounts) — read verifications via the accounts endpoint ## Next steps * [Create a project](/guides/create-project) — start building on Makechain * [Follow, star, and react](/guides/follow-star-react) — follow accounts and star projects # Work with branches Branches and tags are refs under `project_id`. ## Advance a branch interactively This live demo advances `refs/heads/main` with `REF_UPDATE` after submitting a `COMMIT_BUNDLE`. The remaining ref operations below use the same project-scoped authorization model and CAS semantics. ## Core operations * create a ref with `REF_UPDATE` and empty `old_hash` * advance a ref with `REF_UPDATE` * reset a ref with `REF_UPDATE` and `force = true` * delete a ref with `REF_DELETE` ## V2 notes * refs are project-scoped * the acting principal is `owner_address` * CAS semantics remain unchanged * deleting a ref does not delete commit metadata ## Learn more * [Protocol messages](/protocol/messages) * [Protocol state model](/protocol/state-model) * [Contribute to a project](/guides/push-commits) ## Next steps * [Fork a project](/guides/fork-project) — create a descendant project from an anchored source ref * [Open a merge request](/guides/open-merge-request) — understand how branch and fork state feed into proposals # Consensus Makechain uses Simplex BFT to order all protocol messages on a single chain. ## What consensus commits V2 commits the exact pair: * `Block` * `ExecutionPayload` The execution payload records the exact account-phase message order and project-phase grouping order. ## Execution phases 1. account-phase serial execution 2. project-phase grouped execution by `project_id` This ordering is consensus-relevant and part of the committed payload. 
## V2 clarification Consensus does not rely on relay-injected host-chain identity or signer events. Disabled relay-era message families are not part of valid V2 block contents. # Content Storage The consensus layer stores only message metadata (~100–500 bytes per message). Actual file content — blobs, trees, and full commit messages — lives in external storage chosen by the developer. ## Architecture ``` Developer Makechain Consensus Content Storage │ │ │ ├─ Upload blobs ───────────────────────────────────► │ │ (returns URL + digest) │ │ │ │ │ ├─ COMMIT_BUNDLE ──────────►│ │ │ (content_digest + url) │ │ │ ├─ Include in block │ │ │ │ Consumer │ │ │ │ │ ├─ Read commit metadata ◄──┤ │ ├─ Fetch blobs via url ──────────────────────────► │ ├─ Verify content_digest │ │ │ │ │ ``` ## Content Digest and URL A `COMMIT_BUNDLE` may include two optional fields: * **`content_digest`** — a 32-byte hash serving as an integrity proof for the referenced content * **`url`** — a content locator string (max 2048 characters) pointing to the uploaded data Both fields are **self-attested** by the submitter. Validators do not fetch or verify the referenced content. Clients verify integrity offline by comparing the fetched content's hash against the `content_digest`. These fields link consensus-layer metadata to the full data: | Consensus Layer (validators) | External Storage | |-----|-----| | Commit hash, title, author, parents | Full commit message text | | Tree root hash | Tree objects (directory listing) | | `content_digest` + `url` | Blob objects (file content) | ## Content Lifecycle 1. **Upload**: Developer uploads tree and blob data to external storage, receiving a URL and computing a content digest 2. **Submit**: Developer submits a `COMMIT_BUNDLE` with `content_digest`, `url`, and commit metadata 3. **Finalize**: Validators include the bundle in a block — metadata is stored in consensus state 4. 
**Retrieve**: Consumers read metadata from consensus, fetch full data via `url`, and verify against `content_digest` ## Recovery from Pruning When consensus-layer commit metadata is pruned (see [Storage Limits](/protocol/storage-limits)), the full commit data remains recoverable from external storage: * Pruned `CommitMeta` entries lose their hash, title, and parent links from validator state * External storage retains the complete blob data (subject to storage provider retention policies) * The `content_digest` allows integrity verification of recovered data This separation ensures that storage limits on validators don't cause permanent data loss. ## Storage Backend Options The content storage backend is chosen by the developer. Options include: | Backend | Tradeoffs | |---------|-----------| | **Object storage (R2, S3)** | Fast, reliable, cost-effective for compressed bundles | | **IPFS / Filecoin / Arweave** | Permanent, decentralized, content-addressed | | **Self-hosted storage** | Full control, use `content_digest` for integrity | | **Git hosting** | Store bundles alongside existing git infrastructure | # Identity This page documents the normative `V2AddressNative` identity model from `protocol/SPECIFICATION.md`. ## Canonical identity `owner_address` is the sole canonical account identifier in Makechain V2. * It is a raw 20-byte EVM-style address. * Any valid 20-byte address is a valid Makechain principal, even if it has never submitted a message. * Missing account state means default-zero bookkeeping, not invalid identity. V2 has no registry-backed account creation, protocol-level ownership transfer, or protocol-level recovery flow. ## Delegated protocol keys Protocol signing keys are Ed25519-only in V2. 
Each delegated key is attached to an `owner_address` and has one of these scopes: | Scope | Meaning | | --- | --- | | `OWNER` | Highest privilege | | `SIGNING` | Ordinary account and project administration | | `AGENT` | Project-scoped automated actions | `AGENT` keys may be constrained by `allowed_projects`. `OWNER` and `SIGNING` keys ignore `allowed_projects`. ## Signer management `SIGNER_ADD` and `SIGNER_REMOVE` are the only signer-management paths in V2. * Both are user-submitted messages with valid Ed25519 envelopes. * The envelope signer provides transport integrity only. * Authorization comes from a custody signature that verifies directly against `MessageData.owner_address`. Supported wallet-level custody signature families are: | `custody_key_type` | Family | | --- | --- | | `0` | secp256k1 ECDSA | | `1` | P256 ECDSA | | `2` | WebAuthn P256 | | `3` | ERC-1271 | For ERC-1271, the corresponding `custody_block_hash` must be present and is bound into the typed data. ## App attribution Every `SIGNER_ADD` includes app attribution. * `request_owner_address` is a 20-byte external wallet address. * It is not a Makechain account lookup key. * `request_signature` verifies directly against `request_owner_address`. * Self-request is represented by `request_owner_address == owner_address`. The typed data is `SignerRequest(address requestOwner, bytes32 key, uint64 validAfter, uint64 validBefore, uint32 network)`. ## Typed data The custody and attribution payloads are address-native in V2: ```text SignerAdd(address owner, bytes32 key, uint32 scope, uint64 validAfter, uint64 validBefore, uint64 nonce, bytes32[] allowedProjects, uint32 network) SignerRemove(address owner, bytes32 key, uint64 validAfter, uint64 validBefore, uint64 nonce, uint32 network) SignerRequest(address requestOwner, bytes32 key, uint64 validAfter, uint64 validBefore, uint32 network) ``` ERC-1271 variants add `bytes32 validationBlockHash` and bind the historical Tempo block hash into the signed payload. 
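A minimal sketch of the scope rules above, assuming an empty `allowed_projects` list leaves an `AGENT` key unconstrained (the spec excerpt doesn't state that case, so treat it as an assumption):

```typescript
// Illustrative scope check for delegated keys: OWNER and SIGNING keys
// ignore allowed_projects; AGENT keys are limited to the listed projects.
// The empty-list-means-unconstrained behavior is an assumption.
type Scope = "OWNER" | "SIGNING" | "AGENT";

type DelegatedKey = {
  scope: Scope;
  allowedProjects: string[]; // consulted only for AGENT keys
};

function mayActOnProject(key: DelegatedKey, projectId: string): boolean {
  if (key.scope !== "AGENT") return true;
  return key.allowedProjects.length === 0 || key.allowedProjects.includes(projectId);
}
```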
## External address verification `VERIFICATION_ADD` and `VERIFICATION_REMOVE` link external addresses to an `owner_address`. For ETH verification, the proof metadata includes: * `claim_key_type` * `claim_block_hash` when `claim_key_type == 3` For SOL verification, the signed challenge string is: ```text makechain:verify::<owner_address_hex> ``` `owner_address_hex` is lowercase hex, exactly 40 characters, with no `0x` prefix. ## Removed concepts These do not exist in V2 semantics: * registry-based account allocation * onchain registry-based account creation * protocol-level ownership transfer * protocol-level recovery * host-chain-driven identity ingress * relay-derived signer authorization # Message Types This page documents the live `V2AddressNative` message set from `protocol/SPECIFICATION.md`. Every committed V2 message is user-submitted and carries a valid Ed25519 envelope. Validators do not inject Tempo events into blocks. ## Projects | Type | Description | Base scope | | --- | --- | --- | | `PROJECT_CREATE` | Create a project | `SIGNING` | | `PROJECT_METADATA` | Update project metadata | `SIGNING` | | `PROJECT_ARCHIVE` | Archive a project | `SIGNING` | | `FORK` | Fork an existing project | `SIGNING` | | `PROJECT_REMOVE` | Remove a project | `SIGNING` | Project ownership is keyed by `owner_address`. ## Refs and commits | Type | Description | Base scope | | --- | --- | --- | | `REF_UPDATE` | Move a ref to a new hash | `AGENT` | | `REF_DELETE` | Delete a ref | `AGENT` | | `COMMIT_BUNDLE` | Append commit metadata | `AGENT` | `CommitMeta` uses `author_address`.
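The SOL challenge format can be sketched as a small builder that enforces the stated address rules; the assumption here is that the owner address fills the template's placeholder after the `::` separator:

```typescript
// Illustrative builder for the SOL verification challenge, assuming
// the lowercase owner address fills the placeholder after
// "makechain:verify::". The validation mirrors the documented rules:
// lowercase hex, exactly 40 characters, no 0x prefix.
function solChallenge(ownerAddress: string): string {
  const hex = ownerAddress.toLowerCase().replace(/^0x/, "");
  if (!/^[0-9a-f]{40}$/.test(hex)) {
    throw new Error("owner_address_hex must be exactly 40 lowercase hex chars");
  }
  return `makechain:verify::${hex}`;
}
```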
## Collaborators and account metadata | Type | Description | Base scope | | --- | --- | --- | | `COLLABORATOR_ADD` | Add a collaborator | `SIGNING` | | `COLLABORATOR_REMOVE` | Remove a collaborator | `SIGNING` | | `ACCOUNT_DATA` | Update account metadata | `SIGNING` | Collaborator targets are address-native: * `CollaboratorAddBody.target_owner_address` * `CollaboratorRemoveBody.target_owner_address` ## Merge requests | Type | Description | Base scope | | --- | --- | --- | | `MERGE_REQUEST_ADD` | Open a merge request from a fork descendant into a target project | `SIGNING` | | `MERGE_REQUEST_REMOVE` | Withdraw or close a merge request | `SIGNING` | `MERGE_REQUEST_ADD` binds `source_project_id`, `source_ref`, `source_commit_hash`, and a suggested `target_ref` into a content-addressed `request_id`. The source project must be in the target project's retained fork lineage within `MAX_FORK_LINEAGE_DEPTH = 256` hops. `source_project_id` must not equal `project_id`. No target-project membership is required for public projects; private targets require `READ+` access. `source_ref` must resolve exactly to `source_commit_hash` at execution time. `MERGE_REQUEST_REMOVE` closes an active merge request through dual authorization — the original requester may withdraw without any target-project membership, or a target project owner or collaborator with `WRITE+` may close it. Merge request state is stored under the target project's namespace at prefix `0x1B` with a requester reverse index at `0x1C`. Closure attribution is available only from finalized message history, not from canonical state. ## Verification and storage | Type | Description | Base scope | | --- | --- | --- | | `VERIFICATION_ADD` | Prove an external address | `SIGNING` | | `VERIFICATION_REMOVE` | Remove a verification | `SIGNING` | | `STORAGE_CLAIM` | Claim raw storage units | none | `STORAGE_CLAIM` is the only Tempo-backed storage ingress path in V2. It funds raw storage only and does not assign usernames. 
It still requires finalized settlement verification. First successful application does not require delegated-key authorization, and duplicate replay remains marker-idempotent after settlement verification. `VERIFICATION_ADD` includes address-native proof metadata: * `claim_key_type` * `claim_block_hash` when `claim_key_type == 3` ## Username management | Type | Description | Base scope | | --- | --- | --- | | `USERNAME_CREATE` | Claim the first canonical username for an account | `SIGNING` | | `USERNAME_UPDATE` | Replace the current canonical username | `SIGNING` | `USERNAME_CREATE` requires active storage (at least one accepted `STORAGE_CLAIM`) and that no username is currently assigned. `USERNAME_UPDATE` requires an active username and active storage. Username-bearing quota requires both active storage and an active username. ## Links and reactions | Type | Description | Base scope | | --- | --- | --- | | `LINK_ADD` | Add a link | `SIGNING` | | `LINK_REMOVE` | Remove a link | `SIGNING` | | `REACTION_ADD` | Add a reaction | `SIGNING` | | `REACTION_REMOVE` | Remove a reaction | `SIGNING` | `FOLLOW` links use `target_owner_address`. `STAR` links use `target_project_id`. Reactions remain project-and-commit scoped, but the reacting account is identified by `owner_address`. ## Signer management | Type | Description | Authorization | | --- | --- | --- | | `SIGNER_ADD` | Add an Ed25519 delegated key | custody-backed | | `SIGNER_REMOVE` | Remove an Ed25519 delegated key | custody-backed | `SIGNER_ADD` includes: * `request_owner_address` * `request_signature` * `allowed_projects` * `custody_block_hash` when `custody_key_type == 3` * `request_block_hash` when `request_key_type == 3` ## Disabled families These message families are disabled in V2 and must not appear at ingress, replay, or commit time: * `KEY_ADD` * `OWNERSHIP_TRANSFER` * `STORAGE_RENT` * `RELAY_SIGNER_ADD` * `RELAY_SIGNER_REMOVE` # Protocol overview Makechain V2 is a clean reset called `V2AddressNative`. 
Its core properties are: * `owner_address` is the sole canonical identity * identity is address-native * `STORAGE_CLAIM` is the only Tempo-backed storage ingress path * `SIGNER_ADD` and `SIGNER_REMOVE` are the only signer-management paths * Tempo events are never injected into Makechain blocks * canonical persisted verification uses the exact `(Block, ExecutionPayload)` pair ## Identity and authorization Ordinary user messages are authorized by delegated Ed25519 keys attached to an `owner_address`. `SIGNER_ADD` and `SIGNER_REMOVE` bypass delegated-key lookup and derive authority from custody proofs. `STORAGE_CLAIM` still requires finalized settlement verification, but first successful application does not require delegated-key authorization. `STORAGE_CLAIM` funds raw storage only — usernames are claimed separately through `USERNAME_CREATE` and replaced through `USERNAME_UPDATE`. `MERGE_REQUEST_ADD` and `MERGE_REQUEST_REMOVE` open and close cross-project contribution proposals from fork descendants. `MERGE_REQUEST_REMOVE` has dual authorization: the requester may withdraw, or a target project member with `WRITE+` may close. ## Execution model Execution is split into two ordered phases: 1. account-level messages in proposer-selected serial order 2. project-level messages grouped by `project_id` in byte-lexicographic order This ordering is part of the canonical `ExecutionPayload` commitment. ## Removed legacy behavior V2 does not preserve: * registry-based identity semantics * registry or relay ingress * relay checkpoints * relay-era replay compatibility * protocol-level ownership transfer or recovery # Security ## Fail-closed identity `owner_address` is immutable protocol identity. State attached to one address cannot be retargeted to another by protocol message. ## Message authorization Ordinary messages require a delegated Ed25519 key with sufficient scope. `SIGNER_ADD` and `SIGNER_REMOVE` are special-cased and derive authority from custody proofs. 
`STORAGE_CLAIM` still requires finalized settlement verification, but first successful application does not require delegated-key authorization. Duplicate replay remains settlement-first and marker-idempotent. ## Replay protection * signer management uses `custody_nonce` * ref updates and deletes use per-ref nonces * message hashes remain content-addressed BLAKE3 digests over canonical protobuf encoding ## Removed ingress classes V2 removes relay-era attack and ambiguity surface by rejecting: * `KEY_ADD` * `OWNERSHIP_TRANSFER` * `STORAGE_RENT` * `RELAY_SIGNER_ADD` * `RELAY_SIGNER_REMOVE` Validators do not inject Tempo events into blocks. # Execution Model Makechain uses a single consensus chain with serial per-project execution within each block, rather than separate shard chains. ## Execution Phases Within each block, messages are processed in two phases: ### Phase 1: Account Pre-pass (Serial) Account-level messages are applied serially because they modify shared account state (key registrations, project counts): * `SIGNER_ADD` / `SIGNER_REMOVE` * `ACCOUNT_DATA` * `VERIFICATION_ADD` / `VERIFICATION_REMOVE` * `LINK_ADD` / `LINK_REMOVE` * `REACTION_ADD` / `REACTION_REMOVE` * `PROJECT_CREATE` / `PROJECT_REMOVE` / `FORK` (modify account `project_count`) ### Phase 2: Project Execution (Serial per project) Project-scoped messages are grouped by `project_id` and each group is executed serially: * `PROJECT_ARCHIVE` / `PROJECT_METADATA` * `REF_UPDATE` / `REF_DELETE` * `COMMIT_BUNDLE` * `COLLABORATOR_ADD` / `COLLABORATOR_REMOVE` * `MERGE_REQUEST_ADD` / `MERGE_REQUEST_REMOVE` Both phases execute against a shared `BatchStore` — a mutations overlay on committed QMDB state. Phase 2 sees all account changes from Phase 1. ## State Root Computation After execution, the `BatchStore` is merkleized via QMDB's batch API, producing a single BLAKE3 state root digest (QMDB MMR root). 
This is deterministic — all validators execute the same messages in the same order and produce the same root. ## Future: Sharding The protocol spec reserves the possibility of sharding by `project_id` for horizontal scaling: ```rust shard_index = project_id[0..4] as u32 % num_shards ``` The current per-project grouping already provides the isolation needed for future parallel or sharded execution without cross-shard coordination (except for `FORK`, which includes a state proof from the source project). # State model V2 state is address-native. ## Core selectors * account bookkeeping is keyed by `owner_address` * project ownership is stored as `owner_address` * collaborators use `target_owner_address` * links and reactions use `owner_address` and reverse address-native indexes * commit metadata uses `author_address` ## Account defaults If an account row is absent, the canonical default view is: * `key_count = 0` * `project_count = 0` * `storage_units = 0` * `custody_nonce = 0` * `created_at = 0` ## Merkle scope The canonical merkleized prefixes are `0x03` through `0x17` inclusive, plus `0x1A` through `0x1C`. `0x02`, `0x18`, and `0x19` remain persisted but are not part of `state_root`. 
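The reserved sharding rule above can be sketched in the client SDK's language. Treating the first four bytes of `project_id` as a big-endian u32 is an assumption — the spec line doesn't fix endianness:

```typescript
// Sketch of the reserved shard-index rule: interpret project_id[0..4]
// as an unsigned 32-bit integer and reduce modulo num_shards.
// Big-endian interpretation is an assumption, not spec-confirmed.
function shardIndex(projectId: Uint8Array, numShards: number): number {
  const view = new DataView(projectId.buffer, projectId.byteOffset, 4);
  return view.getUint32(0, false) % numShards; // false = big-endian
}
```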
## Merge request state Three prefix families support merge requests: | Prefix | Entity | Layout | |--------|--------|--------| | `0x1A` | Fork parent (retained lineage) | `[0x1A \| project_id:32]` | | `0x1B` | Merge request (forward) | `[0x1B \| project_id:32 \| request_id:32]` | | `0x1C` | Merge request (reverse, by requester) | `[0x1C \| requester_owner_address:20 \| project_id:32 \| request_id:32]` | * `0x1A` stores the immediate fork parent for each fork project, independent of prunable project rows, so merge-request lineage validation survives project pruning * `0x1B` and `0x1C` are a forward/reverse pair — for every forward entry, a matching reverse entry must exist, and vice versa * `ProjectState` includes `merge_request_count`, which tracks the number of active merge requests targeting the project * `0x1A`, `0x1B`, and `0x1C` are all queryable and merkleized ## Removed state concepts V2 state does not include: * registry allocation metadata * ownership transfer markers * signer-authorization event markers * storage-rent event markers * relay checkpoint provenance # Storage limits Makechain V2 keeps per-account storage bookkeeping keyed by `owner_address`. ## Base model Accounts start with default-zero rented units plus the protocol's built-in base allocation. Additional capacity enters the protocol only through `STORAGE_CLAIM`. ## Ingress path `STORAGE_CLAIM` is the only Tempo-backed storage ingress path in V2. Legacy `STORAGE_RENT` relay messages are removed and invalid. ## Merge request quota Merge requests consume the target project owner's storage quota: | Resource | Effective limit | |----------|-----------------| | Merge requests per target project | `20 + storage_units × 20` | The target project owner funds capacity because merge requests occupy the target project's namespace. Both active entries and tombstones count toward the limit. When the limit is exceeded, the oldest entries are pruned first.
Pruning an active entry deletes the forward row, deletes the matching reverse row, and decrements `merge_request_count`. Pruning a tombstone deletes only that tombstone row and leaves the prune marker behind. As with other storage-sensitive message types, expired storage grants are swept before quota enforcement. ## Current implementation note Appendix A now assigns both `DEVNET` and `TESTNET` to the Tempo Moderato `StorageRelay` proxy `0x930dc180AaD00fc9302278d502Ff8b52bB0a0F79`. `MAINNET` still uses the zero-address fail-closed sentinel until its canonical `StorageRelay` deployment is published. # Submit pipeline The V2 admission path is message-centric and fail-closed. ## Admission steps 1. verify the Ed25519 message envelope 2. structurally validate `MessageData` and the selected body 3. apply the correct authorization rule for the message family 4. run stateful validation 5. admit to the mempool if all checks pass ## Authorization split Ordinary user messages require a delegated Ed25519 key registered to `owner_address` with sufficient scope. Special cases: * `SIGNER_ADD` and `SIGNER_REMOVE` are custody-backed * `STORAGE_CLAIM` is settlement-verified and does not require delegated-key authorization on first successful application All three message families still require Ed25519 envelopes for transport integrity. * `SIGNER_ADD` and `SIGNER_REMOVE` derive authority from custody proofs. * `STORAGE_CLAIM` still requires finalized settlement verification, but first successful application does not require a delegated key registered to `owner_address`. Duplicate `STORAGE_CLAIM` replay stays marker-idempotent after settlement verification, so an already-consumed claim remains a no-op regardless of current delegated-key state. ## Dual authorization `MERGE_REQUEST_REMOVE` has a dual authorization path — the only message type with this pattern: 1. 
**Requester withdrawal** — the original `owner_address` may close their own merge request without any target-project membership 2. **Maintainer closure** — the target project owner or any collaborator with `WRITE+` permission may close any merge request targeting that project Both paths require a registered key with `SIGNING` scope. ## Execution placement Account-level messages execute first in serial order. Project-level messages execute second, grouped by `project_id` in byte-lexicographic order. There is no relay-era account pre-pass for injected system messages because V2 has no injected system messages. # WebAuthn Makechain uses WebAuthn passkeys as a primary authentication method for account ownership, key management, and session re-authentication. Passkeys provide phishing-resistant, biometric-backed credentials that replace seed phrases and browser extension wallets. ## Standard vs Makechain WebAuthn Standard WebAuthn follows a simple challenge-response pattern. The server generates a random challenge, the client signs it with a passkey, and the server verifies the P-256 signature. The challenge carries no semantic meaning. Makechain's custody flow repurposes the WebAuthn challenge. Instead of a random nonce, the challenge is an EIP-712 typed data hash. This lets a passkey authorize specific protocol operations like `SIGNER_ADD` and `SIGNER_REMOVE` in a single gesture. | Aspect | Standard WebAuthn | Makechain custody | |--------|-------------------|-------------------| | Challenge | Random server nonce | EIP-712 signing hash | | Purpose | Prove passkey ownership | Authorize key management | | Verification | Server-side P-256 check | Onchain + consensus P-256 recovery | | Wire format | Standard CBOR attestation | Custom envelope (authData + clientJSON + sig + v) | | State change | None | Adds or removes a signing key | Both flows use the same passkey credential and the same P-256 curve. The difference is what the passkey signs and who verifies it. 
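The challenge encoding in the custody flow — a 32-byte EIP-712 hash carried as a base64url string in `clientDataJSON` — can be sketched as follows (Node's `Buffer` is used for illustration):

```typescript
// Encode an EIP-712 hash as a base64url WebAuthn challenge string:
// standard base64 first, then map to the URL-safe alphabet and
// strip the trailing padding.
function toChallenge(eip712Hash: Uint8Array): string {
  return Buffer.from(eip712Hash)
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}
```

A 32-byte hash always encodes to 43 base64url characters once padding is removed.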
## Passkey ownership EVM wallets on Tempo support WebAuthn passkeys natively. Tempo validates P-256 signatures at the chain level via the `0x76` transaction type, so no onchain secp256r1 verifier contract is needed. When you use a passkey wallet in Makechain V2: 1. Your passkey's P-256 public key maps to an EVM address 2. This address is the canonical Makechain `owner_address` 3. All custody signatures (`SIGNER_ADD`, `SIGNER_REMOVE`) use EIP-712 typed data signed by this passkey `owner_address` is the sole canonical account identity in V2. There is no `KEY_ADD`, onchain account allocation, ownership transfer, or recovery flow in the protocol. ## Custody signatures Custody signatures authorize key management without an onchain transaction. The account owner signs an EIP-712 typed data hash with their passkey, and validators verify the signature during consensus. ### EIP-712 domain and types The domain is shared across all custody and verification signatures: ```json { "name": "Makechain", "version": "1", "chainId": "host_chain_id(network)" } ``` Three typed payloads cover the live custody and app-attribution operations: * **SignerAdd** — `SignerAdd(address owner, bytes32 key, uint32 scope, uint64 validAfter, uint64 validBefore, uint64 nonce, bytes32[] allowedProjects, uint32 network)` * **SignerRemove** — `SignerRemove(address owner, bytes32 key, uint64 validAfter, uint64 validBefore, uint64 nonce, uint32 network)` * **SignerRequest** — `SignerRequest(address requestOwner, bytes32 key, uint64 validAfter, uint64 validBefore, uint32 network)` ### Wire format When `custody_key_type=2` (WebAuthn P-256), the custody signature field carries a custom envelope instead of a raw 65-byte ECDSA signature. The EIP-712 hash is encoded as a base64url string in the `clientDataJSON` challenge field. 
``` ┌──────────────────────────────────────────────────┐ │ authDataLen │ authenticatorData │ │ (2 bytes LE) │ (37+ bytes) │ ├──────────────┼────────────────────────────────────┤ │ jsonLen │ clientDataJSON │ │ (2 bytes LE) │ (variable) │ ├──────────────┴────────────────────────────────────┤ │ signature (64 bytes) │ recovery bit (1 byte) │ └──────────────────────────────────────────────────┘ ``` The minimum envelope size is 107 bytes. Signatures longer than 65 bytes trigger the WebAuthn verification path. ### Verification steps Validators verify WebAuthn custody signatures by: 1. Parsing the envelope to extract `authenticatorData`, `clientDataJSON`, the P-256 signature, and the recovery bit 2. Checking authenticator flags — User Present (UP) and User Verified (UV) must both be set 3. Validating `clientDataJSON` — the `type` field must be `webauthn.get` and the `challenge` field must match the expected EIP-712 hash (base64url-encoded) 4. Computing the signed data — `SHA-256(authenticatorData || SHA-256(clientDataJSON))` 5. Recovering the P-256 public key from the signature and signed data 6. Deriving the EVM address from the recovered public key and comparing against the expected `owner_address` The P-256 signature uses low-S normalization to ensure deterministic recovery. ## Session authentication Session authentication uses standard WebAuthn ceremonies for fast re-authentication. Unlike custody signing, the challenge is a random server nonce with no EIP-712 semantics and no protocol state change. This flow is useful when you return to the docs site, playground, or workbench and need to prove you still control your passkey without reconnecting a full wallet session. The webauthx library (from wevm) provides clean server-client ceremony orchestration. 
```typescript import { Authentication } from "webauthx/server"; import { Authentication as ClientAuth } from "webauthx/client"; // Server: generate challenge const options = Authentication.getOptions({ rpId: "makechain.net", allowCredentials: [{ id: credentialId }], }); // Client: sign with passkey const response = await ClientAuth.sign(options); // Server: verify signature const result = Authentication.verify(response, { publicKey: storedPublicKey, challenge: options.challenge, rpId: "makechain.net", origin: "https://docs.makechain.net", }); ``` Session authentication and custody signing share the same passkey credential. The distinction is in what gets signed and where verification happens: | Aspect | Session authentication | Custody signing | |--------|----------------------|-----------------| | Challenge | Random nonce | EIP-712 hash | | Verifier | Application server | Consensus validators | | State change | None (session only) | `SIGNER_ADD` or `SIGNER_REMOVE` | | Library | webauthx | Custom envelope encoder | ### Try it Register a passkey credential and authenticate with it, then compare the standard WebAuthn flow to Makechain's custody signing model. ## Verification claims External address verification also supports WebAuthn P-256 passkeys. When you link an Ethereum address to your Makechain account via `VERIFICATION_ADD`, the claim signature can be a WebAuthn envelope. The EIP-712 type for verification claims is: * **VerificationClaim** — `VerificationClaim(address owner, address ethAddress, uint256 chainId, uint32 verificationType, string network)` For `VERIFICATION_ADD`, clients must set `claim_key_type = 2` when the claim signature is a WebAuthn envelope. That envelope uses the same wire format and validation steps as custody signatures. Standard ECDSA claims use the default key type and a 65-byte signature. 
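The custody envelope wire format described earlier — `[authDataLen:2 LE][authenticatorData][jsonLen:2 LE][clientDataJSON][signature:64][recovery:1]` — can be parsed with a sketch like this. The field layout comes from the docs; the function name and error messages are illustrative:

```typescript
// Illustrative parser for the WebAuthn custody envelope. Layout per the
// wire-format section: two little-endian length prefixes, then a fixed
// 64-byte signature and a 1-byte recovery bit. 107 bytes is the
// documented minimum size.
function parseEnvelope(buf: Uint8Array) {
  if (buf.length < 107) throw new Error("envelope below 107-byte minimum");
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  let off = 0;
  const authLen = view.getUint16(off, true); off += 2;
  const authenticatorData = buf.subarray(off, off + authLen); off += authLen;
  const jsonLen = view.getUint16(off, true); off += 2;
  const clientDataJSON = buf.subarray(off, off + jsonLen); off += jsonLen;
  const signature = buf.subarray(off, off + 64); off += 64;
  const recoveryBit = buf[off];
  return { authenticatorData, clientDataJSON, signature, recoveryBit };
}
```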
This means a passkey that controls an `owner_address` can also prove ownership of its associated Ethereum address without a separate wallet extension or private key export. :::tip[Try it] The [register an account](/guides/register-account) guide walks through using a passkey wallet with address-native Makechain. The `SIGNER_ADD` step demonstrates WebAuthn custody signing in action. ::: # Use cases Real-world applications and patterns for building on Makechain. These examples demonstrate how the protocol's primitives — content-addressed projects, cryptographic identity, and consensus-ordered state — combine to solve problems across AI, crypto, developer tooling, and more.