Hyperbridge ISMP — state vs log, coprocessor GET flow, and relayer queue


This post supplements Hyperbridge ISMP — state_root, overlay_root, and mmr_root with a consolidated “root map + workflow” view, new framing (trie vs MMR as state vs log, not a Substrate-vs-EVM split), and notes from reading pallet_state_coprocessor and Tesseract EVM messaging.

Repo context: polytope-labs/hyperbridge.


TL;DR (one paragraph)

Hyperbridge keeps two “views” of the same ISMP object, keyed by the same commitment hash:

  1. an entry in the ISMP child trie (operational metadata), and
  2. a leaf in the append-only message MMR (log membership).

The commitment hash of a request/response is therefore not stored twice as two unrelated facts: it is the key in the trie and a member of the MMR log, but the stored payloads differ (trie = operational metadata; MMR = message-log membership).

Role split (keep this mental model): trie proofs answer “does this key exist in state, and with what app metadata?”; MMR proofs answer “is this message in the ordered log at this place?” — that is state vs log, not a clean “Substrate uses trie, Ethereum uses MMR” rule.

On the EVM side, Solidity verifies message inclusion against StateCommitment.overlayRoot, which—for Hyperbridge as coprocessor—is the message MMR root (not the relay-chain BEEFY MMR).
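The dual-index idea can be sketched in Rust, with a toy hash and ordinary containers standing in for the real child trie and MMR (the metadata string is a hypothetical placeholder, not the pallet's actual payload):

```rust
use std::collections::BTreeMap;

// Toy FNV-1a hash standing in for the real commitment hash (assumption:
// any collision-resistant hash works for the illustration).
fn commitment(bytes: &[u8]) -> u64 {
    bytes
        .iter()
        .fold(0xcbf29ce484222325u64, |h, b| (h ^ *b as u64).wrapping_mul(0x100000001b3))
}

fn main() {
    let request = b"source|dest|nonce|body";
    let key = commitment(request);

    // View 1: child-trie entry, keyed by the commitment (operational metadata).
    let mut trie: BTreeMap<u64, &str> = BTreeMap::new();
    trie.insert(key, "operational metadata (placeholder)");

    // View 2: MMR log, the same commitment appended as a leaf.
    let mut mmr_leaves: Vec<u64> = Vec::new();
    mmr_leaves.push(key);

    // Same hash, two indexes: presence-in-state vs position-in-log.
    assert!(trie.contains_key(&key));
    assert_eq!(mmr_leaves.iter().position(|l| *l == key), Some(0));
    println!("commitment {key:#x} indexed in trie and at log position 0");
}
```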


ISMP child trie vs message MMR (is this duplication?)

Short answer: yes, intentionally redundant indexing — not “two copies of the same bytes in two merkle trees for no reason.”

What the MMR leaf “contains” (conceptual): it corresponds to a full Request / Response object (or its hash commitment), which already embeds source/dest/nonce/... as part of the message encoding. The proof path is “this hash is a leaf in the MMR at this index/position,” not a separate second preimage.
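The “at this index/position” part is concrete: in the standard MMR numbering (used by common merkle-mountain-range implementations; Hyperbridge's exact indexing may differ in detail), the i-th leaf sits at node position 2·i − popcount(i), because every pair of subtrees inserts parent nodes between the leaves:

```rust
// Position of the i-th leaf (0-based) among all MMR nodes, using the
// standard identity pos = 2*i - popcount(i). An inclusion proof pins a
// hash to such a position; it never needs a second preimage of the message.
fn leaf_index_to_pos(i: u64) -> u64 {
    2 * i - i.count_ones() as u64
}

fn main() {
    // Leaves interleave with their parents: leaves 0,1 then their parent
    // at position 2, and so on.
    for (i, pos) in [(0, 0), (1, 1), (2, 3), (3, 4), (4, 7), (5, 8)] {
        assert_eq!(leaf_index_to_pos(i), pos);
    }
    println!("leaf positions check out");
}
```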


Trie vs MMR: it’s state vs log, not “Substrate vs Ethereum”

The useful split is:

  1. trie = state: a trie proof answers “does this key exist in (child-trie) state, and with what app metadata?”
  2. MMR = log: an MMR proof answers “is this message a leaf of the ordered log, at this position?”

Not a hard platform rule: Substrate consumers can verify MMR message proofs, and EVM contracts can verify trie state proofs; which one you use depends on what you are proving, not on which platform you are on.

So: trie = state/index layer; MMR = log/history layer — both can appear in both worlds, depending on what you’re proving to.


The three roots you will see (and what they actually mean)

| Root name you see | What it commits to | Where it lives |
| --- | --- | --- |
| Substrate Header::state_root | full chain state (all pallets) | block header |
| ISMP child trie root (child_trie_root) | ISMP child trie subtree (commitments/receipts metadata) | header digest (ConsensusDigest) |
| ISMP message MMR root (mmr_root) | append-only message log of request/response leaves | header digest (ConsensusDigest) |

How they are packed into StateCommitment

StateCommitment only has state_root + optional overlay_root, so the parachain client packs digest fields into those slots. There are two regimes:

Ordinary parachain (not coprocessor): state_root = the parachain header’s state_root; overlay_root = the ISMP child trie root, so child-trie proofs are checked against the overlay slot.

Hyperbridge as coprocessor: state_root = the header’s state_root; overlay_root = the ISMP message MMR root, so MMR proofs are checked against the overlay slot.

So on Hyperbridge, overlay_root is the message MMR root.
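The two packing regimes can be sketched as constructors over the StateCommitment shape (field names follow the ISMP type; the constructor functions are illustrative, not the pallet's actual API):

```rust
// Sketch of the two packing regimes, assuming the ISMP StateCommitment
// shape (state_root + optional overlay_root + timestamp).
#[derive(Debug, PartialEq)]
struct StateCommitment {
    timestamp: u64,
    overlay_root: Option<[u8; 32]>,
    state_root: [u8; 32],
}

// Ordinary parachain: overlay_root carries the ISMP child-trie root.
fn ordinary(ts: u64, state_root: [u8; 32], child_trie_root: [u8; 32]) -> StateCommitment {
    StateCommitment { timestamp: ts, overlay_root: Some(child_trie_root), state_root }
}

// Hyperbridge as coprocessor: overlay_root carries the message MMR root.
fn coprocessor(ts: u64, state_root: [u8; 32], mmr_root: [u8; 32]) -> StateCommitment {
    StateCommitment { timestamp: ts, overlay_root: Some(mmr_root), state_root }
}

fn main() {
    let c = coprocessor(1, [0u8; 32], [7u8; 32]);
    // On Hyperbridge, the MMR root rides in the overlay slot.
    assert_eq!(c.overlay_root, Some([7u8; 32]));
    println!("{c:?}");
}
```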


What overlayRoot means on EVM (the exact question)

In HandlerV1.handleGetResponses:

bytes32 root = host.stateMachineCommitment(message.proof.height).overlayRoot;
bool valid = MerkleMountainRange.VerifyProof(root, message.proof.multiproof, leaves, message.proof.leafCount);

overlayRoot here is the Hyperbridge ISMP message overlay MMR root for the referenced Hyperbridge height.

It is not the Polkadot relay-chain/BEEFY MMR root.

Relationship:

  1. relay-chain finality proof ⇒ EVM consensus client accepts a Hyperbridge StateCommitment
  2. that StateCommitment.overlayRoot ⇒ used by the handler to verify message leaves (requests/responses) via MMR

Hyperbridge coprocessor GET workflow (end-to-end)

This is the flow we confirmed while reading pallet_state_coprocessor:

1) Source chain initiates a GetRequest

The request exists on the source chain and is committed into the source’s ISMP state (so it can be proven later).

2) Relayer (Tesseract) observes GETs and assembles proofs

Tesseract groups GETs and fetches:

  1. a proof that the GetRequests are committed on the source chain, and
  2. a state proof of the requested storage keys on the counterparty chain at the requested height.

Then it submits one unsigned message to Hyperbridge: a GetRequestsWithProof bundling the requests plus both proofs.

3) Hyperbridge verifies and certifies the response locally

pallet_state_coprocessor::handle_unsigned(GetRequestsWithProof) verifies both proofs, constructs GetResponse { get, values }, then calls dispatch_get_response.

What dispatch_get_response does (important):

It does not “send a response to the source chain” by itself. It certifies/records on Hyperbridge; consumers can then prove or read it.
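The control flow above can be sketched as follows. Proof verification is stubbed out with placeholder inputs, and the types and commitment scheme are toy versions, not the pallet's real signatures:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct GetRequest { keys: Vec<Vec<u8>>, height: u64 }

#[derive(Clone, Debug)]
struct GetResponse { get: GetRequest, values: HashMap<Vec<u8>, Option<Vec<u8>>> }

struct Coprocessor {
    // Stand-in for the storage written by dispatch_get_response.
    certified: HashMap<u64, GetResponse>,
}

impl Coprocessor {
    fn handle_unsigned(
        &mut self,
        get: GetRequest,
        request_proof_ok: bool, // placeholder: membership proof from the source chain
        state_proof: Option<HashMap<Vec<u8>, Option<Vec<u8>>>>, // placeholder: verified key/values
    ) -> Result<u64, &'static str> {
        if !request_proof_ok { return Err("bad request proof"); }
        let values = state_proof.ok_or("bad state proof")?;
        let response = GetResponse { get, values };
        Ok(self.dispatch_get_response(response))
    }

    // Certifies/records locally; does NOT push anything to the source chain.
    fn dispatch_get_response(&mut self, response: GetResponse) -> u64 {
        let commitment = self.certified.len() as u64; // toy commitment
        self.certified.insert(commitment, response);
        commitment
    }
}

fn main() {
    let mut cp = Coprocessor { certified: HashMap::new() };
    let get = GetRequest { keys: vec![b"k".to_vec()], height: 42 };
    let mut values = HashMap::new();
    values.insert(b"k".to_vec(), Some(b"v".to_vec()));
    let c = cp.handle_unsigned(get, true, Some(values)).unwrap();
    assert!(cp.certified.contains_key(&c));
    println!("certified response under commitment {c}");
}
```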


Consuming Hyperbridge-certified GET responses

You usually consume the certification via either:

  1. Substrate runtime API + node RPC (fetch full response + proof material), or
  2. EVM Host/Handler delivery (MMR proof checked on-chain, then app callback invoked).

A) Substrate consumption (runtime API + node RPC)

From a Substrate client you can:

  1. call the ISMP runtime API with the response commitment to fetch the full GetResponse, and
  2. fetch proof material over node RPC if you need to re-verify it elsewhere.

The runtime loads the response by using ResponseCommitments → offchain leaf position → fetch leaf (Leaf::Response(...)).
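That lookup chain (ResponseCommitments → offchain leaf position → leaf) can be sketched with maps standing in for on-chain and offchain storage; the storage layout here is a simplification:

```rust
use std::collections::HashMap;

// Toy leaf type mirroring Leaf::Request / Leaf::Response from the text.
#[derive(Debug, PartialEq, Clone)]
enum Leaf { Request(Vec<u8>), Response(Vec<u8>) }

struct Node {
    response_commitments: HashMap<[u8; 4], u64>, // commitment -> leaf position
    offchain_leaves: HashMap<u64, Leaf>,         // position -> full leaf
}

impl Node {
    fn response_by_commitment(&self, commitment: [u8; 4]) -> Option<Vec<u8>> {
        let pos = *self.response_commitments.get(&commitment)?;
        match self.offchain_leaves.get(&pos)? {
            Leaf::Response(bytes) => Some(bytes.clone()),
            Leaf::Request(_) => None, // wrong leaf kind for this lookup
        }
    }
}

fn main() {
    let mut node = Node {
        response_commitments: HashMap::new(),
        offchain_leaves: HashMap::new(),
    };
    node.response_commitments.insert(*b"cmt0", 9);
    node.offchain_leaves.insert(9, Leaf::Response(b"values...".to_vec()));
    assert_eq!(node.response_by_commitment(*b"cmt0"), Some(b"values...".to_vec()));
    println!("fetched Leaf::Response via commitment");
}
```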

B) EVM consumption (EvmHost + Handler)

On EVM, the verifier is typically HandlerV1 (set as HostParams.handler). Delivery looks like:

  1. relayer submits a GetResponseMessage to HandlerV1.handleGetResponses(host, message)
  2. handler checks challenge period, request existence, replay protection
  3. handler verifies MMR multiproof against:
    • host.stateMachineCommitment(message.proof.height).overlayRoot
    • which for Hyperbridge coprocessor is the Hyperbridge message MMR root
  4. if valid, handler calls host.dispatchIncoming(GetResponse, relayer)
  5. EvmHost.dispatchIncoming(GetResponse, relayer) calls the app’s:
    • IApp.onGetResponse(IncomingGetResponse(response, relayer))
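The five steps above are, structurally, a guard chain. A skeleton in Rust (the real code is Solidity; proof verification is stubbed, and the names here describe the control flow rather than HandlerV1's exact API):

```rust
use std::collections::{HashMap, HashSet};

struct Host {
    now: u64,
    challenge_period: u64,
    commitments: HashMap<u64, (u64, [u8; 32])>, // height -> (accepted_at, overlay_root)
    receipts: HashSet<[u8; 32]>,                // replay protection
    delivered: Vec<[u8; 32]>,                   // stand-in for IApp.onGetResponse calls
}

fn verify_mmr(_root: [u8; 32], _proof: &[u8]) -> bool { true } // stub

fn handle_get_response(
    host: &mut Host,
    height: u64,
    proof: &[u8],
    commitment: [u8; 32],
) -> Result<(), &'static str> {
    let (accepted_at, overlay_root) = *host.commitments.get(&height).ok_or("unknown height")?;
    if host.now < accepted_at + host.challenge_period { return Err("challenge period"); }
    if host.receipts.contains(&commitment) { return Err("already delivered"); }
    if !verify_mmr(overlay_root, proof) { return Err("bad MMR proof"); }
    host.receipts.insert(commitment); // dispatchIncoming: record receipt...
    host.delivered.push(commitment);  // ...then invoke the app callback
    Ok(())
}

fn main() {
    let mut host = Host {
        now: 100,
        challenge_period: 10,
        commitments: HashMap::from([(7, (50, [1u8; 32]))]),
        receipts: HashSet::new(),
        delivered: vec![],
    };
    assert!(handle_get_response(&mut host, 7, b"proof", [9u8; 32]).is_ok());
    assert_eq!(handle_get_response(&mut host, 7, b"proof", [9u8; 32]), Err("already delivered"));
    println!("delivered once, replay rejected");
}
```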

Tesseract relayer: what submit() and the queue pipeline really do (EVM)

In tesseract/messaging/evm, submit(messages) is just: push the (already proof-carrying) messages onto the pipeline queue and await the handler’s TxResult.

The queue is created with start_pipeline, and the handler is the closure that does the actual on-chain work.

That handler:

  1. converts messages into EVM contract calls (calldata) for the Handler contract
  2. signs/sends transactions
  3. extracts *Handled events from receipts and returns TxResult

Crucially: proof bytes are already inside the Message objects by the time submit() is called; the queue just turns those messages into actual on-chain submissions.
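The submit() → queue → handler shape can be sketched with a channel-backed pipeline. Names like start_pipeline and TxResult follow the text; the channel machinery and one-event-per-message stub are assumptions for illustration:

```rust
use std::sync::mpsc;
use std::thread;

#[derive(Debug, PartialEq)]
struct TxResult { events: usize }

// Spawns the background handler and returns the queue's sending end.
fn start_pipeline() -> (
    mpsc::Sender<(Vec<String>, mpsc::Sender<TxResult>)>,
    thread::JoinHandle<()>,
) {
    let (tx, rx) = mpsc::channel::<(Vec<String>, mpsc::Sender<TxResult>)>();
    let handle = thread::spawn(move || {
        for (messages, reply) in rx {
            // 1) encode calldata, 2) sign/send tx, 3) extract *Handled events —
            // all stubbed here as one event per message.
            reply.send(TxResult { events: messages.len() }).ok();
        }
    });
    (tx, handle)
}

// submit() only enqueues fully-formed messages and awaits the result;
// the proofs are already inside `messages` at this point.
fn submit(
    queue: &mpsc::Sender<(Vec<String>, mpsc::Sender<TxResult>)>,
    messages: Vec<String>,
) -> TxResult {
    let (reply_tx, reply_rx) = mpsc::channel();
    queue.send((messages, reply_tx)).expect("pipeline alive");
    reply_rx.recv().expect("handler replied")
}

fn main() {
    let (queue, handle) = start_pipeline();
    let res = submit(&queue, vec!["GetResponseMessage{..}".into()]);
    assert_eq!(res, TxResult { events: 1 });
    drop(queue); // close the pipeline so the handler thread exits
    handle.join().unwrap();
}
```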