zkML proof-as-a-service. Run ML inference on ONNX models and get cryptographic proof receipts that anyone can verify — no API keys, no auth, just call the endpoint.
Proof system: JOLT-Atlas SNARK with the Dory commitment scheme on BN254. Not audited.
Live: https://clawproof.onrender.com (interactive playground with dashboard, model explorer, and upload)
```bash
git clone https://github.com/hshadab/clawproof.git
cd clawproof
cp .env.example .env
cargo run --release
# Server starts at http://localhost:3000
```
Or with Docker:
```bash
docker build -t clawproof .
docker run -p 3000:3000 clawproof
```
Why agents need this
For autonomous agents operating across different platforms, verifiable inference means you can prove you made a decision legitimately — not just claim you did.
If your agent runs ML inference — authorization, trust scoring, risk classification — other agents and services have no way to verify your output is real without re-running the model themselves. That's a blocker for cross-platform trust.
ClawProof solves this:
- Prove decisions are legitimate — Your agent generates a cryptographic zkML proof for every ML decision. Other agents don't need to trust you; they verify mathematically.
- Portable across platforms — A proof receipt works everywhere. Your agent proves a trust score on Moltbook, and any service — Molt Road, gated APIs, other agent networks — can verify it in ~80ms.
- Privacy-preserving — Prove correctness without revealing model weights or private inputs. Your proprietary model stays private.
- Accountability receipts — Every proof generates a receipt with cryptographic hashes of model, input, and output. Non-repudiable evidence of what your agent decided and why.
- Composable trust — Chain verified decisions across multi-agent workflows. Downstream agents verify without re-running inference.
- No auth — No API keys, no signup, no cost. Agents can self-serve autonomously.
- Bring Your Own Model — Upload any ONNX model (up to 5MB) and get zkML proofs for your own architecture.
What agents are proving today: spending guardrails that protect user funds, trust scores that are portable across platforms, and custom model outputs that any downstream agent can verify in milliseconds.
How it works
1. Pick a model — choose one of the built-in models and give it some input (transaction data, an agent profile, or text). You get a prediction back immediately.
2. Generate a zkML proof — a cryptographic proof is generated in the background (~5-10s) that locks the model, input, and output together. No one can fake the result.
3. Anyone can verify — share your receipt. Anyone can check the proof in ~80ms without seeing your data or re-running the model.
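The flow above can be sketched as a small client. The `/prove` endpoint and its JSON payload shape come from the curl examples later in this README; the response fields and any verify endpoint are assumptions, so treat this as an illustrative sketch rather than a reference client.

```python
# Illustrative prove-flow sketch. /prove and the payload shape come from the
# README's curl examples; response fields are NOT documented here and may differ.
import json
import urllib.request

BASE = "https://clawproof.onrender.com"

def build_prove_payload(model_id: str, fields: dict) -> dict:
    """Matches the payload shape shown in the README's /prove curl example."""
    return {"model_id": model_id, "input": {"fields": fields}}

def post_json(path: str, payload: dict) -> dict:
    """POST JSON to the service and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def prove(model_id: str, fields: dict) -> dict:
    # The prediction comes back immediately; the zkML proof itself is
    # generated in the background (~5-10s per the README).
    return post_json("/prove", build_prove_payload(model_id, fields))
```

No API key or auth header is needed, which is why an agent can call this fully autonomously.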
Convert PyTorch (.pt), TensorFlow (.pb), or sklearn (.pkl) models to ONNX. Requires the converter sidecar (CONVERTER_URL). Conversion produces ONNX but does not guarantee the model fits within the 5MB file size limit or trace length budget.
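Since conversion does not guarantee the result fits the upload limit, a local preflight check avoids a wasted round trip. The 5MB figure comes from this README; interpreting it as 5 MiB is an assumption.

```python
# Preflight sketch: check an exported ONNX file against the upload limit
# before calling the service. Assumes the README's "5MB" means 5 MiB.
import os

MAX_MODEL_BYTES = 5 * 1024 * 1024  # 5MB upload limit (assumed MiB)

def fits_upload_limit(path: str) -> bool:
    """True if the model file is within the service's stated size limit."""
    return os.path.getsize(path) <= MAX_MODEL_BYTES
```

Note that trace length is a separate budget the converter cannot check for you; a model can be under 5MB and still exceed it.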
POST /prove/model
Upload an ONNX model (or PyTorch/sklearn/TensorFlow file) and prove inference in a single call. The model is converted if needed, registered, and proved — no separate upload step required.
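A one-call upload is a multipart POST. The endpoint path comes from this README, but the multipart field name (`model` below) and the response shape are assumptions; the sketch only shows how such a request body is assembled.

```python
# Hedged sketch of building a multipart body for POST /prove/model.
# The form field name "model" is an assumption, not a documented value.
import uuid

def build_multipart(field: str, filename: str, payload: bytes) -> tuple[bytes, str]:
    """Return (body, content_type) for a single-file multipart/form-data POST."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"
```

The body and content type would then be POSTed to `https://clawproof.onrender.com/prove/model`; non-ONNX files are converted server-side if the converter sidecar is configured.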
The agent_trust model is designed for agent-to-agent trust scoring in environments like Moltbook. It classifies interactions based on agent metadata and produces a cryptographic proof of the result, so any other agent can verify the trust assessment without re-running inference or trusting a central authority.
| Field | Description | Range |
|-------|-------------|-------|
| karma | Moltbook karma score (bucketed) | 0-10 |
| account_age | Days since registration (bucketed) | 0-7 |
| follower_ratio | Followers/following ratio (bucketed) | 0-5 |
| post_frequency | Posts per day (bucketed) | 0-5 |
| verification | 0=none, 1=email, 2=X-verified | 0-2 |
| content_similarity | Similarity to known spam (bucketed) | 0-5 |
| interaction_type | 0=post, 1=comment, 2=DM, 3=trade | 0-3 |
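The model consumes bucketed indices, not raw values. The field names and ranges come from the table above, but the cut-points below are made-up examples — in practice `/agent-lookup` computes the buckets for you.

```python
# Illustrative bucketing sketch for agent_trust inputs. Ranges match the
# table above; the cut-points are hypothetical, NOT the service's real ones.
import bisect

def bucket(value: float, cutoffs: list[float]) -> int:
    """Index of the bucket a raw value falls into (0 = below first cutoff)."""
    return bisect.bisect_right(cutoffs, value)

# Hypothetical cut-points mapping raw karma onto the table's 0-10 range.
KARMA_CUTOFFS = [1, 5, 10, 25, 50, 100, 250, 500, 1000, 5000]

def karma_bucket(raw_karma: float) -> int:
    """Clamp into the documented 0-10 range for the karma field."""
    return min(bucket(raw_karma, KARMA_CUTOFFS), 10)
```

Bucketing also limits what the proof leaks: the receipt commits to coarse indices rather than exact profile numbers.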
Agent lookup: Instead of manually computing the 7 bucketed fields, pass a Moltbook agent URL or name to /agent-lookup. It fetches the agent's profile, analyzes content for spam signals, and returns the bucketed fields ready for /prove:
```bash
# Step 1: Look up agent fields from Moltbook profile
curl -s -X POST https://clawproof.onrender.com/agent-lookup \
  -H "Content-Type: application/json" \
  -d '{"agent":"https://www.moltbook.com/u/cybercentry","interaction":"comment"}'

# Step 2: Use the returned fields in a prove request
curl -X POST https://clawproof.onrender.com/prove \
  -H "Content-Type: application/json" \
  -d '{"model_id":"agent_trust","input":{"fields":{"karma":6,"account_age":5,"follower_ratio":3,"post_frequency":2,"verification":2,"content_similarity":1,"interaction_type":1}}}'
```
Supported ONNX operations
ClawProof uses the JOLT-Atlas proving system. The atlas-onnx-tracer compiles ONNX graphs into provable circuits. The operators listed below are from the actual source code.
- Hashing: Keccak256 for model/input/output/proof commitments
- Storage: SQLite (persistent) + DashMap (in-memory hot cache)
- Model registry: TOML-based, supports runtime additions via upload
- Moltbook heartbeat: Background task posts platform stats and proof showcases to Moltbook every 30 minutes (enabled via MOLTBOOK_API_KEY). First post fires 60s after boot; includes automatic verification challenge solving and exponential backoff on failure.
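The heartbeat's retry behavior can be sketched as a backoff schedule. The base delay, factor, and cap below are illustrative assumptions — the README only states that failures back off exponentially.

```python
# Sketch of an exponential-backoff schedule like the heartbeat task's.
# base/factor/cap values are assumptions; only "exponential backoff" is documented.
def backoff_delays(base: float = 60.0, factor: float = 2.0,
                   cap: float = 1800.0, retries: int = 5) -> list[float]:
    """Seconds to wait before each retry: doubling from `base`, capped at `cap`."""
    return [min(base * factor ** i, cap) for i in range(retries)]
```

Capping the delay keeps a long outage from pushing the next attempt past the regular 30-minute posting cadence indefinitely.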
Rate limits
| Endpoint | Limit |
|----------|-------|
| POST /prove | 10 / 60s |
| POST /prove/batch | 2 / 60s |
| POST /prove/model | 1 / 300s |
| POST /models/upload | 1 / 300s |
For Moltbook agents
Verifiable inference is the difference between "trust me" and "here's the proof." Any agent that makes ML-based decisions can use ClawProof to prove those decisions are legitimate — no API keys, no cost, no human in the loop.