grants/questbook-approved-patterns

Questbook Grant Success Patterns: What Winning Proposals Look Like

v1.0.0 · by Questbook + AgentRel Analysis · Updated 3/20/2026

Based on 288 approved grant applications across TON, Compound (CGP 2.0), ai16z AI Agents, Polygon, and Arbitrum ecosystems. Data collected March 2026 from the Questbook GraphQL API.

Overview

Questbook is the dominant grant management platform in web3. This skill documents the common patterns found in approved proposals across 5 major ecosystems, giving AI agents and human grant writers a practical guide to structuring winning applications.

Data coverage:

Ecosystem        Approved Applications Analyzed
Arbitrum         210
Compound         58
ai16z            10
Polygon          10
TON              0
ai16z/Polygon    0
TOTAL            288

Grant size range: up to $1,000,000 per proposal
Average grant size: ~$56,631
Average milestone count: 3.6


Pattern 1: Complete Field Coverage

Approved proposals fill every available field — including optional ones.

Fields present in approved applications:

  • projectdetails — present in 100% of approved proposals
  • projectname — present in 100% of approved proposals
  • teammembers — present in 100% of approved proposals
  • what innovation or value will your project bring to arbitrum? what previously un… — present in 27% of approved proposals
  • what is the current stage of your project — present in 27% of approved proposals
  • team members — present in 17% of approved proposals
  • tldr — present in 16% of approved proposals
  • website — present in 16% of approved proposals
  • please provide a detailed breakdown of the budget in term of utilizations, costs… — present in 16% of approved proposals
  • provide a list of the milestones, with the usd amount of the grant associated to… — present in 16% of approved proposals
  • are milestones clearly defined, time-bound, and measurable with quantitative met… — present in 16% of approved proposals
  • what is the estimated maximum time for the completion of the project — present in 16% of approved proposals

Key takeaway: Reviewers want signal, and empty fields signal a rushed application. The top approved proposals treat every field as an opportunity to demonstrate depth, even when the answer is brief.
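The coverage percentages above can be reproduced with a simple tally over the application records. A minimal sketch, assuming each application is a dict mapping field names to values (the sample field names are illustrative):

```python
from collections import Counter

def field_coverage(applications: list[dict]) -> dict[str, int]:
    """Percentage of applications in which each field is present and non-empty."""
    counts = Counter()
    for app in applications:
        for field, value in app.items():
            if value not in (None, ""):
                counts[field] += 1
    n = len(applications)
    return {field: round(100 * c / n) for field, c in counts.items()}

# Two toy applications; "website" and "tldr" are each filled in only once.
apps = [
    {"projectname": "A", "tldr": "Fast oracle", "website": ""},
    {"projectname": "B", "tldr": "", "website": "https://b.xyz"},
]
print(field_coverage(apps))  # {'projectname': 100, 'tldr': 50, 'website': 50}
```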


Pattern 2: TLDR / One-Line Summary First

Nearly all Arbitrum and ai16z approved proposals include a tldr field with a crisp one-sentence summary. Examples from real approved applications:

  • "Letting agents put their money where their intelligence is" — AI agent prediction markets
  • "An advanced GUI-based time-travelling debugger for Stylus" — developer tooling
  • "A Move language VM integration enabling formal verification in Arbitrum's Stylus" — language VM
  • "A decentralized portfolio rebalancing agent operating fully on-chain" — DeFi automation

Pattern: State what you're building + who benefits + key differentiator, in one line.


Pattern 3: Milestone Structure That Gets Approved

Average milestone count in approved proposals: 3.6

Approved milestone structures follow a consistent format:

  1. Each milestone has a clear title describing the deliverable
  2. Each milestone has a specific USD or token amount attached
  3. Milestones escalate in complexity (infrastructure → feature → production)
  4. Completion of each milestone unlocks its payment before work proceeds to the next

High-performing milestone structures (from real approved proposals):

Arbitrum Stylus Sprint ($250K, CodeTracer debugger):

  • Milestone 1: CodeTracer is open-sourced — $25,000
  • Milestone 2: Basic Stylus Debugging Support — $50,000
  • Milestone 3: VS Code Plugin — $75,000
  • Milestone 4: Transaction Tracing for Block Explorers — $50,000
  • Milestone 5: Production Readiness — $50,000

Arbitrum Stylus Sprint ($200K, 9Lives prediction market with AI agents):

  • Milestone 1: Prediction Market Resolved Using AI Agent — $60,000
  • Milestone 2: AI Agents Participating on 9 Lives — $70,000
  • Milestone 3: Growth of Platform and Feature Addition — $35,000
  • Milestone 4: Extended Growth and Feature Addition — $35,000

Anti-pattern: Milestones called "Phase 1 / Phase 2 / Phase 3" with no specifics. This is the #1 reason for rejection.
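The milestone rules above can be checked mechanically before submission. A minimal sketch that flags the "Phase 1 / Phase 2" anti-pattern and missing amounts; the title/amount field names are assumptions, not Questbook's schema:

```python
import re

def milestone_issues(milestones: list[dict]) -> list[str]:
    """Return a list of problems with a proposed milestone structure."""
    issues = []
    for i, m in enumerate(milestones, 1):
        title = m.get("title", "")
        # Anti-pattern: a generic "Phase N" title names no concrete deliverable.
        if re.fullmatch(r"\s*phase\s*\d+\s*", title, re.IGNORECASE):
            issues.append(f"milestone {i}: generic title {title!r} names no deliverable")
        if not m.get("amount"):
            issues.append(f"milestone {i}: no USD amount attached")
    if len(milestones) < 3:
        issues.append("fewer than 3 milestones (approved average is 3.6)")
    return issues

good = [
    {"title": "CodeTracer is open-sourced", "amount": 25_000},
    {"title": "Basic Stylus Debugging Support", "amount": 50_000},
    {"title": "Production Readiness", "amount": 50_000},
]
bad = [{"title": "Phase 1"}, {"title": "Phase 2"}]

print(milestone_issues(good))       # []
print(len(milestone_issues(bad)))   # 5
```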


Pattern 4: Technical Depth Over Marketing Language

Approved proposals explain how they work, not just what they claim to do. From real approved applications:

Good (from ai16z AI Agents approved proposal):

"AI agents operate completely on-chain: (1) AI agent autonomously makes a transaction on DEXs directly from the client's address. It monitors the address, calculates the portfolio balance, compares it with the target asset allocation and rebalancing threshold, autonomously makes decision and executes swap if needed. TezoroAgent smart contract: [address]. (2) Autonomously monitors liquidity pools (Uniswap) to receive up-to-date asset quotes..."

Anti-pattern: "We will build an innovative AI tool that leverages cutting-edge technology to solve key pain points."

The key difference: approved proposals name specific contracts, repos, frameworks, and protocols.


Pattern 5: Team Credibility with Verifiable Prior Work

Winning teams include:

  • Previous grant history (even from other ecosystems)
  • Specific deployed contract addresses or GitHub repos
  • Named team members with verifiable credentials
  • Existing traction or shipped product

Examples from approved proposals:

  • "Metacraft Labs has received multiple grants from the Ethereum Foundation, Gnosis, LIDO, RocketPool, Aztec and Diva Staking"
  • "We have been active builders on the Arbitrum ecosystem, previously having built Fluidity Money, a Defi primitive on top of Arbitrum"
  • "Compound (17.5K USD, received) - Integrated compound as a yield source for Fluidity Money. AAVE (20K USD, fully received)"

Anti-pattern: Pseudonym team members with no verifiable work history.


Pattern 6: Ecosystem-Specific Alignment

Approved proposals explicitly connect their project to the grant program's stated goals.

Arbitrum

  • References Stylus (WASM), Orbit chains, or Arbitrum-native protocols
  • Explains the "multiplier effect" (how the tool helps N other developers)
  • Mentions composability with existing Arbitrum protocols
  • Specifies target chain: Arbitrum One vs Nova vs Orbit

ai16z / AI Agents

  • Demonstrates on-chain AI agent interactions (not just AI + blockchain)
  • References the ElizaOS / ai16z agent framework when relevant
  • Shows how agents create autonomous economic activity
  • Quantifies agent-driven transactions or decisions

Compound (CGP 2.0)

  • Addresses multichain deployment of Compound III
  • Focuses on reducing friction for DeFi developers
  • Security tooling proposals reference audit methodologies
  • Cross-chain domain proposals show concrete bridge integrations

Polygon

  • Addresses TVL growth and institutional capital attraction
  • Shows compatibility with Polygon's deep liquidity
  • References Polygon PoS, zkEVM, or CDK specifically

TON

  • Ties to Telegram user base growth
  • Shows practical Mini App or Bot integration
  • References TON's low transaction costs for micro-transactions

Pattern 7: Budget Justification by Component

Approved budgets are never a single line item. The standard approved format:

Component: [specific deliverable]
Cost: [amount]
Justification: [X hours × $Y/hr] OR [fixed cost because Z]

Example from a real approved Arbitrum proposal ($450K):

"Engineering hours: 1 Tech Lead (full time; all milestones): 40% [additional breakdown by role and component]"

Typical approved ranges:

  • Developer tooling / scripts: $5K–$30K
  • SDK / library: $20K–$80K
  • Full dApp or protocol: $50K–$200K
  • Research + implementation: $25K–$100K
  • Stylus Sprint / RFP track: $50K–$450K
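The Component / Cost / Justification format lends itself to a quick calculation. A sketch assuming an hours × rate model; the component name and figures are hypothetical:

```python
def budget_line(component: str, hours: int, rate: int) -> str:
    """Render one budget component in the Component/Cost/Justification format."""
    cost = hours * rate
    return (f"Component: {component}\n"
            f"Cost: ${cost:,}\n"
            f"Justification: {hours} hours × ${rate}/hr")

print(budget_line("Stylus debugger core", 400, 125))
# Component: Stylus debugger core
# Cost: $50,000
# Justification: 400 hours × $125/hr
```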

Top Examples by Grant Size

Example 1: Pyth Oracle Implementation in Stylus (Arbitrum Stylus Sprint)

Grant Amount: $1,000,000
Summary: Douro Labs proposes to develop a high-performance Pyth oracle implementation in Stylus.
Milestones:

  • Deliver a native implementation of the oracle contracts in Stylus: $500,000
  • Deliver a Stylus SDK for Stylus developers to compose with Pyth data: $200,000
  • Benchmark the benefits of the implementation both in terms of gas efficiency and the ability to compose more expansive transactions (i.e. bundles): $200,000

Example 2: thirdweb Stylus integration (Arbitrum Stylus Sprint)

Grant Amount: $900,000
Summary: Integrate Stylus with thirdweb, a full-stack, open-source web3 dev platform with frontend, backend, and onchain tools.
Milestones:

  • Application Approval: $90,000
  • Initial Integration with base contracts: $360,000
  • Development of Key Use Case 1: $150,000

Example 3: Sylow (Arbitrum Stylus Sprint)

Grant Amount: $700,000
Summary: Sylow (ˈsyːlɔv): a comprehensive cross-target Rust library for elliptic curve cryptography.
Milestones:

  • Integration with the community: $175,000
  • no_std coverage, to allow compatibility with Reth and other compilation targets like WASM: $175,000
  • Development of a BLS threshold signature scheme for all curves Sylow supports: $175,000

Example 4: RedStone Oracles (Arbitrum Stylus Sprint)

Grant Amount: $500,000
Summary: RedStone is the fastest-growing oracle of 2024, with expertise in deploying the market’s most accurate price feeds.
Milestones:

  • Stylus product team interview: $50,000
  • Gathering feed requirements and needs assessments from ecosystem partners: $50,000
  • Investigating requirements and unique specifications for deploying assets and integrating with the chain: $200,000

Example 5: DeBid - Fairblock (Arbitrum Stylus Sprint)

Grant Amount: $500,000
Summary: Fairblock will build onchain sealed-bid auction infrastructure using Stylus, serving DeFi, RWA, and tokenization apps.
Milestones:

  • MVP: $150,000
  • Beta testing: $150,000
  • Audit: $150,000

Approved Projects: ai16z AI Agents Track

  • Reality Spiral: Major ElizaOS contributors; a multi-agent AI ecosystem building digital beings, not just instrumental agents ($250,000)
  • AI Agent Accelerator Program: Program to scale AI agent builders on ElizaOS & Polygon, offering mentorship, tech guidance & ecosystem support ($200,000)
  • PingPal: An AI-powered assistant that filters noise, prioritizes key updates, and keeps Web3 contributors focused ($120,000)
  • Looped Leverage Agent: A first-of-its-kind AI agent that fully autonomously executes looped leverage trading on Aave ($100,000)

Approved Projects: Compound CGP 2.0

  • Emergency Upgrade Rollbacks for the Timelock
  • Compound DAO Governor Upgrade: Ben DiFrancesco, Ed Mazurek, Alex Keating, John Feras, Gary Ghayrat, Marco Mariscal
  • Compound v3 Multi-chain User Portfolio Tracker: Cole, Spencer

Application Checklist (Based on Approved Pattern Analysis)

Required for approval:

  • TLDR / one-line summary captures value prop + differentiator
  • All fields filled (no blanks, especially "optional" ones)
  • Technical approach cites specific stack, contracts, frameworks
  • 3+ milestones (approved average: 3.6), each with a specific deliverable + USD amount
  • Budget broken down by component with hour/rate or fixed-cost rationale
  • Team section has verifiable GitHub/social links
  • Prior work linked (deployed contracts, shipped repos, or previous grants)
  • Ecosystem alignment section answers "why this chain specifically"
  • KPIs are measurable (on-chain metrics, not "improve ecosystem")
  • Timeline 2–6 months for first grant, <12 months total
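A few of the checklist items above can be verified mechanically before submission. An illustrative sketch covering a subset of the checks; the application field names (tldr, milestones, team_links) are assumptions, not Questbook's schema:

```python
# Each check maps a checklist item to a predicate over the application dict.
REQUIRED_CHECKS = {
    "tldr": lambda app: bool(app.get("tldr", "").strip()),
    "milestones": lambda app: len(app.get("milestones", [])) >= 3
        and all(m.get("amount") for m in app.get("milestones", [])),
    "team_links": lambda app: any("github.com" in link
                                  for link in app.get("team_links", [])),
}

def missing_items(app: dict) -> list[str]:
    """Names of checklist items the application fails (illustrative subset)."""
    return [name for name, check in REQUIRED_CHECKS.items() if not check(app)]

draft = {"tldr": "A Stylus debugger for N devs",
         "milestones": [{"amount": 25_000}] * 3}
print(missing_items(draft))  # ['team_links']
```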

Differentiators in top-approved proposals:

  • Previous grant history from ANY ecosystem (cross-ecosystem credibility accepted)
  • Existing deployed code or testnet proof-of-concept
  • Named comparable projects + explicit differentiation table
  • Sustainability plan: what happens after the grant period ends
  • Open-source commitment (especially in Arbitrum tooling programs)

Resources

Generated by AgentRel. Source: Questbook GraphQL API, 288 approved applications, March 2026.
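For readers who want to rebuild the dataset, the applications can be pulled from the Questbook GraphQL API. The sketch below is illustrative only: the endpoint URL, query shape, and field names (grantApplications, state, fields) are assumptions for demonstration, not the documented schema.

```python
import json

# Assumed endpoint -- replace with the real Questbook GraphQL URL.
QUESTBOOK_GRAPHQL_URL = "https://api.questbook.app/graphql"

# Hypothetical query shape: page through applications and read their state.
APPLICATIONS_QUERY = """
query ($first: Int!, $skip: Int!) {
  grantApplications(first: $first, skip: $skip) {
    id
    state
    fields { title value }
  }
}
"""

def build_payload(first: int = 100, skip: int = 0) -> str:
    """Serialize the GraphQL request body for one page of applications."""
    return json.dumps({
        "query": APPLICATIONS_QUERY,
        "variables": {"first": first, "skip": skip},
    })

def approved_only(response: dict) -> list[dict]:
    """Keep only applications whose state is 'approved'."""
    apps = response.get("data", {}).get("grantApplications", [])
    return [a for a in apps if a.get("state") == "approved"]

# Example against a mocked response (no network call):
sample = {"data": {"grantApplications": [
    {"id": "0x1", "state": "approved"},
    {"id": "0x2", "state": "rejected"},
]}}
print([a["id"] for a in approved_only(sample)])  # ['0x1']
```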