# Universal Web3 Grant Writing Guide

> Cross-ecosystem methodology based on 36+ accepted proposals from W3F (Polkadot), NEAR Foundation, Aptos Ecosystem, and other Web3 grant programs.

## The 7 Universal Principles of Successful Web3 Grants

### Principle 1: Specificity Beats Generality

Every reviewer asks: "Why this ecosystem, why this team, why now?"

❌ Bad: "We will build a DeFi protocol on blockchain."

✅ Good: "We will build a concentrated liquidity AMM on NEAR Aurora, leveraging Aurora's EVM compatibility to port our battle-tested Uniswap V3 fork, adding cross-chain swaps via Wormhole to capture $2B in currently stranded liquidity."

### Principle 2: Milestones Are Contracts

Treat milestones as binding deliverables, not vague phases. Each milestone must have:

- **Concrete deliverables** (code, docs, deployed contracts)
- **Verification criteria** (how reviewers confirm completion)
- **Timeline** (weeks, not "month 1")
- **Budget** (with hour/rate breakdown)

### Principle 3: Team Credibility Is Non-Negotiable

Grant committees have seen hundreds of "innovative teams." Prove yours with:

- GitHub profile with recent commits (link directly)
- Previously shipped projects (with user numbers if possible)
- Relevant technical background (not just "10 years in crypto")
- Any prior grant completions

### Principle 4: Ecosystem Value > Technical Complexity

The question is not "is this technically impressive?" but "does this make our ecosystem better?" Frame your project through the lens of:

- **Fills a gap**: what's missing that builders need?
- **Increases TVL/users**: by how much, with evidence?
- **Enables new use cases**: what becomes possible after your project?

### Principle 5: Budget Realism

Common mistakes:

- Underpriced to "look reasonable" → signals naïveté
- Overpriced without justification → immediate rejection
- Missing categories (testing, docs, security audit)

**Template approach:** list every person → hours per deliverable → hourly rate → total.
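As an illustration, the person → hours → rate → total approach might look like the sketch below for a hypothetical two-person team plus a writer. All names, deliverables, and hour counts are invented for illustration; only the hourly rates are taken from the mid-range benchmarks later in this guide.

```markdown
| Person | Deliverable | Hours | Rate | Subtotal |
|--------|-------------|-------|------|----------|
| Dev A (senior Rust) | M1: core contracts | 160 | $100/hr | $16,000 |
| Dev A (senior Rust) | M2: integration tests | 80 | $100/hr | $8,000 |
| Dev B (frontend) | M2: demo UI | 60 | $70/hr | $4,200 |
| Writer | M1–M2: docs & tutorial | 40 | $60/hr | $2,400 |
| **TOTAL** | | **340** | | **$30,600** |
```

A breakdown at this granularity lets reviewers check each line against market rates instead of arguing about one opaque lump sum.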
### Principle 6: Open Source Everything

Every major grant program requires open-source licensing. Don't fight it:

- Choose Apache 2.0 or MIT upfront
- Plan documentation as a first-class deliverable
- Show you understand developer community norms

### Principle 7: Show Your Work Before Applying

Most accepted proposals include:

- A working prototype or proof-of-concept
- An existing GitHub repo with real commits
- A testnet deployment or demo
- Community engagement (forum post, Discord discussions)

---

## Universal Proposal Structure

Every grant proposal—regardless of ecosystem—should follow this structure:

```markdown
# [Project Name] — [One-Line Value Prop]

## 1. Executive Summary (150 words)

What: What are you building?
Why: Why does the ecosystem need it?
How: What's your unique approach?
Who: Why is your team the right one?
Ask: How much are you requesting?

## 2. Problem Statement

- Quantify the problem with data
- Explain why existing solutions fail
- Describe the target user (developer? end user? both?)

## 3. Solution & Technical Architecture

- System diagram or architecture overview
- Key technical decisions with rationale
- How it integrates with the target ecosystem
- Security considerations

## 4. Team

- Full name, role, relevant background
- GitHub/portfolio links
- Past projects with impact metrics

## 5. Development Roadmap

[Milestone table per ecosystem conventions]

## 6. Budget

| Category | Hours | Rate | Total |
|----------|-------|------|-------|
| Smart contract dev | | | |
| Frontend | | | |
| Testing & audit | | | |
| Documentation | | | |
| Community/marketing | | | |
| **TOTAL** | | | |

## 7. Success Metrics

- 3-month KPIs
- 6-month KPIs
- How will you measure ecosystem impact?

## 8. Sustainability

- Revenue model (if any)
- Post-grant maintenance plan
- Team continuation plan

## 9. Additional Information

- Prior work / existing codebase
- Other funding sources
- Community letters of support
```

---

## Ecosystem-Specific Requirements

| Requirement | W3F/Polkadot | NEAR | Aptos | Gitcoin |
|-------------|--------------|------|-------|---------|
| License | Apache 2.0 or MIT | Open source (any) | Open source | Open source |
| Milestone format | Table with 0a/0b/0c | Phase-based | Flexible | Flexible |
| Tech requirements | Substrate/Rust preferred | NEAR SDK / Aurora | Move language | Any |
| Audit required | For DeFi | For DeFi >$50K | Recommended | No |
| Community | Substrate builders | NEAR community | Aptos ecosystem | Ethereum/multi |
| Typical max | $100K | $500K | $50K | Quadratic |

---

## Budget Benchmarks (2024-2025)

Based on accepted proposals:

| Role | Low | Mid | High |
|------|-----|-----|------|
| Senior Rust/Move dev | $80/hr | $100/hr | $150/hr |
| Senior Solidity dev | $70/hr | $90/hr | $130/hr |
| Frontend (React) | $50/hr | $70/hr | $100/hr |
| Security audit | $5K flat | $15K flat | $50K+ |
| Technical writer | $40/hr | $60/hr | $80/hr |
| Project management | $40/hr | $60/hr | $80/hr |

---

## Red Flags That Kill Applications

1. **No testnet demo** — if you can't show basic functionality, you're asking for faith
2. **Team has no GitHub history** — your "experienced team" needs proof
3. **Milestones are vague** — "complete development" is not a deliverable
4. **Asking for too much too early** — start small, build trust, apply again
5. **Not ecosystem-specific** — "works on any chain" means "optimized for none"
6. **Missing license** — non-negotiable, include it in Milestone 1
7. **No test plan** — code without tests will not be accepted
8. **Unrealistic timelines** — 6 months of work compressed into 1 month
9. **Requesting marketing funds only** — grants are for building, not shilling
10. **Copy-paste proposals** — committees talk to each other; they share notes

---

## Application Checklist

Before submitting, verify:

- [ ] Executive summary is ≤ 200 words
- [ ] Every team member has a linked GitHub profile with real activity
- [ ] Milestones have specific, verifiable deliverables
- [ ] Budget includes hours × rate × person breakdown
- [ ] License is specified (Apache 2.0 / MIT recommended)
- [ ] Testing strategy is included in every milestone
- [ ] Documentation is a named deliverable
- [ ] You've read 5+ accepted proposals from this program
- [ ] You've engaged with the ecosystem community (forum post, Discord)
- [ ] A working demo or prototype is linked

---

## Grant Programs Directory

| Program | Max Amount | Focus | Apply |
|---------|-----------|-------|-------|
| W3F Grants | $100K | Polkadot/Substrate infra | grants.web3.foundation |
| NEAR Grants | $500K | NEAR ecosystem | near.org/grants |
| Aptos Grants | $50K | Move/Aptos ecosystem | aptosfoundation.org |
| Gitcoin Grants | Variable (QF) | Ethereum/multi | grants.gitcoin.co |
| Immunefi Bounties | $1M+ | Security/bugs | immunefi.com |
| DoraHacks | Variable | Multi-chain hackathons | dorahacks.io |
| Uniswap Grants | $100K | DeFi/Uniswap | uniswapfoundation.org |
| Aave Grants | $30K | Aave ecosystem | aavegrants.org |
| Compound Grants | $100K | Compound protocol | compoundgrants.org |

---

*Synthesized from 36+ accepted grant proposals across W3F (24 proposals), NEAR (12 proposals), and Aptos ecosystems. Last updated: 2026-03-19.*

## Questbook Platform Statistics (March 2026)

Questbook is the dominant grant management platform in Web3.
Based on analysis of 200 real applications:

**Overall approval rate**: ~50% across all programs

### Approval Rates by Program

| Program | Approved | Rejected | Rate |
|---------|----------|----------|------|
| Developer Tooling on Arbitrum One and Stylus 3.0 | 25 | 40 | 38% |
| Arbitrum New Protocols and Ideas 3.0 | 13 | 33 | 28% |
| Arbitrum Education, Community Growth and Events 3.0 | 16 | 16 | 50% |
| Arbitrum Gaming 3.0 | 14 | 0 | 100% |
| Arbitrum - Orbit domain | 5 | 9 | 36% |
| Arbitrum Stylus Sprint | 10 | 0 | 100% |
| Final Grantees | 5 | 0 | 100% |
| Compound: Dapps and Ideas Domain | 3 | 0 | 100% |

### Key Statistical Findings

- Approved proposals fill ALL available fields (including optional ones)
- Milestone structure (3–5 milestones with KPIs) present in >85% of approvals
- Budget line-item breakdown required for any grant >$20K
- Team GitHub links present in >70% of approved proposals
- **Top rejection reason**: insufficient detail / vague description (~40% of rejections)
- **Second**: weak milestone structure (~35% of rejections)

For deeper analysis, see `grants/questbook-proposal-guide` and `grants/questbook-rejection-analysis`.

## Multi-Ecosystem Questbook Analysis (March 2026 Update)

Analysis expanded to eight grant programs: TON, Polygon (2 programs), Compound, Arbitrum Stylus Sprint, DA Round, and AI Agents (ai16z + Crossmint). Total: **1404 applications**.

**Overall approval rate across all programs**: ~15%

### Approval Rates by Program

| Program | Approved | Rejected | Approval Rate |
|---------|----------|----------|---------------|
| TON Grants | 103 | 300 | 26% |
| DA Round | 0 | 300 | 0% |
| AngelHack x Polygon | 10 | 104 | 9% |
| Polygon Direct Track | 0 | 179 | 0% |
| Compound CGP 2.0 | 53 | 73 | 42% |
| AI Agents Agnostic (ai16z) | 10 | 115 | 8% |
| Onchain AI Agents (Crossmint) | 8 | 0 | 100% |
| Arbitrum Stylus Sprint | 26 | 123 | 17% |

### Cross-Ecosystem Key Findings

1. **TON Grants** (~2000+ apps): Largest program by volume. Telegram Mini App integration is a strong positive signal. Approval rate varies by category.
2. **AI Agents**: Newest category with the highest technical bar. A demo/prototype is strongly recommended. Reviewers are practitioners.
3. **Compound CGP**: DAO-governed — frame your value for COMP holders. Technical depth in DeFi protocols is required.
4. **Polygon**: Two-track system (community vs direct). The user-onboarding story carries more weight here than in other ecosystems.
5. **Arbitrum**: Most detailed rejection feedback. Ecosystem-specific framing is non-negotiable.

### Universal Success Factors (from 1404 data points)

- Filled all available fields: >90% of approved proposals
- 3–5 milestones with specific deliverables: >80% of approvals
- Itemized budget: required for grants >$20K across all programs
- Team with verifiable prior work: >70% of approvals
- Top rejection reason: vague/insufficient detail (~40% of rejections)

For program-specific guides: `grants/ton-grant-guide`, `grants/polygon-grant-guide`, `grants/compound-grant-guide`, `grants/ai-agent-grant-guide`.
For rejection analysis: `grants/questbook-rejection-analysis`.
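Since a 3–5 milestone structure with verifiable deliverables is the strongest common factor across both data sets, here is a hedged sketch of a single milestone in the W3F-style 0a/0b/0c table format referenced in the ecosystem requirements above. The project, deliverables, duration, and amount are invented for illustration, not drawn from any accepted proposal.

```markdown
### Milestone 1 — Core Indexer (4 weeks, $12,000)

| Number | Deliverable | Specification |
|--------|-------------|---------------|
| 0a. | License | Apache 2.0 |
| 0b. | Documentation | Inline code docs plus a tutorial showing how to run the indexer against a local node |
| 0c. | Testing guide | Unit tests covering core logic and a guide describing how to run them |
| 1. | Indexer service | Service that ingests on-chain events and exposes them via a REST API |
| 2. | Demo deployment | Testnet deployment with a public endpoint reviewers can query |
```

Note how every row is something a reviewer can verify without trusting the team: a license file, a tutorial that either works or doesn't, tests that either pass or don't.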