# Web3 Grant Writing Guide
## Overview
The Web3 ecosystem offers numerous foundation grant opportunities, but most applications are rejected due to unclear structure or failure to address key evaluation criteria. This guide provides a universal writing framework, an analysis of evaluation criteria, and common rejection reasons, applicable to mainstream grant programs such as the Solana Foundation, Ethereum Ecosystem Support Program (ESP), Sui Foundation, and TON Foundation.
## Universal Grant Application Structure

A standard application follows a five-section structure:
### 1. Problem Statement
- Clearly describe the specific problem you're solving
- Quantify the problem's scale (DAU/TVL data, pain point examples)
- Why existing solutions are inadequate
### 2. Solution
- Your technical/product approach
- Differentiation from existing solutions
- Why your team can deliver
### 3. Impact
- Contribution to the ecosystem (developers, users, TVL)
- Quantifiable target metrics (KPIs)
- Timeline (typically 3-6 months)
### 4. Milestones
- Phased delivery with clear outputs per phase
- Tied to funding disbursement
- Example: M1 (4 weeks) = Technical architecture document + Alpha version
### 5. Budget
- Budget allocated by milestone
- Reasonable labor costs (market rate)
- Infrastructure/tooling expenses
## ⚠️ Common Rejection Reasons
### 1. Problem/Solution Mismatch

❌ Wrong: Describing a huge problem while the solution only addresses a small part of it.

✅ Right: The problem's scope precisely matches what the solution can deliver.
### 2. Lack of Ecosystem Relevance

❌ Wrong: "We're building a user-friendly DeFi platform."

✅ Right: "We're providing deep liquidity for long-tail tokens on Solana, solving slippage issues in existing DEXs (data: over the past 30 days, tokens with market cap <$100M average 3.2% slippage)."
### 3. Unverifiable Milestones

❌ Wrong: M1 = "Complete product development"

✅ Right: M1 = "Deploy to Testnet, pass security audit, integrate 3 partner protocols, MAU > 500"
### 4. Missing Team Background
Must include:
- Core members' GitHub links (demonstrating technical capability)
- Relevant project experience (especially contributions to the same ecosystem)
- Past grant delivery track record (if any)
### 5. Opaque Budget

❌ Wrong: Total budget $50,000 with no breakdown.

✅ Right:
- Development labor (2 engineers × 3 months): $36,000
- Audit fees: $8,000
- Server and tools: $2,000
- Community activities: $4,000
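Reviewers will check that the line items actually add up to the requested total. A minimal sanity-check sketch, using the example figures above:

```python
# Example budget breakdown (figures mirror the illustration above).
budget = {
    "Development labor (2 engineers x 3 months)": 36_000,
    "Audit fees": 8_000,
    "Server and tools": 2_000,
    "Community activities": 4_000,
}

total_requested = 50_000
itemized_total = sum(budget.values())

# The itemized lines must sum exactly to the requested total.
assert itemized_total == total_requested, (
    f"Line items sum to ${itemized_total:,}, "
    f"but ${total_requested:,} was requested"
)
print(f"Budget checks out: ${itemized_total:,}")  # → Budget checks out: $50,000
```

Running a check like this before submission catches the copy-paste arithmetic errors that make a budget look careless.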
## Evaluation Criteria Weighting (Universal)
| Dimension | Weight | Description |
|---|---|---|
| Technical Feasibility | 30% | Can the team deliver, is the architecture sound |
| Ecosystem Impact | 25% | Actual contribution to ecosystem developers/users |
| Team Capability | 20% | Past experience, GitHub activity |
| Milestone Clarity | 15% | Verifiable, reasonable timeline |
| Budget Reasonableness | 10% | Cost-effectiveness, focused fund allocation |
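You can use these weights to self-score a draft before submitting. A sketch with the table's weights and hypothetical 0-10 scores (the scores are illustrative, not from any real rubric):

```python
# Weights taken from the evaluation table above; they must sum to 1.0.
weights = {
    "technical_feasibility": 0.30,
    "ecosystem_impact": 0.25,
    "team_capability": 0.20,
    "milestone_clarity": 0.15,
    "budget_reasonableness": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Hypothetical self-assessment on a 0-10 scale.
scores = {
    "technical_feasibility": 8,
    "ecosystem_impact": 7,
    "team_capability": 9,
    "milestone_clarity": 6,
    "budget_reasonableness": 8,
}

weighted = sum(weights[k] * scores[k] for k in weights)
print(f"Weighted score: {weighted:.2f} / 10")  # → Weighted score: 7.65 / 10
```

A low score in a heavily weighted dimension (e.g. technical feasibility at 30%) drags the total far more than the same gap in budget reasonableness at 10%, so fix the weak high-weight sections first.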
## Milestone Template
### Milestone 1 (Weeks 1-4): Foundation
**Deliverables**:
- [ ] Technical design document (with architecture diagram)
- [ ] Core module code (open-sourced on GitHub)
- [ ] Unit test coverage > 80%
**Acceptance Criteria**: Code runs locally, complete documentation
**Budget**: $X,XXX
### Milestone 2 (Weeks 5-10): Beta Release
**Deliverables**:
- [ ] Deploy to Testnet
- [ ] User documentation + Demo video
- [ ] Beta users: ≥ 50
**Acceptance Criteria**: Publicly accessible, user feedback received
**Budget**: $X,XXX
## Pre-Submission Checklist

- [ ] Problem is data-backed
- [ ] Solution has technical details (not just vision)
- [ ] Each milestone has verifiable deliverables
- [ ] Team members have GitHub/LinkedIn links
- [ ] Budget has an itemized breakdown
- [ ] Explained why you are building on this specific chain
## Feedback

If this skill contains incorrect or outdated information, call:

`agentrel_feedback(skill="grants/web3-grant-writing", issue="<description>", fix="<optional>")`