
Questbook Multi-Ecosystem Rejection Analysis

v2.0.0 · by agentrel · Updated 3/20/2026

Deep dive into 1194 rejected applications across 8 grant programs. Source: Questbook GraphQL API, March 2026.

Overview

This analysis covers rejection patterns from 8 grant programs with a combined 1404 applications. It is the most comprehensive Questbook rejection analysis available, spanning TON, Polygon, Compound, Arbitrum, and AI agent ecosystems.

By the Numbers

| Metric | Value |
| --- | --- |
| Total Applications Analyzed | 1404 |
| Total Approved | 210 |
| Total Rejected | 1194 |
| Overall Rejection Rate | 85% |
| Rejection Messages Analyzed | 1027 |

Rejection Rates by Program

| Program | Rejected | Total | Rejection Rate |
| --- | --- | --- | --- |
| TON Grants | 300 | 403 | 74% |
| DA Round | 300 | 300 | 100% |
| AngelHack x Polygon | 104 | 114 | 91% |
| Polygon Direct Track | 179 | 179 | 100% |
| Compound CGP 2.0 | 73 | 126 | 58% |
| AI Agents Agnostic (ai16z) | 115 | 125 | 92% |
| Onchain AI Agents (Crossmint) | 0 | 8 | 0% |
| Arbitrum Stylus Sprint | 123 | 149 | 83% |
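
As a sanity check, the per-program figures above can be re-aggregated in a few lines. All numbers are copied directly from this report's table; nothing external is assumed:

```python
# Per-program (rejected, total) pairs, taken from the table above.
programs = {
    "TON Grants": (300, 403),
    "DA Round": (300, 300),
    "AngelHack x Polygon": (104, 114),
    "Polygon Direct Track": (179, 179),
    "Compound CGP 2.0": (73, 126),
    "AI Agents Agnostic (ai16z)": (115, 125),
    "Onchain AI Agents (Crossmint)": (0, 8),
    "Arbitrum Stylus Sprint": (123, 149),
}

rejected = sum(r for r, _ in programs.values())
total = sum(t for _, t in programs.values())
rate = round(100 * rejected / total)
print(rejected, total, rate)  # 1194 1404 85
```

The totals reproduce the headline figures: 1194 rejections out of 1404 applications, an 85% overall rejection rate.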

Top Rejection Categories (1027 detailed messages)

| Category | Count | % of Messages |
| --- | --- | --- |
| Team credibility issues | 572 | 56% |
| Budget unjustified | 466 | 45% |
| Sustainability unclear | 432 | 42% |
| Weak milestone structure | 116 | 11% |
| Technical concerns | 93 | 9% |
| Out of ecosystem scope | 81 | 8% |
| Vague / insufficient detail | 77 | 7% |
| Scope too broad | 63 | 6% |
| Duplicate / existing solutions | 60 | 6% |
| Differentiation missing | 41 | 4% |

Note: one rejection message can trigger multiple categories.
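
The note above implies multi-label tagging: a hypothetical keyword-based tagger (the keyword lists below are illustrative, not the classifier actually used for this report) shows why a single message can count toward several categories at once, and why the percentages sum to more than 100%:

```python
# Illustrative multi-label tagger. Keyword lists are made-up examples,
# not the report's actual classification method.
KEYWORDS = {
    "team_credibility": ["early stage", "untested", "execution capabilities"],
    "budget_unjustified": ["budget", "cost breakdown"],
    "out_of_scope": ["not currently within the scope", "doesn't align"],
}

def tag(message: str) -> list[str]:
    """Return every category whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, kws in KEYWORDS.items()
            if any(kw in text for kw in kws)]

msg = ("Unfortunately, DAO tooling is not currently within the scope of "
       "our grant strategy. Additionally, the budget is not justified.")
print(tag(msg))  # ['budget_unjustified', 'out_of_scope']
```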


Detailed Analysis by Category

Team credibility issues — 572 cases (56%)

Fix: GitHub profile with relevant commits. Name all team members. Link previously deployed projects. Mention any grants (even from other ecosystems). Solo founder? Scope down to match solo capacity.

Real rejection feedback samples:

"Hi Team,

Thank you for submitting your application and for your interest in building on TON.

Your product is great, but unfortunately, it doesn’t align with the current priorities of our grant program. At this stage, we’re focusing our resources on specific areas that address immediate strategic n..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

While we appreciate the concept and your interest in building within the TON ecosystem, the project appears to be at a very early stage. There ..."


Budget unjustified — 466 cases (45%)

Fix: Never write a single-line budget. Break into components. For each: what it is, hours × rate (or fixed cost with rationale), and what it produces.
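
A minimal sketch of the component-level budget format described above. Every component, hour count, rate, and deliverable below is a made-up placeholder:

```python
# Illustrative budget breakdown: component, hours, hourly rate, deliverable.
# All figures are hypothetical examples of the recommended structure.
line_items = [
    ("Smart contract development", 160, 90, "Audited core contracts"),
    ("Frontend integration",        80, 70, "Deployed dApp UI"),
    ("Documentation & tutorials",   40, 50, "Developer docs site"),
]

total = 0
for component, hours, rate, deliverable in line_items:
    cost = hours * rate
    total += cost
    print(f"{component}: {hours}h x ${rate}/h = ${cost} -> {deliverable}")
print(f"Total requested: ${total}")
```

The point is that each line answers all three reviewer questions at once: what it is, how the cost was derived, and what it produces.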

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

While we appreciate the concept and your interest in building within the TON ecosystem, the project appears to be at a very early stage. There ..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

Unfortunately, DAO tooling is not currently within the scope of our grant strategy. Additionally, all the milestones outlined in the proposal l..."


Sustainability unclear — 432 cases (42%)

Fix: What happens after the grant? Pick a sustainability model and detail it: freemium, protocol fees, DAO treasury funding, or the team continuing full-time.

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

While we appreciate the concept and your interest in building within the TON ecosystem, the project appears to be at a very early stage. There ..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

Unfortunately, DAO tooling is not currently within the scope of our grant strategy. Additionally, all the milestones outlined in the proposal l..."


Weak milestone structure — 116 cases (11%)

Fix: Each milestone = (1) specific deliverable, (2) completion criteria, (3) duration, (4) USD amount. Reviewers need to answer: "If we fund milestone 1, what exactly do we get, and how do we verify it?"
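
The four-part milestone format can be sketched as a small data structure; all deliverables, durations, and amounts below are hypothetical:

```python
# Sketch of the recommended milestone format: deliverable, completion
# criteria, duration, and USD amount. Values are invented examples.
from dataclasses import dataclass

@dataclass
class Milestone:
    deliverable: str           # what exactly is produced
    completion_criteria: str   # how a reviewer verifies it
    weeks: int                 # duration
    amount_usd: int            # payout tied to this milestone

milestones = [
    Milestone("Core contracts on testnet",
              "Public repo with passing test suite", 4, 8_000),
    Milestone("Mainnet deployment + UI",
              "Live app reachable at a public URL", 6, 12_000),
]

# The milestone amounts should sum to the total grant ask.
print(sum(m.amount_usd for m in milestones))  # 20000
```

A reviewer reading this can answer the gating question directly: fund milestone 1, get testnet contracts, verify via the public test suite.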

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

While we appreciate the concept and your interest in building within the TON ecosystem, the project appears to be at a very early stage. There ..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

Unfortunately, DAO tooling is not currently within the scope of our grant strategy. Additionally, all the milestones outlined in the proposal l..."


Technical concerns — 93 cases (9%)

Fix: Describe your architecture. Show you've thought through the hard parts. Include a security considerations section.

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

At the moment, we already have an APAC team working on similar translation efforts, and we’re currently prioritizing grants for technical devel..."

"After careful review, we've decided not to move forward with your application at this time. While we found your community-building focus and mission to be compelling, we identified several areas that would need significant improvement:

  • Your application would benefit from demonstrating more techni..."

Out of ecosystem scope — 81 cases (8%)

Fix: Read the grant program description carefully. Map each section of your proposal to a stated priority. Explicitly reference the program's goals.

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

While we appreciate the concept and your interest in building within the TON ecosystem, the project appears to be at a very early stage. There ..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

Unfortunately, DAO tooling is not currently within the scope of our grant strategy. Additionally, all the milestones outlined in the proposal l..."


Vague / insufficient detail — 77 cases (7%)

Fix: Replace generic claims with specifics. "We will build a great tool" → "We will deploy contract 0x... with functions X, Y, Z, handling N transactions/day. Validated by test suite at github.com/...". Every adjective should be backed by a number or a link.

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

Unfortunately, the project doesn’t align with our current grant strategy, and it’s unclear how it would bring meaningful value to the TON ecosy..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

Unfortunately, we don’t clearly see how the project brings unique value to the TON ecosystem. Additionally, the milestone structure includes on..."


Scope too broad — 63 cases (6%)

Fix: Cut it. Pick one feature and do it perfectly. Reviewers prefer "nail one thing" over "attempt everything".

Real rejection feedback samples:

"Hi Team,

Thank you for submitting your application and for your interest in building on TON.

Your product is great, but unfortunately, it doesn’t align with the current priorities of our grant program. At this stage, we’re focusing our resources on specific areas that address immediate strategic n..."

"Hi Team,

Thank you for submitting your application and for your interest in building on TON.

Your product is great, but unfortunately, it doesn’t align with the current priorities of our grant program. At this stage, we’re focusing our resources on specific areas that address immediate strategic n..."


Duplicate / existing solutions — 60 cases (6%)

Fix: Include a competitive analysis table. Name 3+ existing tools, list what they do, then show the gap your project fills.
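
A toy rendering of such a competitive-analysis table; the tool names, coverage notes, and gaps are invented placeholders:

```python
# Hypothetical competitive-analysis table in the shape the fix suggests:
# name existing tools, state what they cover, show the gap you fill.
competitors = [
    ("ToolA", "Indexing only",         "No alerting layer"),
    ("ToolB", "Alerting only",         "No on-chain data source"),
    ("ToolC", "Full stack, EVM-only",  "No support for this ecosystem"),
]

print(f"{'Tool':<8}{'Covers':<24}{'Gap we fill'}")
for name, covers, gap in competitors:
    print(f"{name:<8}{covers:<24}{gap}")
```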

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

While the project is interesting, we already have multiple payment solutions actively building on TON, including those supported through recent..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

At the moment, we already have an APAC team working on similar translation efforts, and we’re currently prioritizing grants for technical devel..."


Differentiation missing — 41 cases (4%)

Fix: Don't just explain what you build — explain why it needs to exist NOW on THIS chain/ecosystem.

Real rejection feedback samples:

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

The current version of the product doesn’t clearly stand out in terms of uniqueness, and the milestones are focused solely on marketing, which ..."

"Hey Team,

Thank you for submitting your proposal. After a thorough review, we regret to inform you that we cannot approve your grant request at this time.

At this stage, the project does not yet have a proven MVP, and the team’s execution capabilities are still untested, making it a bit early for ..."


Cross-Ecosystem Patterns

Universal Truths (All Programs)

  1. Field completeness matters: Approved proposals fill ALL fields, including optional ones. Empty sections read as lack of preparation.
  2. Specificity wins: The single most common rejection pattern is vagueness. Numbers, links, and deliverables beat adjectives every time.
  3. Milestones are gating: Every program requires milestone-based delivery. Generic "Phase 1/2/3" milestones are consistently rejected.

Ecosystem-Specific Nuances

TON Grants: Telegram Mini App integration is a strong positive signal. TON reviewers weight community access heavily.

Polygon: Both programs (AngelHack + Direct) weigh the user onboarding story heavily. "How does this bring new users to Polygon/web3?"

Compound CGP 2.0: DAO alignment and COMP holder value framing are important. These are DAO reviewers, not corporate reviewers.

AI Agent Programs: Demo > description. Reviewers of AI agent grants are technical and expect working prototypes or clear proof-of-concept.

Arbitrum: Ecosystem-specific framing is critical. "Why Arbitrum specifically?" is a gating question — generic L2 benefits are insufficient.

The Resubmission Path

When a proposal is rejected on Questbook:

  1. Read every word of the feedback — especially for Arbitrum and Compound where reviewers write detailed explanations
  2. Quote the feedback in your resubmission — prove you read it
  3. Address every point — don't add words, restructure
  4. Narrow scope by 30–50% — the most successful resubmissions are significantly scoped down
  5. Add a prior-work section — show what you built since the rejection
  6. Request a pre-application call — most programs offer async or sync office hours

Red Flags (Universal)

If your proposal contains any of these, expect rejection:

  • "We will build..." without naming the tech stack
  • Budget as a single line: "Development: $40,000"
  • Milestones called "Phase 1 / Phase 2 / Phase 3" with no specifics
  • Team described only by pseudonyms with no verifiable work
  • Success measured only in GitHub stars or social followers
  • No mention of existing comparable tools
  • Timeline longer than 9 months for a first grant
  • No mention of what happens after the grant period ends
  • Generic ecosystem alignment (e.g., "Ethereum is secure and decentralized")

Generated by AgentRel. Source: Questbook GraphQL API, March 2026. 1404 applications analyzed.