How Goodspeed Scores App Ideas: Behind the 100-Point Rubric
A transparent breakdown of the scoring methodology we use to evaluate app ideas, covering demand signals, competition, monetization, and more.
## Why We Score Ideas
Most app ideas die because the builder never asked: "Is this actually worth building?"
They skip validation, spend months coding, launch to crickets, and conclude that the market is broken. But the market isn't broken. The idea was never tested against reality.
Our scoring system exists to answer one question: does this idea have the characteristics of a successful app? Not a guarantee of success, but a data-backed assessment of potential.
## The 100-Point Framework
Every app idea that enters our pipeline gets scored on a 100-point scale across multiple dimensions. Here's how the points break down and what each dimension measures.
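The dimension weights below can be summarized as a simple lookup. This is an illustrative sketch (the dictionary keys are our own shorthand, not pipeline code); the point caps come straight from the rubric, and the only logic is a check that they sum to 100:

```python
# Maximum points per rubric dimension, as described in the sections below.
RUBRIC_WEIGHTS = {
    "demand_signal_strength": 25,
    "competition_analysis": 20,
    "monetization_viability": 15,
    "technical_feasibility": 10,
    "market_timing": 10,
    "target_audience_clarity": 10,
    "differentiation_potential": 10,
}

# Sanity check: the rubric is a 100-point scale.
assert sum(RUBRIC_WEIGHTS.values()) == 100
```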
### Demand Signal Strength (0-25 points)
This is the most important dimension. Is anyone actually asking for this?
We measure demand by scanning real conversations across 12 sources, including:

- Hacker News discussions
- Reddit threads (250+ signals per cycle)
- App Store reviews
- Product Hunt launches
- GitHub Issues
- Google Trends data
- Industry review sites (G2, Capterra)
- Twitter/X conversations
- Indie Hackers discussions
We're not counting keyword volume. We're reading actual conversations where people describe problems, frustrations, and unmet needs. The AI extracts problem statements and groups them into themes.
**Scoring breakdown:**

- 20-25: Strong, recurring demand from multiple independent sources
- 15-19: Clear demand from at least 2-3 sources
- 10-14: Some demand signals, but scattered
- 5-9: Weak signals, possibly niche
- 0-4: No meaningful demand detected
### Competition Analysis (0-20 points)
Competition is counterintuitive in our scoring. Zero competitors is actually a warning sign, not an opportunity. It usually means there's no market, not that you've found a gap.
**Scoring breakdown:**

- 15-20: A few competitors exist, but they have clear weaknesses (poor UX, missing features, high pricing)
- 10-14: Moderate competition with differentiation opportunities
- 5-9: Highly competitive space with established players
- 0-4: Either no competitors (risky) or dominated by well-funded incumbents
We look specifically at competitor weaknesses: low app store ratings, frequent complaints in reviews, and missing features that users request repeatedly. That's where the opportunity lives.
### Monetization Viability (0-15 points)
Can this app make money? Not theoretically. Actually.
We evaluate willingness to pay based on:

- Existing paid alternatives in the space
- The nature of the problem (pain vs. convenience)
- Target audience spending habits
- Monetization model fit (subscription, one-time, freemium)
**Scoring breakdown:**

- 12-15: Users already pay for similar solutions, strong willingness to pay
- 8-11: Monetization is viable with the right model
- 4-7: Monetization is possible but challenging (the problem isn't painful enough for users to pay)
- 0-3: Difficult to monetize (users expect free solutions)
### Technical Feasibility (0-10 points)
Can a small team (or solo developer) build this? Some ideas are great but require resources that indie builders don't have: massive datasets, real-time infrastructure at scale, regulatory compliance, or specialized hardware integration.
**Scoring breakdown:**

- 8-10: Buildable by a solo developer with standard tools
- 5-7: Buildable but requires some specialized knowledge or third-party APIs
- 3-4: Requires significant infrastructure or domain expertise
- 0-2: Not feasible for indie builders
### Market Timing (0-10 points)
Is now the right time for this app? Being too early is just as bad as being too late.
We evaluate:

- Trend direction (is interest growing or declining?)
- Technology readiness (are the needed APIs, SDKs, and platforms available?)
- Regulatory environment (are new regulations creating demand?)
- Cultural shifts (is user behavior changing in a way that creates opportunity?)
**Scoring breakdown:**

- 8-10: Perfect timing. Growing trend, ready technology, increasing demand
- 5-7: Good timing, but the window isn't urgent
- 3-4: Timing is neutral
- 0-2: Too early (technology not ready) or too late (market saturated)
### Target Audience Clarity (0-10 points)
Can you clearly define who this app is for? "Everyone" is not an audience. "Remote workers who manage multiple freelance clients" is an audience.
**Scoring breakdown:**

- 8-10: Crystal clear target audience with identifiable communities
- 5-7: Audience is defined but broad
- 3-4: Audience is vague
- 0-2: No clear target audience
### Differentiation Potential (0-10 points)
What's the angle? Why would someone choose this app over existing alternatives?
We look for:

- Unique data or content that competitors can't easily replicate
- A novel approach to the problem
- Underserved audience segments
- Superior UX opportunity (existing solutions are clunky)
**Scoring breakdown:**

- 8-10: Clear, defensible differentiation
- 5-7: Some differentiation, but competitors could replicate
- 3-4: Limited differentiation
- 0-2: No meaningful differentiation
## How Scores Map to Decisions
Our pipeline uses score thresholds to determine what happens next:
- **75+: PROMOTE** - This idea moves forward to architecture and development.
- **55-74: DEEP DIVE** - Promising but needs more research. We run additional analysis, look at niche segments, and re-score.
- **Below 55: ARCHIVE** - Not enough signal to justify building. The idea stays in our database and can be re-evaluated if market conditions change.
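The threshold mapping is simple enough to sketch in a few lines. This is illustrative code (the function name and return labels are our own shorthand, not actual pipeline code), but the cutoffs are exactly the ones above:

```python
def decide(total_score: int) -> str:
    """Map a 0-100 rubric score to a pipeline decision.

    Thresholds follow the rubric: 75+ promotes, 55-74 triggers
    a deep dive, and anything below 55 is archived.
    """
    if total_score >= 75:
        return "PROMOTE"
    if total_score >= 55:
        return "DEEP DIVE"
    return "ARCHIVE"
```

For example, `decide(80)` returns `"PROMOTE"`, `decide(60)` returns `"DEEP DIVE"`, and `decide(45)` returns `"ARCHIVE"`.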
## The Two-Pass Scoring Process
Scoring isn't a single step. We run two passes:
**Pass 1: Signal Extraction**

An AI model reads each signal source (Reddit thread, app review, HN comment, etc.) and identifies the core problems being discussed. For each problem, it generates 2-3 possible app solutions. This produces a list of candidate ideas with supporting evidence.
**Pass 2: Rubric Scoring**

A separate AI model scores each candidate against the rubric dimensions above. It has access to the original signals, competitive landscape data, and market context. The scoring is calibrated against known successful and unsuccessful apps.
We intentionally use two separate passes to reduce bias. The extraction model doesn't know what score threshold an idea needs to hit. The scoring model doesn't know how many signals supported the idea.
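The two-pass structure can be sketched as follows. Everything here is hypothetical scaffolding (the names, the `Candidate` class, and the stub bodies are ours; the real passes call AI models rather than returning canned data), but it shows the separation of concerns: extraction never sees thresholds, and scoring never sees signal counts.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    idea: str
    evidence: list = field(default_factory=list)  # supporting signals
    scores: dict = field(default_factory=dict)    # rubric dimension -> points

def extract_candidates(signals: list[str]) -> list[Candidate]:
    """Pass 1: read raw signals and propose candidate app ideas.

    Stub: a real implementation would call an extraction model.
    Note that this pass has no knowledge of score thresholds.
    """
    return [Candidate(idea=f"App for: {s}", evidence=[s]) for s in signals]

def score_candidate(candidate: Candidate) -> int:
    """Pass 2: score one candidate against the rubric dimensions.

    Stub: a real implementation would call a separate scoring model
    with competitive and market context. Note that this pass is not
    told how many signals supported the idea.
    """
    candidate.scores = {
        "demand_signal_strength": 18,  # placeholder values
        "competition_analysis": 12,
    }
    return sum(candidate.scores.values())
```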
## What We've Learned from 500+ Scored Ideas
After running our scoring pipeline across 500+ ideas, patterns have emerged:
**High-scoring ideas (75+) tend to:**

- Address painful, recurring problems (not nice-to-haves)
- Target audiences that already pay for tools
- Have 2-5 competitors with clear weaknesses
- Be buildable with standard technology
**Ideas that score well but fail to convert to successful apps often:**

- Have demand from a vocal minority that doesn't represent the broader market
- Face regulatory or compliance hurdles not captured in scoring
- Require content or data that's expensive to produce
**Low-scoring ideas (below 55) can still surprise us:** sometimes the timing dimension is the only thing wrong. An idea that scores 45 today might score 75 in two years, which is why we re-evaluate archived ideas quarterly.
## Transparency and Limitations
Our scoring system is not perfect. No scoring system is. Here are the known limitations:
- **Recency bias**: We weight recent signals more heavily. A problem that was big 2 years ago but quieted down gets scored lower, even if it's still relevant.
- **English-language bias**: Our signal sources are primarily English. Ideas that address non-English markets may be under-scored.
- **Consumer app bias**: The rubric is tuned for consumer mobile apps. B2B enterprise ideas don't fit as neatly.
- **No execution scoring**: We score the idea, not the builder. A great idea poorly executed will fail regardless of score.
We're continuously refining the rubric based on outcomes. When a high-scoring idea fails or a low-scoring idea succeeds, we analyze why and adjust.
Want to see scored ideas? Check out our [ideas page](/ideas) or learn more about [how our discovery pipeline works](/features/discovery).