MVP Definition & Scoping
Clear Distinctions
MVP
Definition: Minimum viable product – validates a core value hypothesis end-to-end with minimal engineering effort and real user feedback.
Prototype
Definition: Exploratory and experimental; may be non-functional or simulated (e.g., Figma mockups, Wizard-of-Oz demonstrations).
Pilot
Definition: MVP deployed with real users in a constrained, controlled setting to gather focused feedback and usage data.
V1
Definition: Post-validation product ready to scale beyond early adopters, with demonstrated product-market fit and refined features.
Turning the Value Proposition Canvas (VPC) into Testable Hypotheses
Hypothesis Card Framework:
• We believe [persona] needs [job-to-be-done] to achieve [gain] and avoid [pain].
• We will know this is true when [metric/behavior] improves by [X] within [timebox].
• We will test this by [MVP slice] delivered to [early adopters].
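A hypothetical filled-in card (the product and numbers here are illustrative, not drawn from any real canvas):
• We believe freelance consultants need to send branded invoices in one click to achieve faster payment and avoid hours of manual formatting.
• We will know this is true when invoice-completion rate improves by 30% within 2 weeks.
• We will test this by a single invoice-builder workflow delivered to 20 waitlist signups.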
Template 1 – MVP Hypothesis Card
Transform your Value Proposition Canvas insights into testable MVP hypotheses:
Persona & Context
Question: Who is your target user and what situation are they in when they need your solution?
Problem (Pain) & Job to be Done
Question: What specific pain point does your user experience, and what job are they trying to accomplish?
Value Hypothesis
Question: What gain will your solution provide, and how will it address the identified pain?
MVP Slice We'll Ship
Question: What is the minimum feature set that can test your core hypothesis?
Primary Metric + Target
Question: How will you measure success, and what target indicates validation?
Timebox (≤2 weeks)
Question: What is your testing timeline for gathering meaningful data?
Risks & Assumptions
Question: What could go wrong, and what assumptions are you making?
How We'll Recruit Early Users
Question: Where will you find users willing to test your MVP?
Strategic Slicing Patterns
Four proven patterns for breaking down your product vision into testable MVP slices:
Concierge/Wizard-of-Oz
Manual processes behind the scenes; automate successful workflows later based on real usage patterns.
Landing Page + Waitlist
Test message-market fit signals before building complex functionality.
Single Workflow Focus
One job-to-be-done executed exceptionally well rather than multiple mediocre features.
API-First Approach
Provide core value via a simple API; build the user interface after validating the backend logic (a sketch follows below).
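To make the API-first pattern concrete, here is a minimal sketch in Python with FastAPI. The product, endpoint, and payload are hypothetical assumptions for illustration, not part of a prescribed recipe:

```python
# Hypothetical API-first MVP slice: ship the single core workflow as one
# endpoint and defer all UI work until the backend value is validated.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="invoice-mvp")  # hypothetical product


class LineItem(BaseModel):
    description: str
    amount: float


class InvoiceRequest(BaseModel):
    client_name: str
    line_items: list[LineItem]


@app.post("/v1/invoices")
def create_invoice(req: InvoiceRequest) -> dict:
    # Deliberately simple fulfilment; automate or enrich only after
    # real usage proves the workflow is worth it.
    total = sum(item.amount for item in req.line_items)
    return {"client": req.client_name, "total": total, "status": "draft"}
```

Serving this with `uvicorn main:app` (assuming the file is named `main.py`) is enough to put the slice in front of early adopters via `curl` or a thin script, long before any dashboard exists.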
Prioritization Mini-Lab
MoSCoW Method
Must have / Should have / Could have / Won't have (for now). Quick categorization for initial feature triage.
RICE Scoring
Reach × Impact × Confidence ÷ Effort. A quantitative way to compare candidate features on a common scale (a worked calculation follows the Template 2 sheet).
Kano Analysis
Classify features as Delighters, Performers, or Basics through quick user pulse surveys and feedback.
Exercise 2 – Rapid Prioritization Process
01. Feature Brainstorm
List 8-12 candidate features or experiments that could be included in your MVP scope.
02. MoSCoW Tagging
Categorize each feature using the MoSCoW framework to establish basic priority levels.
03. RICE Scoring
Score Must/Should items using the RICE methodology (a 1-5 scale works well for rapid assessment).
04. Kano Identification
Mark 1-2 features as Kano Delighters for compelling MVP storytelling and user engagement.
Template 2 – Prioritization Sheet
Use this template to systematically evaluate and prioritize your MVP features:
| Candidate Feature | MoSCoW | Reach (1-5) | Impact (1-5) | Confidence (1-5) | Effort (1-5) | RICE Score | Kano Type |
|---|---|---|---|---|---|---|---|
| User Registration | Must | 5 | 4 | 5 | 2 | 50.0 | Basic |
| Email Notifications | Should | 4 | 3 | 4 | 3 | 16.0 | Performer |
| Advanced Analytics | Could | 3 | 4 | 2 | 5 | 4.8 | Delighter |
| Social Media Integration | Won't | 2 | 2 | 3 | 4 | 3.0 | Delighter |
| Mobile App | Should | 4 | 5 | 3 | 5 | 12.0 | Performer |
| API Access | Could | 2 | 3 | 4 | 3 | 8.0 | Delighter |
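As a sanity check, here is a minimal Python sketch of the RICE arithmetic behind the sheet; running it reproduces the scores in the table above.

```python
# RICE = Reach × Impact × Confidence ÷ Effort, on the 1-5 scales defined
# in the Scoring Guidelines below.
def rice(reach: int, impact: int, confidence: int, effort: int) -> float:
    return reach * impact * confidence / effort

candidates = [
    ("User Registration",        5, 4, 5, 2),  # -> 50.0
    ("Email Notifications",      4, 3, 4, 3),  # -> 16.0
    ("Advanced Analytics",       3, 4, 2, 5),  # -> 4.8
    ("Social Media Integration", 2, 2, 3, 4),  # -> 3.0
    ("Mobile App",               4, 5, 3, 5),  # -> 12.0
    ("API Access",               2, 3, 4, 3),  # -> 8.0
]

# Highest value-per-effort first, mirroring how the sheet is read.
for name, *scores in sorted(candidates, key=lambda row: -rice(*row[1:])):
    print(f"{name}: {rice(*scores):.1f}")
```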
Scoring Guidelines
Reach
Question: How many users will this feature impact?
- 1 = Very few
- 3 = Some users
- 5 = All users
Impact
Question: How much will this feature improve the user experience?
- 1 = Minimal
- 3 = Moderate
- 5 = Massive
Confidence
Question: How confident are you in your Reach and Impact estimates?
- 1 = Low confidence
- 3 = Medium confidence
- 5 = High confidence
Effort
Question: How much work will this feature require?
- 1 = Minimal effort
- 3 = Moderate effort
- 5 = Maximum effort
Rapid Prototyping Approaches
Bias towards open-source solutions to reduce vendor lock-in, increase development speed, and maximize learning opportunities.
Stack Recipes by Product Type
Web App (CRUD + Auth + Dashboard)
Data / AI MVP
Open-Source Toolbox
Your curated collection of battle-tested open-source tools for rapid MVP development:
Headless & Admin
Observability
Analytics: PostHog
A/B Testing: GrowthBook, Flagsmith
Errors: GlitchTip
Tracing & Logs: OpenTelemetry
Measuring the MVP
Define Success Before You Ship
Activation Metric
Must-have: first value delivered in ≤5 minutes. Measures how quickly users experience the core product benefit (a measurement sketch follows below).
North-Star Metric
Example: Weekly active workflows per user. Tracks sustained engagement and product-market fit signals.
Guardrail Metrics
Error rate, time-to-value, churn proxy. Ensures quality doesn't degrade while optimizing for growth.
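One way to make the activation metric concrete is a sketch like the following, assuming a simple one-row-per-user table; the column names are illustrative, not a prescribed schema.

```python
# Share of new users who reach first value within 5 minutes of signup.
# Column names ('signup_at', 'first_value_at') are illustrative assumptions.
from datetime import timedelta

import pandas as pd


def activation_rate(users: pd.DataFrame,
                    window: timedelta = timedelta(minutes=5)) -> float:
    """users: one row per user, with 'signup_at' and 'first_value_at' timestamps."""
    time_to_value = users["first_value_at"] - users["signup_at"]
    # Users who never reached first value (NaT) compare False, i.e. not activated.
    return float((time_to_value <= window).mean())
```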
Open-Source Instrumentation Stack
Feature Flags & A/B Testing
GrowthBook: Open-source feature flagging and statistical A/B testing platform.
Flagsmith: Feature flag management with targeting and rollout controls.
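A minimal sketch of gating an MVP slice behind a Flagsmith flag, assuming the official Python client; the environment key and flag name are placeholders:

```python
# Gate the new MVP slice behind a feature flag so it can be rolled out
# gradually. The environment key and flag name below are placeholders.
from flagsmith import Flagsmith

flagsmith = Flagsmith(environment_key="YOUR_ENVIRONMENT_KEY")
flags = flagsmith.get_environment_flags()

if flags.is_feature_enabled("new_onboarding_flow"):  # hypothetical flag
    pass  # serve the MVP slice
else:
    pass  # fall back to the existing experience
```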
Error Monitoring
GlitchTip: Sentry-compatible error tracking and performance monitoring for identifying and fixing issues quickly.
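Because GlitchTip is Sentry-compatible, the standard `sentry-sdk` client can point at it unchanged; a minimal sketch, with a placeholder DSN:

```python
# GlitchTip speaks the Sentry protocol, so the standard sentry-sdk client
# reports to it unchanged; the DSN below is a placeholder.
import sentry_sdk

sentry_sdk.init(dsn="https://<key>@glitchtip.example.com/<project-id>")

# Unhandled exceptions are now captured automatically; capture_message
# records noteworthy events explicitly.
sentry_sdk.capture_message("mvp deployed")
```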
Logging & Observability
OpenTelemetry: Standardized telemetry collection.
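A minimal tracing sketch with the OpenTelemetry Python SDK, assuming the console exporter for local development (swap in an OTLP exporter once a collector exists):

```python
# Minimal tracing setup: export spans to the console during development,
# then swap in an OTLP exporter once a collector is in place.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("mvp.core")

with tracer.start_as_current_span("core_workflow"):
    pass  # the one workflow the MVP must do exceptionally well
```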
Template 3 – MVP Plan
Create a focused, actionable MVP development plan:
1. Problem Statement
Prompt: What specific problem are you solving? Be precise and measurable.
2. Persona & Job to be Done
Prompt: Who is your ideal user, and what job are they hiring your product to do?
3. Hypothesis & Success Metric
Prompt: What do you believe about user behavior, and how will you measure validation?
4. MVP Scope (Must/Should)
Prompt: What features will you include in your first release?
5. Tech Approach (Stack Recipe)
Prompt: What technologies will you use to build your MVP?
6. Instrumentation Plan
Prompt: How will you track user behavior and measure success?
7. Risks & Mitigation
Prompt: What could go wrong, and how will you address potential issues?
8. Timeline & Owner
Prompt: When will you deliver, and who is responsible for each component?
Ready to Build Your MVP?
Start with one strategic slice, measure what matters, and iterate based on real user feedback.