Product-Market Fit Survey
Ask users how they would feel if they could no longer use a product

How: If over 40% of customers say they would be very disappointed, you likely have product/market fit. Allow for open-ended responses to understand why. If you are close to 40%, learn what product or segment tweaks can get you above the line and what solutions customers consider viable alternatives for solving their problems.
Why: While a product/market fit survey provides an excellent way to gauge customers' dependency and fondness for an existing product, it does not validate actual product/market fit. If you have to ask whether you have product/market fit, the answer is simple: you don't. Product/market fit surveys can give false positive results, especially in the early stages of a product.
Scale when it’s time
The timing of when you decide to scale your idea can make or break your business. Scale too early, and users will leave. Scale too late, and a competitor will beat you to the punch.
To help determine when it is time to push ‘GO’, the Product-Market Fit survey, invented by Sean Ellis, can come in handy.
Ask a simple question…
The product-market fit (PMF) survey asks the customer one very specific question:
“How would you feel if you could no longer use [this product]?”
Let participants answer with one of these multiple choice responses:
- Very disappointed
- Somewhat disappointed
- Not disappointed (it isn’t really that useful)
- N/A – I no longer use [product]
By comparing nearly 100 startups, Sean Ellis, who came up with the survey format, found that if over 40% of users responded that they would be “Very disappointed” to stop using the product, there was a great chance that the solution had found its Product-Market fit. He found that the companies that scored below 40% all struggled to reach traction.
This question intentionally focuses on disappointment (a negative emotion) rather than satisfaction. Why? As Rahul Vohra (Superhuman’s founder) explains, asking about negative impact reveals how necessary your product is, whereas asking if people like your product can invite polite or overly positive bias. If a user says they would be “very disappointed” without your product, it implies your product has become a crucial, irreplaceable part of their life or work. That’s a strong sign of product-market fit. Conversely, if most users shrug it off, saying they wouldn’t be disappointed, it suggests your product is nice-to-have at best.
Count the “Very disappointed” – this is the key metric. The PMF score is essentially the percentage of respondents who chose “very disappointed”. For example, if 200 users responded and 80 of them said very disappointed, your PMF score would be 40% (80/200 * 100). You can ignore anyone who answered “N/A – don’t use” in the calculation, since they’re not current users. The focus is on the ratio of users who genuinely depend on your product.
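As a minimal sketch, assuming you already have the tallies, the math looks like this in Python:

```python
def pmf_score(very_disappointed: int, total: int, not_applicable: int = 0) -> float:
    """Share of current users answering 'very disappointed', as a percentage.

    'N/A - I no longer use it' answers are excluded from the denominator,
    since they come from people who aren't current users.
    """
    valid = total - not_applicable
    if valid <= 0:
        raise ValueError("no valid responses")
    return very_disappointed / valid * 100

# The example from the text: 80 of 200 respondents said "very disappointed".
print(pmf_score(very_disappointed=80, total=200))  # 40.0
```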
It’s important that you survey the right users (more on that in the step-by-step guide) and that you phrase the question exactly as above. Minor wording changes can affect results. Using Ellis’s tried-and-true phrasing ensures you’re benchmarking against the same measure used by many other companies. Tools like survey.io (created by Ellis) and others automatically include this question template. But you can just as easily implement it in any survey tool of your choice.
Interpreting the PMF score (≥ 40% = fit)
So what does your percentage of “very disappointed” responses mean? The rule of thumb is:
- 40% or higher – You likely have product-market fit. Hitting this benchmark means a significant chunk of users consider your product a must-have. Sean Ellis observed that products that achieved viral growth or scale almost always cleared ~40% on this metric. If you’re at or above 40%, that’s a green light to focus on scaling growth, since you have validation that users truly value the product.
- Below 40% – You haven’t reached product-market fit yet. If only 10%, 20%, or 30% of users would be very disappointed without the product, it implies that the majority could take it or leave it. In this case, the product likely needs improvements, a better focus, or even a pivot to better meet market needs. You should prioritize learning from users and increasing that score (we’ll discuss how) before pouring resources into growth. As Ellis and others caution, trying to scale without PMF often leads to wasted effort and failure.
It’s worth noting that 40% is a high bar – and intentionally so. At first glance, 40% might not sound like a lot, but in practice very few products exceed that. For context, when Slack was a rapidly growing success in 2015, a PMF survey of 731 Slack users found 51% would be very disappointed without it. That comfortably beat the benchmark and confirmed Slack’s strong PMF. But even Slack – a beloved product – didn’t get 70% or 90%. Rahul Vohra points out that you might expect an amazing product to score higher, but even the best often hover around 50%. This underscores that 40% is a meaningful threshold. Clearing it means you’re in great company; falling short is the norm and indicates room to improve.
Slack’s PMF survey results (2015) – 51% of users said they would be “very disappointed” without Slack, far above the 40% benchmark. This confirmed that Slack, with ~500k users at the time, had achieved product-market fit. Even such a popular product only got about half of users at the highest level of disappointment, showing how tough the 40% bar can be.
When interpreting your score, consider the context of your user base as well. If you surveyed a broad cross-section of all users, the result reflects overall PMF. But what if you’re targeting multiple segments or a new user cohort? It’s possible one segment loves you (e.g. 50%+ “very disappointed”) while another is lukewarm (10-20%). In that case, your aggregate might be below 40%, but you do have PMF with a certain niche. Many products find strong fit in a specific market segment before expanding. If you see this pattern, you might interpret it as “We have PMF with Segment A but not with Segment B”. You could then decide to focus on Segment A (as your beachhead market) and postpone or rethink Segment B. We’ll discuss segmentation more later.
Also, treat 40% as a guideline, not a hard guarantee. Hitting 42% doesn’t mean your startup will automatically succeed, and 38% doesn’t doom you. It’s one indicator (albeit a valuable one). Use it alongside other metrics and qualitative insight. For instance, if you have 35% but your retention is excellent, you might still be in a good place – or you might be just shy of PMF and a few tweaks could push you over the line. On the flip side, a 45% with very few total users might mean a small group really loves it, but you’ll need to see if that love extends to a wider audience. Trend over time is key: if your PMF score is rising toward 40%, you’re on the right track, whereas a stagnant low score suggests deeper issues.
In summary, ≥40% “very disappointed” = product-market fit, go for growth; <40% = not there yet, time to learn and improve. This simple rule, born out of Ellis’s research, has stood the test of time across many startups. Next, let’s talk about potential pitfalls so you interpret your PMF survey correctly.
And then follow up…
The PMF score is a great metric to gauge your progress on the road to scale. However, if you don’t follow up with more qualitative questions or further experiments, it can be hard to understand exactly what moved your score from one plateau to the next.
Typical follow-up questions
The following can serve as inspiration for the kinds of questions you might ask your customers. As with all research, if you can find this evidence in real behavioral data, do that. Real evidence, observing what people do, will always trump opinions and what people say. You build better products by asking better questions.
- Please help us understand why you selected this answer. (Open ended question)
- Have you recommended this product to anyone? (No, Yes: ___)
- What type of person do you think would benefit most from the product? (Open ended question)
- How can we improve the product to better meet your needs? (Open ended question)
- How often do you use our product? (Never, Once a year, A couple of times a year, Once a month, A couple of times a month, Once a week, Every day)
- How would you feel if you could no longer use our product? (Very disappointed, Somewhat disappointed, No reaction / neutral)
- Have you ever recommended us to friends or family? (Yes / No)
- What do you think sets us apart from our competitors? (Open ended question)
- What could we improve on? (E.g. Customer Support, Reliability of Service, Design, Functionality, Other)
- Could you tell us a bit about yourself? What’s your employment status? (Open ended question)
- How did you discover/come across this product? (Blog, Google/Search, Twitter, LinkedIn, Facebook, Word of mouth, other)
- What would you likely use as an alternative if this product were no longer available? (I probably wouldn’t use an alternative, I would use an alternative: ___)
- Would it be okay if we followed up by email to request a clarification of one or more of your responses? (Yes - enter email / No)
What does Product-Market Fit look like?
“If you have to ask whether you have Product/Market Fit, the answer is simple: you don’t.” – Eric Ries
You ultimately want to put yourself in a position where you can observe real user behavior that proves that you have indeed found Product-Market fit.
How many responses do I need?
Buffer conducted a study using the PMF survey and found that they only needed 40-50 responses for the results to carry significance. The critical factor is ensuring a diverse range of respondents and understanding their characteristics. It is essential to avoid surveying customers who are not invested in your product.
To prevent skewed results, it is imperative to collect feedback from users who are actively engaged with your product or company, possess a basic understanding of its core features, and have used your product within the two weeks preceding the survey. This approach helps ensure fresh and relevant insights.
It is worth noting that the “Sean Ellis Product Market Fit Survey” is also referred to as the “40% Test”: after distributing the survey to the designated customer group, you are looking for 40% or more of respondents to answer “very disappointed,” not for a 40% response rate.
Sean Ellis recommends surveying a crowd with the following characteristics:
- People that have experienced the core of your product offering
- People that have used your product at least twice
- People that have used your product in the last two weeks
When should I send Product Market Fit surveys?
While identifying the appropriate individuals to survey is important, determining the best time to send out the survey is equally crucial. Should the survey be sent during the product development phase? Following a sales pitch? Or, perhaps, during the early hours of the morning?
So when is the right time to run a product-market fit survey?
And if you haven’t hit PMF yet, how often should you repeat it? Timing can influence the usefulness of the results, so consider these guidelines:
- After users have had sufficient experience. Don’t send a PMF survey immediately after launch or sign-up. The users need to have used the product enough to know its value. A common practice is to wait until a cohort of users has been active for a few weeks. For example, you might decide to survey users who signed up at least 4 weeks ago and meet the usage criteria (2+ uses in last 2 weeks). If your product has a free trial period, a good time might be toward the end of the trial for those who engaged with it. Essentially, pick a point when users have gone through the core user journey at least once or twice.
- At key product milestones. If you’ve just released a major update or pivoted your product, it’s wise to run a PMF survey after users have experienced the change. This will tell you if the change improved things or not. For instance, if you add a big feature that was often requested, surveying after a month of that feature being live can show whether more users are now very disappointed without the product. Buffer used the PMF survey for a new feature (Power Scheduler) during its beta period to gauge its reception. That’s a smart move to evaluate a feature’s fit.
- When growth has stalled or churn is high. If you’re seeing warning signs like flattening growth, high churn, or lukewarm engagement, it might be a good moment to run a PMF survey. The results can diagnose whether lack of product-market fit is a root cause. Sometimes teams assume marketing or sales is the problem, but a PMF survey can reveal that users aren’t truly loving the product (e.g. you get only 15% very disappointed), indicating you need to improve the product before trying to accelerate growth.
- Before scaling efforts or fundraising. Many startups choose to run a PMF survey ahead of key inflection points – like deciding to scale up marketing spend, or when preparing for a funding round. If you find you have a solid PMF score (40%+), you can push the throttle with more confidence (and show that data to investors as evidence of traction). If not, you might delay aggressive scaling and focus on product for a bit longer.
How often should I send out the PMF Survey?
Now, regarding frequency: how often should you repeat the survey?
Periodically during product development. If you haven’t hit PMF yet, consider running the survey in iterative cycles. Many teams do it every few months while they’re in the discovery/iteration phase. For example, you might run it, implement changes, then run it again after 2-3 product update cycles (maybe 3-6 months later) with a new set of users to see if the score improved. Superhuman did essentially this – measuring, then re-measuring after a few quarters of work. Regular check-ins help quantify your progress toward PMF.
Also consider continuous/rolling surveys. If you have a steady stream of new users, you can integrate the PMF survey into your ongoing user feedback process. For instance, every month you might survey the cohort of users who joined 6-8 weeks ago. This rolling approach means you’re always collecting fresh data on the latest user sentiment. It can be useful to spot trends: is the PMF score improving as your product updates roll out? Are newer cohorts happier (which they should be if the product is getting better)? Just be careful not to resurvey the same individuals repeatedly in a short span – focus on new users or those who haven’t been surveyed for a long time.
It’s relevant after major product or strategy changes. Even if you’ve reached PMF, you might want to run the survey again whenever you make a significant change that could affect user perception. For instance, if you alter your pricing model or target a new user segment, it’s worth checking if those users find the product as indispensable as your original base. Also, as your user base broadens, the PMF score might shift. Continual measurement ensures you’re still building something people deeply want.
You also don’t want to send it too frequently to the same users: avoid surveying the same group more than once in a few months. If a user sees it repeatedly, the novelty wears off, and they may ignore it or give less thoughtful answers. And if you do implement improvements they suggested, give them time to experience those changes before asking again whether they’d be disappointed without the product.
In practice, many teams will do a PMF survey 2-4 times a year in the early stages. After clearly achieving product-market fit and moving to growth mode, some may do it less often (maybe annually or when needed), focusing more on growth metrics. However, some companies choose to keep PMF as a north-star metric and measure it continuously even at scale, since it’s an indicator of user sentiment that can flag issues if it starts dropping.
If your product has distinct user types (e.g. a marketplace with buyers and sellers), you should run the survey for each type separately. The frequency might differ if one side has more turnover.
Run the PMF survey whenever you need a clear read on whether you have product-market fit (or how it’s evolving) – typically after users have had time with the product and whenever significant changes occur. Re-run it periodically to track your PMF score improvement as you iterate. The survey is not a one-and-done thing; it’s a tool you can wield multiple times on the journey to a solid product-market fit and beyond.
Ongoing Product market fit surveys
As you progress on your journey toward creating a successful product, the Product Market fit survey result goes from being a key result you are chasing to being another health metric or KPI that you continuously monitor, just like the Net Promoter Score (NPS). In this case, you want to continuously survey a sample of customers or users from various segments and at various stages of your product life-cycle. Try to avoid asking the same person twice in the middle of the lifecycle, but do, for instance, include the survey question in your exit survey.
This will allow you to monitor how customer sentiment changes over time as users progress from being just onboarded, to being active, to exiting.
Product Market Fit Surveys grow in importance as your product grows
As a product grows, it can be challenging to determine whether new features or offerings are truly valuable to customers. It’s crucial to maintain a laser focus on key metrics, even after achieving product market fit.
To avoid getting carried away by positive buzz, strive to separate marketing and product launches as much as possible, keeping new features or products in beta until you are confident they provide value to customers.
This is where the Product Market Fit survey becomes especially valuable. By conducting surveys before the general public is aware of a new feature or product, you can gather valuable feedback and determine the feature or product’s effectiveness.
Why Run a PMF Survey?
Running a Product-Market Fit survey can provide critical insights and advantages for your product strategy:
PMF can feel abstract, but this survey gives you a concrete score to gauge it. By measuring what percentage of users would be very disappointed without your product, you turn PMF into a trackable metric. For example, a result of 45% “very disappointed” signals strong fit, whereas 20% indicates you have work to do. This quantification helps you benchmark progress over time.
The 40% rule provides a strategic decision point. Sean Ellis and others advise that if ≥40% of users are very disappointed (PMF score ≥40), you likely have PMF and can confidently focus on growing the company (scaling marketing, sales, etc.). If you’re well below 40%, you probably need to iterate on the product before pouring resources into growth. In Rahul Vohra’s words: “If more than 40% of your users would be very disappointed without your product, then you should focus on growing… If less than 40%, then you’ll probably struggle to grow.”
The survey’s responses highlight who values your product most and why. By looking at those users who answer “very disappointed,” you can profile your best-fit customer segment and their favorite product benefits. This helps refine your target market and messaging.
A PMF survey usually includes follow-up questions that capture why users value the product and how it could improve. This yields qualitative data on feature gaps, pain points, and opportunities. You’ll hear in users’ own words what holds some back from loving the product. These insights can directly inform your product roadmap – essentially telling you what to build or fix to increase product-market fit.
Showing a clear PMF metric can rally your team around the goal of improving it. It’s a powerful way to communicate status – e.g. explaining that “our PMF score is 22%, we need to hit 40%” gives a tangible target. Many startups make the PMF score a top-level KPI until they hit the benchmark. Moreover, demonstrating a ≥40% PMF score to investors can strengthen your case that you have a viable product that’s ready to scale (complementing retention and revenue metrics).
It’s fast and cost-effective. Conducting a PMF survey is relatively quick and inexpensive. You only need on the order of 40-50 responses to get directional results, which can often be gathered in days or weeks. This is much faster than waiting months for cohort retention data or other signs. It’s an easy experiment to run for the amount of insight it yields.
In short, a PMF survey is a high-impact tool to evaluate if you’ve built something people truly want. It takes the guesswork out of product-market fit and provides guidance on what to do next – whether that’s pressing the gas pedal or going back to the lab. As Buffer’s team noted, there’s plenty of talk about the importance of PMF, but not as much about how to measure it. The PMF survey fills that gap with a practical, proven approach.
Common false positives & how to avoid them
While the 40% test is powerful, there are a few pitfalls that can lead to false signals – especially false positives (thinking you have PMF when you really don’t). Be mindful of these when designing and analyzing your survey:
- Surveying only your biggest fans. If you only ask your most active or happiest users to take the survey, you’ll likely get an inflated PMF score. For example, Buffer initially sent their PMF survey to just the top 200 users of a new feature and saw an extremely high “very disappointed” rate (nearly 80% in that small sample). That was misleadingly high because it was a skewed audience of power users. To avoid this, don’t cherry-pick only evangelists; include a broader set of qualified users (all who meet the usage criteria, not just the super-users). We’ll cover how to choose the right users in the guide, based on Ellis’s recommendations.
- Including users who aren’t fully onboarded. Conversely, surveying people who haven’t experienced the core product value can create false negatives (understating your PMF). If many respondents barely used the product, of course they’ll say “not disappointed” – they haven’t invested enough to care. This could make your PMF score look worse than it actually would be with true adopters. The solution: only survey users who have had sufficient exposure (e.g. used the product at least twice and recently). In practice, that means filtering out brand-new signups or inactive churned users. That’s also why the “N/A – I don’t use it anymore” option is included – you can exclude those responses from your calculation.
- Low response bias. Pay attention to who responds to your survey. Often, extremely happy or unhappy users are more likely to respond, while the indifferent ones ignore the survey email. This can skew results. If your response rate is low, consider that your “very disappointed” percentage might be overstated (if mostly fans responded) or sometimes understated (if fans didn’t bother but disgruntled did – less common for this type of question). To mitigate bias, try to get as many responses as possible (send reminders, make the survey easy), and look at the overall response rate. If only 5% of those asked responded, be cautious in interpretation.
- Small sample size. A very small number of responses can make the percentage volatile and not statistically reliable. For instance, if you only get 10 responses and 5 say very disappointed, that’s 50% – sounds great, but with such a tiny sample it may not mean much. Aim for at least ~40 responses. If you can get more, great, but 40-50 tends to be enough to see a clear signal according to Hiten Shah and others. Below that, treat results as preliminary. You can always run another survey or extend it until you hit a more confident sample size. (A sketch quantifying this volatility follows this list.)
- Mixing different user segments. If you mix very different types of users in one survey, the results might average out in a way that hides insights. For example, suppose you surveyed both free trial users and paying customers together and got 30%. It could be that paying customers were 50% and trials 10%. The average is low, but paying users actually love it. Without segmenting, you’d get a “false negative” and might panic. The lesson is to consider segmentation in your analysis. You can ask a question like “What type of user are you?” or use your user data to break down responses later. Common splits: paying vs free, new vs long-term, use-case A vs B, etc. This way you avoid misinterpreting a blended result. (Many teams, including Superhuman and Slack’s researchers, segment the follow-up answers by how people answered the core question.)
- Misreading the 40% threshold. Remember, 40% is not a magic on/off switch but a guideline. Treat the survey as one input. A “false positive” could occur if you just surpass 40% but other indicators (e.g. retention) are poor – perhaps your product has a passionate niche but also high churn. In Buffer’s example, sending the survey to an ultra-engaged group yielded 78% saying very disappointed, but the broader user base did not retain as well after the initial excitement. High PMF survey scores should ideally align with healthy usage metrics. If they don’t, investigate why – are users saying they’d be disappointed but still leaving? That might reveal issues like pricing or competition despite strong product affection (or it could just be survey bias). Use the PMF survey in tandem with behavioral data.
- Repeating the survey with the same users too soon. If you survey the exact same users multiple times, especially in short succession, you can taint your results. Users might learn that you’re aiming for a “yes” (very disappointed) and respond differently, or they might be influenced by your prior communications. Sean Ellis recommends not surveying the same individual more than once in these PMF surveys. Instead, when you rerun the survey later, target a fresh cohort of users (e.g. new users acquired since the last survey, or users who weren’t surveyed previously). This avoids bias and “survey fatigue” that could lead to false readings.
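On the small-sample pitfall above: a quick way to quantify the volatility is to put a confidence interval around your observed proportion. This is a minimal sketch using the standard Wilson score interval; it is an illustrative add-on, not part of Ellis’s method:

```python
import math

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 5 of 10 "very disappointed" looks like 50%, but the interval is huge:
print(wilson_interval(5, 10))   # ~ (0.24, 0.76)
# 25 of 50 is the same 50%, with a much tighter interval:
print(wilson_interval(25, 50))  # ~ (0.37, 0.63)
```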
By being aware of these potential pitfalls, you can ensure your PMF survey results are accurate and actionable. The key is to follow best practices in sampling (right users, enough users) and to interpret results in context. Now, let’s get into the step-by-step guide for running a PMF survey properly.
Step-by-Step guide to running the PMF Survey
Ready to run your own Product-Market Fit survey? Follow these steps to ensure you execute it effectively and glean the insights you need. We’ll cover everything from choosing whom to survey to sending questions, analyzing results, and acting on them.
1. Choose the right user segment
The first step is deciding who to survey. Targeting the right users is crucial for meaningful results. As discussed, you want to survey users who have had enough exposure to your product to form an opinion, but not only the ultra-loyal ones. Sean Ellis provides a clear guideline on this. He recommends surveying users who meet all of these criteria:
- Have experienced the core value of your product. They’ve used the main feature or benefit that your product offers.
- Have used your product at least twice. This ensures they’ve come back and aren’t basing answers on a one-time glance.
- Have used your product in the last two weeks. They are recently active, so the experience is fresh and relevant.
These criteria define a qualified user segment for the PMF survey. In practice, this might mean: users who signed up over 2 weeks ago (so they had time to use it) and have logged in at least twice in the past 14 days. If your product is an app, it could be users who have performed a key action (posted a task, sent a message, completed a workout, etc.) multiple times.
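As a sketch of how you might apply these filters programmatically, assuming you can export per-user activity data (the field names here are hypothetical):

```python
import random
from datetime import datetime, timedelta

# Hypothetical activity export: one record per user. "core_uses" counts
# uses of the core feature, covering both the "experienced the core value"
# and "used at least twice" criteria.
users = [
    {"email": "a@example.com", "core_uses": 5, "last_active": datetime(2024, 6, 10)},
    {"email": "b@example.com", "core_uses": 1, "last_active": datetime(2024, 6, 12)},
    {"email": "c@example.com", "core_uses": 9, "last_active": datetime(2024, 3, 1)},
]

def qualifies(user: dict, now: datetime) -> bool:
    """Ellis's criteria: core value experienced at least twice,
    with activity within the last two weeks."""
    return (
        user["core_uses"] >= 2
        and now - user["last_active"] <= timedelta(days=14)
    )

now = datetime(2024, 6, 14)
qualified = [u for u in users if qualifies(u, now)]

# With a large qualified pool, survey a random sample rather than
# cherry-picking power users (see the sampling advice below).
sample = random.sample(qualified, k=min(500, len(qualified)))
```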
By filtering for these active users, you avoid including folks who churned long ago or never really tried the product (whose “not disappointed” responses would skew your results unfairly). Buffer’s team learned this when they first surveyed only highly engaged beta users, then refined the segment. Ellis himself advised them to focus on users who had used the feature at least twice in the last two weeks. This produced more balanced results.
If your user base is small (e.g. an early-stage startup with <100 users), you may end up surveying most or all users who fit the above. That’s okay – you just might not be very strict on “twice in 2 weeks” if you don’t have many active yet. Aim for the ones who have truly used the product and seen what it does. If you only have 20 such users total, consider running the survey but know that you’ll need a high response rate or multiple runs as you get more users.
On the flip side, if you have thousands of users, you don’t need to survey every single one. You can sample a subset that meets the criteria. Ensure it’s a random sample (or evenly distributed) among those qualified users to avoid bias. For example, pick 500 users who fit the engagement criteria and email them. The more you survey, the more responses you can potentially get – but even surveying a few hundred can be enough to net ~50+ responses.
Key tip: Don’t survey users who just signed up or ones who haven’t logged in recently. Their input won’t accurately reflect your product’s value. Also avoid exclusively surveying your top 1% heavy users unless that’s your only user group – include the broader active base. By choosing the right segment, you set yourself up for a credible PMF measurement.
2. Pick a survey tool & channel
Next, decide how you will deliver the survey to users and collect their responses. You have a lot of flexibility here – the PMF survey is short and can be administered via any number of tools. Some popular options:
- Online survey tools. Services like Typeform, Google Forms, SurveyMonkey, or JotForm work well. Superhuman’s team, for example, emailed users a link to a Typeform survey for their PMF questions. These tools allow you to set up the core question and follow-ups, then get a shareable link.
- In-app surveys: If you have a web or mobile app, you might use an in-app survey or popup. Tools like Qualaroo (which Sean Ellis also helped create), Intercom, or custom modals can pose the question right within your product. This can catch users while they’re engaged, potentially boosting response rates. However, be careful to target it to the right segment (e.g. only show to users who have logged in X times).
- Email + link: A straightforward approach is to send an email to the selected users with a brief invitation and a link to the survey. The email might say something like, “We’d love your feedback: Please take this 2-minute product fit survey.” Because the PMF survey is just a few questions, emphasize how quick it is. Rahul Vohra’s team sent a simple email with a Typeform link to their users. You can personalize the email and explain that their input will directly shape the product – which can encourage participation.
- Embedded email survey: Alternatively, you could embed the main question directly in an email (some email platforms allow a one-question poll within the email). But since PMF surveys usually have follow-up questions, a link to a form is usually easier.
No matter the tool, ensure that the user experience is smooth: it should take only a minute or two to complete all questions. Mobile-friendliness is a plus (many users might open on mobile). One reason Superhuman chose Typeform was its user-friendly interface – consider that a good example. Google Forms is a free option that also works fine for something this simple.
For channels: if your users are accustomed to communications via email, that’s generally the best way to reach them. If your product mostly engages users via mobile app notifications, you might send a push or in-app message with the link. The goal is to get their attention in a way they trust and will respond to.
Also, consider if you need to offer any incentive to respond. Usually, for a PMF survey, you do not need incentives like gift cards – users who care about your product will want to help improve it. You can mention that you value their feedback and it will help you serve them better. That intrinsic motivation is often enough, especially if you keep it brief.
Finally, be mindful of timing. If sending email, avoid weekends or times when your audience might ignore it. If using in-app, trigger the survey after the user has completed a session or important action (so they’re not interrupted mid-task). Getting the tooling and channel right will set you up to maximize responses.
3. Send the core question + follow-ups
Now it’s time to craft and send out the survey itself. The core question you must include is the product-market fit question:
**“How would you feel if you could no longer use [Your Product]?”** (Very disappointed / Somewhat disappointed / Not disappointed / N/A – no longer use)
This question is the linchpin of the survey – make sure it’s worded exactly as above for consistency. In your survey tool, set it up as a multiple-choice question with the three (or four, if including N/A) options.
Beyond the core question, it’s highly recommended to ask a few follow-up questions to gather qualitative insights. Sean Ellis’s original survey template and many experts suggest adding 3 key open-ended questions:
- What type of people do you think would most benefit from [Product]? – This asks users to describe who they believe the product is really for. It helps you understand your target audience/market from the user’s perspective. For example, a user might answer, “Freelance designers and content creators who need to manage projects on the go.” Collecting these answers can paint a picture of your ideal customer profile (or multiple profiles). It might reveal use cases or niches you hadn’t officially targeted but that users see as a fit.
- What is the main benefit you receive from [Product]? – This prompts users to articulate the core value they get. You’re essentially asking, “Why do you use our product? What does it do for you?” The answers highlight the primary benefits driving usage – e.g. “It saves me time by automating my scheduling” or “It helps me stay connected with my team.” This shows you which benefit is most important to your satisfied users. You’ll want to double down on these strengths in your product development and marketing.
- How can we improve [Product] for you? – This invites users to give suggestions or point out pain points. It’s an open call for improvements. Users might mention specific missing features, frustrating aspects, or ideas that would make the product more useful to them. For example, “I wish it had an Android app” or “The analytics dashboard is confusing, simplifying it would help.” These answers are gold for uncovering what keeps some users from being fully satisfied. You’ll likely see patterns in the feedback that point to clear areas to address.
These three follow-ups (sometimes dubbed “the Superhuman follow-up questions” because Superhuman used exactly them) are tried-and-true. They align with the advice of focusing on target audience, key benefit, and improvements. By asking them, you’ll not only measure PMF with the core question, but also gather actionable context to interpret your score and boost it.
In some cases, teams add a couple more questions, such as:
- “Have you recommended [Product] to anyone? If not, why not?” – This can gauge loyalty/word-of-mouth. (If many say yes, it correlates with PMF; if not, reasons might overlap with improvement suggestions.)
- “How did you discover [Product]?” – Useful for marketing info, though not directly PMF, it can tell you channels that bring interested users.
These are optional and can make the survey a bit longer. Use your judgment – the above three follow-ups usually suffice for product insights. Remember, keep the survey short and focused. Typically 4–6 questions in total is plenty. Users should be able to complete it in 2-3 minutes.
When you send out the survey (via the chosen tool/channel in step 2), include a friendly note. For example:
“Hi [Name], as someone who’s been using [Product], we’d love your feedback to help us improve. We have a very short survey (one multiple-choice and a few optional short-answer questions). It will only take about 2 minutes and will directly influence our product direction. [Survey Link] Thank you so much – we really appreciate it!”
Assure them it’s quick, and emphasize that their feedback matters. This can boost response rates and the quality of answers (people will put thought into open-ends if they know you’ll act on them). It can also help to mention that their responses are anonymous (if you’re keeping it that way) or that you won’t directly tie it back to them in any negative way – you want them to be candid.
Once the survey is sent, give users a reasonable window (several days up to a week) to respond, and consider sending a polite reminder after a few days to those who haven’t filled it out (most survey tools can handle this or you can manually email a reminder). With the survey out in the wild, you can move on to the next step: ensuring you get enough responses.
4. Collect at least 40–50 qualified responses
To confidently interpret the PMF survey, aim to collect a minimum of ~40-50 responses from qualified users. This number is a rule of thumb that several experts have pointed out as sufficient for a directional read. Hiten Shah, who ran the 731-user Slack PMF survey, has advised that 40 responses is usually enough to see a clear signal, and more isn’t necessary for the core percentage to be meaningful. Buffer’s team likewise found that even with around 50 responses they got significant insights.
Of course, if you can get more responses, that’s great – it will only increase the confidence in your percentage and give you more qualitative data to analyze. But don’t stress if you’re not in the hundreds; quality matters more than quantity. 40 thoughtful responses from the right users are far more valuable than 200 responses that include a bunch of unengaged users.
Here are some tips to reach that response count:
- Survey enough users to begin with. Expect that only a fraction of those surveyed will respond. Response rates can vary – for a highly engaged user base, you might get 30-50% responding; for a typical scenario, maybe 10-20%. So, if you need ~50 responses and you estimate a 20% response rate, survey ~250 users (a sketch of this arithmetic follows this list). If you have fewer available users than that, you’ll need a higher response rate (which means personal outreach might help). In Slack’s case, because the survey was shared broadly, they got an unusually high number of responses (731), but you don’t need that many.
- Send reminders. A gentle reminder to those who haven’t responded can bump up your numbers. Sometimes users intend to fill it out but forget. A short email or message like “Just a reminder – we’d really love your input in our 2-min survey if you haven’t had a chance yet. [Link] Thanks!” can bring in additional responses.
- Leverage multiple channels if needed. If an email isn’t getting attention, you could follow up via an in-app notification (“Please take our quick survey”) or vice versa. Be careful not to badger users, but a nudge in a different channel can catch someone who missed the email.
- Monitor the incoming responses. Make sure you’re getting replies from the intended segment. If you find that some unqualified users responded (e.g. someone who chose “N/A I don’t use it anymore” or clearly has 0 sessions in your data), you might decide to exclude them from the tally or replace them by surveying additional users. The Buffer example included an “N/A” option and got a couple of those; typically, you exclude those from your calculations (since the person isn’t a current user, their disappointment level isn’t relevant). If you see many N/As, it might mean your list included too many inactive folks.
- Aim for diversity in responses. Check if one type of user is over-represented. If all your responses come from, say, one customer account or one demographic, and you intended to have a mix, you might need to reach out to others to balance it. This is more of an issue in B2B where one company might give you many responses – ensure you get input from multiple customers.
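The invite-count arithmetic from the first tip, as a minimal sketch:

```python
import math

def invites_needed(target_responses: int, response_rate: float) -> int:
    """How many users to invite to expect a target number of responses."""
    return math.ceil(target_responses / response_rate)

print(invites_needed(50, 0.20))  # 250 invites at a 20% response rate
print(invites_needed(50, 0.10))  # 500 invites at a 10% response rate
```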
If despite your best efforts you only get a very low number of responses (<30), interpret the results with caution. You might still glean some insight (especially from the open-ended answers), but you should treat the PMF percentage as preliminary. In such cases, consider running another survey later when you have more users or trying a different approach to boost responses (maybe personally emailing users or even interviewing some).
On the other hand, if you get an overwhelming response, that’s a great sign of engaged users. Just make sure to close the survey once you have enough (or after a fixed time window) so you can move on to analysis. There’s no harm in leaving it open longer, but at some point you want to crunch the numbers and take action.
5. Calculate and interpret the score
Once you’ve got the responses in, it’s time to calculate your PMF score and see where you stand. Fortunately, calculation is straightforward:
PMF score = (Number of “Very Disappointed” responses / Total number of responses (excluding N/A) ) * 100%.
Most survey tools will show you the breakdown of answers. For example, you might see something like: 30 users chose “Very disappointed,” 15 chose “Somewhat disappointed,” 5 chose “Not disappointed,” and 0 chose N/A (out of 50 responses total). In that case, PMF score = 30/50 = 60%. If there were N/As, say 5 out of 55 respondents chose “N/A – don’t use,” you would exclude those 5 from the denominator: if 25 very disappointed out of 50 valid responses, that’s 50%.
Many tools let you filter or cross-tabulate, but you can also just do this math manually. The result is your product-market fit score (in %).
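If you prefer to compute it from the raw export, here is a minimal sketch that excludes N/A answers (it assumes the answer labels match the survey options exactly; the tallies mirror the example above):

```python
from collections import Counter

responses = (
    ["Very disappointed"] * 30
    + ["Somewhat disappointed"] * 15
    + ["Not disappointed"] * 5
    + ["N/A - I no longer use the product"] * 5
)

counts = Counter(responses)
# Exclude former users from the denominator.
valid = sum(n for answer, n in counts.items() if not answer.startswith("N/A"))
score = counts["Very disappointed"] / valid * 100
print(f"PMF score: {score:.0f}% of {valid} valid responses")  # 60% of 50
```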
Now, interpret it using the guideline we discussed:
- If your PMF score is around 40% or higher, congratulations – that’s a strong indicator of product-market fit. You’ve hit the benchmark Ellis identified. This means a significant share of users really care about your product. The next step is likely to double down on growth (scaling user acquisition, marketing, sales) now that the product has proven value. However, don’t get complacent – there’s still 60% or so who aren’t very disappointed. You can work to convert more of those into rabid fans. But fundamentally, scoring ≥40% validates that you’re on the right track.
- If your score is well below 40%, say in the 10-30% range, then you likely do not have product-market fit yet. Don’t be discouraged – many startups fall into this category initially. Superhuman’s first survey came in at only 22% “very disappointed”, clearly indicating PMF was not there yet. The key is to use this as a baseline and a motivator to improve. A low score tells you that you shouldn’t focus on aggressive growth or scaling up right now; instead, focus on why the majority of users aren’t very disappointed. The follow-up answers will be crucial here (we’ll get to that in step 6). In short, a low PMF score means back to iterating on the product – perhaps refining the value proposition, fixing issues, adding key features, or maybe targeting a different market segment. Do not ignore a sub-40% result or assume users will just eventually get it – take it as evidence that changes are needed.
- If your score is just under 40% (e.g. mid-30s), you’re on the cusp. It suggests you’re close to PMF but not quite there. Likely some users love it, but enough are on the fence or indifferent that you haven’t crossed the threshold. This is actually a common scenario for decent products that haven’t nailed their focus. Treat it similar to below 40, but perhaps minor tweaks could push you over. For instance, maybe one annoying flaw is holding people back from loving it – fix that and you might hit 40% in the next survey. Or maybe you need to hone in on a subset of users who appreciate it more. It’s a signal to keep refining.
- If your score is extremely high (say >60%), that indicates an exceptionally strong product-market fit, at least among the surveyed group. It might happen if you surveyed a very enthusiastic early user base or a tight niche. While that’s great, sanity-check that it’s not a biased sample (see false positive cautions). Assuming it’s legit, you have a passionate user base – make sure as you grow to new users, they feel the same way. A very high score can actually be an indicator that you’ve zeroed in on a core niche (which is great) but you might find that broader audiences might have lower scores. So if your strategy is to expand beyond the initial niche, keep measuring as you reach new cohorts. But if your business can thrive focusing on that niche, you’re in an excellent position (you might even have some pricing power or high NPS to go with it).
After calculating the overall score, it’s highly valuable to segment the results by any criteria that might yield insight:
- Check the “very disappointed” percentage among different groups: e.g. new users vs old, paying vs free, different use cases, etc. You might find, for example, paying customers have 50% very disappointed whereas free users are 25%. That tells you paying users see more value – maybe because of premium features or simply they’re more invested. Or you might find one customer persona is at 45% vs another at 20%. These nuances help inform strategy (maybe focus on the high-PMF persona). A sketch of this breakdown follows this list.
- Also look at the other answer tiers: What percent said “somewhat disappointed”? Often, the “somewhat” group is on the fence – they see some value but not enough to be very upset without the product. This group is your low-hanging fruit for improvement. If you convert many of the “somewhat disappointed” users into “very disappointed” (by addressing their needs), your PMF score will rise. For instance, if you had 50% somewhat and 20% very, you have a large chunk in the middle that could potentially be pushed upward. Compare the follow-up answers of the somewhat group vs. the very group to see what’s lacking.
- If you included an N/A option and got a number of those, note how many. If it’s a lot, it might indicate an issue with user activation (some users haven’t adopted the product yet). You might not count them in the score, but it’s still information – maybe you need to improve onboarding so fewer users end up in “I’m not using it anymore” territory.
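A minimal sketch of such a segment breakdown, assuming each response is paired with a segment attribute from your user data (the segment labels are illustrative):

```python
from collections import defaultdict

responses = [
    ("paying", "Very disappointed"),
    ("paying", "Very disappointed"),
    ("paying", "Somewhat disappointed"),
    ("free", "Somewhat disappointed"),
    ("free", "Not disappointed"),
    ("free", "N/A - I no longer use the product"),
]

by_segment = defaultdict(list)
for segment, answer in responses:
    if not answer.startswith("N/A"):  # exclude former users
        by_segment[segment].append(answer)

for segment, answers in sorted(by_segment.items()):
    very = answers.count("Very disappointed")
    print(f"{segment}: {very / len(answers):.0%} very disappointed (n={len(answers)})")
# free: 0% very disappointed (n=2)
# paying: 67% very disappointed (n=3)
```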
Finally, put your PMF score in context with other metrics and qualitative observations. For example, cross-check it with your retention rate or churn. Do they align (e.g. high PMF score and high retention)? If not, dig in – maybe a subset loves the product (gave you the high score) but many others churn quickly (and didn’t respond or were excluded). In such cases, your PMF score might be reflective of a core that loves you, and you have an onboarding problem for others. This is the kind of interpretation that will lead to the next step: taking action.
6. Translate insights into product actions
Calculating your PMF score is just the beginning. The real value of the survey comes from analyzing the insights (especially the open-ended feedback) and then taking concrete actions to improve your product and its market fit. This step is about turning data into a game plan.
Here’s how to make the most of your PMF survey results:
Segment your audience and identify your “love group.” Look at the responses to the question “Who would most benefit from the product?” and also consider any user attributes you have. Try to draw a profile of those who answered “very disappointed” – your highly satisfied core users. Rahul Vohra calls these your “high-expectation customers” – the users who really get the value. For Superhuman, analysis showed that their very disappointed users tended to be founders, executives, and managers in certain industries. This guided them to focus on that segment. Do you see a common thread in your fans? It could be a user persona, a particular use case, or a certain size of company, etc. Write down the characteristics of your “must-have” user. This is your target persona – the people for whom you definitely solve a burning need. Going forward, you may want to tailor your product even more to these folks and make sure your marketing targets them. It might feel like narrowing your market, but succeeding with a core market is better than being mediocre for a broad one.
Highlight the top benefits to preserve. Compile the answers to “main benefit you receive from the product.” This often reveals what you should double down on. It’s common to find one or two benefits mentioned repeatedly. For example, users might overwhelmingly say something like “It helps me collaborate easily with my team” or “It saves me an hour every day”. Those core benefits are your product’s value pillars. Make sure in your product strategy you continue to enhance and communicate those benefits. You might allocate, say, 50% of your development efforts to further strengthening those top benefits. If speed is the main benefit, keep making the product faster and more efficient. If it’s ease of use, continue simplifying workflows. These are the things you’re doing right – protect and amplify them.
Identify and address key weaknesses or requests. Next, digest the feedback from “How can we improve the product?”. You’ll likely see patterns in the suggestions or complaints. List out the most frequently mentioned improvements. They might range from bug fixes (“please fix the crashes on upload”) to feature requests (“I really need a way to export reports”) to experience issues (“the mobile app is too limited”). Now, you can’t do everything at once – so prioritize. A good approach used by many teams is to categorize the suggestions by impact and effort. For example, mark each improvement idea as high vs low impact (on user satisfaction/PMF) and high vs low effort (to implement). Focus on the high-impact, low-effort changes first – the “quick wins.” These might be small fixes or tweaks that a lot of users mentioned. They can give a boost to user happiness relatively quickly. Next, consider high-impact, high-effort items – those might become longer-term projects or major features on your roadmap. Low-impact items can probably be set aside for now.
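As a toy sketch of that impact/effort triage (the themes and ratings are made up for illustration):

```python
# Improvement themes scored by the team on a simple high/low scale.
ideas = [
    {"theme": "fix crashes on upload", "impact": "high", "effort": "low"},
    {"theme": "report export", "impact": "high", "effort": "high"},
    {"theme": "more color themes", "impact": "low", "effort": "low"},
]

quick_wins = [i["theme"] for i in ideas if i["impact"] == "high" and i["effort"] == "low"]
big_bets = [i["theme"] for i in ideas if i["impact"] == "high" and i["effort"] == "high"]

print("Ship first (quick wins):", quick_wins)
print("Roadmap candidates (big bets):", big_bets)
```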
By addressing common improvement themes, you aim to convert more of the “somewhat disappointed” users into “very disappointed” fans. For example, if many “somewhat” users said “I like the tool but it lacks feature X,” adding feature X (if it aligns with your vision) could push them over the edge to loving the product. Allocate a significant portion of your roadmap to these top user requests/issues – perhaps the remaining 50% of effort, complementing the 50% on strengthening core benefits.
Develop an action plan and roadmap. Now that you know what to amplify and what to fix, translate that into a concrete plan. This might include:
- Changes in product strategy or positioning - e.g. if your survey revealed a different target audience than you assumed, you might reposition marketing toward that group, or even tweak the product to suit them better.
- Product roadmap updates - adding the high-impact features to your development backlog, scheduling bug fixes or UX improvements, etc. Superhuman, for example, created a product roadmap heavily influenced by their survey responses – then executed it to move their PMF score up.
- Onboarding improvements. If the survey indicated some users didn’t understand the product (some “not disappointed” answers might be due to poor onboarding), work on your user education, tutorials, or trial experience.
- Customer communication. Sometimes a need can be addressed by educating users on an existing feature. If a few respondents said “I wish it could do X” and your product does X but they didn’t realize, that’s a hint to improve UI or communication around that feature.
- Possibly pricing or packaging changes. On occasion, feedback might indicate that certain valuable features are locked in a higher tier that many need – and moving things around could increase perceived value.
Communicate with your team about the findings. Share the PMF score and the top insights from the survey. It can be incredibly motivating for a team to hear quotes from users about what they love and what frustrates them. It grounds everyone in the mission to improve those metrics. At Superhuman, they made the PMF score their most important number and tracked it openly. You could do similar, creating visibility and accountability to raise it.
Finally, execute on the improvements. As you ship changes, keep an eye on other metrics (engagement, retention) to see if they improve, and prepare to run the PMF survey again after users have experienced the updates (more on the timing in the next section). For Superhuman, this cycle of surveying, segmenting, improving, and repeating led them from 22% to 58% PMF score within several months – a huge win that paved the way for scaling their user base. In summary, treat the PMF survey as a continuous feedback loop:
Survey → Analyze → Act → Repeat
The insights you gain should directly inform product decisions. By systematically converting feedback into product enhancements and focusing on the users who matter most, you’ll steadily increase your product’s market fit. This is how you turn a so-so product into a loved product. And if you already hit PMF, these actions ensure you maintain and deepen it as you grow. With the steps covered, let’s talk about when to run this survey (timing and frequency) and then look at some real-world PMF benchmarks and examples for inspiration.
Popular tools
The tools below will help you with the Product-Market Fit Survey play.
- PMF Survey – A free tool by Sean Ellis to measure Product-Market fit as defined by the man himself.
Product-Market Fit Survey examples
Survey on Slack
An open research project with the mission of discovering why Slack is so addictive found not only that Slack had Product-Market fit, but also why it worked and what could be improved.
Buffer
To test product-market fit, Buffer surveyed their most engaged users with Sean Ellis’s PMF survey. They focused on understanding user reliance by asking how disappointed they would be if unable to use the product. With responses from about 40-50 users, Buffer gained valuable insights into the significance and impact of their social media tools.
Source: A Simple Guide to Measuring the Product-Market Fit of Your Product or Feature
Superhuman
Superhuman’s PMF survey asked users about their potential disappointment if they couldn’t use the service anymore. This strategy pinpointed necessary improvements and customer segments, guiding product refinement.
Source: Building a Superhuman growth funnel to find product-market fit
This experiment is part of the Validation Patterns printed card deck
A collection of 60 product experiments that will validate your idea in a matter of days, not months. They are regularly used by product builders at companies like Google, Facebook, Dropbox, and Amazon.
Related plays
- How to find Product/Market fit by Ryan Law
- False Positives and Product Market Fit by Tristan Kromer
- Product/Market Fit Survey by Tristan Kromer
- Part 4: The only thing that matters by Marc Andreessen
- A Simple Guide to Measuring the Product-Market Fit of Your Product or Feature by Leo Widrich
- The Real Startup Book – Product/Market Fit Survey by Tristan Kromer et al.
- Product Market Fit Survey: Best Practices and Examples by Ramnish
- Sean Ellis Score by Anders Toxboe
- A Simple Guide to Measuring Product-Market Fit by Leo Widrich
- How Superhuman Built an Engine to Find Product-Market Fit by Rahul Vohra
- Sean Ellis Test: A Method to Figure Out Product-Market Fit by Pisano Academy
- Product-Market Fit Survey: Best Practices and Examples by Retently Team