I see proposals against NPS like this from time to time, but they always tend to ignore the underlying science that went into making NPS in the first place, primarily that there was a very strong correlation between the calculated NPS score and business performance.<p>Also, I find that many of these articles have a somewhat caricaturish view of how NPS is used in the real world. It's not "oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"; instead, NPS can be like a canary in a coal mine - an unexpected drop in NPS is something to investigate to get more information on what went wrong, usually from the more detailed/free-form questions in the NPS questionnaire.<p>As a software engineer I <i>love</i> reading our NPS reports, because they always have great tidbits of info that are often difficult to deduce just by looking at standard analytics.
The point of NPS vs. free cash flow or operational metrics is that it is, in theory, a leading indicator instead of a trailing indicator.<p>It's one thing (but a bit too late) to know that your business is screwed because your customers left; it's another to know that your business is about to be screwed because your customers are on the verge of leaving the moment an opportunity appears.<p>The criticisms of NPS pointed out here (that it's a noisy, high-variance measurement) are fair, but the conclusion is not.
Why does everyone seem to think that management does not have genuinely good intentions when measuring NPS as a KPI? The companies I've worked with have all genuinely intended to make the best possible experience for the user, because that is what ultimately wins in the end. If someone scores 1-6, you ask them to provide additional feedback so you can learn from it - problem solved, and everyone has all the info they need to go about their work.<p>There seems to be this idea that KPIs are evil and interpreted in a vacuum. It's rarely like that.
What’s more interesting to me than the NPS scores themselves is the free-form comment section below the 0-10 rating.<p>Never mind that anything below X is considered neutral or a detractor. You asked a user/customer/etc. to rate you on a scale of 0-10 and then tell you why.<p>Want to know what you’re truly screwing up on? Take the feedback on your 1-6 scores seriously and you can find the low-hanging fruit to take a product from mediocre to a great user experience.<p>That’s the true NPS value. It’s all about how you handle the feedback.
Background: I have designed and implemented NPS feedback systems at several companies.<p>I've noticed that people are concerned about NPS when it was not implemented correctly, e.g. as a complementary question in a long questionnaire or as a follow-up question at the end of the journey.<p>The NPS collection mechanic has many advantages:<p>1. The mental barrier to entry for the user is very low - just one question.<p>2. It is very useful for tracking user feedback over their lifetime (ask every 6 months) and actively resolving issues when NPS surfaces them.<p>3. Another great thing is not the numerical feedback but the written answer, which yields a lot of really useful ideas for improvement.<p>4. And finally, you don't need any specific context to show the question, so it can be embedded in almost any stage of the user experience.
This post misses that the primary benefit of an NPS program is not the metric—that's a nice byproduct—it's the dialogue with customers.<p>As a single metric, Net Promoter Score is OK. It's extremely easy for customers to answer, so there's volume in the number of answers. You can learn more by segmenting different ways—Users vs Admins, Enterprise vs SMB, verticals, etc.<p>The company I currently work for puts a lot into NPS, not necessarily the metric, but the process. Everyone who leaves feedback gets a follow-up email and interview from someone in Product or User Research. That feedback is then organized in Productboard, where it is grouped with similar requests/issues. Overall, those interviews heavily influence which features we refine and build.<p>For some background, I previously designed an NPS tool and have written about NPS pretty extensively.<p><a href="https://solomon.io/understanding-net-promoter-score/" rel="nofollow">https://solomon.io/understanding-net-promoter-score/</a>
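The segmenting described above is easy to sketch. Below is a minimal illustration, assuming responses arrive as (segment, score) pairs; the segment names and sample scores are hypothetical, and the bucketing follows the standard NPS convention (9-10 promoters, 0-6 detractors).

```python
# Sketch of per-segment NPS from raw survey responses.
from collections import defaultdict

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def nps_by_segment(responses):
    """Group (segment, score) pairs and compute NPS per segment."""
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    return {seg: nps(scores) for seg, scores in buckets.items()}

# Hypothetical responses, tagged by customer segment.
responses = [("Enterprise", 9), ("Enterprise", 10), ("Enterprise", 6),
             ("SMB", 7), ("SMB", 3), ("SMB", 9)]
print(nps_by_segment(responses))  # Enterprise ~33.3, SMB 0.0
```

The same grouping key works for any dimension mentioned above (role, plan size, vertical), which is where the metric starts to tell you something actionable.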
I suspect the concept of NPS is, if not the source, then a major driver of the whole "anything less than 10/10, 5 stars, <i>etc.</i> is basically a fail" meme. In my experience, there are significant toxic effects from this on customer-facing employees.
Companies with a promoter-level score during the pandemic grew their business. There is a direct correlation between a promoter-level NPS and revenue growth. The percentage of companies that now provide a promoter-level service is over 20%, up from 13% before the pandemic. The percentage rated at a detractor level also grew, to nearly 50%. These orgs’ revenues either shrank or stagnated. This is an existential issue for organizations, not a nuisance KPI.<p>Companies that are leaders in NPS are leaders in employee engagement. This post sounded pretty frustrated with an employer/contract…<p>NPS is a measure of past experiences; it isn’t a GPS navigation system to improvement. It is also just one question, so it is the simplest thing to collect. However, even this gets screwed up all the time: interfaces such as a telephone keypad, where a 10 can be recorded as just its first digit, 1, or PIN pads, which are a nuisance to a customer in a hurry to pay and get skipped entirely when paying with NFC.<p>Understanding a negative NPS is like understanding a traffic ticket with no other data. Why did I get a traffic ticket? Well, what did or didn’t you do, and how fast were you doing it? Were you a jerk to the cop? Was the cop stressed by something else that day?<p>To be a leader in NPS is an organizational effort, not the result of divining an insight from a few survey answers. Ideally, customer journey maps have been developed and the problem areas are self-evident before an NPS score arrives.
My "problem" with NPS is when essential services, which I have next to no choice about using, ask me if I would recommend them to others.<p>No, IRS, I would never recommend your service to friends or family. That's not really indicative of anything.
It's not fair to compare how "important" different metrics are to the whole business. A business is composed of and built on top of many sectors. That's why we have a Finance team, a Sales team, a Marketing team, a Product and Development team..., and not just one "we're a company" team! Free cash flow is important for business operations. Churn is important for predicting sales. NPS, though not directly affecting the "money", might be a good indicator for the marketing team. There are plenty of marketing metrics more important than NPS, but for a big corporation's branding team, especially in the consumer goods industry, it might be useful to look at the "trend" of their NPS over a long period of time.
NPS is noisy simply by nature of being collected with surveys. As with any questionnaire or survey, if you design it well, you can reduce the noise.
This post is correct about two things: first, that NPS does not give you direction on how to act; second, that statistical realities make sampling approaches like NPS very noisy. I don’t think either discounts the concept of NPS itself. “Bad statistical work” is going to be bad regardless of what metric we’re talking about. And a metric being too abstract to act on doesn’t make it useless - at least not any more so than something like free cash flow. Yes, connecting the dots to some outcome is difficult. But it’s a pulse on the business, and if done right, that has some inherent value.
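The sampling-noise point is easy to demonstrate for yourself. Below is a rough Monte Carlo sketch, with an entirely made-up "true" score distribution, showing how much a measured NPS swings when each survey wave only reaches 100 respondents.

```python
# Simulate NPS sampling noise: repeatedly sample n responses from a
# fixed population and look at the spread of the measured scores.
import random

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

random.seed(42)
# Hypothetical population: 40% promoters, 30% passives, 30% detractors,
# so the "true" NPS is 40 - 30 = 10.
population = [9] * 40 + [8] * 30 + [5] * 30
samples = [nps(random.choices(population, k=100)) for _ in range(1000)]

mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
print(f"mean {mean:.1f}, std dev {std:.1f}")  # std dev is roughly 8 points
```

With only 100 responses per wave, month-to-month swings of 10+ points are entirely consistent with nothing having changed, which is exactly why a single drop should trigger investigation rather than panic.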
You know what would be vastly better than NPS?<p>Ask people every so often how they're feeling about the product. Let them pick a thumbs up or a thumbs down. If they pick thumbs down, say "we're sorry to hear that. Please let us know what's bothering you", and display a little text box. Carefully read every comment you receive.<p>Asking unhappy users for feedback is a great idea. The 0 to 10 scale - and especially the bizarre calculation NPS does with it - is statistical malpractice that just isn't mathematically capable of doing what NPS supporters claim.
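To make the objection above concrete: the NPS bucketing throws away most of the information in the 0-10 scale, so very different response distributions can collapse to the same score. A small sketch (the response lists are invented for illustration):

```python
# The standard NPS bucketing discards the shape of the distribution.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    return 100.0 * (promoters - detractors) / len(scores)

lukewarm = [7, 8, 7, 8, 7, 8]       # everyone mildly satisfied (all passives)
polarized = [10, 10, 10, 0, 0, 0]   # half love it, half hate it

print(nps(lukewarm), nps(polarized))  # both score 0.0
```

A uniformly "fine" product and a deeply polarizing one are indistinguishable on the headline number, which is why the free-form comments carry most of the signal.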
I was unfamiliar with NPS until about 12 hours ago (but not because of this post).<p>NPS has been a strong predictor of stock price (I learned this morning).<p><a href="https://www.marketwatch.com/story/this-surprising-investing-strategy-crushes-the-stock-market-without-examining-a-single-financial-metric-11638985326" rel="nofollow">https://www.marketwatch.com/story/this-surprising-investing-...</a><p>Of course, it’s just opinion. But maybe there’s some use to NPS after all.
I didn't see discussed here the main reasons you might want NPS vs. outcome-specific (and company-specific) metrics:<p>1) It's harder to game and/or bullshit. This is a massive issue when surveying customers and then using their responses to feed back into internal performance goals & rewards.<p>2) Because it has grown to be such a common customer-experience metric, it's possible to benchmark against others, which isn't really possible with question types that are very specific to your business.<p>I'll agree that NPS is overrated, mostly because the Net Promoter organization and survey companies have spent large amounts of time and money pushing it as The One True Metric for decades, but it still has its merits and its place.
> (The most irritating — and by far the most common — reason companies seem to measure NPS is that it’s standardized, but more importantly that everyone else does it. Which is fine for one-time-use benchmarking, but not a good basis for internal KPIs.)<p>Except the alternative isn't to stop tracking NPS; the alternative is to use unstandardized customer survey metrics. That's worse, because you have no true industry comparison, and the metrics can be gamed more easily since the company defines the formula!