Much has already been said about the limitations of NPS as a measure of customer loyalty. Where NPS most clearly falls down is in the way it buckets customers into three broad categories: promoters, passives and detractors.
Here are a few ways NPS is poorly used today:
- NPS scores fluctuate wildly, leaving customer insights teams scrambling to figure out why. Along the way they'll make assumptions and mistakes as they attempt to produce accurate insights.
- NPS surveys often reach only select segments of the customer base. Even when the survey goes to everyone, not every customer completes it. At the end of the day this isn't a national census - nobody is obliged to complete every NPS survey from every company they've ever purchased goods or services from.
- NPS category breaks are arbitrary and differ from one industry to the next. Thanks to Kim Witten PhD for bringing this to my attention! To quote Kim: "There is no evidence that these categories exist as such and category break points aren’t universal across industries or cultures; a 7 for an airline wouldn't deter me one bit, but that same 7 for a pizza restaurant would definitely give me pause."
- There are inconsistencies around when NPS surveys are conducted. Sending three in one month and nothing for the next seven is not as good as sending one per month consistently for 12 months. Consistency means you can safely analyze the impact of each CX initiative over that period.
- Some organizations lack the maturity to embrace negative feedback and only want to look at NPS promoters. This comes to the detriment of detractors and passives, who are ignored, discarded, thrown out. Their voice doesn't filter up to the people designing their next customer experience.
In theory, more promoters is a good thing because it signals your organization is doing all the right things. The growth trajectory is clear and nothing can stop you from getting to the promised land.
In reality, NPS scores are meaningless if customers don't care about word of mouth referrals. It's also pointless if the people you're surveying don't understand what their score means.
For example, a respondent scores your brand a 6. To them, their customer experience wasn't great but it also wasn't abysmal. In their eyes a 6 is a fair score. They're certainly not warning friends, family and co-workers to never buy from you. Yet under the NPS model, this customer is a detractor who is supposedly doing exactly that.
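The bucketing that produces this result can be sketched in a few lines of Python. This is a minimal illustration using the conventional NPS cutoffs (0-6 detractor, 7-8 passive, 9-10 promoter); the function names are mine, not part of any standard library:

```python
def nps_bucket(score: int) -> str:
    """Bucket a 0-10 rating using the conventional NPS cutoffs."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"  # everything from 0 to 6, including our 'fair' 6

def nps(scores: list[int]) -> float:
    """NPS = % promoters minus % detractors, on a -100 to 100 scale."""
    promoters = sum(nps_bucket(s) == "promoter" for s in scores)
    detractors = sum(nps_bucket(s) == "detractor" for s in scores)
    return round(100 * (promoters - detractors) / len(scores), 1)

print(nps_bucket(6))  # detractor - despite the customer calling 6 'fair'
```

Note how the bucket function throws away nine distinct levels of sentiment on the detractor side alone.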
Another way of looking at this problem is to consider changes in NPS as customers move between these arbitrary buckets.
Consider the following example: a new CX initiative is rolled out. Not long after, it's discovered that 60% of customers who previously scored a 2 now score your brand a 6 (see graphic below).
That's actually a huge improvement and is a clear signal you're on the right track! Looking at this strictly through the lens of NPS however, those customers still hate you. I guess someone's getting fired 🤷‍♂️.
Annoyingly, if that same segment went from a 2 to a 7, they're now passively on the fence. An even worse example is when a customer improves their score from a 6 to a 9. Not much of an improvement, is it? Using the previous extreme examples (looking at it strictly through the lens of NPS rather than common sense) we can see they've gone from warning their friends to stay away to shouting about you from the rooftops. What gives?
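To make the arithmetic concrete, here's a rough sketch (assuming the conventional NPS cutoffs and an illustrative cohort of 100 customers) of why a four-point jump from 2 to 6 is invisible to NPS while a single extra point across the cutoff is not:

```python
def nps(scores: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

before = [2] * 100                # an unhappy cohort before the initiative
after = [6] * 60 + [2] * 40       # 60% improved from a 2 to a 6

print(nps(before))                # -100
print(nps(after))                 # -100: a genuine 4-point gain, invisible
print(nps([7] * 60 + [2] * 40))   # -40: one more point crosses the cutoff
```

Both cohorts score -100 because every rating still falls in the 0-6 bucket; the metric only moves once a customer crosses an arbitrary break point.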
This clearly isn't the reality on the ground but executives love to see numbers light up on their dashboard. It makes them sleep safe at night.
Organizations focused on converting as many detractors as possible into promoters are missing out on genuine gold when it comes to customer insights. What was it specifically that drove an NPS 2 customer to score a 6 or 7? Which specific CX initiative was responsible? Was it a mix of initiatives? Did CX even influence the decision to score higher? What can we do better next time? Understanding the 'why' behind your high-level score unlocks so much potential. It might be something that can be studied, understood and replicated across the whole organization.
While this example is obviously just to illustrate a point, the point is nonetheless valid.
So the question everyone wants answered is - what's the alternative to NPS?
There are a few ways to approach solving this problem. Depending on your organization and personality type you'll likely fall into one of the following camps:
- "NPS is pointless, [CSAT/CES/PES/Customer Health Score] is king!"
- "Traditional NPS is flawed, simplify the scale from 10 down to 5 points."
- "Traditional NPS needs improving, let's crank it to 11!"
- "Let's score for NPS the traditional way but focus on the comment field."
That last viewpoint deserves the most praise. When you ignore the score entirely you only get half the picture - a qualitative one prone to human bias as you go looking for the problems you know exist (those known knowns if you will). Ignore the qualitative and you will have no idea why that customer scored you the way they did and what you can do about it to improve CX, drive revenue and win market share. When you focus on both the score and the comment you'll get a much stronger feel for what needs to be done next.
NPS score by itself is meaningless without context. Understand the 'why' behind whatever scores customers give you and you'll figure out your unique path to revenue growth. Nobody has to get fired 👍.
Since we're on the subject of understanding the 'why', there is a platform used by customer insights teams to understand qualitative, unstructured data at scale. Kapiche helps teams automatically identify themes and emerging issues - including the ones you didn't know existed. This is unfortunately where most text analysis tools fall down - by automating what humans already do. This article by Ryan Stuart explains why traditional text analytics tools have failed CX.
Traditional text analytics tools might have failed but there are alternatives. Here's a short demonstration video of how to understand the 'why' from your customer feedback.