NPS Weighting—Should You Do It?

I’ve lost count of the number of times I’ve been asked to weigh in on an internal debate a Kapiche customer is having about whether they should be weighting their NPS score (similar questions apply to other CX metrics as well).

Reflecting on those discussions, I’ve come to realise that there are good arguments on both sides of the fence. I’d argue that if you’re trying to choose between weighted and unweighted NPS numbers, the answer is always to track both.

Before we dive in, let's start with the basics.

What is NPS Weighting?

Weighting your NPS (or other CX metric) means applying weights to the score such that the score is representative of your customer demographics. This is best understood with an example.

Let’s assume your customer base is 10% men and 90% women. For one particular episodic NPS survey, 50% of responses come from men and 50% from women, with an overall NPS score of +20. This score is the raw or unweighted score. It’s certainly a valid NPS score, but it isn’t reflective of your customer base. If we were to weight this score, we’d recalculate it by adjusting men’s contribution to the score down to 10% and women’s up to 90%. After doing this, we could see a significant shift in the score.
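To make that concrete, here’s a minimal sketch in Python of what that recalculation might look like, using the numbers from the example above. The per-group response counts (and therefore the group-level scores) are made up, chosen so that the unweighted score comes out at +20.

```python
# Hypothetical response counts per group, chosen so the unweighted NPS is +20.
responses = {
    "men":   {"promoters": 60, "passives": 15, "detractors": 25},  # 100 responses
    "women": {"promoters": 40, "passives": 25, "detractors": 35},  # 100 responses
}

def nps(counts: dict) -> float:
    """Raw NPS: percentage of promoters minus percentage of detractors."""
    total = sum(counts.values())
    return 100 * (counts["promoters"] - counts["detractors"]) / total

group_nps = {group: nps(counts) for group, counts in responses.items()}  # men: +35, women: +5

# Each group's share of the real customer base (vs. its 50% share of responses).
population_share = {"men": 0.1, "women": 0.9}

# Unweighted: every response counts equally, so the 50/50 sample dominates.
all_counts = {
    key: sum(counts[key] for counts in responses.values())
    for key in ("promoters", "passives", "detractors")
}
unweighted = nps(all_counts)  # +20

# Weighted: each group's score contributes in proportion to the customer base.
weighted = sum(group_nps[g] * population_share[g] for g in group_nps)  # +8

print(f"Unweighted NPS: {unweighted:+.0f}, weighted NPS: {weighted:+.0f}")
```

In this made-up case the weighted score drops from +20 to +8, because the less satisfied group (women) makes up far more of the customer base than the 50/50 response split suggests.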

Usually, you will want to weight by a demographic dimension of your customer base. Gender is often a safe choice, but you could also look at alternatives like age or location. Just make sure you aren’t introducing some unconscious bias in the way you choose to weight.

More recently, there has been some chatter about weighting by revenue. This is a bad idea: it mixes a measure of sentiment with a measure of account value, and the resulting number isn’t very scientific. Instead, you’d be better off trying to assign a dollar value to your NPS points.

So, Weighted or Unweighted NPS?

Both! There are a bunch of reasons why, but one of the most important is so you can see when there’s a big difference between your weighted and unweighted NPS (or any other CX metric). If there is a large discrepancy, it could indicate that one (or a few) different issues are at play.

Here are a couple of possible reasons why that might be the case:

  • There might be a sampling problem, where your method of selecting people to enrol in the survey is flawed. For example, you might be getting responses that are 50% male and 50% female when your customer base is actually 10% male and 90% female.

  • There might be under-engaged survey respondents overall, or a problem within a particular segment. Without comparing the two scores, you won’t be able to identify these gaps in the data, and the results will skew towards the respondents who do engage.

Insights teams are judged by their ability to provide high-impact insights that decision makers can use to decide what to do next. Relying solely on unweighted data could jeopardise your ability to do that.

For example, let’s say you have a customer base that’s 90% women and 10% men, but your sample somehow ends up 50% women and 50% men. From looking at the unweighted NPS response data, a significant insight might be: ‘Something’s driving the NPS down by 5 points, and it seems to be localised to men. Let’s do something about that to improve our score by 5 points.’ And so, your company could put more effort towards addressing its male customers.

Meanwhile, there could be something women have noticed that’s bringing your score down by 3 points. For example, you could discover a valuable insight that women really dislike poor customer service from frontline staff, and that it is causing a 3-point decline in unweighted NPS.

On the surface of it, if you were to rely solely on unweighted NPS, you would prioritise actioning the insight from the male responses, because it is having an impact of 5 points on the NPS score. However, if we were to weight that impact, it would reduce to 1 point, because only 10% of our customers are male. The weighted impact of the insight we identified about our women respondents is 5.4 points, compared to the unweighted impact of 3. Armed with this information, we should definitely prioritise addressing the insight we found about women!
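For clarity, here’s the arithmetic behind those numbers as a short Python sketch. It assumes an insight’s impact scales linearly with the group’s share of the data, so the weighted impact is simply the unweighted impact multiplied by the group’s share of the customer base and divided by its share of survey responses.

```python
# Rescale each insight's impact from the sample mix to the customer base mix.
population_share = {"men": 0.1, "women": 0.9}   # share of the customer base
sample_share = {"men": 0.5, "women": 0.5}       # share of survey responses

# Impact of each insight on the unweighted NPS, in points (from the example above).
unweighted_impact = {"men": 5.0, "women": 3.0}

for group, impact in unweighted_impact.items():
    # Adjust for how over- or under-represented the group is in the sample.
    weighted = impact * population_share[group] / sample_share[group]
    print(f"{group}: {impact:.1f} pts unweighted -> {weighted:.1f} pts weighted")

# men:   5.0 pts unweighted -> 1.0 pts weighted
# women: 3.0 pts unweighted -> 5.4 pts weighted
```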

At the same time, we should also be asking ourselves another question: why is our survey sample so drastically skewed towards responses from men? They make up 10% of our customers, but 50% of responses to this survey. It could indicate that women are far less engaged with our product or service, that there is an issue with the survey distribution, or that something specific to this touchpoint in the user journey is causing the skew. Whatever the cause, it warrants more investigation.

When to use Weighted NPS

Personally, I always start by analysing the raw unweighted NPS data. I’ll then use the weighted NPS score as a sanity check on the impact or veracity of any insights I’ve found. That way, I’m ensuring the insights I present to the business are the most impactful. In addition to this, I’m constantly on the lookout for large discrepancies between the weighted and unweighted NPS scores. Investigating these differences can itself produce some valuable insights.
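If you want to automate that sanity check, a trivial helper like the one below can flag a large gap between the two scores. The 5-point threshold is an arbitrary assumption; set it to whatever is meaningful for your response volumes.

```python
def flag_discrepancy(unweighted: float, weighted: float, threshold: float = 5.0) -> bool:
    """Return True (and print a warning) if weighted and unweighted NPS diverge."""
    gap = abs(unweighted - weighted)
    if gap > threshold:
        print(f"Weighted ({weighted:+.1f}) and unweighted ({unweighted:+.1f}) NPS differ "
              f"by {gap:.1f} points; check sampling and segment engagement.")
        return True
    return False

flag_discrepancy(unweighted=20.0, weighted=8.0)  # flags the example from earlier
```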
