Webinar

The complete guide to analyzing NPS

The complete guide to analyzing NPS data. Learn everything you need to know, from the smart way to calculate an NPS score, to visualizing NPS, to segmentation and driver analysis.

In this webinar you will learn

Here’s a quick summary of the subjects we cover in this webinar:

  • The recoding trick for speeding up NPS analysis
  • How to stat test differences in NPS scores across sub-groups, touchpoints/journeys, and time periods
  • How to analyze open-ended reasons/verbatims for NPS scores
  • Effective ways of visualizing all the key NPS outputs
  • Driver analysis
  • Using NPS for market segmentation
  • Benchmarking NPS scores
  • Quantifying the ROI of NPS scores
  • The controversy: why some market researchers hate NPS, while companies love it

Transcript

Today’s goal is to give you the complete guide to analyzing Net Promoter Score or NPS.

I’ll go over the basics, so if you haven’t analyzed NPS before you’ll get a lot of good ideas. But, as the webinar title suggests, this is the complete guide, so I’ll also cover more advanced topics.

I’m presenting using Displayr, but everything can also be done in Q.

If you have any questions, please type them into the questions field in GoToWebinar, and I'll go through as many questions at the end as possible.

Book a Demo

If you don’t have Displayr and want to learn more about it, you can book a demo using the web address shown here.

Overview

I’ll start by looking at the two ways of calculating NPS and why some people love and others hate NPS. Then, I’ll get into the fundamentals of turning NPS data into strategic value.

The Question(s)

It all starts with a simple question: how likely are you to recommend something to colleagues or friends? People are given 11 options, ranging from 0, which is not at all likely, to 10, which is extremely likely.

Often there’ll also be follow-up questions, asking people why they said what they said.

Tabulating Percentages

We then tabulate what percentage of people chose each of the 11 options.

Summing Percentages

If people chose a 9 or 10, they’re considered “Promoters.” These people are most likely to promote the brand or product to colleagues and/or friends. The theory is the more Promoters, the more referrals, and the higher your sales. You can see 50.6% of people are Promoters in this study.

If people chose a score in the 0 to 6 range, they’re considered “Detractors.” 16.4% of people in this study are Detractors.

Calculating Net Promoter Score (NPS)

So while 50.6% of people will say nice things about the brand, this is offset by 16.4% who’ll say bad things.

The difference between these two percentages is the net level of promotion or the Net Promoter Score, which again is often referred to simply as NPS. The NPS for this study is 34.2.

Sometimes NPS is presented as a percentage. That is, 34.2%. Other times, it’s just a number, 34.2.
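To make the arithmetic concrete, here’s a minimal Python sketch that computes NPS this way from raw 0 to 10 ratings (the ratings are made up):

```python
# Minimal sketch of the NPS arithmetic on raw 0-10 ratings (made-up data).
ratings = [10, 9, 9, 7, 8, 10, 3, 6, 9, 10]

promoters = sum(1 for r in ratings if r >= 9)    # 9s and 10s
detractors = sum(1 for r in ratings if r <= 6)   # 0 through 6

nps = 100 * (promoters - detractors) / len(ratings)
print(f"% Promoters: {100 * promoters / len(ratings):.1f}")
print(f"% Detractors: {100 * detractors / len(ratings):.1f}")
print(f"NPS: {nps:.1f}")
```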

In practice, experienced market researchers don’t calculate NPS this way.

The Smart Way to Calculate NPS

Instead, experienced market researchers recode the likelihood question. I’ll explain.

Here are the percentages again.

We’ll start by computing the average rating from 0 to 10.

So, the average rating is 8.2.

When we compute the average of 8.2, it’s the average of these values you see here.

Weighted by the number of people that chose each option.

The trick is to change or recode these values, assigning a value of -100 to the Detractor categories, 0 to the Passive categories, and +100 to the Promoter categories.

Once I do that, you can see the average is 34.2.

So, we get the same answer with this different approach.
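If you want to replicate the recode outside of Displayr, here’s a minimal Python sketch (same made-up ratings as before):

```python
# The recoding trick: map each rating to -100 (Detractor), 0 (Passive),
# or +100 (Promoter); NPS is then simply the mean of the recoded values.
ratings = [10, 9, 9, 7, 8, 10, 3, 6, 9, 10]

def recode(rating):
    if rating >= 9:
        return 100   # Promoter
    if rating >= 7:
        return 0     # Passive
    return -100      # Detractor

recoded = [recode(r) for r in ratings]
nps = sum(recoded) / len(recoded)   # same answer as %Promoters - %Detractors
print(nps)
```

Because NPS is now just the mean of a numeric variable, all the standard machinery for means, such as standard errors and significance tests, applies directly.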

The Fastest Way to Calculate NPS

And, you don’t even need to recode manually like I just did. If you click on a “Likelihood to recommend” variable, you can then just click the automatic NPS recode.

And a new variable is created automatically. I’ll rename it.

As you can see, this is very fast and easy, and as I’ll soon show, this alternative way is also much smarter. But let’s take a slight diversion and discuss the history of NPS.

Loving & Hating NPS

Our CEO used to hate NPS, but now he loves it.

Why Many Market Researchers Hate NPS

A lot of market researchers hate NPS. It’s pretty common for them, when surveyed, to give a low rating and then explain in the follow-up that they’ll always give a low rating to anyone who uses NPS.

The reason they hate NPS is that many of the claims made about NPS aren’t really true. There’s a lot of snake oil selling going on.

  1. The first big aspect of this relates to the cutoffs. Why 9 or 10 for Promoters but not 8? The cutoffs are pretty arbitrary, and there’s a lot of evidence for this:
    1. For example, Europeans are just less likely to give 9s and 10s than Americans.
    2. Also, the link between these ratings and whether people actually give recommendations isn’t strong.
    3. Lastly, promoting is a behavior, but the question asks about intention, so there’s a bit of a disconnect.
  2. According to the Harvard Business Review paper that first promoted NPS, it’s the best predictor of growth, but there doesn’t seem to be any serious evidence that supports this claim.
  3. Another weakness is that the calculation ignores a lot of information. For example, someone who gives a 6 is obviously very different from someone who gives a 0, but NPS ignores this.
  4. And the last common complaint is that before NPS, market researchers used average customer satisfaction, and it mostly did the same thing. There’s no published data suggesting NPS is better, and many market researchers got pretty irritated when all their old data was tossed in favor of this bright and shiny new thing called NPS.

Why NPS Triumphed & Became the Standard

Despite a lot of grumbling, NPS has long since triumphed and become the standard:

  1. First, it’s simple. Anybody can use it and understand it. It’s certainly easier to explain than average customer satisfaction. When explaining customer satisfaction, you had all these weird conversations, such as why the top category is a 5.
  2. Second, it’s short. You can ask just the one question. Or, you can add a couple of follow-ups about likes and dislikes if you want. This makes it practical to use. For example, when you take our NPS survey, our Customer Success and Support teams read and analyze responses and then take corrective action if needed. This is only practical because NPS is short and simple. With a 20-minute survey, people would say all this great stuff, but those teams wouldn’t have acted on individual responses and the feedback loop would never close.
  3. It’s also versatile. You can use it to compare brands, products, employee happiness, touchpoints, etc.
  4. Fourth, it’s a concrete business outcome.
    1. A negative NPS implies more people are complaining about your business than promoting.
    2. An NPS increase should lead to an increase in sales, since people often buy based on recommendations.
    3. Customer satisfaction, by contrast, is neither concrete nor a business outcome.
    4. What exactly does “satisfied” mean? It’s an abstract concept.
    5. For example, what’s the commercial implication of a satisfaction score of 2.9?
    6. Or what’s the commercial implication of a drop in satisfaction from 4.4 to 4.2? Is that meaningful?
    7. NPS is simply easier to interpret.
  5. When used sensibly, NPS is predictive. If somebody completes your survey, gives a 0, and says your software stinks since it won’t export to PPT, you know you’re in trouble if you don’t fix that.
  6. Now, many market researchers can justifiably say that it’s likely possible to create something even better than NPS. But, you’d have difficulty getting it off the ground. Simply, NPS is now the standard. It’s locked in.

Comparing NPS Results

You’ll recall that earlier I mentioned NPS ignores a lot of information. This makes it quite volatile, and it can be hard to determine whether NPS differences are meaningful.

Fortunately, this is an easy problem to solve. You just need to use stat testing.

That is, Displayr’s automatic stat testing will give you the correct results for comparing NPS results if you use the automatic NPS recode that I showed earlier.

Comparing NPS Within Sub-Groups

All we need to do is drag the NPS variable onto the page to create a summary table, and then add other variables, like age, to the columns of the summary table to create a crosstab.

As you can see, NPS is lower for 18 to 24 year olds, and the red font and downward arrow tell us the difference is statistically significant.

So, we’d say 18 to 24 year olds have a significantly lower NPS based on this crosstab, but the other differences aren’t significant.
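Displayr runs this testing automatically. If you wanted to approximate the comparison yourself, one reasonable approach (not necessarily the exact test Displayr applies) is a two-sample t-test on the recoded -100/0/+100 values. Here’s a sketch with made-up data:

```python
# Sketch: testing whether NPS differs between two sub-groups via a
# two-sample t-test on recoded -100/0/+100 scores (made-up data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nps_18_24 = rng.choice([-100, 0, 100], size=200, p=[0.30, 0.40, 0.30])
nps_25_34 = rng.choice([-100, 0, 100], size=200, p=[0.15, 0.35, 0.50])

t_stat, p_value = stats.ttest_ind(nps_18_24, nps_25_34, equal_var=False)  # Welch's t-test
print(f"NPS 18-24: {nps_18_24.mean():.1f}")
print(f"NPS 25-34: {nps_25_34.mean():.1f}")
print(f"p-value: {p_value:.4f}")
```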

Comparing NPS over Time

Most of the great NPS crimes we see relate to looking at NPS over time.

By default, the stat testing here says the NPS in week 2 is significantly lower than the other three weeks. If we want to stat test adjacent weeks, we need to instead change our stat testing options.

So, the significant changes week over week are the ones from week 1 to week 2 and week 2 to week 3.

But this type of stat testing, which compares data period by period, isn’t great in general, as the choice of confidence level, such as whether it’s 90% or 95%, becomes very important. And the results change depending on whether you use weekly, monthly, or some other time period.

There’s a better approach.

What we want instead is something called a spline.

The black line shows our best estimate based on all the data. Note that it’s showing much less fluctuation than the raw data. The black line is always between 32 and 38.

And the pink band shows our uncertainty. It’s basically telling us that the uncertainty is much greater than the overall movement, so we don’t have enough evidence to conclude that NPS is changing over time.
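If you’d like to experiment with this kind of smoothing outside of Displayr, here’s a rough sketch using a smoothing spline (made-up data; Displayr’s chart also computes the uncertainty band, which this sketch omits):

```python
# Sketch: smoothing a noisy weekly NPS series with a spline (made-up data).
import numpy as np
from scipy.interpolate import UnivariateSpline
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
weeks = np.arange(1, 27)                           # 26 weeks of data
raw_nps = 35 + rng.normal(0, 5, size=weeks.size)   # noisy series around 35

spline = UnivariateSpline(weeks, raw_nps, s=weeks.size * 25)  # larger s = smoother
plt.plot(weeks, raw_nps, "o", label="Weekly NPS")
plt.plot(weeks, spline(weeks), label="Spline")
plt.legend()
plt.show()
```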

Benchmarking Against Competitors

The best way of benchmarking against competitors is to run a survey, like this one here that asks the NPS question about people’s main cellphone company.

Create crosstab with NPS x Main phone company

We quickly see here that AT&T has a problematic NPS that’s significantly lower.

But there are lots of situations where it’s not commercially practical to collect high quality NPS data for competitors. For example, we can easily ask our customers NPS, but getting a list of competitors’ customers and asking them NPS really isn’t feasible.

Let’s say our NPS for the current quarter is 64. How do we evaluate that without surveying competitors’ customers?

This is what makes NPS so good. In the old days, a client would say their customer satisfaction score is 3.8 and ask if that’s good. The answer would be “Kind of.”

Fortunately today, enough companies do NPS surveys that it’s easy to get a good idea of what to expect.

A Lot of Publicly Available Benchmarking Data

Here’s some 2021 data from Satmetrix, who, along with Bain, originated NPS. I can look at our score of 64 and quickly say that we’re well above most industries, which is great.

Remember, the average NPS for our cellphone study is 34. You can look at this chart and see that’s very consistent with the published benchmark.

Retently Data

If you hunt around, you can find even better data. Displayr is technically a Software-as-a-Service (SaaS) company. The average NPS for SaaS companies is 40. So, we’re well above average.

Warning! NPS = Easy to Manipulate

It’s worth noting that NPS is easy to manipulate.

You want your NPS and any comparison NPS to be obtained from a representative cross-section of each brand’s customers.

There are a few easy ways to deliberately or inadvertently inflate NPS scores:

One company used to get their staff to hand out NPS questionnaires to diners during meals and collect them at the end. Their NPS scores were more than 30 points above what they got from emails.

Companies also “clean” or remove people who give very low scores from their data.

Not sending survey reminders is also a problem, as it means that your lovers and haters, in other words those with the strongest feelings, are more likely to respond.

And failing to weight NPS data to address non-representative samples is also a problem. It can lead to either inflation or deflation. Weighting, which you can do in Displayr, is often necessary; there’s a quick sketch of the weighting arithmetic below.

A way of deflating NPS is to get the data from non-customers and trialists. So, if you’re asking your customers to rate competitors, you’re inadvertently making this mistake.
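To be clear about what weighting does to the score itself, here’s a minimal Python sketch of weighted NPS (made-up scores and weights; deriving good weights against population targets is the hard part and is beyond this sketch):

```python
# Sketch: weighted NPS, computed as a weighted mean of recoded
# -100/0/+100 scores with one weight per respondent (made-up data).
scores  = [100, 100, 0, -100, 100, 0]
weights = [0.8, 1.2, 1.0, 1.5, 0.9, 1.1]

weighted_nps = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
print(f"Unweighted NPS: {sum(scores) / len(scores):.1f}")
print(f"Weighted NPS:   {weighted_nps:.1f}")
```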

ROI

So, how do you work out if it’s worth improving your NPS?

Calculating ROI for NPS Improvement
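As a back-of-the-envelope illustration, one simple approach is to translate an expected NPS lift into incremental referral revenue and compare that to the cost of whatever drives the lift. All of the figures in this sketch are hypothetical; the referral rate per NPS point, in particular, is something you’d need to estimate from your own data.

```python
# Hypothetical back-of-envelope ROI sketch (all inputs are made up).
customers            = 10_000
nps_lift             = 5          # expected NPS points gained (e.g., 34 -> 39)
referrals_per_point  = 0.002      # assumed extra referrals per customer per NPS point
revenue_per_referral = 500.0      # assumed revenue from one referred customer
improvement_cost     = 40_000.0   # cost of the initiative driving the lift

extra_referrals = customers * nps_lift * referrals_per_point
extra_revenue   = extra_referrals * revenue_per_referral
roi             = (extra_revenue - improvement_cost) / improvement_cost
print(f"Extra referrals: {extra_referrals:.0f}")
print(f"Extra revenue:   ${extra_revenue:,.0f}")
print(f"ROI:             {roi:.0%}")
```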

Using NPS for Segmentation

This next approach is really cool.

You form one segment of core customers. These are people who are in your target market or are your ideal customers. For example, our core customers here at Displayr are market researchers who work with quantitative survey data.

The second segment is customers who aren’t in your target market but have a high NPS. These are people you make happy without even trying. They’re a great opportunity for sales and marketing.

The third segment is people you’ve sold to who are unhappy and aren’t in your target market. These are the people to avoid. This is really important. You want to filter these guys out from any analysis of what people want, as they’ll just skew the results.
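As a sketch, the assignment logic is just a couple of conditions (field names are hypothetical; the webinar doesn’t say where non-core Passives go, so this sketch lumps them into the third segment):

```python
# Sketch of the three-segment scheme just described (hypothetical fields).
def nps_segment(is_core_customer, rating):
    """is_core_customer: fits the ideal customer profile; rating: 0-10."""
    if is_core_customer:
        return "1. Core customers"
    if rating >= 9:   # Promoters outside the target market
        return "2. Happy non-core (sales/marketing opportunity)"
    return "3. Unhappy non-core (filter out of needs analyses)"

print(nps_segment(False, 10))   # 2. Happy non-core (sales/marketing opportunity)
print(nps_segment(False, 3))    # 3. Unhappy non-core (filter out of needs analyses)
```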

Data Visualization

Gauge plots like the one on the left work well, as they allow you to deal with negatives. It’s clear that scores range from -100 to +100.

And sparklines like the one to the right work well for showing trends.

Here’s the same basic idea, but with bars. Here we again emphasize the range from -100 to +100 and call out higher results with green.

Here we use more traditional types of charts. Due to the tendency of people to over-analyze NPS data, we use little boxes to automatically highlight significant differences. For example, we can see the 18 to 24 and 35 to 44 age groups have significantly different NPS results, but the gender difference isn’t significant.

Here’s another example. Some NPS studies ask NPS by different touchpoints or stages in a customer’s journey. For example, “How likely would you be to recommend your supermarket’s bakery to friends and family?” Here, we’re looking at NPS for different supermarket departments or sections.

The NPS are color coded with a traffic light system.

It’s a dashboard, so we can use filters to focus on specific segments and stores.

So, how is the 16 for milk computed? We can click on it to drill down.

Nested pie charts allow you to easily see the NPS breakdown. We see, for example, 39% are promoters and slightly more of them gave a 10 than a 9.

Journey/Touchpoint NPS

If you want to compare touchpoints across brands, a scatterplot like this can be very effective. We clearly see that Boost Mobile is first on a few touchpoints.

Journey/Touchpoint NPS with Significance Circles

Here, we use black circles to highlight significant differences.

Profiling NPS with Demographics

This is a visualization that tells a lot of stories. It shows NPS by demographics and brand. It allows us to quickly see the biggest outliers are young people and those of Hispanic or Latino origin.

The bubble sizes represent the percent of sample for each segment.

Text Analysis with AI

It’s often a good idea to collect open-ended text data.

For example, it’s common to ask “why” people gave their score. I don’t like these as I find the resulting data is pretty ambiguous. Let’s say somebody gives you a rating of 8 and mentions price. Does that mean your price is good or are they saying they would have given you a 10 if your price was lower? There’s no way to know.

For this reason, I like to instead ask people what they like and what they dislike.

Dislikes

These are the reasons why people said they dislike their cellphone company. The next step is to categorize or code this text data.

With AI advancements, Displayr makes it a lot easier and faster to accurately code text data, and I strongly recommend checking out our CEO’s recent webinar on this topic if you haven’t already.

I’ll quickly code these dislikes.

Click “Dislikes” in data set and go to + > Text Categorization > Only one theme

We can create categories or themes manually. For example, we see 512 people said “Nothing.” I’ll tackle those manually.

Add “Nothing” theme

Or we can use AI to create themes for us automatically. I’ll use AI to create four additional themes.

Change the number of themes to 4 and click “Create”

We can then automatically classify responses into the existing themes using AI.

Click “Classify”

There we go. I’ll put the remaining responses in an “Other” theme and save the categorization, so we can analyze it.

Add “Other” theme, select remaining responses, and classify as “Other”
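If you’re curious what categorization looks like under the hood, here’s a deliberately simplified stand-in using keyword rules (hypothetical themes and keywords; Displayr’s actual categorization uses AI models, not keyword matching):

```python
# Greatly simplified stand-in for text categorization: a keyword-rule
# classifier that assigns each response to one theme (made-up themes).
THEMES = {
    "Price": ["price", "expensive", "cost", "fees"],
    "Coverage": ["coverage", "signal", "reception", "dead zone"],
    "Customer support": ["support", "service", "wait", "rude"],
}

def categorize(response):
    text = response.lower().strip()
    if text in {"nothing", "none", "n/a"}:
        return "Nothing"
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            return theme
    return "Other"

for r in ["Too expensive", "Terrible signal at home", "Nothing"]:
    print(r, "->", categorize(r))
```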

Main Reason for Disliking

After saving the categorization, we can analyze it like other quantitative data.

Drag “Dislikes - Categorized” onto page to create summary table

The next step is to see how this data relates to other data like demographics or brand. For example, let’s create a crosstab with age.

Drag and add “Age” to columns

We see price is less of an issue for the youngest age group, and customer support is more of an issue for 55 to 64 year olds.
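Outside of Displayr, the equivalent crosstab is straightforward to build; here’s a pandas sketch with made-up data:

```python
# Sketch: crosstab of dislike theme by age group, as column percentages
# (made-up data).
import pandas as pd

df = pd.DataFrame({
    "theme": ["Price", "Coverage", "Price", "Nothing", "Coverage", "Price"],
    "age":   ["18-24", "18-24", "25-34", "25-34", "55-64", "55-64"],
})
print(pd.crosstab(df["theme"], df["age"], normalize="columns"))
```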

Mosaic / Mekko = Useful for Understanding…

One visualization that’s particularly helpful is a Mosaic or Mekko plot of the categorized text data by the original 11-point likelihood to recommend data.

You can see interesting correlations: some categories are more strongly associated with higher likelihood-to-recommend scores than others.
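Here’s a rough sketch of how you might build a similar mosaic plot outside of Displayr, using statsmodels (made-up counts, with ratings grouped into three bands rather than the full 11 points):

```python
# Sketch: mosaic plot of dislike theme by likelihood-to-recommend band
# (made-up data).
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.mosaicplot import mosaic

df = pd.DataFrame({
    "rating": ["0-6"] * 40 + ["7-8"] * 30 + ["9-10"] * 30,
    "theme":  ["Price"] * 25 + ["Coverage"] * 15
            + ["Price"] * 10 + ["Coverage"] * 20
            + ["Nothing"] * 30,
})
mosaic(df, ["rating", "theme"])
plt.show()
```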

NPS by Journey/Touchpoint (i.e., “Drivers”)

If a survey’s longer, it’ll often collect additional data that’s thought to drive a higher NPS.

Typically, these will be ratings for various touchpoints, aspects of the customer journey, or customer support attributes.

This additional data is often NPS, satisfaction, or customer effort ratings.

For example, here’s NPS data for different attributes related to cellphone companies.

The key question is how important these are in terms of predicting overall NPS. Working this out is known as Relative Importance Analysis or, more commonly, Driver Analysis.

Driver Analysis

Driver Analysis is easy to perform in Displayr and Q.

This analysis tells us that network coverage is the most important driver of NPS followed by internet speed.

The orange boxes are warnings. Displayr’s built-in expertise is making sure we set up the Driver Analysis properly. For example, the second orange warning suggests we use robust standard errors, so I’ll do that. If you want to know more about these orange warnings, please check out our Driver Analysis eBook and webinar.
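As a simplified stand-in for what Driver Analysis does, here’s a sketch that regresses the recoded NPS score on driver ratings using robust standard errors (made-up data; Displayr’s Relative Importance Analysis handles correlated drivers more carefully than plain regression):

```python
# Simplified stand-in for driver analysis: OLS of recoded NPS on driver
# ratings, with robust (HC3) standard errors (all data made up).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
coverage = rng.normal(7, 2, n)   # hypothetical 0-10 driver ratings
speed    = rng.normal(6, 2, n)
price    = rng.normal(5, 2, n)
nps      = 12 * coverage + 8 * speed + 3 * price + rng.normal(0, 50, n)

X = sm.add_constant(np.column_stack([coverage, speed, price]))
fit = sm.OLS(nps, X).fit(cov_type="HC3")   # robust standard errors
print(fit.summary(xname=["const", "coverage", "speed", "price"]))
```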

Quad maps are powerful visualizations that plot performance or satisfaction data against importance data from Driver Analysis.

This quad map shows the results for the total market, but we can filter by brand.

Looking at just AT&T, we see it’s doing a good job. In other words, it performs well on the most important things.

By contrast, Boost Mobile performs poorly on the most important things.
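If you want to reproduce a basic quad map outside of Displayr, a scatterplot with quadrant lines at the means gets you most of the way (made-up performance and importance values):

```python
# Sketch: a quad map plots each driver's performance (x) against its
# importance (y), with quadrant lines at the means (made-up values).
import matplotlib.pyplot as plt

drivers     = ["Coverage", "Internet speed", "Price", "Support"]
performance = [7.5, 6.0, 5.0, 6.5]      # hypothetical mean ratings
importance  = [0.40, 0.30, 0.10, 0.20]  # hypothetical relative importances

fig, ax = plt.subplots()
ax.scatter(performance, importance)
for name, x, y in zip(drivers, performance, importance):
    ax.annotate(name, (x, y))
ax.axvline(sum(performance) / len(performance), linestyle="--")
ax.axhline(sum(importance) / len(importance), linestyle="--")
ax.set_xlabel("Performance")
ax.set_ylabel("Importance")
plt.show()
```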

And that’s the complete guide to analyzing NPS.

Book a Demo

Friendly reminder that if you don’t have Displayr and want to learn more about it, you can book a demo using the web address shown here.
