Webinar

The complete guide to analyzing NPS

The complete guide to analyzing NPS data. Learn everything you need to know, from the smart way to calculate an NPS score, to visualizing NPS, to segmentation and driver analysis with NPS.

In this webinar you will learn

Here's a little summary of some of the subjects we cover in this webinar:

  • The recoding trick for speeding up NPS analysis
  • How to stat test differences in NPS scores between sub-groups, touchpoints/journeys, and over time
  • How to analyze open-ended reasons/verbatims for NPS scores
  • Effective ways of visualizing all the key NPS outputs
  • Driver analysis
  • Using NPS for market segmentation
  • Benchmarking NPS scores
  • Quantifying the ROI of NPS scores
  • The controversy: why some market researchers hate NPS, while companies love it

Transcript

The goal is to give you the complete guide to analyzing NPS data. I will be going over the basics, so if you haven't analyzed NPS before you should get a lot of good ideas. But, as the name suggests, it's the complete guide, so there's more advanced topics as well. As always, I am presenting from within Displayr but everything I show can be done in Q as well.

Overview
I'm going to start by looking at the two ways of calculating NPS, and why some people love and others hate NPS. Then, we will get into the fundamentals of how to turn NPS surveys into strategic value.

The question(s)
It all starts with a simple question. How likely are you to recommend something to friends or colleagues? People are given 11 options, from 0 meaning not at all likely through to 10 meaning extremely likely. Often there will be follow up questions as well, digging into why people said what they said, and who they are.

Tabulating the percentages

We then work out what percentage of people chose each option and add up the percentages. If people gave a 9 or 10, they are said to be promoters. These are the people most likely to promote the brand to friends and colleagues. The theory is that the more of these, the more referrals, and the higher your sales. That's 50.6% in this study. If they gave a score of 0 through 6, they are said to be detractors. That's 16.4%.

Calculating the net promoter score
So, while 50.6% will say nice things about the brand, this is going to be offset by 16.4% whining. The difference between these two is the net level of promotion: the Net Promoter Score, usually called the NPS, which in this case is 34.2. Sometimes it's presented as a percentage. That is, 34.2%. Other times just a number. That is, 34.2. In practice, experienced analysts don't calculate the NPS this way.
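Before moving on, here's the standard calculation as a minimal Python sketch. The ratings list is made up for illustration; it's not this study's data:

```python
# Minimal sketch of the standard NPS calculation, using made-up 0-10 ratings.
ratings = [10, 9, 9, 8, 7, 10, 3, 6, 9, 10]  # hypothetical answers

promoters = sum(1 for r in ratings if r >= 9) / len(ratings)   # 9s and 10s
detractors = sum(1 for r in ratings if r <= 6) / len(ratings)  # 0 through 6

nps = 100 * (promoters - detractors)
print(nps)  # 40.0 here: 60% promoters minus 20% detractors
```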

The smart way of calculating the NPS score
They instead recode the likelihood question. Let me explain. Here are the percentages again. We will start by computing the average rating that the person gave.

In Displayr: Statistics > below: Average

So, our average rating is 8.2. In Displayr: Click on Data Sets > Likelihood to recommend > Object inspector > Values

When we compute the average of 8.2, it's the average of these values, weighted by the number of people who chose each of the options. The recoding trick is to change these values, giving a value of -100 to the detractor categories, 0 to the passives, and 100 to the promoters. As you can see, our average is 34.2.

Click back on previous slide.

So, we get the same answer with this different approach. It's the fastest way of calculating the NPS score. And you don't even need to do the recoding manually like I just showed you. If you click on a variable showing likelihood to recommend, you can just click NPS Recoding, and a new variable is created. As I will soon show, this alternative way is much, much smarter. But first, a diversion.
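In code, the recoding trick looks something like this. It uses the same made-up ratings as the sketch above, just to confirm the two approaches agree:

```python
# The recoding trick: map each rating to -100/0/100; the NPS is then just the mean.
def recode(rating):
    if rating <= 6:
        return -100  # detractor
    if rating <= 8:
        return 0     # passive
    return 100       # promoter

ratings = [10, 9, 9, 8, 7, 10, 3, 6, 9, 10]  # hypothetical answers
nps = sum(recode(r) for r in ratings) / len(ratings)
print(nps)  # 40.0 -- identical to promoters% minus detractors%
```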

Loving and hating the NPS
When I was a market researcher, I hated the NPS. Now I love it. So why do many market researchers hate the NPS? It's pretty common that people give us a low rating and leave a comment like "I'll always give you 0 if you use NPS". The reason they hate NPS is that many of the claims made about it are not really true. There's a lot of snake oil selling going on.

The first big aspect of this relates to the cutoffs. Why is a promoter a 9 or 10, but not an 8? The cutoffs are pretty arbitrary and there's lots of evidence for this.

Europeans are just less likely to give 9s and 10s than Americans. The link between these ratings and whether people actually give recommendations is not strong. Promoting is a behavior, but the question asks about an intention, so there's a bit of a disconnect.

According to the paper in the Harvard Business Review that first promoted the NPS, it's the best predictor of growth. There doesn't seem to be any serious evidence that supports this claim.

Another weakness is that the calculation loses lots of information. A person who gives a 6 is obviously very different from one who gives a 0, but the NPS ignores this.

And, the other common complaint is that before NPS we market researchers used average customer satisfaction and it did largely the same job. There's no published data to suggest NPS is better, and so many market researchers got pretty irked when all their old data was chucked out in favor of the new bright and shiny thing.

Why the Net Promoter Score triumphed and became the standard
Despite a lot of grumbling by market researchers, the NPS has long since won the day. It's only now that I am no longer a researcher, and use the statistic to run our business, that I have come to really appreciate what the fuss is all about.

First, it's simple. Anybody can use it and understand it. It's certainly easier to explain than average customer satisfaction. When explaining customer satisfaction, you had all these weird conversations, such as why the top category is a 5.

It's short. You can just ask the one question. Or, if you want, add a couple of follow ups about likes and dislikes. This makes it practical to use in closed loop feedback systems. For example, when you do one of our NPS surveys, a gentleman called Olly has the job of reading the answer and working out if you can be helped.

This is only practical because it's short and simple. With a 20-minute questionnaire people would say all this great stuff, but individual responses wouldn't be acted on. The loop would never be closed.

It's versatile. You can use it to compare brands, products, employee happiness, and touchpoints.

Fourth, it's a concrete business outcome. A negative Net Promoter Score implies more people are complaining about your business than promoting it.

An increase in NPS should lead to an increase in sales (people buy based on recommendations).

Customer satisfaction, by contrast, is neither concrete, nor a business outcome:
What exactly does "satisfied" mean? It's an abstract concept.
What's the commercial implication of a satisfaction score of 2.9?
What's the commercial implication of a drop in satisfaction from 4.4 to 4.2?

NPS is simply more interpretable.
When used sensibly, NPS is predictive. If somebody completes your survey and says "0 this program sucks as I can't export to PowerPoint" you know that if you don't fix that you've got a good chance of churn. Of course, I'm not saying it's always predictive. For example, if you have somebody locked into a 2-year contract, they can hate you and not churn.

Now, many market researchers can quite rightly say that it's likely possible to create something even better than the NPS. But you'd have difficulty getting it off the ground. The NPS is now the standard, and that makes it locked in.

Comparing NPS results
You will recall that I mentioned earlier that the NPS ignores a lot of information. This means that it is quite volatile. It can be hard to get a good read on when differences in the Net Promoter Score are meaningful or not. Fortunately, this is an easy problem to solve. You just need to use stat testing.

As some of you will know, we have a few Justins in the team and they do super smart things. We've renamed Justin 1.0 as OJ, for Original Justin. This is what he looks like.

OJ's done a wonderful proof which shows that, so long as you do the recoding trick that I explained earlier, Displayr's automatic stat testing will give you the correct results for comparing NPS scores.

Comparing NPS within sub-groups
So, how do we do it? In Displayr: We just drag the NPS onto the page. Then we drag across other variables and release them in the columns spot. As you can see, the NPS is lower for the 18 to 24s. The fact that it's red tells us it's statistically significant. So, based on this data we would say that the 18 to 24s have a lower NPS, but the other differences may be flukes.
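Outside Displayr, a comparison like this can be sketched as a two-sample t-test on the recoded values. This is an illustration with synthetic data, not the exact test Displayr runs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic recoded scores for two sub-groups (e.g. 18 to 24s vs everyone else).
young = rng.choice([-100, 0, 100], size=300, p=[0.25, 0.40, 0.35])
older = rng.choice([-100, 0, 100], size=700, p=[0.15, 0.35, 0.50])

# Because the recoded NPS is just a mean, a two-sample t-test compares the groups.
t, p = stats.ttest_ind(young, older, equal_var=False)
print(f"NPS young: {young.mean():.1f}, older: {older.mean():.1f}, p = {p:.3f}")
```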

Comparing NPS over time
Most of the great NPS crimes that I see relate to looking at NPS over time. As discussed in our stat testing webinar, by default the significance test here is saying that the NPS in week 2 is significantly below the average. If we want to test adjacent weeks, we need to instead change the assumptions.

In Displayr: Appearance > Highlight Results > Options > Advanced > Date > Compare to Previous Period

So, the only change that's significant is the one from week 2 to week 3.

But this type of stat testing with data over time isn't great in general, as the choice of cutoff level, such as whether you use 90% or 95% confidence, becomes hugely important. And the results you get change based on whether you use weekly, monthly, or some other time period.

There's a better approach.

What we want to instead do is something called a spline.
In Displayr: Insert > More > Tests > Simultaneous spline > Drag across NPS > Drag across Day > Type: Linear

The black line shows our best estimate, based on all the data. Note that it's showing much less fluctuation than the raw data. The black line is always between 32 and 38.

And the pink shows our uncertainty. It's basically telling us that the uncertainty is much greater than the movement as a whole, so we don't have enough evidence to conclude that NPS is changing over time.
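As a rough stand-in for the simultaneous spline, here's a sketch that fits a linear trend to synthetic respondent-level data with statsmodels. Note the band it produces is pointwise, a simplification of the simultaneous band shown in the webinar:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
# Synthetic respondent-level data: interview day and recoded NPS score.
day = rng.integers(0, 90, size=2000)
score = rng.choice([-100, 0, 100], size=2000, p=[0.16, 0.33, 0.51])

# Fit a linear trend of recoded NPS on day: the fitted line is the "best
# estimate" and the confidence interval plays the role of the pink band.
fit = sm.OLS(score, sm.add_constant(day)).fit()
pred = fit.get_prediction(sm.add_constant(np.arange(90)))
band = pred.conf_int()  # pointwise 95% band for the trend
print(fit.params, band[0], band[-1])
```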

Benchmarking against competitors
The best way of benchmarking against competitors is to do a survey, like the one that I've done here, asking the NPS question about people's main phone brand.

In Displayr: Drag across NPS > Drag across Main brand

So, we can quickly see here that AT&T and Sprint have problematic NPS scores, which are significantly below the average.

But there are lots of situations where it's not commercially practical to collect high quality competitor NPS scores. For example, with our business, we can easily ask our customers the NPS question, but getting a list of competitors' customers and asking them is not something we can afford.

Our NPS for the quarter is currently 72. How do I evaluate that without doing a survey of competitors?

And, this is what makes NPS so good. In the olden days a client would say to me "So, Tim, our Customer Satisfaction score is 3.8. Is that good?" I'd say, "Kind of".

Today, enough companies have done NPS surveys that we can get a good idea of what to expect.

As it’s a standard …
Here's some data from Satmetrix, who, along with Bain, are the originators of the technique. I can look at our score of 72 and quickly say that we're well above most industries, which is great.

Remember our cell phone data's average score of 34. I can look at this table and see that this result is pretty consistent with the published benchmark.

If you hunt around you can find even better data. Displayr is technically a Software as a Service, or SaaS, business. The average of these is 26. So, we are well above average.

But who wants to be average? What do the stars do?
So, while our score of 72 is pretty good, each of Apple, Costco and USAA beat us. So, we can do better.

But, a note of caution. NPS scores are easy to manipulate.

You want your NPS, and any NPS you compare with, to be obtained from a representative cross-section of the brand's customers.

There are a few easy ways to deliberately or inadvertently inflate NPS scores:

One of my clients used to get their staff to hand out NPS questionnaires to diners during meals and collect them at the end. Their NPS scores were more than 30 points above what they got from emails.

And, I've noticed a strong tendency for people to "clean" from the data respondents who give very low scores.

Not sending reminders is also a problem, as it means that your lovers and haters are more likely to respond.

And, failing to weight to deal with non-representative samples is also a problem. It can lead to either inflation or deflation. Weighting is often necessary.
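Once the scores are recoded, the weighted NPS is just a weighted mean. A minimal sketch, with hypothetical scores and weights:

```python
import numpy as np

# Hypothetical recoded scores and survey weights that correct the sample
# back to known population proportions.
scores = np.array([100, 100, 0, -100, 100, 0, -100, 100])
weights = np.array([0.8, 1.2, 1.0, 1.5, 0.9, 1.1, 1.4, 0.7])

print(np.average(scores, weights=weights))  # weighted NPS
print(scores.mean())                        # unweighted NPS, for comparison
```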

A way of deflating scores is to get the data from non-users and trialists. E.g., if you ask your customers to provide NPS ratings for competitors, you will inadvertently make this mistake.

ROI
But, how do you work out if it's worth improving your NPS?

Using NPS for segmentation
This next approach is a really neat one. You form one segment of core customers. That is, the people who are in your target market or are your ideal customers. For us, for example, this is survey researchers.

The second segment is customers who aren't in your target market but have a high NPS. This second segment is a segment of people that you are making happy even though you weren't trying. They are a great opportunity for sales and marketing.

The third segment is the people you've sold to, who are unhappy and aren't in your target market. These are the people to avoid. This is really important. You want to filter these guys out from any analysis of what people want, as they just skew everything.

Data visualization
Gauge plots work well. The reason that they work well is that they allow you to deal with negatives. They make it clear that you can get a score from -100 to 100.

I like to use sparklines rather than full charts for NPS, as shown here.

Here's the same basic idea, but we've used bars. Here we've again emphasized that the range is -100 to 100, and color-coded higher results.

Here we've used more traditional chart types. But, due to the tendency of people to over-read NPS data, we use little boxes to automatically highlight significant differences. Here, for example, we can see that the 18 to 24 and 35 to 44 age groups have different NPS results. But the gender difference is not significant.

Go to supermarket dashboard

Here's another example. Some NPS studies ask NPS by different touchpoints or stages in a consumer's journey. E.g., 'how likely would you be to recommend your supermarket's bakery to friends and colleagues?'

Here we are looking at the NPS of different supermarket departments. Note that the background image has been created to reinforce the journey, with cleaning near the cleaning products, cheese near the cheeses, etc.

The NPS scores are color coded with a traffic light system.

It's a dashboard, so if the user filters, we get an update.

Filter > Aldi

Remove: Filter > Aldi

How is the 16 computed? We can click on it to drill into it.

Click on 16

Nested pie charts allow you to easily see the breakdown of NPS.

We can see, for example, that there are 39% promoters, and that marginally more of them gave a 10 than a 9.

Journey/Touchpoint NPS
If you want to focus on direct comparisons by touchpoints, a plot like this can be very effective.

We can see that Sprint, in the yellow, is last on most things.

Journey/Touchpoint NPS - with significance shown via circles

Here I've used white circles to show where there are significant differences. So, we can readily see, for example, that Sprint's poor scores aren't flukes. Its customers really think it's worse in lots of ways.

Profiling NPS by demographics
And this is a case of a picture that tells lots of stories. I'm looking at NPS by all my demographic variables and by provider. It allows us to quickly see that the big outliers are the young people, the Latinos, and Sprint.

I've used bubble size to show the sample size, and we can see it's pretty small for the Latinos and Sprint.

Analyzing text data / verbatims
Most of the time it's a good idea to collect open-ended data.

Commonly people will have a question asking "why" they gave their NPS score. I don't like these as I find the resulting data is pretty ambiguous. Let's say somebody gives you a rating of 8 and says "price". Does that mean your price is good or are they saying they would have given you a 10 if your price was lower? There's no way to know.

For this reason, I like to instead ask people what they like and what they dislike.

Dislikes
These are the reasons why people dislike their phone company.

The next step is then to categorize this data. Or, to use the jargon, to code it.

Displayr's got lots of nifty automated tools that can save a lot of time. Check out our webinars on text analysis if you haven't. But I'm going to go with the simple approach of manual coding.

In Displayr: Click on Dislikes > Insert > Text Analysis > Manual > Mutually Exclusive > New.

512 people have said "nothing". So, I'll classify them as Missing data.

In the context of cell phones, Service tends to mean phone service rather than customer service, although we can't be sure. I'll create a category called Service/Coverage/Network

In Displayr: Rename New Category as Service/Coverage/Network | Click on Service/Coverage/Network | Click on Missing data | Add category: Price | Click on price

For "They are too expensive and lousy customer service". Here, the person has given two reasons. I can either create a new combined category, or, instead allow people to be in multiple categories. But what I will do is code based on the first thing they said, as this tends to make the resulting analysis a lot easier.

Anyway, I won't make you watch me code it all. I will just load one I've done before.

Save categories. Now, a new variable has been created, and I can drag it across.

Drag across Dislikes - Categorized. As you can see, in this data, price, followed by service, is the biggest issue.

The next step is then to see how this relates to other data. For example, crosstab it by demographics, brand, and the like.

For example, here we see that internet speed is more of an issue with the 18 to 24s, and it steadily declines in importance with age.

Mosaic / Mekko Charts
One particular visualization that I find can be helpful is a mosaic plot of the categorized text data by the original 11-point rating of likelihood to recommend.

Sometimes you can see that there are interesting correlations, with one category being more strongly associated with higher levels of promotion than others.
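If you want to experiment with this outside Displayr, here's a minimal sketch using statsmodels' mosaic plot on synthetic data. To keep it readable, it bins the 11-point rating into the three NPS groups rather than plotting all 11 points:

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.mosaicplot import mosaic

# Synthetic categorized verbatims crossed with the respondent's NPS group.
df = pd.DataFrame({
    "category": ["Price", "Price", "Service", "Service", "Speed", "Price",
                 "Service", "Speed", "Price", "Speed"],
    "group": ["Detractor", "Passive", "Detractor", "Promoter", "Promoter",
              "Detractor", "Detractor", "Passive", "Passive", "Promoter"],
})

# Tile areas are proportional to counts, so strong category/score
# associations show up as unusually large or small tiles.
mosaic(df, ["category", "group"])
plt.show()
```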

NPS by touchpoint/journey ("drivers")
If you're doing a longer survey, it's pretty common that in addition to getting a rating of overall likelihood to recommend, you also collect data on things thought to drive overall NPS. Typically, these will be ratings of either touchpoints, aspects of the customer's journey, or service attributes.

These ratings are often either NPS scores, satisfaction scores, or effort scores. Here are some NPS scores for different aspects of cell phones. The question to be answered is how important these are in predicting overall NPS. This is known as driver analysis.

Relative importance analysis / key driver analysis
Displayr and Q have special routines designed for driver analysis.

In Displayr: Insert > Regression > Driver analysis | Drag across NPS as Outcome | Drag across touchpoint journey as predictors

So, the analysis tells us that the most important driver of NPS is network coverage, followed by internet speed.

The orange boxes are warnings. Things we need to check. If you want to know more about this, please check out our ebook and webinar on driver analysis.
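Displayr's routine is a purpose-built relative importance analysis. As a rough stand-in, here's a sketch that regresses overall recoded NPS on touchpoint ratings, using synthetic data where the driver weights are known by construction:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
# Synthetic touchpoint ratings, already on the recoded -100/0/100 scale.
coverage = rng.choice([-100, 0, 100], size=n, p=[0.20, 0.30, 0.50])
speed = rng.choice([-100, 0, 100], size=n, p=[0.25, 0.35, 0.40])
bills = rng.choice([-100, 0, 100], size=n, p=[0.10, 0.40, 0.50])

# Overall NPS driven mostly by coverage, then speed (by construction).
overall = 0.5 * coverage + 0.3 * speed + 0.1 * bills + rng.normal(0, 40, n)

X = sm.add_constant(np.column_stack([coverage, speed, bills]))
fit = sm.OLS(overall, X).fit()
print(fit.params[1:])  # coefficients recover the drivers' importance ordering
```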

It can be useful to overlay the performance by touchpoint with the importance data. The resulting visualization is known as a quad map in the trade.

In Displayr: Insert > Visualization > Scatterplot | X coordinates: model | Y coordinates: table.Touchpoint.Journey | Chart > APPEARANCE > Show labels: On Chart | X Axis: Title > Importance | Y axis: Title > Performance (NPS)

This quad map is showing us the results for the total market. Let's filter it by brand.

In Displayr: Click on table | Insert > Filter > List Box Filters | Main phone company

Looking at AT&T, we can see that it’s actually doing a good job. The most important things are the things that it has high performance.

By contrast, if we look at Sprint, we see that its high performance is on understanding bills and checking usage, but these aren't important. The things that are important to driving NPS, network coverage and speed, are its weaknesses.
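If you want to reproduce a quad map outside Displayr, here's a minimal matplotlib sketch with hypothetical importance and performance numbers, using mean lines to mark the quadrants:

```python
import matplotlib.pyplot as plt

# Hypothetical importance (x) and performance (y) scores per touchpoint.
touchpoints = ["Coverage", "Speed", "Bills", "Usage"]
importance = [0.45, 0.30, 0.15, 0.10]
performance = [20, 5, 40, 35]  # NPS by touchpoint

fig, ax = plt.subplots()
ax.scatter(importance, performance)
for name, x, y in zip(touchpoints, importance, performance):
    ax.annotate(name, (x, y))

# Mean lines split the plot into the four quadrants of the quad map.
ax.axvline(sum(importance) / len(importance), linestyle="--")
ax.axhline(sum(performance) / len(performance), linestyle="--")
ax.set_xlabel("Importance")
ax.set_ylabel("Performance (NPS)")
plt.show()
```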

Overview
So, we have gone through everything from how to calculate NPS, through to all manner of ways of analyzing and presenting it.

Whether you've got NPS or any other type of survey data, Displayr will cut your analysis and reporting time in half. It does everything.

