Webinar

DIY MaxDiff: The 4 easy steps that’ll make any good researcher proficient at MaxDiff

This webinar is for market researchers and consumer insights people who analyze data (from novice to expert).

In this webinar you will learn

Here’s a little summary of some of the subjects we cover in this webinar

In 20 minutes we'll teach you the four easy steps for doing a MaxDiff analysis yourself:

  • Creating an experimental design
  • Conducting the fieldwork
  • Using a Hierarchical Bayes model
  • Saving and using utilities

MaxDiff is a technique for working out people's relative preferences for different attributes or alternatives.

Transcript

So what is MaxDiff? MaxDiff is a tool used to understand how people prioritize things.
In this webinar, I'm going to walk you through the four steps of doing MaxDiff: Experimental Design, Fieldwork, Hierarchical Bayes and Analyzing Utilities.

DIY MaxDiff eBook

And you can download the free eBook which has lots more detail, including instructions for both Q and Displayr.

 

MaxDiff Case Study - Alternatives

In this webinar, I will be showing you how to do everything in Displayr, but it works exactly the same way in Q, and I'll show you the instructions for Q as well.

In the case study I will use today, we got people to prioritize different aspects of cell phone plans. We wanted to know what was most important to them. Other common types of alternatives are menu items and flavors, advertising claims, and promotional offers.

What you can see on the screen are shortened descriptions of what we tested in a case study that I will show you.

 

… Question

MaxDiff has its own special type of question. People are shown a subset of the alternatives and asked to choose the one they like the most and the one they like the least.

 

… Questionnaire

People are typically asked from 6 to 12 MaxDiff questions. The subset of alternatives shown varies from question to question. By looking at how people’s choices change based on the subset of alternatives shown, we can work out what's important to them.

 

…. Understanding how

In this example output, we have two different segments. Both prioritize price as most important, but the Talkers segment's number two priority is coverage, whereas the Viewers have a much higher preference for Streaming and Hotspot. So, MaxDiff is a measurement tool, designed to measure how people prioritize different things.

 

The four steps

There are four key steps in conducting a MaxDiff study, and I'm going to walk you through them.

Creating the experimental design. That is, working out the specific MaxDiff questions to ask.

Fieldwork. That is, collecting the data.

Creating a model. Typically, this is something called a Hierarchical Bayes model.

Extracting and analyzing the utilities.

 

Experimental design

We will start with working out the experimental design.

 

…. What it looks like

An experimental design is a table of numbers. Just like this one. The first row of the experimental design tells us what goes in the first question.

 

…. In numbers

As we can see here, question 1 starts with option number 10, which is coverage.

 

…. In words

When we replace the attribute numbers with words it gets a bit easier to read.

 

Instructions for …

Here are the instructions you can refer to later. And yes, they are for both Q and Displayr.

We will create one from scratch. It’s not hard.

 

Cell phone experimental design

In Displayr:

Insert > More > Marketing > MaxDiff > Experimental design

 

By default, the software creates a design with 8 alternatives. However, we have 10 in our case study. It's not unknown for people to create designs with 100 or more alternatives. But, the fewer the better from a research quality perspective.

 

In Displayr:

Number of alternatives: 10

 

We get a warning. If you read the eBook you can learn all about the warnings. But the good news is you can play it like a computer game. Our goal is to play until we have gotten rid of the warnings.

Basically, the way the game works is that the more alternatives we have, the more we need to increase the number of alternatives per question, the number of questions, or both, so that we collect enough data. Unless your alternatives are very wordy, you can usually have five alternatives per question.
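A rough way to sanity-check a design size is to work out how many times each alternative gets shown to one respondent. The sketch below does that arithmetic; the at-least-three threshold in the comment is a common rule of thumb, not a rule taken from the software:

```python
def appearances_per_alternative(n_alternatives, alts_per_question, n_questions):
    """Average number of times one respondent sees each alternative."""
    return n_questions * alts_per_question / n_alternatives

# Our case study: 10 alternatives, 5 per question, 6 questions.
# A common rule of thumb is around 3 or more appearances per alternative.
print(appearances_per_alternative(10, 5, 6))  # 3.0 appearances each
```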

 

In Displayr:

Alternatives per question: 5

 

Note that a new column is added to the design.

Cool. We got rid of the warning. Does this mean we are finished? No. At the moment we have 10 questions. The more questions, the more it costs to collect the data, and, the more bored respondents get. So, let's see if we can get away with fewer.

 

In Displayr:
Questions: 9

 

A warning. I'm actually going to reduce the number further. Maybe we'll be lucky.

 

In Displayr:

Questions: 6

 

Cool, no warnings. I've done this before, so I can tell you that 6 is the smallest we can get to without warnings. Now, there's a whole lot of math you can do if you want, but trial and error works perfectly.

If you only need to do segmentation, then your best bet is to ask everybody exactly the same MaxDiff questions. Otherwise, it's a good idea to have multiple versions. Ten is probably enough, but I like 100 just to be safe.

 

In Displayr:

Versions: 100

 

As you can see the design's grown. It now has 600 rows. So, let's say we were going to interview 300 people, we would assign 3 to each of the versions, where each version is a separate set of 6 questions or rows in this table, and the difference between them is which alternatives appear in which question. Lastly, change Repeats to 10. This usually does nothing, but it can improve things just a bit.
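The arithmetic behind the design size and version assignment above can be sketched as:

```python
# 100 versions, each a separate set of 6 questions (rows in the design table).
n_versions = 100
questions_per_version = 6
design_rows = n_versions * questions_per_version
print(design_rows)  # 600 rows in the design table

# With 300 interviews, each version is answered by 3 respondents.
respondents = 300
respondents_per_version = respondents // n_versions
print(respondents_per_version)  # 3
```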

 

Fieldwork

Our software will do the experimental design and all the analysis. But we don't do the fieldwork.

But here are some tips on how to do it.

 

Fieldwork tips

You need to use software that has the MaxDiff question type. Ideally you will use software that will allow you to create a single code list of all the alternatives, and then use filtering to determine which are shown in which question. Avoid piping if you can.

Make sure you check everything.

Even though I've done lots of these, I always stop after 10% of interviews are done, get a data file, and then complete all the analyses I want to do. This is the foolproof way of checking.

Anyway, once we have our data, we need to get a data file. An SPSS or Triple S file is best.

For the design that I just showed you we went and got 300 interviews. I'll import it now.

 

In Displayr:

Data Set > +

 

HB

The next step is to estimate a Hierarchical Bayes model. Last time I did one of these webinars I recommended a Latent Class model. But two of the smarties in our data science team, Matt and Justin, have done a great job at building out the Hierarchical Bayes to the point where it's very safe to use, so I feel confident in recommending it unreservedly.

 

Instructions for …

Here are the instructions for Q and Displayr.

 

Hierarchical Bayes Model

Let's do it.

 

In Displayr:

Insert > More > Marketing > MaxDiff > Hierarchical Bayes

 

I need to hook up the design I created. We also need to hook up which version of the design people saw. This needs to be a variable in the data file.

For example, we can see that the first person did version 47, the second did 59.

Here are the six variables that store which option people chose as their most preferred in the six questions.

And the worst.

This will take a few moments to compute, so let me show how we do this in Q while we wait.

As shown in the instructions, we create the design this way.

In Q:
Create > Marketing > MaxDiff > Experimental design

 

It's exactly the same form and options as in Displayr:

  • Number of alternatives: 10
  • Alternatives per question: 5
  • Questions: 6
  • Versions: 100
  • Repeats: 10

We also run the Hierarchical Bayes in exactly the same way.

 

In Q:

Create > Marketing > MaxDiff > MaxDiff. Hook up:

Design

Version

Most

Least

Return to Displayr.

 

So here's our result. We've got some warnings. They are telling us to run some more iterations. Be a bit careful here. It's only in recent years that it's become clear that a lot of the older software doesn't run for enough iterations. I'll set the iterations to 1,000.

 

In Displayr:

Inputs > Model > Iterations: 1000

 

This will be boring to watch. So, I've pre-baked it.

 

Change tab

The column called Mean shows the average appeal of each attribute. The highest mean is for price, telling us that it is, on average, the most important. If we look at the plot to the left, though, we can see that people vary quite a bit.

Blue means above average and red means below average. So, in the case of price, we can say that most people consider it to be above average in importance. Let's contrast that with the Premium entertainment attribute. The average is low, and just about everybody has a score below 0. So, it's unimportant to everybody.

This output is too complicated for most clients. We need to extract the utilities, which are the underlying measures of the appeal of each of the attributes.

 

In Displayr:

Click on chart > Inputs > ACTIONS > Save zero-centered utilities

 

As you can see, they've been added to the file as variables.
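The idea behind zero-centering can be sketched in a few lines. This is a simplified illustration with made-up numbers; the software may also apply a rescaling step, which is omitted here:

```python
import numpy as np

# Hypothetical raw utilities: rows = respondents, columns = alternatives.
raw = np.array([
    [2.0, 1.0, 0.5, 0.5],
    [3.0, 0.0, 1.0, 2.0],
    [1.5, 1.0, 0.5, 1.0],
])

# Subtracting each respondent's mean makes every row sum to zero,
# so utilities are comparable across respondents.
zero_centered = raw - raw.mean(axis=1, keepdims=True)
print(zero_centered.sum(axis=1))  # each row sums to (approximately) 0
```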

 

Analyzing utilities

So, how do we analyze the utilities?

 

Instructions for …

We can show the average utility, but I don't think this is so clever, as it leads to lots of painful and unproductive discussions when clients ask what, exactly, the numbers mean. We've developed a special chart just for this, which we call a ranking plot. It's also known as a bumps plot.

 

In Displayr:

Chart > Ranking plot.

 

By default, it shows only things with a score above 0, so we need to change that setting.

 

In Displayr:

RANKING STATISTICS > Minimum value: -100

 

OK, price, followed by coverage, is most important. Does this differ by sex? No. Age?

The most important things remain constant, but we can see that streaming speed in particular gets much less important the older people get.

 

TURF Instructions

Some people love to do TURF on MaxDiffs. If you're one of these people, this is how we do it.

 

TURF

We first convert the zero-centered utilities into ranks.

 

In Displayr:

Click on Zero-Centered Utilities

Insert > Transform Within Case > Rank

 

Then count the top 2 values.

 

In Displayr:
Structure > Binary Multi

Categories > 9 and 10

Insert > More > Marketing > TURF > Total and Unduplicated …

Drag across the variables.

4 alternatives

 

OK, so the best two would be price and coverage. Then, Price and streaming speed.
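Under the hood, TURF just looks for the portfolio that reaches the most people, where a person counts as reached if the portfolio contains at least one of their top picks. Here is a minimal sketch with hypothetical top-2 data (the names and responses are made up):

```python
from itertools import combinations

# Hypothetical top-2 sets: one set of alternatives per respondent.
top2 = [
    {"Price", "Coverage"},
    {"Price", "Streaming speed"},
    {"Coverage", "Data"},
    {"Price", "Hotspot"},
]
alternatives = sorted(set().union(*top2))

def reach(portfolio):
    """Share of respondents whose top 2 overlaps the portfolio."""
    return sum(bool(portfolio & t) for t in top2) / len(top2)

# Exhaustively score every 2-alternative portfolio.
scored = {c: reach(set(c)) for c in combinations(alternatives, 2)}
best = max(scored, key=scored.get)
print(best, scored[best])
```

Real TURF tools use smarter search for large portfolios, but for a handful of alternatives exhaustive scoring like this is fine.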

Please check out our webinar and eBook on TURF for more about how to do TURF.

 

Segmentation

What about segmentation? There's an easter egg hidden in the segmentation webinar from two weeks ago. I actually used this MaxDiff data in that webinar, so you can go and look at the video to learn more about how to segment using MaxDiff data. But, I would be remiss not to point out that there is a special segmentation method just for MaxDiff.

It's very easy to run: we just duplicate the Hierarchical Bayes and change the settings.

 

In Displayr:

Home > Duplicate

Inputs > MODEL > Type: Latent Class Analysis

Number of classes: 2

 

We save the new variables by clicking here on the Save class membership button.

 

Latent class analysis

Here are the instructions.

Read more