What is MaxDiff? Understanding Best-Worst Scaling

MaxDiff is a survey research technique for working out relative preferences: what do people like most, second-most, and so on. It is useful in situations where simpler techniques, such as asking people to rate things or provide rankings, are likely to give poor data. It is also known as maximum difference scaling and best-worst […]

Learn More About MaxDiff

This is a guide to everything you need to know about MaxDiff. It covers the “what is?” and the “how to…” of the different approaches to analysis, from preference share to profiling latent classes, and finally how to interpret the results. There are worked examples, shown in Displayr and R. Introduction What is Max […]

Comparing MaxDiff Results from Different Packages

Different models There are lots of different statistical models that you can use to compute MaxDiff. Some of these get different results from Sawtooth simply because they are wrong. If you are doing counting analysis, aggregate multinomial logit, or aggregate rank-ordered logit models, then you will definitely get a different answer from Sawtooth. In the case of […]

Comparing MaxDiff Models and Creating Ensembles in Displayr

Types of MaxDiff model There are two main categories of MaxDiff model: hierarchical Bayes and latent class. Within these categories, models are further specified by other parameters such as the number of classes. We frequently want to experiment with a variety of different models in order to find the most accurate. To illustrate the comparison, […]
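One way to picture the ensembling step is averaging the preference shares predicted by each model. A minimal sketch in R, where `hb_shares` and `lc_shares` are hypothetical respondents-by-alternatives matrices of shares from a hierarchical Bayes and a latent class model (random numbers stand in for real model output here):

```r
# Sketch: equal-weight ensemble of two MaxDiff models' preference shares.
# Random matrices stand in for real model output.
set.seed(1)
hb_shares <- matrix(runif(200 * 10), 200, 10)  # hypothetical HB shares
lc_shares <- matrix(runif(200 * 10), 200, 10)  # hypothetical latent class shares
hb_shares <- hb_shares / rowSums(hb_shares)    # each respondent's shares sum to 1
lc_shares <- lc_shares / rowSums(lc_shares)

ensemble_shares <- (hb_shares + lc_shares) / 2 # average the two models
head(rowSums(ensemble_shares))                 # rows still sum to 1
```

Unequal weights, such as favoring whichever model predicts held-out data better, are a natural extension of the same idea.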

How to Use Covariates to Improve Your MaxDiff Model

MaxDiff is a type of best-worst scaling. Respondents are asked to compare all choices in a given set and pick their best and worst (or most and least favorite). For an introduction, check out this great webinar by Tim Bock. In our post, we’ll discuss why you may want to include covariates in the first place […]
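One standard way covariates enter a hierarchical MaxDiff model, shown here as a sketch rather than the exact parameterization discussed in the post, is to let each respondent’s mean utilities depend on their covariates:

```latex
\beta_r \sim \mathcal{N}(\Gamma z_r, \Sigma)
```

where $\beta_r$ is respondent $r$’s vector of utilities, $z_r$ their covariates (e.g., age, segment membership), $\Gamma$ a matrix of regression coefficients, and $\Sigma$ the covariance of utilities across respondents. Without covariates, $\Gamma z_r$ collapses to a single grand mean $\mu$.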

The Accuracy of Hierarchical Bayes When the Data Contains Segments

A simulation involving two segments To explore this problem I generated some simulated data for 200 fake respondents. I used a MaxDiff experiment with 10 alternatives (A, B, …, J) and 2 segments (75% and 25% in size). One segment was created to prefer the alternatives in order of A > B > … > […]
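A minimal sketch in R of how such data might be simulated; the specific utility values and noise level are illustrative assumptions, not necessarily those used in the post:

```r
# Sketch: simulate utilities for two preference segments in a
# 10-alternative MaxDiff study. Values are illustrative.
set.seed(1)
n <- 200
segment <- sample(1:2, n, replace = TRUE, prob = c(0.75, 0.25))
# Segment 1 prefers A > B > ... > J; segment 2 prefers the reverse.
mean_utilities <- rbind(seq(9, 0), seq(0, 9))
colnames(mean_utilities) <- LETTERS[1:10]
# Individual utilities: segment mean plus respondent-level noise.
utilities <- mean_utilities[segment, ] + matrix(rnorm(n * 10), n, 10)
```

Best and worst choices for each question would then be generated from the shown alternatives’ utilities via a logit model.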

Creating Pairwise Balanced MaxDiff Designs

Creating single version designs These earlier posts describe how to create MaxDiff experimental designs in Displayr, Q and with R. They also give some guidelines on how to set the numbers of questions and alternatives per question, as well as advice on interpreting designs. The standard method used to create designs aims to maximize the […]
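Pairwise balance can be checked by tallying how often each pair of alternatives appears together in the same question. A rough sketch in R, assuming `design` is a questions-by-(alternatives shown per question) matrix of alternative indices (a hypothetical format):

```r
# Sketch: count how often each pair of alternatives co-occurs in a
# MaxDiff design. 'design' holds one question per row.
pairwise_counts <- function(design, n_alternatives) {
  counts <- matrix(0, n_alternatives, n_alternatives)
  for (q in seq_len(nrow(design)))
    for (pair in combn(design[q, ], 2, simplify = FALSE))
      counts[pair[1], pair[2]] <- counts[pair[1], pair[2]] + 1
  counts + t(counts)  # symmetric co-occurrence matrix
}

# A toy design of 7 questions, 3 alternatives each, in which every
# pair of the 7 alternatives appears together exactly once.
design <- rbind(c(1, 2, 3), c(1, 4, 5), c(1, 6, 7), c(2, 4, 6),
                c(2, 5, 7), c(3, 4, 7), c(3, 5, 6))
pairwise_counts(design, 7)  # all off-diagonal counts equal 1
```

In a pairwise balanced design, all off-diagonal counts are equal (or as close to equal as possible).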

Checking Convergence When Using Hierarchical Bayes for MaxDiff

Please read Using Hierarchical Bayes for MaxDiff in Displayr before reading this post. Technical overview Hierarchical Bayes for MaxDiff models individual respondent utilities as parameters (usually denoted beta) with a multivariate normal (prior) distribution. The mean and covariance matrix of this distribution are themselves parameters to be estimated (this is the source of the term hierarchical in the […]
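As a rough illustration of one standard convergence check (not necessarily the exact diagnostic Displayr reports), the Gelman-Rubin statistic compares between-chain and within-chain variance for each parameter; values near 1 suggest the chains have converged to the same distribution:

```r
# Sketch: the Gelman-Rubin convergence diagnostic (Rhat) for one
# parameter. 'draws' is an iterations x chains matrix of MCMC draws.
rhat <- function(draws) {
  n <- nrow(draws)                      # iterations per chain
  B <- n * var(colMeans(draws))         # between-chain variance
  W <- mean(apply(draws, 2, var))       # mean within-chain variance
  var_plus <- (n - 1) / n * W + B / n   # pooled variance estimate
  sqrt(var_plus / W)                    # close to 1 when chains agree
}

draws <- matrix(rnorm(1000 * 4), 1000, 4)  # four well-mixed chains
rhat(draws)                                # ~1
```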

Using Hierarchical Bayes for MaxDiff in Displayr

Getting started Your MaxDiff data needs to be in the same format as the technology companies dataset used in previous blog posts on MaxDiff such as this one. To start a new Hierarchical Bayes analysis, click Insert > More > Marketing > MaxDiff > Hierarchical Bayes. Many options in the object inspector on the right […]

Comparing Tricked Logit and Rank-Ordered Logit with Ties for MaxDiff

Tricked logit Multinomial logit is used to model data where respondents have selected one out of multiple alternatives. The logit probability of selecting alternative $i$, given the utilities $u_j$, is $P(i) = e^{u_i} / \sum_{j \in S} e^{u_j}$, where $S$ denotes the set of alternatives. In MaxDiff, respondents select two alternatives instead: their favourite (best) and least favourite (worst). Tricked logit models MaxDiff data by […]
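The same probabilities are easy to compute directly. A minimal sketch in R with illustrative utilities; the sign flip in the last line is the “trick” that lets the worst choice be modeled as a logit choice too:

```r
# Sketch: logit choice probabilities for one MaxDiff question.
# Utility values are illustrative.
utilities <- c(A = 1.2, B = 0.4, C = -0.3, D = -1.1)
p_best <- exp(utilities) / sum(exp(utilities))
# Tricked logit models the worst choice by negating the utilities.
p_worst <- exp(-utilities) / sum(exp(-utilities))
```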

11 Tips for your own MaxDiff Analysis

If you are a MaxDiff analysis novice, please check out A Beginner’s Guide to MaxDiff analysis before reading this post. Download our free MaxDiff eBook! 1. Keep it simple (particularly if it is your first MaxDiff analysis) MaxDiff analysis projects come in all shapes and sizes. They vary on the following dimensions: Sample size. I […]

How to Check an Experimental Design (MaxDiff, Choice Modeling)

In this post, I explain the basic process that I tend to follow when doing a rough-and-ready check of an experimental design. The last step, Checking with a small sample, is the gold-standard. I’ve never heard a good excuse for not doing this. Every now and then somebody sends me an experimental design and says, “Can […]

Using Cross-Validation to Measure MaxDiff Performance

This post compares various approaches to analyzing MaxDiff data using a method known as cross-validation. Before you read this post, make sure you first read How MaxDiff analysis works, which describes many of the approaches mentioned in this post. Cross-validation Cross-validation refers to the general practice of fitting a statistical model to part of a data set (in-sample […]
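In outline, cross-validating MaxDiff means fitting the model to most of each respondent’s questions, predicting their held-out questions, and scoring the predictions. A sketch of the scoring step, where `observed_best` and `predicted_best` are hypothetical vectors of alternative indices for the held-out questions:

```r
# Sketch: out-of-sample hit rate for held-out MaxDiff questions.
hit_rate <- function(observed_best, predicted_best)
  mean(observed_best == predicted_best)

observed_best  <- c(1, 3, 2, 1, 4)       # illustrative held-out choices
predicted_best <- c(1, 3, 3, 1, 4)       # illustrative model predictions
hit_rate(observed_best, predicted_best)  # 0.8
```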

How to Analyze MaxDiff Data in Displayr

This post discusses a number of options that are available in Displayr for analyzing data from MaxDiff experiments. For a more detailed explanation of how to analyze MaxDiff, and what the outputs mean, you should read the post How MaxDiff analysis works. The post will cover counts analysis first, before moving on to bringing in […]

How MaxDiff Analysis Works (Simplish, but Not for Dummies)

Download our free MaxDiff eBook! Counting the best scores (super-simple, super risky) The simplest way to analyze MaxDiff data is to count up how many people selected each alternative as being most preferred. The table below shows the scores. Apple is best. Google is second best. This ignores our data on which alternative is worst. We […]
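Counting analysis reduces to simple tallies. A minimal sketch in R, where `best` and `worst` are hypothetical vectors holding the alternative chosen in each respondent-question:

```r
# Sketch: counting analysis of MaxDiff data.
best  <- c("Apple", "Google", "Apple", "Microsoft", "Apple")
worst <- c("Yahoo", "Yahoo", "Microsoft", "Yahoo", "Google")
table(best)   # times each alternative was picked as best
table(worst)  # times each alternative was picked as worst
# A simple best-minus-worst score per alternative:
alternatives <- union(best, worst)
scores <- sapply(alternatives, function(a) sum(best == a) - sum(worst == a))
sort(scores, decreasing = TRUE)
```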

How to Create a MaxDiff Experimental Design in Displayr

Creating the experimental design for a MaxDiff experiment is easy in Displayr. This post describes how you can create and check the design yourself. If you are not sure what this is, best to read An introduction to MaxDiff first. Creating the design In Displayr, select Insert > More > Marketing > MaxDiff > Experimental Design. Specify […]

An Introduction to MaxDiff

MaxDiff consists of four stages: Creating a list of alternatives. Creating an experimental design. Collecting the data. Analysis. 1. Creating a list of alternatives to be evaluated The first stage in a MaxDiff study is working out the alternatives to be compared. If comparing brands, this is usually straightforward. In a recent study where I was interested in the […]
