10 May 2017 | by Tim Bock

An Introduction to MaxDiff

MaxDiff questionnaire

MaxDiff is a research technique for measuring relative preferences. It is typically used in situations where more traditional question types are problematic. Consider the problem of working out which traits people would like in The President of the United States. Asking people to rate how important each of the following characteristics is would likely not be very useful. We all want a decent/ethical president. But we also want a president who is healthy. And the President needs to be good in a crisis. As most of the traits listed below are critical, there is little likelihood of getting good data by asking people to rate their importance. It is precisely for such a problem that MaxDiff comes to the fore.


Decent/ethical | Good in a crisis | Concerned about global warming | Entertaining
Plain-speaking | Experienced in government | Concerned about poverty | Male
Healthy | Focuses on minorities | Has served in the military | From a traditional American background
Successful in business | Understands economics | Multilingual | Christian



MaxDiff consists of four stages:

  1. Creating a list of alternatives.
  2. Creating an experimental design.
  3. Collecting the data.
  4. Analysis.

1. Creating a list of alternatives to be evaluated

The first stage in a MaxDiff study is working out the alternatives to be compared. If comparing brands, this is usually straightforward. In a recent study where I was interested in the relative preference for Google and Apple, I used the following list of brands: Apple, Google, Samsung, Sony, Microsoft, Intel, Dell, Nokia, IBM, and Yahoo.

When conducting studies looking at attributes, many of the typical challenges with questionnaire wording come into play. In another recent study where I was interested in what people wanted from an American President, I used the list of personal characteristics that are shown in the table above.

Common mistakes

There are two big mistakes that people make when working out which alternatives to include in a MaxDiff study:

  1. Having too many alternatives. The more alternatives, the worse the quality of the resulting data. With more alternatives you have only two choices. You can ask more questions, which increases fatigue and reduces the quality of the data. Or, you can collect less data on each alternative, which reduces the reliability of the data. The damage done by adding alternatives grows as the list gets longer (e.g., the degradation of quality from having 14 versus 13 alternatives is greater than that from having 11 versus 10).
  2. Vague wording. If you ask people about “price” in a study looking at preference for product attributes, your resulting data will be pretty meaningless. For some people “price” will just mean not too expensive, while others will interpret it as a deep discount. It is better to nominate a specific price point or range, such as “Price of $100”. Similarly, if you are evaluating Gender as an attribute, it is better to use Male (or Female). Otherwise, if the results show that Gender is important, you will not know which gender was appealing.

2. The experimental design

MaxDiff involves a series of questions – typically, six or more. Each of the questions has the same basic structure, as shown below, and each question shows the respondent a subset of the list of alternatives. I usually show five alternatives. People are asked to indicate which option they prefer the most, and which they prefer the least.

People complete multiple such questions. Each is identical in structure but shows a different list of alternatives. The experimental design is the term for the instructions that dictate which alternatives to show in each question. See How to create a MaxDiff experimental design in Q for more information about how to create an experimental design.
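To make the idea of an experimental design concrete, here is a minimal sketch in Python. It is not the algorithm Q uses (Q's designs also balance how often pairs of alternatives appear together); it simply uses a greedy rule that always draws from the least-shown alternatives, so every alternative is shown roughly the same number of times across the questions. The function name and parameters are my own invention for illustration.

```python
import random

def maxdiff_design(n_alternatives, n_questions, per_question, seed=0):
    """Return a list of questions, each a list of alternative indices.

    Greedy balancing: each question draws the alternatives that have
    been shown least often so far, with ties broken randomly, keeping
    exposure counts as even as possible.
    """
    rng = random.Random(seed)
    counts = [0] * n_alternatives
    design = []
    for _ in range(n_questions):
        # Order alternatives by how often they have been shown so far
        order = sorted(range(n_alternatives),
                       key=lambda a: (counts[a], rng.random()))
        question = sorted(order[:per_question])
        for a in question:
            counts[a] += 1
        design.append(question)
    return design

# Ten alternatives (e.g., the ten technology brands), six questions,
# five alternatives per question: each alternative appears exactly
# 6 * 5 / 10 = 3 times.
design = maxdiff_design(n_alternatives=10, n_questions=6, per_question=5)
for q in design:
    print(q)
```

With 10 alternatives, 6 questions, and 5 alternatives per question, there are 30 slots in total, so each alternative can be shown exactly three times. Real designs go further and also balance co-occurrence, which is why dedicated software is normally used for this step.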


MaxDiff experimental design example

3. Collecting the data

This is the easy step. You need a survey platform that supports MaxDiff-style questions. You also need to collect some other profiling data (e.g., age, gender, etc.).


4. MaxDiff Analysis and Reporting

The end-point of MaxDiff is typically one or more of:

  • The preference shares of the alternatives. Where the alternatives represent attributes of some kind, such as characteristics desired in the American President, the preference shares are more commonly referred to as relative importance scores (though they go by many other names).
  • Segments, where each segment exhibits a different pattern of preferences.
  • Profiling of preference shares by other data (e.g., demographics, attitudes).

The preference shares for the technology brands are shown below. The data comes from a study conducted in Australia in April 2017. I will provide more detail about the analysis of MaxDiff in forthcoming blog posts.
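The simplest way to turn MaxDiff responses into scores is a counting analysis: for each alternative, count how often it was chosen as best, subtract how often it was chosen as worst, and divide by how often it was shown. The sketch below, with invented names and a toy data layout, illustrates that calculation. Note that the preference shares reported in practice usually come from statistical models (such as latent class or hierarchical Bayes) rather than raw counts; counting is just a quick approximation.

```python
from collections import Counter

def counting_scores(responses, n_alternatives):
    """Best-minus-worst counts, normalized by exposure.

    responses: list of (shown, best, worst) tuples, where `shown` is
    the list of alternative indices displayed in one question, and
    best/worst are the respondent's picks. Scores fall in [-1, 1].
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for alts, b, w in responses:
        shown.update(alts)
        best[b] += 1
        worst[w] += 1
    return {a: (best[a] - worst[a]) / shown[a]
            for a in range(n_alternatives) if shown[a]}

# Toy example: alternative 0 is always picked as best, 2 always as worst
responses = [([0, 1, 2], 0, 2), ([0, 2, 3], 0, 2)]
print(counting_scores(responses, 4))
# {0: 1.0, 1: 0.0, 2: -1.0, 3: 0.0}
```

An alternative shown everywhere and always picked as best scores 1.0; one always picked as worst scores -1.0; an alternative that is never picked either way sits at 0.0.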



See also 11 Tips for DIY MaxDiff 

Author: Tim Bock

Tim Bock is the founder of Displayr. Tim is a data scientist, who has consulted, published academic papers, and won awards, for problems/techniques as diverse as neural networks, mixture models, data fusion, market segmentation, IPO pricing, small sample research, and data visualization. He has conducted data science projects for numerous companies, including Pfizer, Coca Cola, ACNielsen, KFC, Weight Watchers, Unilever, and Nestle. He is also the founder of Q www.qresearchsoftware.com, a data science product designed for survey research, which is used by all the world’s seven largest market research consultancies. He studied econometrics, maths, and marketing, and has a University Medal and PhD from the University of New South Wales (Australia’s leading research university), where he was an adjunct member of staff for 15 years.

4 Comments

  1. Ntsiki Tango

    Good day Tim,

    I work for KLA and we really love Q:-)

    I am confused by experimental design.

    Do we conduct it separately from our main survey?



    • Displayr Admin

      Hi Ntsiki,

      Thanks! With Max-Diff, the first stage is to create the experimental design. This tells you which questions you need to ask in your questionnaire. You add in the questions as a separate section to your questionnaire.


  2. Sean

    Hi Tim,

    When crafting a Maxdiff type question, how do I control the frequency of attributes appearing in the question. Can I randomize the attributes, or must I create a set scenario for each question (Q1, a,b,c,d,e; Q2: a,e,d,b,f, etc?)

