Webinar

Learn how to DIY Conjoint

Looking to learn how to design and implement your own Choice-Based Conjoint (CBC) study? Then don't miss this webinar series. In this workshop-style webinar, we'll work together to tackle a real-world commercial problem - employee attitudes towards climate change.

Over the course of four sessions, you'll gain valuable insights into every step of the CBC process, from experiment and survey design to dashboard design. You'll even have the chance to check your own data and perform statistical analysis.

What's covered in this webinar series

Throughout this series, we design and implement a choice study together, using a real-world commercial problem.

Transcript

Conjoint's the most advanced technique in widespread use in market research. Today, we will focus on providing an overview and planning the whole data collection side of things. We will create the conjoint part of the questionnaire today. Over the next two weeks my team will collect data. Then, we will go through the process of turning this into insights together.

What is choice-based conjoint (CBC)?

Conjoint is an area where there's a lot of jargon. And lots of jargon police, who love to get into debates about definitions.

What I'm talking about today is often called choice-based conjoint, as well as discrete choice experiments, and has 101 other names.

The key defining aspect of the technique is the way the data is collected.
People are asked to make choices between hypothetical products, with questions like this one.

The descriptions of the alternatives change from question to question.

Typically people are shown as few as one such question, and as many as 20.

Using very, very complicated math, we can work out why people make the choices they make.

Now, the cool thing is that once we have done the math, it can then be used to make predictions about what people will do when faced with products that we haven't shown them.

By analyzing questions like the ones shown on the left, we can answer research questions like these.

I'll give you a moment to read them.

And if you are a true research ninja you can even start to make predictions about sales and market share.

But, I put a caveat here. These are very difficult to do and most people stuff it up most of the time.

So, be very careful when selling conjoint. You need to make sure that the questions conjoint answers easily are the ones your stakeholders want answered. If all your stakeholders want is sales and market share predictions, be cautious, as I've seen even the most lauded professors in the field get it wrong by more than 1000 percent.

But, let's dig into how it allows us to understand what people value.

As a very simple example, look at question 3 on the bottom left.

If a person chooses the Godiva option on the left, what have we learned about them?

That's right. The person who chooses the left option values single origin beans more than sugar free.

We also know that they prefer the Godiva option on the left to the Hershey option, but we don't know if this is driven by brand, chocolate, origin, or price.

To understand which of these is driving choice, we need to simultaneously analyze all the data and make some assumptions.

 

Assumption 1

The first key assumption of conjoint is that products in a market can be described in terms of their level of performance on different attributes.

For example, in a study we performed on the market for chocolate bars, we used Brand, Price, Cocoa strength, Sugar, Origin, Nuts, and whether or not they are ethical.

You need to make sure that you have all the attributes and levels needed to meet the agreed objectives.

 

Assumption 2

The second key assumption of conjoint is that a person has some quantifiable level of preference for the different attribute levels. This level of preference is known as utility.

Here, for example, the data shows a preference for Dove over Hershey's, for Godiva over Dove, and that Lindt is liked the least.

Note also that Hershey's utility is set to exactly 0. This isn't a result. Or even an assumption. We only calculate relative utilities. That is, technically we do not estimate the utility of Dove. We just estimate its utility relative to Hershey's.

A different assumption would be to instead estimate each brand's utility relative to the average. It's equivalent to setting one brand to 0. We return to this in the next webinar where we dig into the statistical analysis of the data.
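To make that equivalence concrete, here's a tiny sketch in Python. The utility numbers are invented purely for illustration; they aren't results from any study.

```python
# Hypothetical brand utilities, estimated relative to Hershey's (Hershey's fixed at 0).
relative_to_hersheys = {"Hershey's": 0.0, "Dove": 1.2, "Godiva": 2.0, "Lindt": -0.5}

# The same utilities re-expressed relative to the average brand.
mean_utility = sum(relative_to_hersheys.values()) / len(relative_to_hersheys)
relative_to_average = {brand: u - mean_utility for brand, u in relative_to_hersheys.items()}

# The gaps between brands are identical under both conventions, which is why
# the choice of baseline is a convention rather than a result.
print(relative_to_hersheys["Godiva"] - relative_to_hersheys["Dove"])  # 0.8
print(relative_to_average["Godiva"] - relative_to_average["Dove"])    # 0.8 (to floating-point precision)
```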

What conjoint does is estimate the utility of each attribute level for each respondent, so as to most accurately predict the respondent's choices in the questionnaire.

 

Assumption 3

Assumption 3 is that we can sum up the utilities of each of the attribute levels to deduce the overall utility of a product.

 

Assumption 4

And the last assumption is that people are most likely to choose the product with the highest utility.
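To make Assumptions 3 and 4 concrete, here's a minimal sketch in Python. The attribute levels and utilities are invented for illustration, and the choice rule is shown in its probabilistic (logit) form.

```python
# Illustrative only: invented utilities for a hypothetical chocolate study.
import math

utilities = {
    "brand":  {"Hershey's": 0.0, "Dove": 1.2, "Godiva": 2.0, "Lindt": -0.5},
    "origin": {"Blend": 0.0, "Single origin": 0.8},
    "sugar":  {"Standard": 0.0, "Sugar free": -1.1},
    "price":  {"$0.99": 0.0, "$1.49": -0.7, "$1.99": -1.6},
}

def total_utility(product):
    # Assumption 3: a product's utility is the sum of its attribute-level utilities.
    return sum(utilities[attribute][level] for attribute, level in product.items())

option_a = {"brand": "Godiva",    "origin": "Single origin", "sugar": "Standard",   "price": "$1.99"}
option_b = {"brand": "Hershey's", "origin": "Blend",         "sugar": "Sugar free", "price": "$0.99"}

u_a, u_b = total_utility(option_a), total_utility(option_b)

# Assumption 4, in its probabilistic (logit) form: the higher-utility option is the
# more likely choice, not a certainty.
p_a = math.exp(u_a) / (math.exp(u_a) + math.exp(u_b))
print(f"Utility A = {u_a:.1f}, Utility B = {u_b:.1f}, P(choose A) = {p_a:.2f}")
```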

You did a great job at working out preferences before. But, the clever math we use is something called Hierarchical Bayes estimation of a mixed logit model with a multivariate normal mixing distribution. Or, just Hierarchical Bayes for short.
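For anyone who wants to see that mouthful written down, here is roughly what the model looks like. The notation is mine, not from the slides: each respondent has their own vector of utilities, drawn from a multivariate normal population distribution, and the probability of choosing an alternative in a question is a logit.

```latex
% Sketch of a mixed (random-parameters) logit with a multivariate normal
% mixing distribution, estimated by Hierarchical Bayes.
\beta_i \sim \mathcal{N}(\mu, \Sigma)
\qquad
P(y_{it} = j \mid \beta_i)
  = \frac{\exp\!\big(x_{itj}^{\top}\beta_i\big)}
         {\sum_{k}\exp\!\big(x_{itk}^{\top}\beta_i\big)}
% Hierarchical Bayes places priors on \mu and \Sigma and estimates them,
% together with each respondent's \beta_i, from the observed choices.
```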

 

Hierarchical Bayes

This is what we get when we run such a model. We will return to this in a couple of weeks.

 

Simulator

The coolest of the outputs from conjoint is known as a simulator. It allows us to ask What If questions.

For example, this simulator shows that Godiva has a 10.1% market share.

What happens if it changes to sugar free?

Its share drops to 9.6%.
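Under the hood, a simulator is little more than the share-of-preference calculation sketched below. The utilities are invented for illustration, and a real simulator works with each respondent's utilities rather than a single averaged set, so don't expect the numbers to match the slide.

```python
# Minimal share-of-preference simulator (invented utilities, one "average" respondent;
# a real simulator averages logit shares across every respondent's utilities).
import math

utilities = {
    "brand": {"Hershey's": 0.0, "Dove": 1.2, "Godiva": 2.0, "Lindt": -0.5},
    "sugar": {"Standard": 0.0, "Sugar free": -1.1},
    "price": {"$0.99": 0.0, "$1.49": -0.7, "$1.99": -1.6},
}

def product_utility(product):
    # Sum the attribute-level utilities to get the product's utility.
    return sum(utilities[attribute][level] for attribute, level in product.items())

def shares(market):
    # Logit share of preference: each product's share is proportional to exp(utility).
    exp_utilities = {name: math.exp(product_utility(p)) for name, p in market.items()}
    total = sum(exp_utilities.values())
    return {name: round(e / total, 3) for name, e in exp_utilities.items()}

market = {
    "Godiva":    {"brand": "Godiva",    "sugar": "Standard", "price": "$1.99"},
    "Dove":      {"brand": "Dove",      "sugar": "Standard", "price": "$1.49"},
    "Hershey's": {"brand": "Hershey's", "sugar": "Standard", "price": "$0.99"},
}

print(shares(market))                  # baseline shares

# The "what if" question: Godiva switches to sugar free.
market["Godiva"]["sugar"] = "Sugar free"
print(shares(market))                  # Godiva's predicted share drops
```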

 

"Demand curve"…

We can use simulators to create demand curves.
Here's a demand curve for sugar free chocolates.
The horizontal axis shows the price premium for sugar free. We can see that at a $0 premium, only about 32% of people are predicted to want sugar-free chocolate.
This makes sense. Most people prefer chocolate with sugar.
But, from this chart we can see that around 26% of the market would pay a 60c premium.
And, 20% would pay a $1 premium.
Charts like this one and the simulator are the magic of conjoint.
They give us very clear answers to business questions. We can get a very clear understanding of how people trade off between price and product features by asking relatively easy to understand questions, and doing a lot of math in the background.
Here the data tells us that sugar free is not a mass market proposition, but it is a feature we can charge a very large price premium for.
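A demand curve is just that same simulation repeated across a grid of price premiums. Here's a minimal, self-contained sketch, assuming an invented preference penalty for sugar free and an invented disutility per dollar of premium.

```python
# Sketch of a demand curve: predicted take-up of a sugar-free option at a range
# of price premiums. All numbers are invented for illustration.
import math

u_sugar_free = -0.75      # invented preference penalty for sugar free vs standard
u_per_dollar = -0.5       # invented disutility per extra dollar of price premium

for premium in [0.00, 0.20, 0.40, 0.60, 0.80, 1.00]:
    u = u_sugar_free + u_per_dollar * premium       # utility of the sugar-free option
    share = math.exp(u) / (math.exp(u) + 1.0)       # vs a standard option with utility 0
    print(f"${premium:.2f} premium -> {share:.0%} predicted to choose sugar free")
```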

 

Agree on objectives

Step 1 in a conjoint study is agreeing on objectives.

You really do need to lock down objectives at the very beginning.

You want to know precisely what numbers you will need to compute. If you don't know this before you design the study, you will likely be in trouble after.

 

Objectives for our case study

I have a few goals for the case study we're creating. I will give you a moment to read them.

 

Common objectives

These are common objectives for conjoint studies.

The ones in blue are the ones that relate to the case study we are doing.

 

Choose attributes and levels

What attributes and levels should we use?

The basic way of doing this is to think things through.

 

Examples

Studies focused on pricing, or on deleting products from a range, often use only two attributes: brand and price.

Studies focused on segmentation and product design will often use many more. This example uses 14 attributes.

 

Attributes and levels for the case study

These are the attributes that I've created for the case study. My plan is to ask people about which job offer they would accept. The way I want to understand their attitude to global warming is by working out how they trade off net zero emissions against salary. Will they take a smaller pay rise if their prospective employer is committed to net-zero emissions?

From a marketing perspective, I'd like to know if software is a factor for employees when choosing jobs, so I've included it as an attribute as well.

Lots of companies, including mine, are interested in work location, so there's some potential interest in that data.

And, distance from workplace is obviously important to many people, so I've included it as well.

Now, let's improve these as a group.

I will walk us through some ways of evaluating these attributes. However, at any stage please add your comments into the questions field. I'm looking for you to add your expertise to make my attributes better.

What can you see is bad here?
- Distance irrelevant
- Interaction
- Tradies can't work from home

 

Attribute levels must describe all interesting current and future scenarios

A surprisingly common problem is that people choose attributes and levels that can't describe interesting current and future scenarios.

For example, a study that looks at miles per gallon of gas is not so useful in a world where competitors are electric.

Can anybody see any problems here?

If so, type them into the question field. It's fine to keep typing after we've moved on to the next one.

One issue may be that a company may want to bring in net zero emissions in 25 years. Should we test all the possible times? No. When we have numeric attributes like this, we can interpolate at the analysis stage.
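To make the interpolation point concrete, here's a minimal sketch, assuming we had tested net-zero commitments of 5, 15, and 25 years; the levels and utilities are invented.

```python
# Sketch of interpolating a numeric attribute at the analysis stage.
# Suppose we only tested three "net zero by" horizons; the utilities are invented.
import numpy as np

tested_years = [5, 15, 25]        # levels actually included in the design
tested_utils = [1.4, 0.9, 0.3]    # invented utilities estimated for those levels

# Utility for an untested level, e.g. a commitment to net zero in 20 years:
u_20_years = np.interp(20, tested_years, tested_utils)
print(u_20_years)   # linear interpolation between the 15- and 25-year utilities
```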

 

... mutually exclusive

Attributes and levels must be mutually exclusive.

This is a really common mistake. For example, look at the example in the top right. If, say, it takes 20 minutes at security, you can't very well also show 20 minutes at the gate. The smart thing to do is to create attributes that don't have this problem.

 

Delete any nice-to-know attributes

As with most research, it's a mistake to collect data that you don't need. Nice to know is not enough. What attributes are nice to know here?

If we were trying to predict the actual choices that people made when choosing their current workplace we would need to include such an attribute. But, we aren't, so it should be removed.

 

... Descriptions must be simple

We will ask people quite a few questions that look very similar. We need to have simple descriptions, or respondents can't or won't read them.

Can anybody see any problems with these descriptions? Please type any concerns you have into the GoToWebinar questions field.

 

... Descriptions must be unambiguous
It's surprisingly common to see quite vague attributes. For example, words like "regular" to describe pricing, or, worse yet, ranges.

The problem with these is that the stakeholders using the choice model will likely interpret them differently to the respondents who provided the data, leading to the research being misleading.

And yes, often the different checks are contradictory. Sometimes we need to compromise on simplicity to avoid ambiguity, and vice versa.

 

Design the choice questions/tasks
Once we have worked out what attributes and levels we want, we then need to use them to create questions.

There are three main parts to this.

 

How to design the choice tasks/questions

The first bit in this process is creating the experimental design. What's an experimental design, you ask?

 

What is a conjoint experimental design?

It's the set of instructions that converts a table of attributes and levels into conjoint questions.

 

What the design looks like

A conjoint experimental design is typically a whopping big table of numbers, like this one. We are looking at the first 11 of 300 rows here.

I'll shortly show you how to create such tables.

But, let's start by working out how to read them.

Look at the Brand column, for example. This is for a study with four brands. So, the 1 just means the first brand, the 2 the second, and so on.

In our case, 1 is Dove, 2 is Godiva, etc.

Let's replace the numbers with words to make it a bit easier.

So, while a design is traditionally all numbers, it can also be written in words.

 

Creating the questionnaire from the design

As we will soon discuss, often there are multiple versions in a design, with each respondent seeing one of the versions.
The question column refers to the question number. We can see here that question 1 is represented by 3 rows.

The first row shows us the first option that appears in the first question.
The second row shows the attribute levels of the second option.
The third row the third option
The fourth row the first option in the second question
And so on.
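Here's a minimal sketch of that mapping in Python. The design rows, codes, and labels below are invented to mirror the chocolate example; they aren't exported from any tool.

```python
# Sketch of turning a numeric experimental design into readable choice questions.
from collections import defaultdict

design = [
    # (version, question, alternative, brand code, price code) - invented rows
    (1, 1, 1, 1, 3),
    (1, 1, 2, 2, 1),
    (1, 1, 3, 4, 2),
    (1, 2, 1, 3, 2),
    (1, 2, 2, 1, 1),
    (1, 2, 3, 2, 3),
]

labels = {
    "brand": {1: "Dove", 2: "Godiva", 3: "Hershey's", 4: "Lindt"},
    "price": {1: "$0.99", 2: "$1.49", 3: "$1.99"},
}

# Group the rows: each (version, question) pair is one screen a respondent sees,
# and each row within it is one alternative shown on that screen.
questions = defaultdict(list)
for version, question, alternative, brand, price in design:
    questions[(version, question)].append(
        {"Brand": labels["brand"][brand], "Price": labels["price"][price]}
    )

for (version, question), alternatives in sorted(questions.items()):
    print(f"Version {version}, question {question}:")
    for option in alternatives:
        print("   ", option)
```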

 

Creating experimental designs in Q and Displayr

Here are the instructions for creating designs in Displayr and Q.

 

Case studies

We will start by looking at a simple toy example with two attributes in the car market: brand and price.

The reason I am using such a simple example is it makes it really easy to see what's going on.

We will then migrate to a more realistic problem which looks at choice of home delivery options.

 

Creating

Here are the instructions for creating experimental designs in Q and Displayr. But rather than just read them, let's go and do it.

 

Experimental design for the case study

We'll go into Displayr in edit mode
As always, if you don't know how to do something, you can search.
Search: Experimental design
Anything > Advanced Analysis > Choice Modeling > Experimental Design
Clear search
We click this red button and paste in the data.
Attributes and levels > Enter in spreadsheet > Add data
Go to Excel
Copy data
Paste into Displayr
By default, Displayr creates something called a Balanced overlap design, which is a good general-purpose design, and I'd only change it if I had a good reason.
We've got an error. By default, Displayr uses 10 questions per respondent.
And 1 version. This means that all respondents will see the same questions.
The error is because we need more variation in the data for the Hierarchical Bayes model to calculate the utilities.
I usually want at least 10 versions, and often as many as there are respondents. So, I will set this to 10 to start.
Versions 10
What other errors and warnings have we got?
OK, I will put in a sample size of 500. How does that go?
Let's increase the number of options to 4.
How are we doing?
Now, I want to ask as few questions as I can, so let's try reducing the number of questions.
Decrease to 9.
Oh. Can't do that!
Increase to 10.
The next level of checking is to create the actual choice questions and look at them.
Insert > More > Choice Modeling > Preview Choice Questionnaire
Choice experimental design output: choice.mode.design
What can you see? Is this question sensible?

 

More complicated issues

There are some more complicated issues that sometimes need to be taken into account. We've got articles on how to address these, so I won't go through them now. But, if anybody has any questions, I'm more than happy to go through them.

 

Question types

We've now looked at how to create experimental designs and many of the key decisions that need to be made.

We need to make some other decisions as well. Once we have created a design, we can use any of the question formats I am about to show you.

 

Choice questions

All the examples and most real-world studies use choice questions like this.

 

Best-worst questions

A more complex approach is to use best-worst questions, where people are asked which they like most and which they like least.

The great thing about these questions is that, because they collect more data, they require smaller sample sizes.

But, there is a problem as well. The cool thing about choice questions is that in the real world people choose products and in choice questions they choose products.

But, in the real world people don't walk into shops and say which product they won't buy, so this question has what academics call poor ecological validity. That is, it's less likely to collect high-quality data, all else being equal.

 

Ranking questions

We can also ask people to rank alternatives.

This collects even more data. But, the questions again have poor ecological validity.

 

Constant-sum questions

And, we can ask people constant sum questions.

Again, even more data. But, I'd strongly advise against these. In my opinion they violate one of the assumptions of conjoint.

The underlying math of conjoint assumes that the numbers people enter reflect their uncertainty. That is, they are treated as being proportional to probabilities. But, they are much more likely to instead indicate variety seeking.

And, the questions are tedious to fill in so you get a lot of junk.

 

Numeric questions

And you can ask people how many they would buy.

I also think these tend not to be a clever choice. In addition to the variety-seeking issue, none of the standard models can be used to analyze the data.

 

Mock shelf

All the examples I've shown so far have been of boring grid questions.

Why not use prototypes, virtual reality and so on?

You can. But most people don't. It makes the studies a lot more expensive. And, this in turn leads to less data, which just makes the studies noisy and expensive.

Like everything it's a tradeoff.

 

None of these and current

We can also offer people a choice of None of these and current options.

 

None of these

Note the option on the far right

Most people who are new to choice-based conjoint instinctively think it is a good idea to add such an option, seeming to view it as a bit like having a Don’t know option in a normal questionnaire. However, including this option comes with considerable costs. In particular:

  • When given such an option, some people may click it as an easy alternative to reading the other options. If that happens, then the validity of the entire study is poor.
  • You need a larger sample size if using this option. For example, if the None option is chosen half the time, then the required sample size will, all else being equal, need to be twice as large.
  • Realism. Often people do not have none of these as an option in the real world. For example, a family must have electricity, water, and foodstuffs, so offering a none of these option can sacrifice ecological validity.

I virtually never ask this style of question.

 

Current

A similar type of approach is to give users a current alternative.

This has all the problems of None, and also violates some complicated technical assumptions of the Hierarchical Bayes model.

 

Dual

One solution to the None problem is to ask a dual-response question. This solves most of the issues, and allows you to work out at analysis time whether or not the None of these choices are good at predicting real-world behavior.

But, it makes the questionnaire longer and harder to understand and analyze, so again, I prefer not to use it.

 

 

Sample size

We've already looked at aspects of sample size when creating the design.

 

 

Sample size and experimental design

  • Some types of experimental designs require larger sample sizes, such as designs with prohibitions and partial-profile designs.
  • If we are using None of these or current options, we need to have bigger samples.
  • We can reduce sample sizes if we use the less realistic question types. There are a few proprietary techniques out there, such as adaptive choice based conjoint, which are also designed to collect more data per respondent, but they also do this by asking less realistic questions.

 

Approaches for determining sample size

So how do we work out the sample size?
One rule of thumb, as discussed, is that the standard errors must be no more than 0.05.
I find this useful in understanding the consequences of different design decisions, such as prohibitions.
We already used this, and it got our sample size up to 500.
But, it's just a made up rule. There are so many impossible assumptions in the formulas that calculate the standard errors, such as the assumption that people have a utility of 0 for everything, which makes them largely meaningless. This is exactly the same reason that commercial researchers rarely use sample size formulas in more conventional studies.
So, just like with a normal study, we've got heuristics we can use, such as a minimum sample size of 300.
Sawtooth has a nice little formula that I've set up here as a calculator (and sketched below).
For example, if I halve the number of levels of the attribute with the most levels
The recommended sample size goes down by 50%.
If you want to get all scientific, the way to do it is to use simulations. There's a post about this on the blog.
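For reference, the Sawtooth rule of thumb behind that calculator is usually written as n ≥ 500c / (t × a), where c is the largest number of levels in any one attribute, t is the number of questions per respondent, and a is the number of alternatives per question. A minimal sketch, with example numbers of my own:

```python
# Sawtooth's rule-of-thumb minimum sample size for estimating main effects:
#   n >= 500 * c / (t * a)
# where c = largest number of levels in any single attribute,
#       t = number of choice questions per respondent,
#       a = number of alternatives per question.
def rule_of_thumb_sample_size(max_levels, questions_per_respondent, alternatives_per_question):
    return 500 * max_levels / (questions_per_respondent * alternatives_per_question)

# Example numbers (mine, not the case study's): 10 questions, 4 alternatives each.
print(rule_of_thumb_sample_size(10, 10, 4))  # 125.0
print(rule_of_thumb_sample_size(5, 10, 4))   # 62.5 -- halving the largest attribute halves the recommendation
```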

 

Collect data
Now we move on to data collection.

 

Fieldwork tips
Here are some tips for doing fieldwork. I'll give you a few moments to read them.
