Now, we will do a poll.

How much do you know about simulators?

• I am not sure what they are

• I have used them, but not created them

• I have created and used them

**Recap**

In the previous two webinars we designed an experiment looking at various aspects of job choice. In particular, we examined how people trade off salary against an employer's commitment to being carbon neutral, the quality of its software, and its work-from-home policies.

We collected data from 1,056 people, cleaned the data, and estimated some choice models.

**This webinar**

In this webinar we will focus on three things. First, extracting unscaled utilities, which are also known as coefficients.

We will then discuss how to create a simulator from these unscaled utilities. A simulator allows us to ask what-if questions of the data: to predict people's choices when confronted with different job offers.

Finally, we will look at how to scale the utilities so that they are easier to present to clients.

*Hierarchical Bayes: 3,000*

As discussed in the previous webinar, we fit a hierarchical Bayes choice model to the data and estimated the utilities for the respondents. The outputs show the average utilities for each person, and the distributions of these utilities.

These hierarchical Bayes models estimate multiple utilities for each person. These are called draws.

**Multiple utilities**

We are looking at the utilities for the 10th respondent in this study. We can see that they marginally preferred alternative 2, as that is where they have the higher utilities.

They have higher utilities for higher salaries.

They prefer a company that is committed to carbon neutrality sooner rather than later.

They value better software, but don't care much whether it's standard or great.

And, they'd like a fully remote office.

As mentioned, this is just one of their 100 draws.

Let's change to draw 22. This is a different estimate of the respondent's utilities. We now have a much higher utility for Great software versus Standard.

When we are creating a simulation, it's useful to have multiple estimates of utility for each respondent, as it helps our simulations take uncertainty into account.
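As a minimal sketch (in Python, with hypothetical utility values), the logit-draw idea computes choice shares for each draw separately and then averages them, so the uncertainty across draws flows through to the predicted shares:

```python
import math

def logit_shares(utilities):
    """Convert a list of total utilities into choice probabilities (softmax)."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical: three draws of total utility for one respondent, two alternatives.
draws = [[1.2, 0.8], [0.9, 1.1], [1.5, 0.7]]

# Compute shares for each draw, then average across the draws.
per_draw = [logit_shares(d) for d in draws]
shares = [sum(s[i] for s in per_draw) / len(per_draw) for i in range(2)]
```

Averaging the shares across draws, rather than averaging the utilities first, is what lets the simulation reflect each respondent's uncertainty.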

However, for many analyses it's much more convenient to just have a single number for each person.

We can do this in Displayr and Q by saving variables from the model that contain each person's utility.

*Object Inspector > Save individual-level coefficients.*

*Expand out variables*

**Create a simulator**

Once we have our utilities, we need to create a simulator.

**Utilities (Coefficients)**

Pop quiz. Looking at the utilities here, which of these three alternatives do you think the person will prefer?
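To make the calculation concrete, here is a sketch in Python. The utility values and attribute levels below are hypothetical: each alternative's total utility is the sum of the utilities of its attribute levels, and the predicted choice is the alternative with the highest total.

```python
# Hypothetical utilities (coefficients) for one respondent.
utilities = {
    "Salary +20%": 5.7, "Salary +10%": 2.9, "Salary +0%": 0.0,
    "Carbon neutral now": 1.4, "Carbon neutral in 10 years": 0.3,
    "Great software": 0.8, "Standard software": 0.0,
    "Fully remote": 1.1, "Office only": 0.0,
}

# Each alternative is a bundle of attribute levels; its total utility
# is the sum of its levels' utilities.
alternatives = {
    "Alt 1": ["Salary +20%", "Carbon neutral in 10 years",
              "Standard software", "Office only"],
    "Alt 2": ["Salary +0%", "Carbon neutral now",
              "Great software", "Fully remote"],
    "Alt 3": ["Salary +10%", "Carbon neutral in 10 years",
              "Great software", "Office only"],
}

totals = {alt: sum(utilities[level] for level in levels)
          for alt, levels in alternatives.items()}
predicted = max(totals, key=totals.get)  # highest total utility wins
```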

Now, Displayr will automate this for us.

*Go back to 2,000 model*

*Object inspector > SIMULATION > Simulator*

*No alternatives*

*4*

As an example, let's compare the following:

Alternative 1: Salary 20% higher

Alternative 2: Carbon neutral

Alternative 3: Best software

Alternative 4: Fully remote

**Assumptions of a standard simulator**

The shares we have been showing are predicted shares, based on what people chose in the questionnaire.

To produce sales forecasts, we typically need to do two things:

- Convert preference shares to market shares. This is very hard.
- Convert the market share to sales. This is usually easy. Just multiply the market size by the share.
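The second step really is just multiplication. A sketch with made-up numbers:

```python
# Hypothetical figures: a market of 500,000 buyers and a predicted
# 12% market share.
market_size = 500_000
market_share = 0.12

forecast_sales = market_size * market_share  # 60,000
```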

What makes it hard to compute market share?

The first issue is ecological validity. Do people answer choice-based conjoint questions in the same way that they make choices in the real world?

It's an assumption we make throughout research. We assume that when people tell us in a poll who they will vote for, they will vote the same way in the election.

But choice-based conjoint questions are boring and complicated, so the assumption is less likely to be true. In practice we have to hope the data is good, but we can do a few things to improve it.

The next four assumptions are things we can improve on.

*Rule: logit draw*

I have talked about how more modern analyses allow you to take into account respondent uncertainty.

If we want the simulator to take uncertainty into account, we need to do so explicitly.

In many markets, retail distribution is a factor. A particular brand may only be available in some states. If so, we need to adjust the simulator to take that into account. I will send some reading materials about that.

Answering conjoint questions is a bit boring. People make mistakes.

Shopping is also a bit boring. We often are lazy and careless shoppers. We make mistakes.

A basic conjoint simulator assumes we make mistakes at the same rate in the real world as in the conjoint.

We can tune choice models to allow for people making more or fewer mistakes.

The way we tune is using a scale parameter.

A value of 1 indicates we think the mistakes are the same in the real world as the experiment.

Let's set it at 0.5.

*Scale: 0.5*

Note that when we do this all the shares become more equal.

If we thought there was less error in the real world, we would use a larger scale, which produces bigger differences.

*Scale: 3*
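A minimal sketch of what the scale parameter does (with hypothetical utilities): the utilities are multiplied by the scale before the logit formula is applied, so values below 1 pull the shares toward equality and values above 1 spread them apart.

```python
import math

def logit_shares(utilities, scale=1.0):
    """Choice probabilities after multiplying the utilities by a scale parameter."""
    exps = [math.exp(scale * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

u = [1.0, 0.0]  # hypothetical total utilities for two alternatives

flat = logit_shares(u, scale=0.5)   # more real-world mistakes: shares more equal
base = logit_shares(u, scale=1.0)   # mistakes match the experiment
sharp = logit_shares(u, scale=3.0)  # fewer mistakes: bigger differences
```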

If we have some market share data, we can tune the scale parameter so that the predicted shares match the market shares.

We have technical documentation that describes this.

**Calibration**

Another assumption of our model was that we have not ignored any key attributes.

If we did, that could explain a difference between our shares and the real world. We can fix this using calibration.

This is discussed in our technical documentation.

A lot of care should be exercised before using calibration. It's only valid if the reason we have failed to predict perfectly is that we are missing some attributes. But that assumption is a bit tenuous. Many other things could also explain the difference between the simulator and the actual share.

I don't like to use calibration, as I think it makes clients believe that the research is more accurate than it is, and they then rely on it uncritically.

*View mode > Scale utilities*

In an earlier webinar, I was asked what scale the utilities are estimated on.

**The scaling...**

That is, why is the value of Salary for this respondent with a 20% pay rise 5.7, rather than, say, 10 or 100?

This is because they are logit scaled. What does logit scaled mean?

It means that when we apply the formula shown here, the utilities we estimate give probabilities that line up with the way people answered the questionnaire.
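For reference, the standard multinomial logit formula for converting utilities into choice probabilities is:

```latex
P_i = \frac{e^{U_i}}{\sum_{j=1}^{J} e^{U_j}}
```

where \(U_i\) is the total utility of alternative \(i\) and \(J\) is the number of alternatives on offer.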

So, logit scaling is very useful, as it allows us to build a simulator.

But logit-scaled utilities are very hard to explain to clients, so lots of different scalings have been developed.

The scaling of utilities is about presentation.

To make this point a bit easier to see, here I've just shown the utilities for some brands of chocolate from another study.

As we talked about in the earlier webinar, the first alternative, Godiva, arbitrarily gets a utility of 0.

A different way of showing this is to change the scale so that the mean is 0. Note that the relativities stay the same.

We can scale them so that the difference between the smallest and biggest is 100.

The smallest is 100.

The average range is 100

They are ordered from least to most popular.

They all give the same answer. It's a matter of personal preference.

No matter how we show it, Lindt is least appealing, Godiva a bit more appealing, Dove is more appealing, and Hershey is the clear winner.
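These rescalings are simple linear transformations. A sketch in Python (the utility values are hypothetical, but preserve the ordering just described):

```python
# Hypothetical logit-scaled utilities, with the first brand fixed at 0.
raw = {"Godiva": 0.0, "Lindt": -0.8, "Dove": 0.6, "Hershey": 1.5}

# Mean 0: subtract the mean; the relativities stay the same.
mean = sum(raw.values()) / len(raw)
mean0 = {b: u - mean for b, u in raw.items()}

# Min 0, max range 100: shift so the smallest is 0, then stretch so the
# difference between smallest and biggest is 100.
lo, hi = min(raw.values()), max(raw.values())
scaled = {b: (u - lo) / (hi - lo) * 100 for b, u in raw.items()}
```

Either way, the ordering of the brands is unchanged; only the presentation differs.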

My favorite is Utilities (min 0, Max Range 100).