How Good is your Choice Model Experimental Design?
Today, you can produce a wide range of choice model experimental designs with numerous different algorithms. But with all this diversity, how do you measure the quality of a design? In this post, I'll show you how to distill your data into a few key diagnostic metrics of balance, which will help you assess the quality of your choice model design.
Defining Balance and Overlap
Often, the quality of a design is described in terms of its balance and overlap. Balance is a measure of consistency of the frequencies of the attribute levels. Overlap is a measure of repetition of attribute levels within the same question.
However, the drawback of these measures is that they produce many statistics that are difficult to interpret in isolation. To understand how good your design is, you must look at these statistics as part of the bigger picture. I'll show you how to derive diagnostic metrics that provide a holistic measure of the quality of your design.
You can easily apply these metrics to compare designs created from different algorithms.
An example design
In Q or Displayr, designs are created with Conjoint/Choice Modeling > Experimental Design. I am using a small design produced with the Random algorithm. There are two attributes (Color and Speed), each of which has three levels. Every respondent answers five questions, each of which contains three alternatives. There are two versions.
Below I show the output of Conjoint/Choice Modeling > Diagnostic > Experimental Design > Balances and Overlaps. Don't worry if you're confused about what each output means. I'm about to explain them.
D-error and Overlaps
D-error measures how well a design extracts information from respondents. A lower D-error indicates a better design. Usually, D-errors are used to compare the quality of designs created by different algorithms.
For each attribute, the overlap is calculated as the percentage of questions with some repetition of a level. The number of levels of each attribute is shown in brackets. In the example above, 70% of the questions have at least one repeated Color level. In other words, 30% of questions show alternatives with distinct Color levels.
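To make the overlap calculation concrete, here is a minimal sketch in R, assuming the design is stored as a data frame with one row per alternative and columns named Version, Question, Color and Speed (the naming is my own, not the diagnostic output's):

```r
# Proportion of questions in which the attribute has at least one
# repeated level among the alternatives of that question.
overlap <- function(design, attribute) {
  question_id <- interaction(design$Version, design$Question)
  repeated <- tapply(design[[attribute]], question_id,
                     function(x) any(duplicated(x)))
  mean(repeated)
}

# overlap(design, "Color")  # 0.7 for the example design above
```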
Balance statistics
To distill all this complex data into a few metrics that show how good a design is, we need to do some calculations.
To calculate the balance of an attribute within a version, first define the mean level frequency for the attribute as,

$$\mu = \frac{qA}{\ell}$$

where $q$ is the number of questions per respondent, $A$ is the number of alternatives per question and $\ell$ is the number of levels of this attribute. Since $qA$ is the total number of appearances of the attribute in the version, $\mu$ is the number of times each level appears if the levels are balanced.
The balance of an attribute is then defined as the sum across levels of the absolute differences between the level frequency and the mean level frequency,

$$B = \sum_{i=1}^{\ell} \left| f_i - \mu \right|$$

where $f_i$ is the frequency of occurrence of level $i$ for the attribute in the design version.
To normalize the balance, define the worst possible balance of the attribute as,

$$W = (qA - \mu) + (\ell - 1)\,\mu$$

The first term in brackets arises from one level appearing in all alternatives. The second term in brackets arises from all other levels never appearing.
The normalized balance for this version and attribute is calculated according to,

$$\text{normalized balance} = 1 - \frac{B}{W}$$
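Putting the three definitions together, a minimal sketch in R (the function name and arguments are my own) could be:

```r
# Normalized balance of one attribute within one version.
#   freqs: vector of level frequencies for the attribute
#   q:     questions per respondent
#   alts:  alternatives per question
normalized_balance <- function(freqs, q, alts) {
  l     <- length(freqs)                   # number of levels
  mu    <- q * alts / l                    # mean level frequency
  b     <- sum(abs(freqs - mu))            # balance
  worst <- (q * alts - mu) + (l - 1) * mu  # worst possible balance
  1 - b / worst
}
```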
To calculate the mean version balance, take the average of the normalized balances across all attributes and all versions. If every version is perfectly balanced, so that the levels of each attribute appear the same number of times, the mean version balance is one.
By calculating the balance for the whole design, regardless of version, I arrive at the analogous across version balance. If this value is one, the levels of each attribute appear the same number of times within the whole design. Note that the across version balance can be one even when the individual versions are not balanced (so the mean version balance is less than one). More usually, the across version balance is simply closer to one, because at the whole-design level some of the individual version imbalances offset each other.
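Using the sketch above with made-up frequencies (not the example design's), the mean version balance averages over versions while the across version balance pools them. This also illustrates how version imbalances can offset:

```r
# Hypothetical Color frequencies for illustration: one row per version.
color_freqs <- rbind(version1 = c(6, 5, 4),
                     version2 = c(4, 5, 6))

# Normalized balance of Color in each version: 0.9 and 0.9.
apply(color_freqs, 1, normalized_balance, q = 5, alts = 3)

# Across version balance for Color: pooling gives c(10, 10, 10),
# which is perfectly balanced (1), even though neither version is.
normalized_balance(colSums(color_freqs), q = 2 * 5, alts = 3)
```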
Worked example of balance
Using the example shown previously, $q = 5$, $A = 3$ and $\ell = 3$ for each attribute, so the mean level frequency is

$$\mu = \frac{5 \times 3}{3} = 5$$

With the frequencies of Color in the first version taken from the diagnostic output above, the calculations for the balance, worst balance and normalized balance of Color follow directly. In particular, the worst balance is

$$W = (15 - 5) + (3 - 1) \times 5 = 20$$

so the normalized balance of Color in the first version is $1 - B/20$.
Repeating this for all attributes and versions gives a table of normalized balances, and taking their average arrives at the mean version balance of 0.825 shown above.
In the original diagnostic output, the singles lists are the level frequencies across the whole design. These can be used to calculate the across version balance. Without going through each step, the whole-design mean level frequency is $\mu = 30/3 = 10$, the worst balance is $W = (30 - 10) + (3 - 1) \times 10 = 40$, and the normalized balance of Color works out to 0.9. Since the value is the same for the Speed attribute, across version balance = 0.9.
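As a quick check of the arithmetic (with invented whole-design frequencies, since any singles summing to 30 with absolute deviations totalling 4 give the same result):

```r
normalized_balance(c(8, 10, 12), q = 2 * 5, alts = 3)  # 0.9
```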
Pairwise balances
It is relatively straightforward for an algorithm to maintain single level balance (apart from the Random algorithm!). More challenging is the pairwise balance. I discuss why you want your designs to be pairwise balanced here.
The pairwise balance of two attributes is best shown by a table of the co-occurrences of each pair of levels. Below I reproduce the pairs table from the diagnostic output. Using the bottom right cell as an example, it shows that across the whole design there were 3 alternatives that were both Yellow and Slow.
The formulae for balance statistics can be converted to pairwise balance statistics (where $a$ and $b$ are attributes) as follows,

$$\mu_{ab} = \frac{qA}{\ell_a \ell_b}$$

$$B_{ab} = \sum_{i=1}^{\ell_a} \sum_{j=1}^{\ell_b} \left| f_{ij} - \mu_{ab} \right|$$

$$W_{ab} = (qA - \mu_{ab}) + (\ell_a \ell_b - 1)\,\mu_{ab}$$

$$\text{normalized pairwise balance} = 1 - \frac{B_{ab}}{W_{ab}}$$

where $f_{ij}$ is the frequency with which level $i$ of attribute $a$ and level $j$ of attribute $b$ appear together in an alternative of the version.
To calculate the mean version pairwise balance, take the average of the normalized balances across all distinct pairs of attributes and all versions.
Like the mean version balance, if mean version pairwise balance is one, then each pair of levels for each pair of attributes occurs equally often in each version. The closer that mean version pairwise balance is to zero, the more imbalanced the design. Across version pairwise balance is the counterpart of across version balance. It ignores versions and considers the pairwise balance of the design as a whole.
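The earlier R sketch extends naturally to pairs; again the naming is my own:

```r
# Normalized pairwise balance of two attributes within one version.
#   pair_freqs: matrix of co-occurrence counts, with the levels of one
#               attribute in rows and of the other in columns
normalized_pairwise_balance <- function(pair_freqs, q, alts) {
  n_pairs <- length(pair_freqs)                    # l_a * l_b level pairs
  mu      <- q * alts / n_pairs                    # mean pair frequency
  b       <- sum(abs(pair_freqs - mu))             # pairwise balance
  worst   <- (q * alts - mu) + (n_pairs - 1) * mu  # worst possible balance
  1 - b / worst
}
```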
Worked example of pairwise balance
The table of pairwise frequencies for the first version contains the co-occurrence counts of each Color and Speed pair, for which we can compute the statistics according to the above formulae. With $q = 5$, $A = 3$ and $\ell_a \ell_b = 9$ level pairs,

$$\mu_{ab} = \frac{15}{9} = \frac{5}{3} \qquad W_{ab} = \left(15 - \tfrac{5}{3}\right) + 8 \times \tfrac{5}{3} = \frac{80}{3}$$

so the normalized pairwise balance of the version is $1 - \frac{3 B_{ab}}{80}$.
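For instance, a hypothetical 3 x 3 pair table for one version (counts summing to 15, not the actual example design's) plugs in as:

```r
pair_freqs <- matrix(c(2, 2, 1,
                       2, 1, 2,
                       1, 2, 2), nrow = 3, byrow = TRUE)
normalized_pairwise_balance(pair_freqs, q = 5, alts = 3)  # 0.85
```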
Repeating the calculation for the second version and averaging the two gives the mean version pairwise balance of 0.6625. Remember that the closer the mean version pairwise balance is to zero, the more imbalanced the design is.
Conclusion
I hope this helps you distill your data into a few metrics that give you a better idea of the quality of your design. You can easily calculate these statistics in Q, Displayr or R. If you would like to see my worked example and adapt it for your own data, you can do so in this Displayr document.
The power of these metrics is in using them as benchmarks for comparison between different designs. In a later post I will use this technique to explore the differences between design algorithms.
Find out more about Choice Model Experimental Designs on our Displayr blog. Want to do this yourself? Try Displayr for free. We've designed what we think is the best choice modeling software in the world.