Today, you can produce a wide range of choice model experimental designs with numerous different algorithms. But with all this design diversity, how do you measure the quality of a design? In this post, I'll show you how to distill your data into a few key diagnostic metrics of balance, which will help you assess the quality of your choice model design.

Defining Balance and Overlap

Often, the quality of a design is described in terms of its balance and overlap. Balance measures how evenly the attribute levels occur across the design. Overlap measures how often attribute levels are repeated within the same question.

However, the drawback of these measures is that they produce many statistics that are difficult to interpret in isolation. To understand how good your design is, you need to look at these statistics as part of the bigger picture. I'll show you how to derive diagnostic metrics that provide a holistic measure of the quality of your design.

You can easily apply these metrics to compare designs created from different algorithms.

An example design

In Q or Displayr, designs are created with Conjoint/Choice Modeling > Experimental Design. I am using a small design produced with the Random algorithm. There are two attributes (Color and Speed), each of which has three levels. Every respondent answers five questions, each of which contains three alternatives. There are two versions.

Below I show the output of Conjoint/Choice Modeling > Diagnostic > Experimental Design > Balances and Overlaps. Don't worry if you're confused about what each output means. I'm about to explain them.

D-error and Overlaps

D-error is a measure of how well a design extracts information from respondents. A lower D-error indicates a better design. D-errors are usually used to compare the quality of designs created by different algorithms.
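For context (this formula is background rather than part of the diagnostic output above), one common definition of D-error is

    \[ \text{D-error}=\det(\Omega)^{1/K} \]

where \Omega is the variance-covariance matrix of the parameter estimates implied by the design and K is the number of parameters, so the D-error summarizes the overall uncertainty of the estimates in a single number.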

For each attribute, the overlap is calculated as the percentage of questions with some repetition of a level. The number of levels of each attribute is shown in brackets. In the example above, 70% of the questions have at least one repeated Color level. In other words, 30% of questions show alternatives with distinct Color levels.
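As a rough illustration of this calculation, the R sketch below computes the overlap of an attribute. It assumes the design is held as a data frame with one row per alternative and columns Version, Question, Color and Speed; these names are illustrative rather than the exact format of the Q/Displayr output.

    # Proportion of questions in which at least one level of the attribute repeats.
    overlap <- function(design, attribute) {
        question <- interaction(design$Version, design$Question, drop = TRUE)
        repeated <- tapply(design[[attribute]], question,
                           function(x) any(duplicated(x)))
        mean(repeated)
    }

    # For the example design above, overlap(design, "Color") would return 0.7.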

Balance statistics

To distill all of this data into a few metrics that tell us how good a design is, we need to do some calculations.

To calculate the balance of an attribute within a version, first define the mean level frequency for attribute a as,

    \[ \mu _{a}= \frac{nqns{\cdot}nalts}{nlevels_{a}} \]

where nqns is the number of questions per respondent, nalts is the number of alternatives per question and nlevels_{a} is the number of levels of this attribute. Since nqns \cdot nalts is the total number of appearances of each attribute in the version, \mu _{a} is the number of times each level appears if the levels are balanced.

The balance of an attribute is then defined as the sum across levels of the absolute differences between the level frequency and mean level frequency.

    \[ b_{a}=\sum_{i=1}^{nlevels_{a}}{|f_{i}-\mu _{a}|} \]

where f_{i} is the frequency of occurrence of level i for attribute a in the design version.

To normalize the balance, define the worst possible balance of the attribute as,

    \[ wb_{a}=(\mu _{a}{\cdot}nlevels_{a}-\mu _{a})+(nlevels_{a}-1)\mu_{a}=2(nlevels_{a}-1)\mu _{a} \]

The first term in brackets arises from one level appearing in all alternatives. The second term in brackets arises from all other levels never appearing.

The normalized balance for this version and attribute is calculated according to,

    \[ nb_{a}=1- \frac{b_{a}}{wb_{a}} \]
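Putting these three steps together, here is a minimal R sketch of the normalized balance of a single attribute within one version. It assumes the levels are coded as integers 1, 2, ..., nlevels_{a}; that coding is purely for illustration.

    # Normalized balance of one attribute within one version.
    # 'x' holds the attribute's levels across the version's alternatives
    # (length nqns * nalts), coded as integers 1..n_levels.
    normalized_balance <- function(x, n_levels) {
        mu <- length(x) / n_levels                          # mean level frequency
        f <- table(factor(x, levels = seq_len(n_levels)))   # level frequencies, keeping zeros
        b <- sum(abs(f - mu))                               # balance
        wb <- 2 * (n_levels - 1) * mu                       # worst possible balance
        1 - b / wb                                          # normalized balance
    }

For instance, a version in which an attribute's three levels occur 4, 8 and 3 times gives 1 - 6/20 = 0.7, matching the worked example later in this post.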

To calculate the mean version balance, take the average of the normalized balances across all attributes and all versions. If every version is perfectly balanced, so that the levels of each attribute appear the same number of times, the mean version balance is one.

    \[ \text{mean version balance}=\frac{\sum_{i=1}^{nattributes}{\sum_{j=1}^{nversions}{nb_{i,j}}}}{nattributes{\cdot}nversions} \]

By calculating the balance for the whole design regardless of version, I arrive at the analogous across version balance. If this value is one, the levels of each attribute appear the same number of times within the whole design. Note that the across version balance could be one even when the individual versions are not balanced (so the mean version balance is less than one). The more usual case is that the across version balance is closer to one, because at the whole-design level some of the individual version differences offset each other.
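Continuing the R sketch, both summary metrics are simply averages of normalized_balance(), applied per version and per attribute, or to the design as a whole. The data frame layout and integer level coding remain illustrative assumptions.

    # Average of the normalized balances over every attribute and every version.
    mean_version_balance <- function(design, attributes, n_levels) {
        nb <- sapply(attributes, function(a)
            tapply(design[[a]], design$Version,
                   function(x) normalized_balance(x, n_levels[[a]])))
        mean(nb)
    }

    # The same calculation ignoring versions: the balance of the design as a whole.
    across_version_balance <- function(design, attributes, n_levels) {
        mean(sapply(attributes, function(a)
            normalized_balance(design[[a]], n_levels[[a]])))
    }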

Worked example of balance

Using the example shown previously,

    \[ \mu _{color}= \frac{5\cdot3}{3}=5 \]

The frequencies of Color in the first version are 4, 8 and 3.

The calculations for the balance, worst balance and normalized balance of Color are,

    \[ b_{color}=|4-5|+|8-5|+|3-5|=6 \]

    \[ wb_{color}=2(3-1)5=20 \]

    \[ nb_{color}=1-\frac{6}{20}=\frac{7}{10} \]

For all attributes and versions, the table of normalized balances is,

Taking the average arrives at the mean version balance of 0.825 shown above.

In the original diagnostic output, the singles lists give the level frequencies across the whole design. These can be used to calculate the across version balance. Without going through each step, b_{color}=4, wb_{color}=40 and nb_{color}=0.9. Since those values are the same for the Speed attribute, the across version balance is 0.9.
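For reference, the skipped steps follow directly from the formulas above: across the two versions each Color level would ideally appear \mu_{color}=(2\cdot5\cdot3)/3=10 times, so

    \[ wb_{color}=2(3-1)\cdot10=40, \qquad nb_{color}=1-\frac{4}{40}=0.9 \]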

Pairwise balances

It is relatively straightforward for an algorithm to maintain single level balance (apart from the Random algorithm!). More challenging is the pairwise balance. I discuss why you want your designs to be pairwise balanced here.

The pairwise balance of two attributes is best shown by a table of the co-occurrences of each pair of levels. Below I reproduce the pairs table from the diagnostic output. For example, the bottom-right cell shows that across the whole design there were 3 alternatives that were both Yellow and Slow.

The formulae for balance statistics can be converted to pairwise balance statistics (where a and b are attributes) as follows,

    \[ \mu_{a, b}= \frac{nqns{\cdot}nalts}{nlevels_{a}{\cdot}{nlevels_{b}}} \]

    \[ b_{a,b}=\sum_{i=1}^{nlevels_{a}}\sum_{j=1}^{nlevels_{b}}{|f_{i,j}-\mu_{a,b}|} \]

    \[ wb_{a,b}=(\mu _{a,b}{\cdot}nlevels_{a}{\cdot}nlevels_{b}-\mu _{a,b})+(nlevels_{a}{\cdot}nlevels_{b}-1)\mu_{a,b}=2(nlevels_{a}{\cdot}nlevels_{b}-1)\mu _{a,b} \]

    \[ nb_{a,b}=1- \frac{b_{a,b}}{wb_{a,b}} \]
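Extending the earlier R sketch to pairs of attributes (again with integer-coded levels assumed purely for illustration):

    # Pairwise normalized balance of two attributes.
    # 'x' and 'y' hold the two attributes' levels across the alternatives of a
    # version (or of the whole design), coded as integers.
    pairwise_normalized_balance <- function(x, y, n_levels_x, n_levels_y) {
        n_pairs <- n_levels_x * n_levels_y
        mu <- length(x) / n_pairs                            # mean pair frequency
        f <- table(factor(x, levels = seq_len(n_levels_x)),
                   factor(y, levels = seq_len(n_levels_y)))  # pair co-occurrence counts
        b <- sum(abs(f - mu))                                # pairwise balance
        wb <- 2 * (n_pairs - 1) * mu                         # worst possible pairwise balance
        1 - b / wb
    }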

To calculate the mean version pairwise balance, take the average of the normalized balances across all distinct pairs of attributes and all versions.

    \[ \text{mean version pairwise balance}=\frac{\sum_{i=1}^{nattributes}\sum_{j=1}^{i - 1}{\sum_{k=1}^{nversions}{nb_{i,j,k}}}}{\frac{nattributes{\cdot}(nattributes-1)}{2}{\cdot}nversions} \]

Like the mean version balance, if mean version pairwise balance is one, then each pair of levels for each pair of attributes occurs equally often in each version. The closer that mean version pairwise balance is to zero, the more imbalanced the design. Across version pairwise balance is the counterpart of across version balance. It ignores versions and considers the pairwise balance of the design as a whole.
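The corresponding sketch averages pairwise_normalized_balance() over every distinct pair of attributes, per version or over the whole design; as before, the data frame layout is an illustrative assumption.

    # Average pairwise normalized balance over all distinct attribute pairs and versions.
    mean_version_pairwise_balance <- function(design, attributes, n_levels) {
        pairs <- combn(attributes, 2, simplify = FALSE)
        nb <- sapply(pairs, function(p)
            sapply(split(design, design$Version), function(v)
                pairwise_normalized_balance(v[[p[1]]], v[[p[2]]],
                                            n_levels[[p[1]]], n_levels[[p[2]]])))
        mean(nb)
    }

    # Across version pairwise balance: the same average, ignoring versions.
    across_version_pairwise_balance <- function(design, attributes, n_levels) {
        mean(sapply(combn(attributes, 2, simplify = FALSE), function(p)
            pairwise_normalized_balance(design[[p[1]]], design[[p[2]]],
                                        n_levels[[p[1]]], n_levels[[p[2]]])))
    }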

Worked example of pairwise balance

The table of pairwise frequencies for the first version is as follows,

From this table we can compute the statistics according to the formulae above.

    \[ \mu_{color, speed}= \frac{5\cdot3}{3\cdot3}=\frac{5}{3} \]

    \[ b_{color, speed}=\frac{28}{3} \]

    \[ wb_{color,speed}=2(3\cdot3-1)\frac{5}{3}=\frac{80}{3} \]

    \[ nb_{color,speed}=1-\frac{28}{80}=\frac{13}{20}=0.65 \]

For the second version, nb_{color,speed}=0.675, which gives a mean version pairwise balance of 0.6625. Remember that the closer the mean version pairwise balance is to zero, the more imbalanced the design is.

Conclusion

I hope this helps you distill your data into a few metrics that give you a better idea of the quality of your design. You can easily calculate these statistics in Q, Displayr or R. If you would like to see my worked example and adapt it for your own data, you can do so in this Displayr document.

The power of these metrics is in using them as benchmarks for comparison between different designs. In a later post I will use this technique to explore the differences between design algorithms.

Find out more about Choice Model Experimental Designs on our Displayr blog. Want to do this yourself? Try Displayr for free. We've specifically designed what we think is the best choice modeling software in the world.