How to Compute D-error for a Choice Experiment
D-error is a way of summarizing how good or bad a design is at extracting information from respondents in a choice experiment. A design with a low D-error is better than a design with a high D-error, provided that both designs are for the same experiment; comparing D-error between designs for different experiments is meaningless. Many other related measures exist that also serve this purpose, such as D-optimality.
This article gives an overview of D-error and demonstrates how to compute D-error by working through an example. Concepts in this article are covered in more (mathematical) detail here.
Prior parameter assumptions
When computing D-error, a prior assumption about the respondent parameters needs to be made. D0-error assumes that all parameters are zero, i.e., respondents have no preference for any of the attribute levels. DP-error assumes that all respondent parameters are equal to a specified prior parameter vector. DB-error, on the other hand, assumes that respondent parameters are distributed according to a prior probability distribution, which is often a multivariate normal distribution with a diagonal covariance matrix.
DP-error example
A small choice experiment design is shown below:
Version | Task | Question | Alternative | Attribute 1 | Attribute 2 | Attribute 3 |
---|---|---|---|---|---|---|
1 | 1 | 1 | 1 | 1 | 2 | 1 |
1 | 1 | 1 | 2 | 2 | 1 | 2 |
1 | 2 | 2 | 1 | 1 | 2 | 2 |
1 | 2 | 2 | 2 | 2 | 1 | 1 |
1 | 3 | 3 | 1 | 2 | 2 | 1 |
1 | 3 | 3 | 2 | 1 | 1 | 2 |
2 | 4 | 1 | 1 | 2 | 2 | 2 |
2 | 4 | 1 | 2 | 1 | 1 | 1 |
2 | 5 | 2 | 1 | 2 | 2 | 2 |
2 | 5 | 2 | 2 | 1 | 1 | 1 |
2 | 6 | 3 | 1 | 1 | 2 | 1 |
2 | 6 | 3 | 2 | 2 | 1 | 2 |
The first step is to encode the design, with either dummy coding or effects coding. I use dummy coding in this example, and split the encoded design by its six questions, as sketched below.
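The following is a minimal Python sketch of this encoding step. The variable names (`design`, `X`, `X_by_question`) are my own, and it assumes level 1 is the reference level for each attribute, so the three two-level attributes give three dummy-coded columns:

```python
import numpy as np
import pandas as pd

# Design from the table above, one row per alternative (versions 1 and 2 combined).
design = pd.DataFrame({
    "Task":        [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "Alternative": [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2],
    "Attribute 1": [1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 1, 2],
    "Attribute 2": [2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1],
    "Attribute 3": [1, 2, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2],
})

# Dummy coding with level 1 as the (assumed) reference level: each two-level
# attribute becomes a single 0/1 column indicating level 2, so K = 3 parameters.
X = (design[["Attribute 1", "Attribute 2", "Attribute 3"]] == 2).to_numpy(dtype=float)

# Split the encoded design by question (task): a list of J x K matrices, with J = 2 here.
X_by_question = [X[(design["Task"] == t).to_numpy()] for t in sorted(design["Task"].unique())]
```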
For DP-error, I assume that the respondent parameters are given by a prior parameter vector $\boldsymbol{\beta}$. The next step is to compute the multinomial logit probabilities, using the formula

$$P_{iq} = \frac{\exp(\mathbf{x}_{iq}'\boldsymbol{\beta})}{\sum_{j=1}^{J}\exp(\mathbf{x}_{jq}'\boldsymbol{\beta})}$$

where $P_{iq}$ refers to the probability of selecting alternative $i$ out of $J$ alternatives in question $q$, and $\mathbf{x}_{iq}$ is the encoded row of the design for that alternative. Applying this formula to each question gives the choice probabilities under the prior.
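Continuing the sketch above, a small helper that applies this formula to one question's encoded matrix:

```python
def mnl_probabilities(X_q, beta):
    """Multinomial logit choice probabilities for one question.

    X_q is the J x K dummy-coded matrix for the question's alternatives and
    beta is the K-vector of respondent parameters assumed under the prior.
    """
    utilities = X_q @ beta                       # x_{jq}' beta for each alternative j
    exp_u = np.exp(utilities - utilities.max())  # subtract the max for numerical stability
    return exp_u / exp_u.sum()                   # P_{iq} = exp(u_iq) / sum_j exp(u_jq)
```

Subtracting the maximum utility before exponentiating does not change the probabilities, but it avoids numerical overflow for large parameter values.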
These probabilities are then used to construct the Fisher information matrix, using the formula

$$\mathcal{I}(\boldsymbol{\beta}) = \sum_{q=1}^{Q} \mathbf{X}_q' \left(\mathbf{P}_q - \mathbf{p}_q \mathbf{p}_q'\right) \mathbf{X}_q$$

where $\mathbf{X}_q$ is the encoded design matrix for question $q$, $\mathbf{p}_q$ is the vector of choice probabilities for question $q$, and $\mathbf{P}_q$ is the diagonal matrix with $\mathbf{p}_q$ on its diagonal. Plugging the values for $\mathbf{X}_q$ and $\mathbf{p}_q$ into the formula gives the information matrix.
The DP-error is $\left[\det \mathcal{I}(\boldsymbol{\beta})\right]^{-1/K}$, where $K$ is the number of parameters.
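Continuing the sketch, the information matrix and DP-error can be computed as follows. The prior vector `beta_prior` below is a hypothetical value chosen purely for illustration; it is not the prior behind the numbers reported in this article:

```python
def information_matrix(X_by_question, beta):
    """Fisher information: sum over questions of X_q' (P_q - p_q p_q') X_q."""
    info = np.zeros((len(beta), len(beta)))
    for X_q in X_by_question:
        p = mnl_probabilities(X_q, beta)
        info += X_q.T @ (np.diag(p) - np.outer(p, p)) @ X_q
    return info

def d_error(X_by_question, beta):
    """D-error = det(information matrix) ** (-1 / K), where K is the number of parameters."""
    K = len(beta)
    return np.linalg.det(information_matrix(X_by_question, beta)) ** (-1.0 / K)

# Hypothetical prior vector, for illustration only.
beta_prior = np.array([1.0, -1.0, 1.0])
dp_error = d_error(X_by_question, beta_prior)
print(dp_error)
```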
D0-error example
Computing D0-error is just a special case of DP-error where $\boldsymbol{\beta}$ is assumed to be a vector of zeros. In this case, the probabilities are all equal to $1/J$ ($= 1/2$ in this example), and the information matrix is computed as before.
The D0-error then follows from the same formula, $\left[\det \mathcal{I}(\mathbf{0})\right]^{-1/K}$.
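In the sketch above, this amounts to calling the same function with a zero vector:

```python
# D0-error: the same calculation with an all-zero prior, so every probability is 1/2.
d0_error = d_error(X_by_question, np.zeros(3))
print(d0_error)
```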
DB-error (Bayesian) example
DB-error is defined as the integral of DP-error over an assumed prior distribution of the respondent parameters. One way to compute this numerically is known as Monte Carlo estimation: it involves calculating the average DP-error over many sets of parameters randomly drawn from the prior distribution. To illustrate this, assume that the parameter distribution is multivariate normal with a given mean vector and a diagonal covariance matrix, so that each parameter has its own standard deviation. I draw 1000 samples from this distribution, as partially shown in the table below:
Draw | Parameter 1 | Parameter 2 | Parameter 3 | DP-error |
---|---|---|---|---|
1 | 0.25 | -0.73 | 0.67 | 1.45 |
2 | 1.14 | -0.67 | 0.67 | 1.90 |
3 | 0.69 | -0.50 | 1.23 | 1.92 |
... | ... | ... | ... | ... |
1000 | 1.15 | -1.67 | 0.57 | 2.82 |
DB-error is estimated as the mean DP-error across the draws, which is 1.90 in this example. A more computationally efficient, but more complicated, way of computing the integral using quadrature exists¹ but is beyond the scope of this article.
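A Monte Carlo sketch of this calculation, continuing from the snippets above; the prior mean and standard deviations here are hypothetical placeholders, not the values behind the table above:

```python
rng = np.random.default_rng(12345)

# Hypothetical prior mean and standard deviations, for illustration only.
prior_mean = np.array([1.0, -1.0, 1.0])
prior_sd = np.array([0.5, 0.5, 0.5])

# Monte Carlo estimate: average the DP-error over draws from the diagonal normal prior.
draws = rng.normal(loc=prior_mean, scale=prior_sd, size=(1000, 3))
db_error = np.mean([d_error(X_by_question, beta) for beta in draws])
print(db_error)
```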
Read more handy How To guides, or check out the rest of our blog!
References
1 Christopher M. Gotwalt, Bradley A. Jones & David M. Steinberg, “Fast Computation of Designs Robust to Parameter Uncertainty for Nonlinear Settings,” Technometrics (2009) 51:1, 88-95, DOI: 10.1198/TECH.2009.0009