D-error is a way of summarizing how good or bad a design is at extracting information from respondents in a choice experiment. A design with a low D-error is better than a design with a high D-error, provided that both designs are for the same experiment; comparing D-error between designs for different experiments is meaningless. Many other related measures exist that also serve this purpose, such as D-optimality.

This article gives an overview of D-error and demonstrates how to compute D-error by working through an example. Concepts in this article are covered in more (mathematical) detail here.

Prior parameter assumptions

When computing D-error, a prior assumption about the respondent parameters needs to be made. D0-error assumes that all parameters are zero, i.e., respondents have no preference for any of the attribute levels. DP-error assumes that all respondents share a single fixed parameter vector. DB-error, on the other hand, assumes that respondent parameters are distributed according to a probability distribution, often a multivariate normal distribution with a diagonal covariance matrix.

DP-error example

A small choice experiment design is shown below:

Version | Task | Question | Alternative | Attribute 1 | Attribute 2 | Attribute 3

The first step is to encode the design, with either dummy coding or effects coding. I use dummy coding in this example, and split the encoded design by its four questions:


    \[ \textbf{X}_3=\left[\begin{matrix}1&0&1\\0&1&0\end{matrix}\right],\textbf{X}_4=\left[\begin{matrix}1&1&1\\0&0&0\end{matrix}\right] \]

For DP-error, I assume that the respondent parameters are given by \boldsymbol{\beta}=[0.5, -0.8, 1.0]. The next step is to compute the multinomial logit probabilities, using the formula

    \[ p_{q,i}=\frac{e^{\textbf{x}_{q,i}\boldsymbol{\beta}}}{\sum_{j=1}^{J}e^{\textbf{x}_{q,j}\boldsymbol{\beta}}} \]

where p_{q,i} refers to the probability of selecting alternative i out of J alternatives in question q, and \textbf{x}_{q,i} is the encoded row of the design for that alternative. The probabilities are


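As a sketch of how this formula can be evaluated, the snippet below computes the choice probabilities for question 3 with NumPy. \textbf{X}_3 and \boldsymbol{\beta} are taken from this article; the max-subtraction is a standard numerical-stability trick, not part of the formula itself:

```python
import numpy as np

def mnl_probabilities(X_q, beta):
    """Multinomial logit choice probabilities for one question.

    X_q:  (J, K) encoded design matrix for the question's J alternatives.
    beta: (K,) respondent parameter vector.
    """
    utilities = X_q @ beta
    # Subtract the max utility before exponentiating for numerical stability.
    expu = np.exp(utilities - utilities.max())
    return expu / expu.sum()

# X_3 and beta as given in the article.
X3 = np.array([[1, 0, 1],
               [0, 1, 0]])
beta = np.array([0.5, -0.8, 1.0])

p3 = mnl_probabilities(X3, beta)  # p3 ≈ [0.909, 0.091]
```

The first alternative of question 3 has utility 1.5 versus -0.8 for the second, so it is chosen roughly 91% of the time under these parameters.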
These probabilities are then used to construct the Fisher information matrix, using the formula

    \[ \textbf{M}=\sum_{q=1}^{Q}\textbf{X}_q^\top\left(\operatorname{diag}(\textbf{p}_q)-\textbf{p}_q\textbf{p}_q^\top\right)\textbf{X}_q \]

where \textbf{p}_q is the vector of choice probabilities for question q. Plugging the values for \textbf{X} and \textbf{p} into the formula, the information matrix is


The DP-error is |\textbf{M}|^{-1/K}=1.72, where K=3 is the number of parameters.
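Putting the steps together, here is a hedged NumPy sketch of the full DP-error calculation. \textbf{X}_3, \textbf{X}_4, and \boldsymbol{\beta} match the article, but \textbf{X}_1 and \textbf{X}_2 are made-up stand-ins (their values are not shown above), so the result will not reproduce 1.72 exactly:

```python
import numpy as np

def dp_error(question_matrices, beta):
    """DP-error of a design under a fixed parameter vector beta.

    question_matrices: list of (J, K) encoded design matrices, one per question.
    """
    K = len(beta)
    M = np.zeros((K, K))
    for X_q in question_matrices:
        u = X_q @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        # Fisher information contribution of this question:
        # X' (diag(p) - p p') X
        M += X_q.T @ (np.diag(p) - np.outer(p, p)) @ X_q
    return np.linalg.det(M) ** (-1.0 / K)

# X_1 and X_2 are hypothetical stand-ins; X_3, X_4, beta are from the article.
X1 = np.array([[1, 0, 0], [0, 1, 1]])
X2 = np.array([[0, 1, 0], [1, 0, 1]])
X3 = np.array([[1, 0, 1], [0, 1, 0]])
X4 = np.array([[1, 1, 1], [0, 0, 0]])
beta = np.array([0.5, -0.8, 1.0])

err = dp_error([X1, X2, X3, X4], beta)
```

Because the determinant is raised to the power -1/K, a more informative design (larger |\textbf{M}|) gives a smaller D-error.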

D0-error example

Computing D0-error is just a special case of DP-error where \boldsymbol{\beta} is assumed to be a vector of zeros. In this case, the probabilities p_{q,i}=1/J (=1/2 in this example) and the information matrix is


The D0-error is |\textbf{M}|^{-1/K}=1.17.
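With \boldsymbol{\beta}=\textbf{0} and two alternatives per question, each question's information contribution simplifies to (1/4) d_q d_q', where d_q is the difference between the question's two encoded rows. A small sketch of this shortcut (\textbf{X}_1 and \textbf{X}_2 are again hypothetical stand-ins, so the value will differ from 1.17):

```python
import numpy as np

# X_3 and X_4 are from the article; X_1 and X_2 are made-up stand-ins.
Xs = [np.array([[1, 0, 0], [0, 1, 1]]),   # hypothetical X_1
      np.array([[0, 1, 0], [1, 0, 1]]),   # hypothetical X_2
      np.array([[1, 0, 1], [0, 1, 0]]),   # X_3
      np.array([[1, 1, 1], [0, 0, 0]])]   # X_4
K = 3

# Under beta = 0 each probability is 1/2, so the Fisher information of a
# two-alternative question reduces to 0.25 * outer(d, d), d = row1 - row2.
M = sum(0.25 * np.outer(X[0] - X[1], X[0] - X[1]) for X in Xs)

d0_error = np.linalg.det(M) ** (-1.0 / K)
```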

DB-error (Bayesian) Example

DB-error is defined as the integral of DP-error over an assumed prior distribution of the respondent parameters. One way to compute this integral numerically is Monte Carlo estimation: draw many parameter vectors at random from the prior distribution and average their DP-errors. To illustrate this, assume that the parameter distribution is multivariate normal with mean \mu=[0.5, -0.8, 1.0] and a diagonal covariance matrix with standard deviations \sigma=[0.4, 0.4, 0.4]. I draw 1,000 samples from this distribution, as partially shown in the table below:

Draw | Parameter 1 | Parameter 2 | Parameter 3 | DP-error

DB-error is estimated as the mean DP-error over the draws, which is 1.90 in this example. A more computationally efficient, though more complicated, way of computing the integral using quadrature exists¹, but it is beyond the scope of this article.
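The Monte Carlo procedure above can be sketched as follows. As before, \textbf{X}_1 and \textbf{X}_2 are hypothetical stand-ins, so the estimate will not match the 1.90 reported here; the seed is arbitrary and only makes the sketch reproducible:

```python
import numpy as np

def dp_error(question_matrices, beta):
    """DP-error of a design under a fixed parameter vector beta."""
    K = len(beta)
    M = np.zeros((K, K))
    for X_q in question_matrices:
        u = X_q @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        M += X_q.T @ (np.diag(p) - np.outer(p, p)) @ X_q
    return np.linalg.det(M) ** (-1.0 / K)

# X_1 and X_2 are hypothetical stand-ins; X_3 and X_4 are from the article.
Xs = [np.array([[1, 0, 0], [0, 1, 1]]),
      np.array([[0, 1, 0], [1, 0, 1]]),
      np.array([[1, 0, 1], [0, 1, 0]]),
      np.array([[1, 1, 1], [0, 0, 0]])]

# Prior: multivariate normal with diagonal covariance, as in the article.
mu = np.array([0.5, -0.8, 1.0])
sigma = np.array([0.4, 0.4, 0.4])

# Monte Carlo estimate: average DP-error over 1,000 draws from the prior.
rng = np.random.default_rng(0)
draws = rng.normal(mu, sigma, size=(1000, 3))
db_error = np.mean([dp_error(Xs, b) for b in draws])
```

Increasing the number of draws reduces the Monte Carlo noise in the estimate at the cost of more computation.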

Read more handy How To guides, or check out the rest of our blog!


¹ Christopher M. Gotwalt, Bradley A. Jones & David M. Steinberg, “Fast Computation of Designs Robust to Parameter Uncertainty for Nonlinear Settings,” Technometrics (2009) 51:1, 88-95, DOI: 10.1198/TECH.2009.0009