The replication crisis, also known as the replicability crisis or the reproducibility crisis, refers to the growing belief that the results of many scientific studies cannot be reproduced and are thus likely to be wrong.

Scale and scope of the replication crisis

Various studies have shown that many (and perhaps most) attempts to replicate findings in published research are unsuccessful, where a lack of success means one or more of the following:

  • The replication studies failed to find an effect that was claimed in an earlier study.
  • The replication studies found an effect that was not present in the earlier study.
  • The direction of an effect was found to change (i.e., a positive effect was found to be negative or vice versa).
  • The replication study found a smaller effect than that found in the original study, and the difference was material.
  • The replication study found a larger effect than that found in the original study, and the difference was material.
  • The statistical evidence in support of an effect was weaker than that claimed by the researchers (most commonly this will be reflected in a p-value).
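As a hypothetical illustration of the first and fourth failure modes, the sketch below (all numbers invented) simulates a small "original" two-group study and a larger "replication" of the same small true effect. With a small sample the estimate scatters widely, so a faithful replication can easily find a materially smaller, or even opposite-sign, effect.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2  # hypothetical small true difference in group means

def run_study(n):
    """Simulate a two-group study of n subjects per group
    and return the estimated effect (difference in means)."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

# A small "original" study and a larger "replication" of the same effect.
original = run_study(20)
replication = run_study(200)

print(f"original estimate:    {original:+.2f}")
print(f"replication estimate: {replication:+.2f}")
```

Running `run_study` repeatedly shows the point directly: estimates from n = 20 studies are spread far more widely around the true effect than estimates from larger studies, so original/replication disagreement is expected even when both studies are honest.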

Although sometimes described as a crisis in science as a whole, the replication crisis as currently documented largely exists in:

  • Medical studies, such as clinical trials (e.g., Ioannidis 2005)
  • Laboratory experiments and survey studies in the various sub-disciplines of psychology (Open Science Collaboration 2015) and in related disciplines such as marketing (Armstrong and Green 2017) and economics (e.g., Camerer et al. 2016)

Presumably, the same issue exists in non-academic research fields, such as government policy evaluation, data science, and polling.

Explanations for the replication crisis

  • A widespread lack of methodological sophistication, with researchers using poorly designed experiments with small sample sizes and inappropriate statistical models (Gelman and Carlin, 2014).
  • The “publish or perish” economic model of universities, which gives researchers a strong incentive to publish work showing statistically significant effects.
  • Clerical errors in reporting and programming.
  • Fraud, with research being invented to further career interests or the interests of organizations sponsoring the research.
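The interaction of the first two explanations can be sketched in a short simulation (a toy model with invented numbers, in the spirit of Gelman and Carlin's Type M "magnitude" errors): if only statistically significant results get published, then underpowered studies will, on average, publish exaggerated effects, because only estimates that happen to be large clear the significance threshold.

```python
import random
import statistics

random.seed(2)

TRUE_EFFECT = 0.2   # hypothetical small true effect
N = 20              # per-group sample size of an underpowered study
THRESHOLD = 2.0     # |z| cutoff, roughly p < 0.05

def estimate_and_z(n):
    """Return (effect estimate, z statistic) for one simulated two-group study."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (2.0 / n) ** 0.5  # standard error of the difference (known sd = 1)
    return diff, diff / se

# "Publish" only the studies that clear the significance threshold.
published = [d for d, z in (estimate_and_z(N) for _ in range(5000))
             if abs(z) > THRESHOLD]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published estimate: {statistics.mean(published):.2f}")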


The replication crisis has significant implications for many fields. This is particularly the case when seminal studies are found to be based on unreproducible research. Given the complexity of this issue, it cannot be resolved with a single solution. Suggested solutions to the crisis include a greater emphasis on replication studies in publication, funding, and education.


References

Armstrong, J. Scott and Kesten Green (2017), "Guidelines for Science: Evidence and Checklists", working paper.

Camerer, Colin F., Anna Dreber, Eskil Forsell, Teck-Hua Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Johan Almenberg, and Adam Altmejd (2016), "Evaluating replicability of laboratory experiments in economics", Science, 351(6280), pp. 1433–1436.

Gelman, Andrew and John Carlin (2014), "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors", Perspectives on Psychological Science, 9(6), pp. 641–651.

Ioannidis, John P. A. (2005), "Contradicted and Initially Stronger Effects in Highly Cited Clinical Research", JAMA, 294(2), pp. 218–228.

Open Science Collaboration (2015), "Estimating the reproducibility of psychological science", Science, 349(6251).
