Recommendations for Increasing Replicability in Psychology

Jens B. Asendorpf, Mark Conner, Filip De Fruyt, Jan De Houwer, Jaap J. A. Denissen, Klaus Fiedler, Susann Fiedler, David C. Funder, Reinhold Kliegl, Brian A. Nosek, Marco Perugini, Brent W. Roberts, Manfred Schmitt, Marcel A. G. van Aken, Hannelore Weber, Jelte M. Wicherts
2013, European Journal of Personality
Replicability of findings is at the heart of any empirical science. The aim of this article is to move the current replicability debate in psychology toward concrete recommendations for improvement. We focus on research practices, but also offer guidelines for reviewers, editors, journal management, teachers, granting institutions, and university promotion committees, highlighting some of the emerging and existing practical solutions that can facilitate implementation of these recommendations.
The challenges for improving replicability in psychological science are systemic. Improvement can occur only if changes are made at many levels of practice, evaluation, and reward.

Preamble

The purpose of this article is to recommend sensible improvements that can be implemented in future research without dwelling on suboptimal practices in the past. We believe the suggested changes in the documentation, publication, evaluation, and funding of research are timely, sensible, and easy to implement. Because we are aware that science is pluralistic in nature and that scientists pursue diverse research goals with myriad methods, we do not intend the recommendations as dogma to be applied rigidly and uniformly to every single study, but as ideals to be recognized and used as criteria for evaluating the quality of empirical science.

Moving Beyond the Current Replicability Debate

In recent years, the replicability of research findings in psychology (but also in psychiatry and medicine at large) has been increasingly questioned (Ioannidis, 2005; Lehrer, 2010; Yong, 2012). Whereas current debates in psychology about unreplicable findings often focus on individual misconduct or even the outright fraud that occasionally occurs in all sciences, the more important question is which specific factors and incentives in the system of academic psychology might contribute to the problem (Nosek, Spies, & Motyl, 2012). Among the factors discussed are an underdeveloped culture of making data transparent to others; an overdeveloped culture of encouraging brief, eye-catching research publications that appeal to the media; and the absence of incentives to publish high-quality null results, failures to replicate earlier research (even when based on stronger data or methodology), and contradictory findings within studies.

Whatever the importance of each such factor might be, current psychological publications are characterized by a strong orientation toward confirming hypotheses. In a comparison of publications across 18 empirical research areas, Fanelli (2010) found rates of confirmed hypotheses ranging from 70% (space science) to 92% (psychology and psychiatry), and in a study of historical trends across the sciences, Fanelli (2012) reported a particularly sharp increase in this rate for psychology and psychiatry between 1990 and 2007. The current confirmation rate of 92% seems to be far above what should be expected given the typical effect sizes and statistical power of psychological studies (see the later section on sample sizes, and the back-of-the-envelope calculation below). The rate seems to be inflated by selective non-reporting of non-confirmations, as well as by the post hoc invention of hypotheses and study designs that do not subject hypotheses to the possibility of refutation. In contrast to the rosy picture presented by publications, in a recent worldwide poll of more than 1,000 psychologists, the mean subjectively estimated replication rate of an established research finding was 53%.
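To make concrete why a 92% confirmation rate is implausible, consider a minimal expected-value sketch. The specific values assumed below (the proportion of true hypotheses and the average statistical power) are illustrative assumptions, not figures reported in the article; only the conventional significance criterion of α = .05 is standard.

If a proportion $\pi$ of tested hypotheses is true, studies have average power $1 - \beta$, and false positives occur at rate $\alpha$, the expected rate of confirmed hypotheses is

\[
P(\text{confirmation}) = \pi (1 - \beta) + (1 - \pi)\,\alpha .
\]

Even under generous assumptions, say $\pi = 0.8$, $1 - \beta = 0.5$, and $\alpha = 0.05$,

\[
P(\text{confirmation}) = 0.8 \times 0.5 + 0.2 \times 0.05 = 0.41 ,
\]

less than half the 92% that Fanelli (2010) observed. On this kind of accounting, selective reporting and post hoc hypothesizing are the more plausible explanations for the gap.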
doi:10.1002/per.1919