Type M error can explain Weisburd's Paradox [post]

Andrew Gelman
2016, unpublished
Simple calculations seem to show that larger studies should have higher statistical power, but empirical meta-analyses of published work in criminology have found zero or weak correlations between sample size and estimated statistical power. This is "Weisburd's paradox," which has been attributed by Weisburd, Petrosino, and Mason (1993) to the difficulty of maintaining quality control as studies get larger, and by Nelson, Wooditch, and Dario (2014) to a negative correlation between sample sizes and the underlying sizes of the effects being measured. We argue that neither explanation is necessary, instead suggesting that the apparent Weisburd paradox is an artifact of the systematic overestimation inherent in post-hoc power calculations, a bias that is large when n is small. Furthermore, we recommend abandoning the use of statistical power as a measure of the strength of a study, because implicit in the definition of power is the bad idea of statistical significance as a research goal.
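The overestimation mechanism can be illustrated with a minimal simulation (not from the paper; the effect size, standard error, and significance threshold below are illustrative assumptions). Post-hoc power is computed from the observed effect estimate, and published estimates are typically conditioned on statistical significance, so in a noisy small-n setting the estimates that survive the filter are exaggerated and the implied power is far above the truth:

```python
import random
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # ~1.96, two-sided 5% critical value

def power(effect, se):
    """Two-sided power of a z-test for a given true effect and standard error."""
    nd = NormalDist()
    return nd.cdf(effect / se - Z) + nd.cdf(-effect / se - Z)

# Assumed numbers for illustration: a small true effect measured with the
# large standard error typical of a small-n study.
true_effect, se = 0.1, 0.5
true_power = power(true_effect, se)

random.seed(1)
posthoc = []
for _ in range(100_000):
    est = random.gauss(true_effect, se)      # one noisy study estimate
    if abs(est) / se > Z:                    # "published": statistically significant
        posthoc.append(power(abs(est), se))  # post-hoc power from the estimate

print(f"true power: {true_power:.3f}")
print(f"mean post-hoc power among significant results: "
      f"{sum(posthoc) / len(posthoc):.3f}")
```

Under these assumed numbers the true power is only a few percent, while the post-hoc power averaged over significant results is above one half, because any estimate that clears the significance threshold implies power of at least roughly 50%.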
doi:10.31235/osf.io/ahnd4