What do Editors Maximize? Evidence from Four Leading Economics Journals [report]

David Card, Stefano DellaVigna
2017, unpublished working paper
We match papers to the publication records of authors at the time of submission and to subsequent Google Scholar citations. To guide our analysis we develop a benchmark model in which editors maximize the expected quality of accepted papers and citations are unbiased measures of quality. We then generalize the model to allow different quality thresholds for different papers, and systematic gaps between citations and quality. Empirically, we find that referee recommendations are strong predictors of citations, and that editors follow the recommendations quite closely.
Holding constant the referees' evaluations, however, papers by highly-published authors get more citations, suggesting that referees impose a higher bar for these authors, or that prolific authors are over-cited. Editors only partially offset the referees' opinions, effectively discounting the citations of more prolific authors in their revise and resubmit decisions by up to 80%. To tell apart the two explanations for this discounting, we conduct a survey of specialists, asking them for their preferred relative citation counts for matched pairs of papers. The responses show no indication that prolific authors are over-cited and thus suggest that referees and editors seek to support less prolific authors.

* We thank Pierre Azoulay, Lawrence Katz, and Fabrizio Zilibotti for their comments and suggestions. We are also grateful to the audiences at UC Berkeley, UC Santa Cruz, and the Annual Meeting of WEAI for useful comments.

Certain journals may favor certain fields (e.g., more theoretical versus more applied fields) or certain groups of authors. We show how such preferences can be easily incorporated in the model, leading to systematic differences between the way that referee recommendations and paper characteristics affect the accept/reject decision versus expected citations. Second, citations can also be directly impacted by the publication process. Although this will not necessarily have a differential impact on papers from different fields or different authors, we provide evidence on this issue by comparing citations for papers in the period before they are actually published. Third, citations may be systematically biased as a measure of quality by differences in citing practices across fields or a tendency to cite well-established authors (Merton, 1968). Again, this possibility can be easily incorporated in the model. Importantly, however, using only information on citations and editorial decisions we cannot distinguish between editorial preferences for certain types of papers and differential biases in the gap between citations and quality. Thus, in the final section of the paper we augment our sample of submissions with data from a survey of expert readers in which we quantify the relative bias in citations versus quality for specific types of papers.

We focus our main empirical analysis on the R&R decision for the roughly 55% of submissions that are not initially desk rejected. These papers are typically reviewed by 2 to 4 referees who provide summary recommendations on a 7-point scale, ranging from "Definitely Reject" to "Accept". We show that referee recommendations are strong predictors of citations: on average, a paper unanimously classified as "Revise and Resubmit" by the referees receives many times more citations than one they unanimously agree is "Definitely Reject". We also show that the fractions of referees who rank a paper in each category provide a good summary of the information contained in the reports, with little loss relative to more flexible alternatives. Nevertheless, the referee recommendations are not sufficient statistics for expected citations. In particular, submissions from authors with more (recent) publications in a set of 35 major journals receive substantially more citations, controlling for referee recommendations.
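To fix ideas, the decision rule and citation process described above can be written in a stylized form; the notation below is ours and is intended only to illustrate the structure of the argument, not to reproduce the paper's model exactly:

\[
\text{R\&R}_i \;=\; \mathbf{1}\!\left\{ \mathbb{E}\left[ q_i \mid r_i, x_i \right] \ge \tau(x_i) \right\},
\qquad
\mathbb{E}\left[ c_i \mid q_i, x_i \right] \;=\; q_i + b(x_i),
\]

where \(q_i\) is the latent quality of submission \(i\), \(r_i\) the referee recommendations, \(x_i\) observable characteristics such as the authors' recent publication record, \(c_i\) subsequent citations, \(\tau(\cdot)\) the quality threshold, and \(b(\cdot)\) a possible citation bias. The benchmark model is the special case of a common threshold \(\tau(x_i) = \tau\) and unbiased citations \(b(x_i) = 0\). The identification problem noted above is that a higher threshold \(\tau(x_i)\) for prolific authors and a positive citation bias \(b(x_i)\) for the same authors have similar implications for the joint distribution of editorial decisions and citations, which is why the expert-reader survey is needed to separate them.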
For example, papers by authors with 8 recent publications have on average 2-3 times more citations than papers with similar referee rankings by authors with no recent publications. This suggests either that referees are tougher on more prolific authors (i.e., a bias arising from referee preferences) or that submissions from more prolific authors receive more citations conditional on their quality (i.e., a bias in citations as a measure of quality).

Looking at the R&R decision, we find that editors are heavily influenced by the referees' recommendations. Moreover, the relative weights that editors place on the fractions of referees in different categories are nearly proportional to their coefficients in a regression model for citations, as would be expected if editors are trying to maximize expected citations. We also find that editors have some private information, over and above the summary referee opinions, that is predictive of citations. Comparing papers that receive an R&R decision and those that do not, we estimate that the editors' private signals have a correlation of about 0.2 with ultimate citations.
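In regression form, the comparison in the preceding paragraphs can be thought of as contrasting two illustrative equations (again in our notation; the functional forms actually estimated in the paper may differ, and the probit link below is used purely for concreteness):

\[
h(c_i) \;=\; \sum_{k=1}^{7} \beta_k\, R_{ik} \;+\; \gamma\, \text{pubs}_i \;+\; \varepsilon_i ,
\qquad
\Pr\!\left(\text{R\&R}_i = 1 \mid R_i, \text{pubs}_i\right) \;=\; \Phi\!\left( \sum_{k=1}^{7} \delta_k\, R_{ik} \;+\; \lambda\, \text{pubs}_i \right),
\]

where \(R_{ik}\) is the fraction of referees placing paper \(i\) in recommendation category \(k\), \(\text{pubs}_i\) is the authors' count of recent publications, and \(h(\cdot)\) is some transformation of citation counts. The finding that editors' weights are nearly proportional to the citation coefficients corresponds to \(\delta_k \approx \kappa \beta_k\) for a common scaling factor \(\kappa\), while the discounting of prolific authors' citations by up to 80% corresponds to \(\lambda\) falling well short of \(\kappa \gamma\). The editors' private information can be viewed as an additional unobserved term inside \(\Phi(\cdot)\) whose correlation with realized citations is the roughly 0.2 figure estimated above.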
doi:10.3386/w23282