Quantifying Voter Biases in Online Platforms

Himel Dev, Karrie Karahalios, Hari Sundaram
Proceedings of the ACM on Human-Computer Interaction, 2019
In content-based online platforms, aggregate user feedback (say, the sum of votes) is commonly used as the "gold standard" for measuring content quality. The use of vote aggregates, however, is at odds with the existing empirical literature, which suggests that voters are susceptible to different biases: reputation (e.g., of the poster), social influence (e.g., votes thus far), and position (e.g., answer position). Our goal is to quantify, in an observational setting, the degree of these biases in online platforms. Specifically, what are the causal effects of different impression signals -- such as the reputation of the contributing user, the aggregate vote thus far, and the position of the content -- on a participant's vote on that content? We adopt an instrumental variable (IV) framework to answer this question. We identify a set of candidate instruments, carefully analyze their validity, and then use the valid instruments to reveal the effects of the impression signals on votes. Our empirical study using log data from Stack Exchange websites shows that the bias estimates from our IV approach differ from the bias estimates obtained via ordinary least squares (OLS). In particular, OLS underestimates reputation bias (by 1.6--2.2x for gold badges) and position bias (by up to 1.9x for the initial position), and overestimates social influence bias (by 1.8--2.3x for initial votes). The implications of our work include: redesigning user interfaces to avoid voter biases, changing platform policies to mitigate voter biases, and detecting other forms of bias in online platforms.
doi:10.1145/3359222
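
As a worked illustration of the IV-versus-OLS contrast the abstract describes, below is a minimal two-stage least squares (2SLS) sketch on synthetic data. This is not the authors' pipeline: the variables (a latent confounder u, an instrument z, an impression signal x, a vote outcome y) and all effect sizes are illustrative assumptions, and the direction of the OLS bias in this toy example depends entirely on the assumed confounding structure.

```python
# Toy 2SLS sketch of the IV idea (illustrative only, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved confounder, e.g., latent content quality, that biases OLS.
u = rng.normal(size=n)
# Instrument: correlated with the impression signal but not with u.
z = rng.normal(size=n)
# Endogenous impression signal, e.g., the poster reputation voters see.
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
# Vote outcome; the true causal effect of x is 0.5 by construction.
y = 0.5 * x + 0.9 * u + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients for y ~ 1 + X."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased here because x and u are correlated.
beta_ols = ols(x, y)[1]

# Stage 1: project the endogenous signal onto the instrument.
x_hat = np.column_stack([np.ones(n), z]) @ ols(z, x)
# Stage 2: regress votes on the projected (exogenous) part of the signal.
beta_iv = ols(x_hat, y)[1]

print(f"OLS estimate: {beta_ols:.3f}  (biased by the confounder)")
print(f"IV  estimate: {beta_iv:.3f}  (close to the true effect, 0.5)")
```

The sketch recovers point estimates only; a production analysis would also need IV-appropriate standard errors and the instrument-validity checks the paper emphasizes.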