Controlling Fairness and Bias in Dynamic Learning-to-Rank (Extended Abstract)

by Marco Morik, Ashudeep Singh, Jessica Hong, Thorsten Joachims

Published as a conference paper by the International Joint Conferences on Artificial Intelligence Organization.

2021  

Abstract

Rankings are the primary interface through which many online platforms match users to items (e.g., news, products, music, video). In these two-sided markets, not only do users draw utility from the rankings, but the rankings also determine the utility (e.g., exposure, revenue) for the item providers (e.g., publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility to the users -- as done by virtually all learning-to-rank algorithms -- can be unfair to the item providers. We therefore present a learning-to-rank approach that explicitly enforces merit-based fairness guarantees for groups of items (e.g., articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
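The controller described in the abstract can be pictured as a proportional controller: each item's estimated relevance is augmented with an error term that grows for groups whose accumulated exposure lags behind their merit. The following is a minimal illustrative sketch, assuming a simplified exposure-per-merit disparity as the error signal; all names and the exact error definition are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def fairness_controlled_ranking(rel, group, exposure, merit, lam=0.01):
    """Rank items by estimated relevance plus a proportional
    fairness-error term that boosts items from under-exposed groups.

    rel      : list of estimated relevances, one per item
    group    : list of group labels, one per item
    exposure : dict mapping group -> accumulated exposure so far
    merit    : dict mapping group -> accumulated merit so far
    lam      : controller gain (hypothetical parameter name)
    """
    rel = np.asarray(rel, dtype=float)
    groups = set(group)
    # Exposure-per-merit each group has received so far.
    ratio = {g: exposure[g] / max(merit[g], 1e-12) for g in groups}
    best = max(ratio.values())
    # Error is how far each group lags the best-treated group;
    # zero for the best-treated group, positive for under-exposed ones.
    err = {g: best - ratio[g] for g in groups}
    score = rel + lam * np.array([err[g] for g in group])
    return np.argsort(-score)  # item indices, best first
```

With equal relevance estimates, the item from the group whose exposure lags its merit is promoted; as its exposure catches up, the error term shrinks and the ranking reverts to pure relevance ordering, which is the amortized-fairness behavior the controller targets.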

Archived Files and Locations

application/pdf   739.9 kB
file_qlcznmowuvaidb2fnhofq4qcnq
www.ijcai.org (publisher)
web.archive.org (webarchive)
Type: paper-conference
Stage: unknown
Year: 2021
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 78a2644b-b263-427b-a927-1b8304ad43b6