Critical Race Data Science and Epistemologies of Trust

Paul Gowder
2019 Zenodo  
memo to readers: this is an extremely early, rough draft of some ideas I'm trying to work out about the possibility of a kind of bottom-up data science rooted in the needs and capacities of oppressed communities. It isn't remotely ready for prime time, and I hadn't planned on posting it online for quite a while. (Its ultimate destination is to become a chapter in an edited volume which has, through nobody's fault, become stalled, as such volumes often do.) However, I was motivated to post it after being made aware of a wonderful talk by Dan McQuillan entitled "Towards an Anti-Fascist AI," as well as a paper of his entitled "People's Councils for Ethical Machine Learning." Needless to say, future iterations of this chapter will engage with McQuillan's work in some detail. But I post this draft now to participate in the ongoing conversation about placing control over machine learning in the hands of the oppressed, a goal that McQuillan and I clearly share.

To the limited extent that this chapter presently has an abstract, it is this: many current uses of data science/machine learning/artificial intelligence operate from a top-down standpoint, rooted in distrust, in which authoritative entities (governments, credit raters, etc.) use data for surveillance of subordinated groups. But these technologies also have the potential for bottom-up use, on behalf of and by the subordinated and the oppressed. To achieve this potential, a transdisciplinary research program should be initiated, one encompassing the theory of trust and collective action as well as the techniques of distributed machine learning and transfer learning.

Comments eagerly solicited. As this is in a publication pipeline, I'm currently reserving further distribution rights (i.e., there's no CC/open license on this).
doi:10.5281/zenodo.2642298