AI & Society: editorial volume 35.2: the trappings of AI Agency

Karamjit S. Gill
2020 AI & Society: The Journal of Human-Centred Systems and Machine Intelligence  
What should be our response to AI Agency trapping us in the data-driven web of the AI-powered machine? Among the wide-ranging responses of the AI communities to the crisis of AI Agency, the first, in general terms, is to warn us about the existential risk posed by Super AI, the dystopia of Black Mirror, and the crisis of automated decision-making. This response makes us aware of a techno-centric future in which our identities are being linked to facial recognition, and of deep concerns about data […]ng institutions and industries, thereby creating a culture of 'data anxiety'. The second response is to articulate the implications of the opaqueness and lack of transparency of autonomous AI systems, raising concerns about the manipulation of decision-making. This response also alerts us to the impact of AI Agency on the politics of governance. The third response is to question the very nature of the intelligence of the artificial, raising questions about sentience and our understanding of the data-driven world. This response also warns us of the danger of becoming accustomed to blind faith in the machine, the trappings of a human-robot co-existence society, and the elimination of human intervention in autonomous decision-making. Further, it alerts us to empty slogans of transparency and compliance and to "ethics washing" facades. The fourth response is to counter the images of a dystopic future, asking us to give attention to the positive impacts and potential of AI systems for societal benefit in domains such as human health, transportation, service robots, health-care, education, public safety, security and entertainment. The fifth response is to initiate a conversation on public accountability frameworks, including issues of governance and the cultivation of a culture of algorithmic accountability, arising from concerns about opaqueness, transparency and responsibility. Whilst recognising the need to cultivate the trust and reliability of AI systems and tools, it argues for the alignment of AI Agency with the social, cultural, legal and moral values of societies, guided by ethical frameworks.

To get a glimpse of the varied voices and responses to the trappings of AI Agency, we take note of recent AI debates in forums such as those of the World Economic Forum (2019), the STOA Study (2019), the AI and the Future of Humanity Exhibition (2020) and The Royal Society (2018), and of voices of the AI research community, including those of the authors of this volume.

The World Economic Forum White Paper (2019) concludes that "The increasing use of AI and autonomous systems will have revolutionary effects on human society. Despite many benefits, AI and autonomous systems involve considerable risks that must be managed well to take advantage of their benefits while protecting ethical values as defined in fundamental rights and basic constitutional principles, thereby preserving a human-centric society." It points out the lack of transparency, the increasing loss of humanity in social relationships, the loss of privacy and personal autonomy, information biases, as well as the error proneness and susceptibility to manipulation of AI-powered autonomous systems. On the debate on embedding ethical principles in the AI machine, the White Paper alerts us to a possible slavish adherence of AI systems to a particular ethical school of thought in decision-making.

As the concern about, and impact of, AI Agency moves to the politics of governance, the STOA Study (2019) explores a European rationale for creating governance frameworks for algorithmic accountability and transparency. The study notes that a lack of transparency in algorithmic systems puts their meaningful scrutiny and accountability at risk when these systems are integrated into decision-making processes, especially those that can have a considerable impact on people's human rights (e.g., critical safety decisions in autonomous vehicles, or the allocation of health and social service resources).
Further, such an integration of algorithmic systems can have significant privacy and ethical consequences for individuals, organisations and societies as a whole. From a utilitarian perspective, it may be noted that transparency and accountability are both tools to
doi:10.1007/s00146-020-00961-9