Uncommon voices of AI

Karamjit S. Gill
2017 AI & Society: The Journal of Human-Centred Systems and Machine Intelligence  
Beyond the headlines of the thrill engendered by futuristic AI super machines, Virtual Reality and the Internet of Things, what are we to make of artificial intelligence? A gigantic job eliminator? Or the next step in evolution, the one in which technology finally asserts its mastery over us? Or will artificial intelligence in its many guises become the source of redemptive systems that develop new medications for us and operate on us, that invest and multiply our capital, and that create more rational decision-makers? (Ars Electronica Festival 2017).

The new wave of artificial super intelligence raises a number of serious societal concerns: what are the crises and shocks of the AI machine that will trigger fundamental change, and how should we cope with the resulting transformation? Digital technologies are the box in which we all increasingly live. Living through dramatic technological change, we may feel trapped and disrupted, left behind in the myth and reality of AI, and miss what is really at stake. The Silicon Valley technological culture may often see societal concerns and humanistic perspectives on digital technologies as rather inconvenient, but in the midst of this transformation we can hear voices of existential risk, reason, redemption and ethics.

Sir Martin Rees (2013) of the Centre for the Study of Existential Risk (CSER) (2017) gives an insight into the concerns and challenges of the existential risks of ecological shocks, fast-spreading pandemics, and scarcity of resources, aggravated by climate change. For him, equally worrying are the imponderable downsides of powerful new cyber-, bio- and nanotechnologies and synthetic biology. His concerns include a "sci-fi scenario" in which a network of computers could develop a mind of its own and threaten us all. It is hard to quantify the potential "existential" threats from (for instance) bio- or cyber-technology, from artificial intelligence, or from runaway climatic catastrophes. He proposes forward planning and research to avoid the unexpected catastrophic consequences and imponderable downsides of these powerful new technologies, and to circumvent societal breakdown due to error or terror.

Ó hÉigeartaigh (2017) strikes a soothingly rational note when he says that humanity has already changed a lot over its lifetime as a species. While our biology is not drastically different from what it was a millennium ago, the capabilities enabled by our scientific, technological and sociocultural achievements have changed what it is to be human. We have dramatically augmented our biological abilities, we can store and access more information than our brains can hold, and we can collectively solve problems that we could not solve individually. AI systems of the future would be capable of matching or surpassing human intellectual abilities across a broad range of domains and challenges.

The Leverhulme Centre for the Future of Intelligence (CFI) (2017) visualises a redemptive curve on the horizon while asking us to take note of the serious consequences of untamed AI, and argues for developing a framework for responsible innovation that seeks to maximise the societal benefit of AI. It cautions us about the possibility of creating computer intelligence equal to that of human intelligence. In this future scenario, freed of biological constraints such as limited memory and slow biochemical processing speeds, machines may eventually become more intelligent than we are, with profound implications for us all. Any inter-disciplinary or cross-disciplinary collaborative effort to meet these challenges,
doi:10.1007/s00146-017-0755-y