Explaining AI: Are We Ready For It?

Britta Wrede
KI - Künstliche Intelligenz (2020)
Dear readers,

How often have you tried to explain to a friend or an uncle why his computer crashed, or why the navigation system suggests the same route again and again even though he took a better one last time? How often have you succeeded? In my experience, my explanations are seldom as successful as I wish them to be and do not lead to more appropriate behaviour next time. Yet we expect AI to do exactly that: explain itself to non-expert users so that they can use it better next time. This expectation rests on the assumption that if only the explanation is good enough, people will understand. But what if the reason for non-understanding lies not in the explanation but in the recipient?

Understanding in a general way how a computer, a navigation system or a recommendation system works, well enough to make certain predictions about its capabilities, behaviour and reliability, requires some grasp of the basic concepts underlying computing. One attempt to capture the basic concepts necessary to solve problems with a computer has been made with the theory of computational thinking, which focuses on a user's ability for abstraction, automation and analysis. While these are important abilities for solving problems with a computer, they may not be the ones necessary for understanding and appropriately interacting with systems that are based on any kind of "artificial intelligence". We do not yet know what the relevant concepts for interacting with such systems in an informed and mature way are. It may be necessary for people to be able to distinguish system behaviour that is based on learning from observed data alone from human behaviour, which is grounded in situated cognition and experience. More research in this direction is needed.

One may argue that people should not have to learn new concepts in order to use everyday technology. But humans have to learn how to interact with other humans as well: while babies are born with the capability for empathy and perspective taking, i.e. to interpret others' thoughts and feelings, they still need years of training to learn to take the other's perspective into account. The more artificial intelligence enters our daily life, the more we need to provide people with the ability to use and control it in an adequate way. There is a lot of ongoing discussion about how to adapt our education systems to the challenge of digitalisation. This aspect of AI thinking should be added to it.

However, we also need to educate adults, many of whom are already working with AI systems in their jobs. While there is a lot of information about how AI can be used for business, little is known about how AI is currently being controlled and evaluated in the field and how users are trained to work with it. Public media contributes little to this, although interesting formats exist, such as YouTube videos (e.g. from the Year of Artificial Intelligence 2019 1) or online discussions touching on the impact of AI on our society (e.g. the Debatten Podcast by Sascha Lobo 2). Yet a more strategic approach needs to be taken: we as a community need to actively engage in the discussion of how to educate people to use AI, as this will become one of the most pressing issues in the upcoming decade.

Machine learning is revolutionizing our ability to leverage data and tackle challenging applications such as natural language and image understanding, recommender systems,
doi:10.1007/s13218-020-00639-w