Towards a Formalization of Explanations for Robots' Actions and Beliefs

Felix Lindner
2020 Joint Ontology Workshops  
A robot's capacity to explain its own behavior is a means of establishing trust. This work presents a preliminary formal characterization of explanations that captures the distinction between explanations based on counterfactuality and those based on regularity. It further distinguishes generative from instrumental explanations. The formalization will guide future work on explanation generation, explanation sharing, and explanation understanding in human-robot interaction.