Engineering Moral Agents -- from Human Morality to Artificial Morality (Dagstuhl Seminar 16222)

Michael Fisher, Christian List, Marija Slavkovik, Alan Winfield, Marc Herbstritt
2016 Dagstuhl Reports  
This report documents the programme of, and outcomes from, Dagstuhl Seminar 16222 on "Engineering Moral Agents -- from Human Morality to Artificial Morality". Artificial morality is an emerging area of research within artificial intelligence (AI), concerned with the problem of designing artificial agents that behave as moral agents, i.e., that adhere to moral, legal, and social norms. Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in making decisions that affect our lives. While humanity has developed formal legal and informal moral and social norms to govern its own social interactions, no similar regulatory structures exist that apply to non-human agents. The seminar focused on questions of how to formalise, "quantify", qualify, validate, verify, and modify the "ethics" of moral machines. Key issues included the following: How can regulatory structures be built to address (un)ethical machine behaviour? What are the wider societal, legal, and economic implications of introducing AI machines into our society? How can "computational" ethics be developed, and what difficult challenges need to be addressed? In organising this workshop, we aimed to bring together the communities of researchers from moral philosophy and from artificial intelligence most concerned with this topic. This is a long-term endeavour, but the seminar was successful in laying the foundations and connections for accomplishing it.
doi:10.4230/dagrep.6.5.114 dblp:journals/dagstuhl-reports/FisherLSW16