The Design of Human Oversight in Autonomous Weapon Systems

Ilse Verdiesen
2019 Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence  
Autonomous Weapon Systems (AWS) can be defined as weapon systems equipped with Artificial Intelligence (AI). They are an emerging technology and are increasingly deployed on the battlefield. In the societal debate on Autonomous Weapon Systems, the concept of Meaningful Human Control (MHC) is often mentioned as a requirement, but MHC will not suffice as a requirement to minimize unintended consequences of Autonomous Weapon Systems, because the definition of 'control' implies that one has the power to influence or direct the course of events, or the ability to manage a machine. The characteristics of autonomy, interactivity and adaptability of AI in Autonomous Weapon Systems inherently imply that control in the strict sense is not possible. Therefore, a different approach is needed to minimize unintended consequences of AWS. Several scholars have described the concept of Human Oversight in Autonomous Weapon Systems and in AI in general. Recently, Taddeo and Floridi (2018) argued that human oversight procedures are necessary to minimize unintended consequences and to compensate for unfair impacts of AI. In my PhD project, I will analyse the concepts that are needed to define, model, evaluate and ensure human oversight in Autonomous Weapons, and design a technical architecture to implement this.
doi:10.24963/ijcai.2019/923