Risks of Deep Reinforcement Learning Applied to Fall Prevention Assist by Autonomous Mobile Robots in the Hospital

Takaaki Namba, Yoji Yamada
Big Data and Cognitive Computing, 2018
Our previous study proposed an automatic fall risk assessment method and related risk reduction measures. We also developed a nursing system to reduce patient accidents, thereby reducing the caregiving load on medical staff in hospitals. However, there are risks associated with artificial intelligence (AI) in applications such as assistant mobile robots that use deep reinforcement learning. In this paper, we discuss safety applications of AI in fields where humans and robots coexist,
especially when applying deep reinforcement learning to the control of autonomous mobile robots. First, we summarize recent related work on robot safety with AI. Second, we extract the risks linked to the use of deep-reinforcement-learning-based autonomous mobile assistant robots for patients in a hospital. Third, we systematize the risks of AI and propose sample risk reduction measures. The results suggest that these measures are useful in the fields of clinical and industrial safety.

There are three approaches to applying AI technology to a system, according to how AI and safety technology interact. The "Three Safety Policies of Artificial Intelligence based on Robot Safety" were proposed to achieve a systematic approach [3]. These policies concern the application of AI to non-safety-related parts, to safety-related parts, and to humans. We agree with this systematizing approach. On the other hand, the "Consideration of Errors and Faults Based on Machinery for Robot using Artificial Intelligence" reported that it is appropriate to treat errors in AI functions as probabilistic faults [4]. Moreover, it showed that eliminating such errors is impossible in practice, because AI makes human-like errors. That paper argues that there are four possible ways to guarantee safety: first, analyzing trends in the learning error, such as its variance and standard deviation, and evaluating the likelihood of error; second, duplicating the system to secure diversity; third, reducing the possibility of errors; and fourth, correctly evaluating the error risk of AI, comparing advantages and disadvantages, and determining acceptable risk levels. In addition, learning methods and supervised data should be demonstrably transparent (i.e., visible and explainable).
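The first of the four ways, analyzing trends in the learning error, can be sketched as follows. This is only an illustrative fragment, not the method from [4]; the error list, the function name, and the acceptance bound `threshold_std` are all hypothetical.

```python
import statistics

def error_trend_summary(errors, threshold_std=0.05):
    """Summarize per-epoch validation errors collected during training.

    `errors` is a list of validation-error values; `threshold_std` is a
    hypothetical acceptance bound on their spread.  Returns the mean,
    the (population) standard deviation, and whether the spread stays
    within the bound, as a crude estimate of error likelihood.
    """
    mean = statistics.mean(errors)
    std = statistics.pstdev(errors)
    return {"mean": mean, "std": std, "acceptable": std <= threshold_std}

# Example: errors from the last few epochs of training.
summary = error_trend_summary([0.12, 0.11, 0.10, 0.11, 0.10])
```

In a real assessment, the bound would be derived from the acceptable risk level determined in the fourth step, not fixed a priori.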
This transparency covers the development and evaluation processes, the disclosure and clarification of accountability, the recording of learning processes, and the securing of reproducibility. One study compared the recognition accuracy of AI against sensor performance. We consider such a comparison inappropriate, because AI itself consumes sensor data: if the sensor sits in the foundation layer, AI sits in the application layer, so the comparison should instead be between the application's accuracy with and without AI. In terms of safety verification, the need for a quantitative, analytical evaluation method has been demonstrated. Further, a safety evaluation platform for robots using AI is under examination [5], in which an autonomous moving function using intelligence can be installed as an additional interface. In a recent study, Fujiwara et al. proposed an asymmetric classification method for safety judgment that suppresses the probability of dangerous-side failures by treating uncertain predictions as dangerous [6]. This method sacrifices some multi-class classification accuracy in order to give priority to safety. Apart from the safety perspective of applying AI, "The Japanese Society for Artificial Intelligence Ethical Guidelines" are considered important for research [7].
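The idea of failing toward the dangerous side can be illustrated with a minimal decision rule. This is a sketch of the general principle, not the exact method of [6]; the function name and the `confidence_threshold` value are assumptions.

```python
def classify_safely(probabilities, confidence_threshold=0.9):
    """Asymmetric decision rule: any prediction whose top-class
    probability falls below `confidence_threshold` is mapped to the
    dangerous side, trading multi-class accuracy for a lower rate of
    dangerous-side failures.
    """
    best_class = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best_class] < confidence_threshold:
        return "danger"  # uncertain -> fail toward the safe action
    return best_class

# A confident prediction keeps its class; an uncertain one becomes "danger".
confident = classify_safely([0.02, 0.95, 0.03])   # -> 1
uncertain = classify_safely([0.40, 0.35, 0.25])   # -> "danger"
```

The asymmetry is that misclassifying a safe case as dangerous merely costs accuracy, whereas the reverse error could injure a patient.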
Google, OpenAI, Stanford University, and the University of California, Berkeley have reported five main challenges for the safe use of AI [8]: (1) avoiding negative side effects, including adverse effects on the surroundings, on interactions with humans and the environment, and vandalism; (2) avoiding reward hacking, in which an agent games its objective, as well as malicious hacking from the outside; (3) scalable oversight, for proper and efficient feedback; (4) safe exploration, for example securing safety during learning by simulation; (5) robustness to distributional shift, to manage inputs that differ significantly from the learning environment. These previous studies give insufficient consideration to development procedures specific to AI, such as training/validation/verification and the safety of the entire AI life cycle (including the online/offline updating of AI models); considering the entire life cycle reduces unhandled events. Moreover, systematizing the risks of AI and their reduction measures requires clarifying the risk factors in a human-robot coexistence environment and drafting specific measures for risk reduction. Furthermore, no measures have yet been devised that reconcile estimation ability with safety for unknown/unlearned subjects without impairing the flexibility and robustness of AI.
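Challenge (5), robustness to distributional shift, is in practice often backed by a runtime monitor that flags inputs far from the training distribution. The following is a deliberately crude, illustrative sketch (a z-score on the batch mean); the function names and the `max_z` cutoff are assumptions, and real monitors would use richer statistics.

```python
import statistics

def distribution_shift_score(train_values, live_values):
    """Compare the mean of live sensor readings against the
    training-time mean, measured in training standard deviations
    (a z-score of the live batch mean)."""
    mu = statistics.mean(train_values)
    sigma = statistics.pstdev(train_values) or 1e-9  # guard zero spread
    return abs(statistics.mean(live_values) - mu) / sigma

def is_out_of_distribution(train_values, live_values, max_z=3.0):
    # Flag live inputs that drift far from the learning environment,
    # so the robot can fall back to a safe behavior.
    return distribution_shift_score(train_values, live_values) > max_z

in_dist = is_out_of_distribution([1.0, 1.1, 0.9, 1.0], [1.0, 1.05])
out_dist = is_out_of_distribution([1.0, 1.1, 0.9, 1.0], [5.0, 5.1])
```

On detection, the appropriate response in a hospital setting would be to stop or hand control to a conventional, verified safety function rather than to keep acting on the AI's estimate.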
doi:10.3390/bdcc2020013