Risk-sensitive Inverse Reinforcement Learning via Coherent Risk Models
2017
Robotics: Science and Systems XIII
The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for an expert's risk sensitivity. To this end, we propose a flexible class of models based on coherent risk metrics,
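To make the abstract's contrast concrete, the sketch below compares a risk-neutral objective (the expected cost) with one well-known coherent risk metric, Conditional Value-at-Risk (CVaR). This is a generic illustration, not the paper's specific model class; the sample costs and the confidence level are hypothetical.

```python
# Contrast a risk-neutral objective (expected cost) with CVaR,
# a coherent risk metric that weights tail outcomes more heavily.
# Illustrative only -- sample costs and alpha are made up.

def expected_cost(costs):
    """Risk-neutral objective: the plain average of the costs."""
    return sum(costs) / len(costs)

def cvar(costs, alpha):
    """CVaR at level alpha: average of the worst (1 - alpha) fraction
    of the sampled costs (higher cost = worse outcome)."""
    sorted_costs = sorted(costs, reverse=True)  # worst outcomes first
    k = max(1, int(round((1 - alpha) * len(costs))))
    return sum(sorted_costs[:k]) / k

# One rare but very bad outcome among otherwise cheap ones:
costs = [1.0, 1.0, 1.0, 1.0, 10.0]
print(expected_cost(costs))  # 2.8  -- the mean barely registers the tail
print(cvar(costs, 0.8))      # 10.0 -- CVaR focuses on the worst 20%
```

A risk-neutral expert would judge these outcomes by the mean (2.8), while a risk-sensitive expert modeled via CVaR effectively evaluates the worst-case tail (10.0) — the kind of behavioral difference a risk-sensitive IRL framework aims to capture.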
doi:10.15607/rss.2017.xiii.069
dblp:conf/rss/MajumdarSMP17
fatcat:c4a5545yivbnnegqbplimt2vku