Exploiting Distributional Temporal Difference Learning to Deal with Tail Risk

Peter Bossaerts, Shijie Huang, Nitin Yadav
Risks, 2020
In traditional Reinforcement Learning (RL), agents learn to optimize actions in a dynamic context based on recursive estimation of expected values. We show that this form of machine learning fails when rewards (returns) are affected by tail risk, i.e., leptokurtosis. Here, we adapt a recent extension of RL, called distributional RL (disRL), and introduce estimation efficiency, while properly adjusting for differential impact of outliers on the two terms of the RL prediction error in the
equations. We show that the resulting "efficient distributional RL" (e-disRL) learns much faster, and is robust once it settles on a policy. Our paper also provides a brief, nontechnical overview of machine learning, focusing on RL.
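The "recursive estimation of expected values" the abstract refers to is the classic temporal-difference update, where the value estimate is nudged toward each new target by a fraction of the prediction error. The following is a minimal single-state sketch of that recursion (it is an illustration of standard TD(0)-style averaging, not the paper's e-disRL algorithm; the function name and parameters are my own):

```python
import random

def td0_value_estimate(reward_sampler, alpha=0.1, gamma=0.0,
                       steps=10000, seed=0):
    """Recursive (TD-style) estimate of an expected reward.

    Illustrative sketch only: a single-state version of the recursive
    expected-value update described in the abstract, NOT the paper's
    efficient distributional RL (e-disRL) method.
    """
    rng = random.Random(seed)
    v = 0.0  # current value estimate
    for _ in range(steps):
        r = reward_sampler(rng)
        # prediction error: target (reward plus discounted estimate)
        # minus the current estimate; both terms are affected
        # differently when rewards have heavy tails
        delta = r + gamma * v - v
        v += alpha * delta  # move the estimate a step toward the target
    return v
```

With constant rewards the estimate converges to the true mean; with leptokurtic (heavy-tailed) rewards, each outlier enters `delta` at full weight, so the estimate keeps jumping, which is the failure mode the paper addresses by re-weighting the two terms of the prediction error.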
doi:10.3390/risks8040113