Reinforcement Learning with Information-Theoretic Actuation [article]

Elliot Catt, Marcus Hutter, Joel Veness
2021, arXiv pre-print
Reinforcement Learning formalises an embodied agent's interaction with the environment through observations, rewards and actions. But where do the actions come from? Actions are often considered to represent something external, such as the movement of a limb, a chess piece, or more generally, the output of an actuator. In this work we explore and formalize a contrasting view, namely that actions are best thought of as the output of a sequence of internal choices with respect to an action model.
This view is particularly well suited for leveraging recent advances in large sequence models as prior knowledge for multi-task reinforcement learning problems. Our main contribution is to show how to augment the standard MDP formalism with a sequential notion of internal action using information-theoretic techniques, and that this leads to self-consistent definitions of both internal and external action value functions.
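To make the "actions as internal choices" view concrete, here is a minimal, purely illustrative sketch (not the paper's implementation): an external action is assembled symbol by symbol, where each symbol is an internal choice drawn from a hypothetical autoregressive action model that assigns conditional probabilities to the next symbol given the prefix chosen so far.

```python
import random

# Toy autoregressive action model: P(next symbol | prefix) over {"L", "R", "."}.
# "." is a termination symbol ending the external action. The probabilities
# here are arbitrary illustrative values, not taken from the paper.
def action_model(prefix):
    if len(prefix) < 2:
        return {"L": 0.5, "R": 0.4, ".": 0.1}
    return {"L": 0.2, "R": 0.2, ".": 0.6}

def sample_external_action(rng):
    """Compose one external action from a sequence of internal choices."""
    prefix = []
    while True:
        probs = action_model(prefix)
        # Each internal action is a draw from the model's conditional
        # distribution given the symbols chosen so far.
        symbols, weights = zip(*probs.items())
        sym = rng.choices(symbols, weights=weights, k=1)[0]
        if sym == ".":
            break
        prefix.append(sym)
    return "".join(prefix)  # the external (composite) action

rng = random.Random(0)
action = sample_external_action(rng)
print(action)
```

Under this decomposition, the environment only ever sees the completed external action, while the agent's learning problem can be posed over the finer-grained internal choices; a pretrained sequence model can then serve as the action model supplying the conditionals.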
arXiv:2109.15147v1