Abstractions of General Reinforcement Learning [article]

Sultan Majeed, The Australian National University
2022
The field of artificial intelligence (AI) is devoted to creating artificial decision-makers that can perform (at least) on par with their human counterparts in a domain of interest. Unlike the agents of traditional AI, agents in artificial general intelligence (AGI) are required to replicate human intelligence in almost every domain of interest. Moreover, an AGI agent should be able to achieve this with (virtually) no further changes, retraining, or fine-tuning of its parameters.
The real world is non-stationary, non-ergodic, and non-Markovian: we humans can neither revisit our past, nor are our most recent observations sufficient statistics to act optimally. Yet we excel at a variety of complex tasks, many of which require long-term planning. We can attribute this success to our natural faculty for abstracting away task-irrelevant information from our overwhelming sensory experience. We make task-specific mental models of the world without much effort. Thanks to this ability to abstract, we can plan over a significantly more compact representation of a task without much loss of performance. Not only that, we also abstract our actions to produce high-level plans: the level of action-abstraction can lie anywhere between small muscle movements and a mental notion of "doing an action". It is natural to assume that any AGI agent competing with humans (in every plausible domain) should also have these abilities to abstract its experiences and actions. This thesis is an inquiry into the existence of such abstractions, which aid efficient planning for a wide range of domains; most importantly, these abstractions come with optimality guarantees. We use a history-based reinforcement learning (RL) setup, appropriately called general reinforcement learning (GRL), to model such general-purpose decision-makers. We show that if such GRL agents have access to appropriate abstractions, then they can perform optimally in a huge set of domains. That is, we argue that GRL with abstractions, called abstraction rei [...]
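The history-based setup described above can be illustrated with a minimal sketch: an agent whose raw input is its entire interaction history, and an abstraction map that compresses each history into a compact state over which ordinary value-based planning is done. All names here (`phi_last_obs`, `AbstractQAgent`) are illustrative assumptions, not constructions from the thesis; the thesis concerns which such maps preserve (near-)optimality, which this toy map does not in general.

```python
import random
from typing import Callable, Hashable, List, Tuple

# A history is a tuple of (action, observation, reward) triples.
History = Tuple

def phi_last_obs(history: History) -> Hashable:
    """A simple abstraction: keep only the most recent observation.
    In a non-Markovian world this generally loses information; the thesis
    studies abstractions that provably retain enough for optimal planning."""
    return history[-1][1] if history else None

class AbstractQAgent:
    """Q-learning over abstract states phi(history) rather than raw histories."""

    def __init__(self, actions: List[int], phi: Callable[[History], Hashable],
                 alpha: float = 0.1, gamma: float = 0.9, eps: float = 0.1):
        self.actions, self.phi = actions, phi
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = {}  # Q-values indexed by (abstract state, action)

    def act(self, history: History) -> int:
        # Epsilon-greedy over the abstracted state, not the full history.
        s = self.phi(history)
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((s, a), 0.0))

    def update(self, history: History, a: int, r: float,
               next_history: History) -> None:
        # Standard Q-learning backup, but on abstract states.
        s, s2 = self.phi(history), self.phi(next_history)
        best_next = max(self.q.get((s2, b), 0.0) for b in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)
```

The point of the sketch is the division of labour: the environment produces unbounded histories, while planning and learning touch only the (finite) image of `phi`, which is what makes the abstraction's optimality guarantees the central question.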
doi:10.25911/bap9-tk71