Learning potential functions and their representations for multi-task reinforcement learning

Matthijs Snel, Shimon Whiteson
2013 Autonomous Agents and Multi-Agent Systems  
In multi-task learning, there are roughly two approaches to discovering representations. The first is to discover task-relevant representations, i.e., those that compactly represent solutions to particular tasks. The second is to discover domain-relevant representations, i.e., those that compactly represent knowledge that remains invariant across many tasks. In this article, we propose a new approach to multi-task learning that captures domain-relevant knowledge by learning potential-based shaping functions, which augment a task's reward function with artificial rewards. We address two key issues that arise when deriving potential functions. The first is what kind of target function the potential function should approximate; we propose three such targets and show empirically that which one is best depends critically on the domain and learning parameters. The second issue is the representation for the potential function. This article introduces the notion of k-relevance, the expected relevance of a representation on a sample sequence of k tasks, and argues that this is a unifying definition of relevance of which both task and domain relevance are special cases. We prove formally that, under certain assumptions, k-relevance converges monotonically to a fixed point as k increases, and use this property to derive Feature Selection Through Extrapolation of k-relevance (FS-TEK), a novel feature-selection algorithm. We demonstrate empirically the benefit of FS-TEK on artificial domains.
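As background for the abstract's central mechanism, the sketch below shows standard potential-based reward shaping (Ng, Harada, and Russell, 1999), which the article's learned potential functions plug into. The particular potential function `phi` used in the example is a hypothetical stand-in, not the one learned by the article's method.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, terminal=False):
    """Augment the environment reward r with the shaping term
    F(s, s') = gamma * Phi(s') - Phi(s).

    Treating the potential of terminal states as zero is the usual
    convention that preserves the optimal policy of the original task.
    """
    phi_next = 0.0 if terminal else phi(s_next)
    return r + gamma * phi_next - phi(s)


# Toy example: states are integers, and the (assumed) potential rewards
# progress toward state 10. Moving from 3 to 4 yields a positive
# shaping bonus even when the environment reward is zero.
phi = lambda s: -abs(10 - s)
print(shaped_reward(0.0, s=3, s_next=4, phi=phi, gamma=1.0))  # prints 1.0
```

Because the shaping term telescopes along any trajectory, the augmented reward changes how quickly an agent learns, but not which policies are optimal; the article's contribution is learning `phi` (and its representation) from experience across multiple tasks rather than hand-designing it.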
doi:10.1007/s10458-013-9235-z