Double Q(σ) and Q(σ, λ): Unifying Reinforcement Learning Control Algorithms

Markus Dumke
2017-11-05 · arXiv · pre-print
Temporal-difference (TD) learning is an important class of methods in reinforcement learning. Sarsa and Q-Learning are among the most widely used TD control algorithms. The Q(σ) algorithm (Sutton and Barto, 2017) unifies both. This paper extends Q(σ) to an online multi-step algorithm Q(σ, λ) using eligibility traces and introduces Double Q(σ) as the extension of Q(σ) to double learning. Experiments suggest that the new Q(σ, λ) algorithm can outperform the classical TD control methods Sarsa(λ), Q(λ) and Q(σ).
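As a rough illustration of the interpolation the abstract describes, below is a minimal sketch of a one-step tabular Q(σ) update, assuming an ε-greedy target policy and a NumPy table of action values; the function name, parameters, and policy choice are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def q_sigma_update(Q, s, a, r, s_next, a_next,
                   sigma, alpha, gamma, epsilon, n_actions):
    """Sketch of a one-step tabular Q(sigma) update.

    sigma = 1 recovers the sampled Sarsa backup;
    sigma = 0 recovers the expected (Expected-Sarsa-style) backup,
    which matches Q-Learning when the target policy is greedy.
    Assumes an epsilon-greedy target policy over Q[s_next].
    """
    # Action probabilities of the epsilon-greedy target policy in s_next
    pi = np.full(n_actions, epsilon / n_actions)
    pi[np.argmax(Q[s_next])] += 1.0 - epsilon

    # Expected value under the target policy vs. value of the sampled next action
    expected_v = float(np.dot(pi, Q[s_next]))
    sampled_v = Q[s_next, a_next]

    # Q(sigma) target interpolates between the sampled and the expected backup
    target = r + gamma * (sigma * sampled_v + (1.0 - sigma) * expected_v)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

The multi-step Q(σ, λ) and Double Q(σ) variants discussed in the paper build on this same interpolated target, adding eligibility traces and double learning respectively.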
arXiv:1711.01569v1 (https://arxiv.org/abs/1711.01569v1)