Convergence of synchronous reinforcement learning with linear function approximation
2004
Twenty-first international conference on Machine learning - ICML '04
Synchronous reinforcement learning (RL) algorithms with linear function approximation are representable as inhomogeneous matrix iterations of a special form (Schoknecht & Merke, 2003). In this paper we state conditions of convergence for general inhomogeneous matrix iterations and prove that they are both necessary and sufficient. This result extends the work presented in (Schoknecht & Merke, 2003), where only a sufficient condition of convergence was proved. As the condition of convergence …
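The object of study can be illustrated with a small sketch. An inhomogeneous matrix iteration has the form x_{k+1} = A x_k + b; the classical sufficient condition for convergence from every starting point is that the spectral radius of A is below 1 (the paper's necessary-and-sufficient condition is more refined than this). The matrix A and vector b below are hypothetical illustration values, not taken from the paper:

```python
import numpy as np

# Hypothetical inhomogeneous matrix iteration x_{k+1} = A x_k + b --
# the form into which synchronous RL updates with linear function
# approximation can be cast (Schoknecht & Merke, 2003).
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])   # illustration matrix with spectral radius < 1
b = np.array([1.0, 2.0])

# Classical sufficient condition: spectral radius rho(A) < 1.
rho = max(abs(np.linalg.eigvals(A)))
assert rho < 1.0

# The fixed point x* solves x = A x + b, i.e. (I - A) x = b.
x_star = np.linalg.solve(np.eye(2) - A, b)

# Run the iteration; with rho(A) < 1 it converges from any start.
x = np.zeros(2)
for _ in range(200):
    x = A @ x + b

print(np.allclose(x, x_star))  # True: iterate reached the fixed point
```

The eigenvalues of this particular A are 0.6 and 0.3, so the error contracts geometrically and the iterate matches the fixed point to numerical precision after a few hundred steps.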
doi:10.1145/1015330.1015390
dblp:conf/icml/MerkeS04
fatcat:ovaahz7wzrbgbofwckam2iggxm