A copy of this work was available on the public web and has been preserved in the Wayback Machine (captured 2021); the original URL can also be visited. File type: application/pdf.
The MineRL 2019 Competition on Sample Efficient Reinforcement Learning using Human Priors
arXiv pre-print, 2021
Though deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. As state-of-the-art reinforcement learning (RL) systems require an exponentially increasing number of samples, their development is restricted to a continually shrinking segment of the AI community. Likewise, many of these systems cannot be applied to real-world problems, where environment samples are expensive. Resolution of these […]
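The abstract's central idea is that human priors (e.g. demonstrations) can reduce the number of environment samples an agent needs. As a minimal sketch of that idea, not taken from the paper, the toy example below performs behavioral cloning over a discrete state space: each demonstration is a hypothetical (state, action) pair, and the cloned policy simply takes the action humans chose most often in each state, requiring zero environment interactions.

```python
from collections import Counter, defaultdict

def clone_policy(demonstrations):
    """Behavioral cloning on a discrete state space: for each state,
    pick the action most frequently taken by human demonstrators."""
    action_counts = defaultdict(Counter)
    for state, action in demonstrations:
        action_counts[state][action] += 1
    # Majority-vote action per state; no environment samples needed.
    return {s: counts.most_common(1)[0][0] for s, counts in action_counts.items()}

# Hypothetical Minecraft-flavored demonstrations (illustrative only).
demos = [
    ("clearing", "move_forward"),
    ("clearing", "move_forward"),
    ("clearing", "jump"),
    ("facing_tree", "attack"),
    ("facing_tree", "attack"),
]
policy = clone_policy(demos)
print(policy["clearing"])     # move_forward
print(policy["facing_tree"])  # attack
```

Real entries in the competition would learn a parametric policy from the provided human trajectories, but the sample-efficiency argument is the same: the demonstrations supply behavior the agent would otherwise have to discover through costly trial and error.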
arXiv:1904.10079v3