A copy of this work was available on the public web and has been preserved in the Wayback Machine (capture dated 2017); the original URL can also be visited.
The problem of sampling from a remote sensor, powered by energy harvesting, is considered. The problem is formulated as a partially observable Markov decision process (POMDP), since the controller has only partial knowledge of the energy reserve at the sensor. Three policies are proposed, and their performance is evaluated and compared to that of a clairvoyant policy.

doi:10.1109/ita.2014.6804220
dblp:conf/ita/Seyedi14
fatcat:4mmlm7isyvc3lgus5ofoftmdia
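The abstract does not spell out the paper's model, but the core POMDP ingredient it mentions, the controller's partial knowledge of the sensor's energy reserve, can be illustrated with a Bayesian belief update. The sketch below assumes a hypothetical discretized battery (levels 0..B), a harvester that adds one energy unit per slot with probability `harvest_p`, and a sample that drains one unit and succeeds only if the battery is non-empty; the controller observes only sample success or failure. None of these modeling choices are taken from the paper.

```python
import numpy as np

def belief_update(belief, harvest_p, sampled, success):
    """One-step belief update over a hidden battery level 0..B.

    Hypothetical model (not from the paper): each slot, harvesting
    adds one unit with probability harvest_p (capped at B); taking a
    sample drains one unit and succeeds iff the battery is non-empty.
    The controller observes only the success/failure outcome.
    """
    B = len(belief) - 1

    # Prediction step: propagate the belief through the harvesting
    # transition. At level B the two branches both land at B.
    pred = np.zeros_like(belief)
    for b, pb in enumerate(belief):
        pred[min(b + 1, B)] += pb * harvest_p
        pred[b] += pb * (1.0 - harvest_p)

    if not sampled:
        return pred

    # Correction step: condition on the observed sample outcome.
    post = np.zeros_like(pred)
    if success:
        post[:-1] = pred[1:]   # battery was >= 1, one unit consumed
    else:
        post[0] = pred[0]      # failure reveals an empty battery
    total = post.sum()
    return post / total if total > 0 else post

# Example: start from a uniform belief over 4 battery levels.
belief = np.full(4, 0.25)
after_fail = belief_update(belief, harvest_p=0.5, sampled=True, success=False)
# A failed sample collapses the belief onto the empty state.
```

A controller would maintain this belief across slots and base its sampling decisions on it, which is what distinguishes the POMDP formulation from the clairvoyant policy that sees the true battery level.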