A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2016.
The file type is application/pdf.
Learning Saccadic Eye Movements Using Multiscale Spatial Filters
1994
Neural Information Processing Systems
We describe a framework for learning saccadic eye movements using a photometric representation of target points in natural scenes. The representation takes the form of a high-dimensional vector composed of the responses of spatial filters at different orientations and scales. We first demonstrate the use of this response vector in the task of locating previously foveated points in a scene, and subsequently use this property in a multisaccade strategy to derive an adaptive motor map for delivering accurate saccades.
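The abstract only sketches the representation, so the following is a minimal illustrative sketch rather than the authors' implementation: it builds a response vector from a bank of oriented band-pass (Gabor-style) filters at a few scales, then relocates a previously foveated point by nearest-neighbor matching of response vectors over a coarse search grid. The filter family, the particular scales and orientations, the grid search, and the Euclidean matching metric are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor-style kernel: an oriented band-pass filter (illustrative choice)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)           # rotate coordinates to orientation theta
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))      # Gaussian envelope
    k = env * np.cos(2.0 * np.pi * xr / wavelength)      # oriented sinusoidal carrier
    return k - k.mean()                                   # zero mean so uniform regions give no response

def response_vector(image, row, col, scales=(4, 8, 16), n_orient=4):
    """Filter responses at pixel (row, col) across scales and orientations (float grayscale image)."""
    responses = []
    for wavelength in scales:
        size = int(4 * wavelength) | 1                    # odd kernel width, ~4 wavelengths across
        half = size // 2
        patch = image[row - half:row + half + 1, col - half:col + half + 1]
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            k = gabor_kernel(size, wavelength, theta, sigma=0.5 * wavelength)
            responses.append(float(np.sum(patch * k)))    # inner product = filter response
    return np.asarray(responses)

def locate_target(image, target_vec, stride=4, border=40):
    """Return the grid pixel whose response vector is closest (Euclidean) to target_vec."""
    best, best_rc = np.inf, None
    for r in range(border, image.shape[0] - border, stride):
        for c in range(border, image.shape[1] - border, stride):
            d = np.linalg.norm(response_vector(image, r, c) - target_vec)
            if d < best:
                best, best_rc = d, (r, c)
    return best_rc
```

Calling response_vector at the foveated pixel of one view and passing the result to locate_target on a later view returns the grid location whose filter responses match most closely, which is the matching step a saccade-learning strategy could then drive toward.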
dblp:conf/nips/RaoB94
fatcat:fyxax6yosrbh5iqaa2ixo263ga