A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
Modelling human visual navigation using multi-view scene reconstruction
2013
Biological Cybernetics
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer's prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene …
doi:10.1007/s00422-013-0558-2
pmid:23778937
pmcid:PMC3755223
fatcat:egbf6a4xcvdyha4r2h7yenk6bi