A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
Modeling human visual search: A combined Bayesian searcher and saliency map approach for eye movement guidance in natural scenes
[article] · 2020 · arXiv pre-print
Finding objects is essential for almost any daily-life visual task. Saliency models have been useful for predicting fixation locations in natural images, but they are static, i.e., they provide no information about the time sequence of fixations. Nowadays, one of the biggest challenges in the field is to go beyond saliency maps and predict the sequence of fixations involved in a visual task, such as searching for a given target. Bayesian observer models have been proposed for this task, as they represent […]
arXiv:2009.08373v2
fatcat:4jgqhgtteferpo3cw42vz6rwqe
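The "Bayesian observer models" the abstract refers to can be illustrated with a toy sketch: use a saliency map as the prior over target location, and after each unsuccessful fixation multiply the posterior by the probability of a miss (which decays with eccentricity from the current fixation), then fixate the new posterior maximum. Everything below — the Gaussian visibility model, the MAP fixation rule, and all names and parameters — is an assumption for illustration, not the paper's actual model.

```python
import numpy as np

def bayesian_search(saliency, target, vis_sigma=2.0, max_fixations=50):
    """Toy ideal-Bayesian-searcher sketch on a 2-D grid (illustrative only;
    the visibility model and fixation rule are assumptions, not the paper's).

    saliency : 2-D array used as the prior over target location
    target   : (row, col) true target position
    Returns the list of fixations made until the target is fixated.
    """
    h, w = saliency.shape
    posterior = saliency / saliency.sum()      # prior comes from the saliency map
    ys, xs = np.mgrid[0:h, 0:w]
    fixations = []
    while len(fixations) < max_fixations:
        # MAP rule: fixate the current posterior maximum
        fix = np.unravel_index(np.argmax(posterior), posterior.shape)
        fixations.append(fix)
        if fix == target:                      # target fixated -> search ends
            return fixations
        # detection probability decays with eccentricity from the fixation
        d2 = (ys - fix[0]) ** 2 + (xs - fix[1]) ** 2
        vis = np.exp(-d2 / (2.0 * vis_sigma ** 2))
        # the target was NOT detected: Bayesian update with P(miss | location)
        posterior *= (1.0 - vis)               # zero at the fixated cell itself
        posterior /= posterior.sum()
    return fixations
```

For example, with a uniform background, a strong decoy peak, and a slightly weaker peak at the true target, the sketch first fixates the decoy, discounts its neighborhood, and then moves to the target.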