
Predicting human gaze using low-level saliency combined with face detection

Moran Cerf, Jonathan Harel, Wolfgang Einhäuser, Christof Koch
2007 Neural Information Processing Systems  
We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings  ...  Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input.  ...  Combining face detection with various saliency algorithms: We tried to predict the attentional allocation via fixation patterns of the subjects using various saliency maps.  ... 
dblp:conf/nips/CerfHEK07 fatcat:z44dklwgpzas3fytb3f4dd6du4
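The combination described in this abstract can be illustrated with a minimal sketch: a normalized low-level saliency map is blended with a face channel built from detected face boxes. The weighting scheme, function names, and the value of `w_face` below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def normalize(m, eps=1e-8):
    """Rescale a map to the [0, 1] range."""
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + eps)

def face_channel(shape, boxes):
    """Binary conspicuity map from detected face boxes given as (x, y, w, h)."""
    fm = np.zeros(shape)
    for x, y, w, h in boxes:
        fm[y:y + h, x:x + w] = 1.0
    return fm

def combined_saliency(low_level_map, face_boxes, w_face=0.5):
    """Convex combination of a low-level saliency map and a face channel.
    The weight w_face is an assumed free parameter."""
    s = normalize(low_level_map)
    f = face_channel(s.shape, face_boxes)
    return normalize((1.0 - w_face) * s + w_face * f)

# Toy usage: a random low-level map and one detected face box.
low = np.random.rand(240, 320)
print(combined_saliency(low, [(100, 60, 40, 40)]).shape)
```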

Predictive Saliency Maps for Surveillance Videos

Fahad Fazal Elahi Guraya, Faouzi Alaya Cheikh, Alain Tremeau, Yubing Tong, Hubert Konik
2010 2010 Ninth International Symposium on Distributed Computing and Applications to Business, Engineering and Science  
The PSMs are compared with the experimentally obtained gaze maps and with saliency maps obtained using approaches from the literature.  ...  In this paper we focus only on surveillance videos; therefore, in addition to low-level features such as intensity, color, and orientation, we consider high-level features such as faces as salient regions  ...  SPATIO-TEMPORAL SALIENCY MODEL BASED ON LOW AND HIGH LEVEL FEATURES: In this paper we propose a predictive method to combine the saliency maps for surveillance videos using static saliency and motion  ... 
doi:10.1109/dcabes.2010.160 fatcat:rut3j5qlp5cupanbgnxseu4hhm
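Comparing a predicted saliency map against experimentally recorded gaze data is commonly done with metrics such as the normalized scanpath saliency (NSS); the snippet does not name the metric used, so the sketch below is a generic, assumed illustration rather than the paper's evaluation procedure.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized scanpath saliency: mean z-scored map value at the
    recorded fixation locations, given as (row, col) pairs."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(np.mean([s[r, c] for r, c in fixations]))

# Toy usage: a random predicted map scored against two fixations.
pred = np.random.rand(120, 160)
print(nss(pred, [(40, 80), (60, 100)]))
```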

Predictive Visual Saliency Model For Surveillance Video

Faouzi Alaya Cheikh, Fahad Fazal Elahi Guraya
2011 Zenodo  
The human visual system is significantly coupled with eye movements [22] and easily detects human faces, a high-level visual cue in top-down saliency models.  ...  Lastly, we combine the stationary saliency map with faces, the motion saliency map, and the predictive saliency map into a predictive video saliency map (PVSM) using the average function.  ... 
doi:10.5281/zenodo.42675 fatcat:rzjbxlcpfzfwhn3kgtsurtdoxq
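The averaging step mentioned in the snippet can be sketched as a pixel-wise mean of the per-frame maps; the map names mirror the snippet, while the per-map rescaling is an assumption.

```python
import numpy as np

def pvsm(stationary_with_faces, motion_map, predictive_map):
    """Predictive video saliency map: pixel-wise average of the three
    per-frame maps, each rescaled to [0, 1] (rescaling is assumed)."""
    maps = [stationary_with_faces, motion_map, predictive_map]
    maps = [(m - m.min()) / (m.max() - m.min() + 1e-8) for m in maps]
    return np.mean(maps, axis=0)

# Toy usage with random per-frame maps.
frame_maps = [np.random.rand(120, 160) for _ in range(3)]
print(pvsm(*frame_maps).shape)
```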

Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes

Daniel Parks, Ali Borji, Laurent Itti
2015 Vision Research  
However, no computational model has been proposed to combine bottom-up saliency with an actor's head pose and gaze direction for predicting where observers look.  ...  We then learn a Bayesian combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states.  ...  Human faces have already been shown to be important in predicting eye movements (Cerf et al., 2007), and low-level features have formed the basis of many saliency models.  ... 
doi:10.1016/j.visres.2014.10.027 pmid:25448115 fatcat:bw6ljanjxzfvrkhr2sbtsshefm
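A minimal sketch of the general idea, assuming a much-simplified scheme: the three maps are mixed with weights tied to the stationary probabilities of a two-state ("head" / "non-head") Markov chain. The transition matrix, the 0.5 split, and the map names are illustrative, not the learned Bayesian combination from the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a 2-state Markov chain whose rows sum to 1."""
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return v / v.sum()

def combine_maps(gaze_following, head_map, bottom_up, P):
    """Mix the three maps with weights taken from the chain's 'head' /
    'non-head' state probabilities (an assumed, simplified scheme)."""
    p_head, p_non_head = stationary_distribution(P)
    combined = (p_head * 0.5 * (head_map + gaze_following)
                + p_non_head * bottom_up)
    return combined / (combined.sum() + 1e-8)

# Toy usage: random maps and a chain that tends to stay in its state.
P = np.array([[0.7, 0.3], [0.2, 0.8]])
maps = [np.random.rand(60, 80) for _ in range(3)]
print(combine_maps(*maps, P).sum())
```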

Where Should Saliency Models Look Next? [chapter]

Zoya Bylinskii, Adrià Recasens, Ali Borji, Aude Oliva, Antonio Torralba, Frédo Durand
2016 Lecture Notes in Computer Science  
We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected  ...  Have saliency models begun to converge on human performance?  ...  The field of saliency estimation has moved beyond the modeling of low-level visual attention to the prediction of human eye fixations on images.  ... 
doi:10.1007/978-3-319-46454-1_49 fatcat:kn7v2yfttnb37dnlbscveliyem
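The comparison of saliency models against human performance is typically run with fixation-prediction metrics; as one assumed example, an AUC-style score over thresholds of the saliency map can be computed as sketched below (the metric choice is illustrative, not taken from the paper).

```python
import numpy as np

def auc_judd(saliency_map, fixation_mask):
    """AUC with thresholds set at each fixated pixel's saliency value:
    true positives are fixations above threshold, false positives are
    non-fixated pixels above threshold."""
    s = saliency_map.ravel()
    f = fixation_mask.ravel().astype(bool)
    thresholds = np.sort(s[f])[::-1]
    tp = [0.0] + [(s[f] >= t).mean() for t in thresholds] + [1.0]
    fp = [0.0] + [(s[~f] >= t).mean() for t in thresholds] + [1.0]
    return float(np.trapz(tp, fp))

# Toy usage: a map that peaks where the fixations fall scores close to 1.
sal = np.zeros((50, 50)); sal[20:30, 20:30] = 1.0
fix = np.zeros((50, 50)); fix[22, 25] = 1; fix[25, 22] = 1
print(auc_judd(sal + 1e-3 * np.random.rand(50, 50), fix))
```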

Using semantic content as cues for better scanpath prediction

Moran Cerf, E. Paxon Frady, Christof Koch
2008 Proceedings of the 2008 symposium on Eye tracking research & applications - ETRA '08  
We here demonstrate that a combined model of high-level object detection and low-level saliency significantly outperforms a low-level saliency model in predicting locations humans fixate on.  ...  Under natural viewing conditions, human observers use shifts in gaze to allocate processing resources to subsets of the visual input.  ...  A model combining low-level saliency with high-level features: To determine whether high-level objects contribute more than their low-level attributes to power attention, we tested how well the standard low-level  ... 
doi:10.1145/1344471.1344508 dblp:conf/etra/CerfFK08 fatcat:cn47lqqt7zccjolg66h6pvptjy

Learning to Model Task-Oriented Attention

Xiaochun Zou, Xinbo Zhao, Jian Wang, Yongjia Yang
2016 Computational Intelligence and Neuroscience  
Models of saliency can be used to predict fixation locations, but a large body of previous saliency models focused on the free-viewing task.  ...  For many applications in graphics, design, and human-computer interaction, it is essential to understand where humans look in a scene with a particular task.  ...  We use machine learning to train a bottom-up, top-down model of saliency based on low-level, high-level, and center prior features.  ... 
doi:10.1155/2016/2381451 pmid:27247561 pmcid:PMC4876208 fatcat:d7s33wkmcvgwxfevul57c4poa4
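The learned-combination idea can be sketched as fitting per-pixel weights from feature maps to binary fixation labels; the ridge-regression learner, the feature names, and the center-prior form below are illustrative stand-ins for the paper's actual machine-learning setup.

```python
import numpy as np

def center_prior(shape, sigma=0.25):
    """Gaussian center-bias map (sigma given as a fraction of image size)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-(((x - w / 2) / (sigma * w)) ** 2
                    + ((y - h / 2) / (sigma * h)) ** 2) / 2)

def fit_weights(feature_maps, fixation_mask, lam=1.0):
    """Ridge regression from stacked per-pixel features to fixation labels."""
    X = np.stack([m.ravel() for m in feature_maps], axis=1)
    y = fixation_mask.ravel().astype(float)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def predict(feature_maps, weights):
    """Apply the learned weights to produce a saliency map."""
    X = np.stack([m.ravel() for m in feature_maps], axis=1)
    return (X @ weights).reshape(feature_maps[0].shape)

# Toy usage: a low-level map, a "face" channel, and a center prior.
h, w = 60, 80
feats = [np.random.rand(h, w), np.random.rand(h, w), center_prior((h, w))]
fix = (np.random.rand(h, w) < 0.01).astype(int)
weights = fit_weights(feats, fix)
print(predict(feats, weights).shape)
```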

A Spatiotemporal Saliency Model for Video Surveillance

Tong Yubing, Faouzi Alaya Cheikh, Fahad Fazal Elahi Guraya, Hubert Konik, Alain Trémeau
2011 Cognitive Computation  
The stationary model integrates faces as a supplementary feature to other low-level features such as color, intensity, and orientation.  ...  Every feature is analyzed with a multi-scale Gaussian pyramid, and all the features' conspicuity maps are combined using different weights.  ...  any low-level features and face feature in the saliency map, as shown in Fig. 22c.  ... 
doi:10.1007/s12559-010-9094-8 fatcat:zt4q3cwd6zea3ehoet7cqvkj74
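The multi-scale analysis can be sketched as center-surround differences between Gaussian-blurred versions of each feature map, followed by a weighted fusion; the scales, surround ratio, and weights below are assumed for illustration and are not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def conspicuity(feature, center_sigmas=(1, 2), surround_scale=4):
    """Center-surround conspicuity: |fine blur - coarse blur|,
    summed over a few scales and rescaled to [0, 1]."""
    acc = np.zeros_like(feature, dtype=float)
    for s in center_sigmas:
        center = gaussian_filter(feature, s)
        surround = gaussian_filter(feature, s * surround_scale)
        acc += np.abs(center - surround)
    return (acc - acc.min()) / (acc.max() - acc.min() + 1e-8)

def weighted_fusion(conspicuity_maps, weights):
    """Combine per-feature conspicuity maps with fixed weights."""
    return sum(w * m for w, m in zip(weights, conspicuity_maps))

# Toy usage: intensity, color, orientation, and face channels with weights.
maps = [conspicuity(np.random.rand(120, 160)) for _ in range(4)]
print(weighted_fusion(maps, [0.3, 0.3, 0.2, 0.2]).shape)
```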

Boosting bottom-up and top-down visual features for saliency estimation

A. Borji
2012 2012 IEEE Conference on Computer Vision and Pattern Recognition  
Here, we combine low-level features such as orientation, color, intensity, and saliency maps of previous best bottom-up models with top-down cognitive visual features (e.g., faces, humans, cars, etc.) and  ...  Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations in free viewing of natural scenes.  ...  Thus, combining high-level concepts and low-level features seems inevitable to scale up current models and reach human performance.  ... 
doi:10.1109/cvpr.2012.6247706 dblp:conf/cvpr/Borji12 fatcat:uobjdwwjzjbkxa75yyiq6z2eia

Improving Visual Saliency by Adding 'Face Feature Map' and 'Center Bias'

Sophie Marat, Anis Rahman, Denis Pellerin, Nathalie Guyader, Dominique Houzet
2012 Cognitive Computation  
(iii) A 'face' saliency map emphasizes areas where a face is detected, with a value proportional to the confidence of the detection.  ...  Faces play an important role in guiding visual attention; thus, the inclusion of face detection into a classical visual attention model can improve eye movement predictions.  ...  A recent paper [25] demonstrates that a combined model of high-level object detection and low-level saliency significantly outperformed a low-level saliency model in predicting eye movements.  ... 
doi:10.1007/s12559-012-9146-3 fatcat:4zaux5dgaza3bdzkhrgfyhnu2a
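The two ingredients named in this abstract, a confidence-weighted face map and a center bias, can be sketched directly; the box format, the Gaussian center-bias form, and the multiplicative combination are assumptions for illustration.

```python
import numpy as np

def face_feature_map(shape, detections):
    """Fill each detected face box (x, y, w, h, confidence) with a value
    proportional to the detector's confidence."""
    fm = np.zeros(shape)
    for x, y, w, h, conf in detections:
        fm[y:y + h, x:x + w] = np.maximum(fm[y:y + h, x:x + w], conf)
    return fm

def center_bias(shape, sigma_frac=0.3):
    """2D Gaussian centered on the frame (an assumed center-bias model)."""
    rows, cols = shape
    y, x = np.mgrid[0:rows, 0:cols]
    return np.exp(-(((x - cols / 2) ** 2) / (2 * (sigma_frac * cols) ** 2)
                    + ((y - rows / 2) ** 2) / (2 * (sigma_frac * rows) ** 2)))

# Toy usage: one confident face detection, modulated by the center bias.
shape = (240, 320)
sal = face_feature_map(shape, [(140, 90, 50, 60, 0.9)]) * center_bias(shape)
print(sal.max())
```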

Video saliency based on rarity prediction: Hyperaptor

Ioannis Cassagne, Nicolas Riche, Marc Decombas, Matei Mancas, Bernard. Gosselin, Thierry Dutoit, Robert Laganiere
2015 2015 23rd European Signal Processing Conference (EUSIPCO)  
Saliency models are able to provide heatmaps highlighting areas in images which attract human gaze.  ...  The rarity maps obtained for each feature are combined with the result of a superpixel algorithm to have a more object-based orientation.  ...  Fusion: The fusion process has two main steps: 1) the spatial feature maps are combined with the low-level priors map with a max fusion.  ... 
doi:10.1109/eusipco.2015.7362638 dblp:conf/eusipco/CassagneRDMGDL15 fatcat:hl5vrhuzqbbqpp3cinqxwk75g4
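The max-fusion step named in the snippet amounts to a pixel-wise maximum across normalized maps; the per-map normalization below is an assumed detail.

```python
import numpy as np

def max_fusion(maps):
    """Pixel-wise maximum across a list of equally sized, normalized maps."""
    return np.maximum.reduce([m / (m.max() + 1e-8) for m in maps])

# Toy usage: fuse spatial feature maps with a low-level priors map.
spatial = [np.random.rand(90, 120) for _ in range(3)]
priors = np.random.rand(90, 120)
print(max_fusion(spatial + [priors]).shape)
```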

Incorporating visual field characteristics into a saliency map

Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki
2012 Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA '12  
The experimental results using a large number of fixation/saccade data with wide viewing angles demonstrate the advantage of our saliency map, showing that it can accurately predict the point where one  ...  The existing models compute visual saliency uniformly over the retina and, thus, have difficulty in accurately predicting the next gaze (fixation) point.  ...  Cerf et al. [2008] proposed combining face detection with a saliency map computed from low-level features.  ... 
doi:10.1145/2168556.2168629 dblp:conf/etra/KubotaSOSSH12 fatcat:6g33kauwjbb23pn4hjld65r73u
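The idea of making saliency depend on the visual field can be sketched by weighting a uniform saliency map by retinal eccentricity relative to the current fixation; the falloff function and its half-width constant are assumed stand-ins for the paper's visual-field model.

```python
import numpy as np

def eccentricity_weight(shape, fixation, half_width=80.0):
    """Weight that decays with pixel distance from the current fixation
    (row, col); half_width in pixels is an assumed constant."""
    rows, cols = shape
    y, x = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(y - fixation[0], x - fixation[1])
    return 1.0 / (1.0 + (dist / half_width) ** 2)

def retina_weighted_saliency(saliency_map, fixation):
    """Modulate a uniformly computed saliency map by the visual-field weight."""
    return saliency_map * eccentricity_weight(saliency_map.shape, fixation)

# Toy usage: re-weight a random map around a fixation at (60, 80).
print(retina_weighted_saliency(np.random.rand(120, 160), (60, 80)).max())
```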

Saliency Prediction for Mobile User Interfaces [article]

Prakhar Gupta, Shubh Gupta, Ajaykrishnan Jayagopal, Sourav Pal, Ritwik Sinha
2017 arXiv   pre-print
Using this data, we develop a novel autoencoder-based multi-scale deep learning model that provides saliency prediction at the mobile interface element level.  ...  We introduce models for saliency prediction for mobile user interfaces.  ...  Our model learns a non-linear combination of low- and high-level features to predict saliency at the element level.  ... 
arXiv:1711.03726v3 fatcat:bcrtqg3horckrnnwbipl5n6doa
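A minimal sketch of an autoencoder-style saliency predictor with element-level pooling, written with PyTorch as an assumed framework; the architecture, layer sizes, and the box-pooling helper are illustrative and not the paper's actual multi-scale network.

```python
import torch
import torch.nn as nn

class TinySaliencyAutoencoder(nn.Module):
    """Encoder-decoder that maps a UI screenshot to a dense saliency map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def element_saliency(saliency_map, box):
    """Mean predicted saliency inside one UI element's (x, y, w, h) box."""
    x, y, w, h = box
    return saliency_map[..., y:y + h, x:x + w].mean()

# Toy usage: one 128x128 screenshot, pooled over a hypothetical button box.
model = TinySaliencyAutoencoder()
out = model(torch.rand(1, 3, 128, 128))
print(out.shape, element_saliency(out, (40, 40, 30, 20)).item())
```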

Saliency in Crowd [chapter]

Ming Jiang, Juan Xu, Qi Zhao
2014 Lecture Notes in Computer Science  
features at both low- and high-levels.  ...  To facilitate saliency in crowd study, a new dataset of 500 images is constructed with eye tracking data from 16 viewers and annotation data on faces (the dataset will be publicly available with the paper  ...  Our model combines low-level center-surround contrast and high-level semantic face features for saliency prediction in crowd.  ... 
doi:10.1007/978-3-319-10584-0_2 fatcat:bbzgtyzjsvcjzkbbbrehop44iu

Multi-layer linear model for top-down modulation of visual attention in natural egocentric vision

Keng-Teck Ma, Liyuan Li, Peilun Dai, Joo-Hwee Lim, Chengyao Shen, Qi Zhao
2017 2017 IEEE International Conference on Image Processing (ICIP)  
The first layer is a linear regression model which combines the bottom-up saliency maps on various visual features and objects.  ...  Inspired by the mechanisms of top-down attention in human visual perception, we propose a multi-layer linear model of top-down attention to modulate bottom-up saliency maps actively.  ...  It integrates bottom-up saliency maps on low-level saliency, ego-motion, exo-motion, ground, text, hand, and face.  ... 
doi:10.1109/icip.2017.8296927 dblp:conf/icip/MaLDLSZ17 fatcat:gsxslelujbdz5eis24wdgwksve