1,875 Hits in 5.1 sec

Multi-mode saliency dynamics model for analyzing gaze and attention

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama
2012 Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA '12  
The multi-mode saliency-dynamics model (MMSDM) is introduced to segment spatio-temporal patterns of the saliency dynamics into multiple sequences of primitive modes underlying the saliency patterns.  ...  We present a method to analyze a relationship between eye movements and saliency dynamics in videos for estimating attentive states of users while they watch the videos.  ...  Acknowledgement This work is in part supported by MEXT Global COE program "Informatics Education and Research Center for a Knowledge-Circulating Society".  ... 
doi:10.1145/2168556.2168574 dblp:conf/etra/YonetaniKM12 fatcat:4i6dtkwm7vc7rpn7go4zn2a6ze

Mental Focus Analysis Using the Spatio-temporal Correlation between Visual Saliency and Eye Movements

Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama
2012 Journal of Information Processing  
We extract spatio-temporal dynamics patterns of saliency areas from the videos, which we refer to as saliency-dynamics patterns, and evaluate eye movements based on their correlation with the saliency-dynamics  ...  The spatio-temporal correlation analysis between visual saliency and eye movements is presented for the estimation of the mental focus toward videos.  ...  Thus, from the existence and modes of the saliency flows, the patterns Single-Static, Single-Dynamic, Multi-Static, Multi-Static/Dynamic, and Multi-Dynamic are formed by a set of mode sequences  ... 
doi:10.2197/ipsjjip.20.267 fatcat:ay2ltyeqw5hzdpi2vlcrhclzqi

Multi-feature based visual saliency detection in surveillance video

Yubing Tong, Hubert Konik, Faouzi Alaya Cheikh, Fahad Fazal Elahi Guraya, Alain Tremeau, Guo Wei, Shipeng Li, Pascal Frossard, Houqiang Li, Feng Wu, Bernd Girod
2010 Visual Communications and Image Processing 2010  
Compared with the gaze map from subjective experiments, the output of the multi-feature based video saliency detection model is close to the gaze map.  ...  First, the background is extracted by binary-tree search; then the main features in the foreground are analyzed using a multi-scale perception model.  ...  Itti's attention model and GAFFE are two typical stationary-image saliency analysis methods adopting the 'bottom-up' visual attention mechanism [1, 2].  ... 
doi:10.1117/12.863281 dblp:conf/vcip/TongKCGT10 fatcat:jy76hxkghnf75jln2gt733g45e

A Spatiotemporal Saliency Model for Video Surveillance

Tong Yubing, Faouzi Alaya Cheikh, Fahad Fazal Elahi Guraya, Hubert Konik, Alain Trémeau
2011 Cognitive Computation  
Both bottom-up and top-down attention mechanisms are involved in this model. Stationary saliency and motion saliency are analyzed separately.  ...  Every feature is analyzed with a multi-scale Gaussian pyramid, and all the feature conspicuity maps are combined using different weights.  ...  In the well-known model of Itti, every feature is analyzed using Gaussian pyramids at multiple scales [1].  ... 
doi:10.1007/s12559-010-9094-8 fatcat:zt4q3cwd6zea3ehoet7cqvkj74
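Several of the surveyed models share the Itti-style pipeline the snippets above describe: each feature is analyzed over a multi-scale Gaussian pyramid, a per-feature conspicuity map is formed from center–surround differences, and the maps are combined with weights. A minimal NumPy sketch of that idea, where the blur, pyramid depth, and weights are illustrative choices rather than values from any of the listed papers:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur using only NumPy (rows, then columns)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, ((0, 0), (radius, radius)), mode="edge")
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    pad = np.pad(img, ((radius, radius), (0, 0)), mode="edge")
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, pad)

def center_surround(feature, levels=3):
    """Conspicuity map: sum of |feature - coarse level| over pyramid levels,
    with each coarse level upsampled back to the input resolution."""
    h, w = feature.shape
    consp = np.zeros_like(feature)
    current = feature
    for _ in range(levels):
        coarse = gaussian_blur(current)[::2, ::2]          # blur + downsample
        fh = h // coarse.shape[0] + 1                       # nearest-neighbour
        fw = w // coarse.shape[1] + 1                       # upsample factors
        up = np.kron(coarse, np.ones((fh, fw)))[:h, :w]
        consp += np.abs(feature - up)                       # center - surround
        current = coarse
    return consp

def saliency(features, weights):
    """Weighted combination of normalized per-feature conspicuity maps."""
    total = 0.0
    for feat, wgt in zip(features, weights):
        c = center_surround(feat)
        rng = c.max() - c.min()
        if rng > 0:
            c = (c - c.min()) / rng                         # normalize to [0, 1]
        total = total + wgt * c
    return total
```

With weights that sum to one, the resulting map stays in [0, 1], which makes it easy to compare against a normalized gaze map.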

Review of Visual Saliency Prediction: Development Process from Neurobiological Basis to Deep Models

Fei Yan, Cheng Chen, Peng Xiao, Siyu Qi, Zhiliang Wang, Ruoxiu Xiao
2021 Applied Sciences  
Nevertheless, the deep models still have some limitations, for example in tasks involving multi-modality and semantic understanding.  ...  the saliency model, and the emerging applications, to provide new saliency predictions for follow-up work and the necessary help and advice.  ...  Dynamic models also include multi-stream, multi-modal, and 3D CNNs, among other forms.  ... 
doi:10.3390/app12010309 fatcat:u5yvrsykkbcevj46un5e4hrzs4

A Multi-Modal Panoramic Attentional Model for Robots and Applications [chapter]

Ravi Sarvadevabhatla, Victor Ng-Thow-Hing
2012 The Future of Humanoid Robots - Research and Applications  
doi:10.5772/27023 fatcat:fbcrg7y57ncaliezxytnczceqa

A Multi-modal Discourse Analysis on D&G's Advertisements

Lu Zhou, the Southwestern University of Finance and Economics, China
2020 International Journal of Languages Literature and Linguistics  
Thirdly, to get the compositional meaning, which consists of three elements: the information value, the salience, and the framing.  ...  This article will analyze the context behind these social symbols, and discuss whether D&G achieved its ideographic function and commercial intention.  ...  The multi-modal discourse analysis includes the discourse forms of the visual mode, auditory mode, tactile mode, and spatial mode.  ... 
doi:10.18178/ijlll.2020.6.2.258 fatcat:phyvoqekfvezrpodv42i5oqnii

A saliency-based method of simulating visual attention in virtual scenes

Oyewole Oyekoya, William Steptoe, Anthony Steed
2009 Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology - VRST '09  
for saccade generation, and static gaze featuring non-moving centered eyes.  ...  A critical question is addressed: What types of saliency attract attention in virtual environments and how can they be weighted to drive an avatar's gaze?  ...  Acknowledgements The authors acknowledge the support of EPSRC Eye Catching (EP/E007406/1) and EU Presenccia (contract no. 27731) Projects.  ... 
doi:10.1145/1643928.1643973 dblp:conf/vrst/OyekoyaSS09 fatcat:doegjmy3c5dplciwgrc6wsamte

When I Look into Your Eyes: A Survey on Computer Vision Contributions for Human Gaze Estimation and Tracking

Dario Cazzato, Marco Leo, Cosimo Distante, Holger Voos
2020 Sensors  
revolutionized the whole machine learning area, and gaze tracking as well.  ...  A very long journey has been made since the first pioneering works, and this continuous search for more accurate solutions has been further boosted in the last decade, when deep neural networks have  ...  Some basic features are considered: signal orientations and spatial frequencies for the static saliency, and the magnitude of motion for the dynamic saliency.  ... 
doi:10.3390/s20133739 pmid:32635375 pmcid:PMC7374327 fatcat:jwou6gv4f5dy7lrsxvtbnb2fly

Learning Where to Attend Like a Human Driver [article]

Andrea Palazzi, Francesco Solera, Simone Calderara, Stefano Alletto, Rita Cucchiara
2017 arXiv   pre-print
In this paper we study the dynamics of the driver's gaze and use it as a proxy to understand related attentional mechanisms.  ...  Second, we model the driver's gaze by training a coarse-to-fine convolutional network on short sequences extracted from the DR(eye)VE dataset.  ...  VI we analyze how well the model mimics the driver's focus dynamics. V.  ... 
arXiv:1611.08215v2 fatcat:5lu7amm2ibevdpfwoefmdmheru

What/Where to Look Next? Modeling Top-Down Visual Attention in Complex Interactive Environments

Ali Borji, Dicky N. Sihite, Laurent Itti
2014 IEEE Transactions on Systems, Man & Cybernetics. Systems  
In this study, we describe new task-dependent approaches for modeling top-down overt visual attention based on graphical models for probabilistic inference and reasoning.  ...  Several visual attention models have been proposed for describing eye movements over simple stimuli and tasks such as free viewing or visual search.  ...  Two modes of gaze prediction are possible: 1) memory-dependent and 2) memoryless. The only difference is that in the memoryless mode, information about previous actions and gazes is not available.  ... 
doi:10.1109/tsmc.2013.2279715 fatcat:z4g6nari2jb5lbhx2jb2ouwyaq
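The memory-dependent vs. memoryless distinction in the snippet above can be made concrete with a toy predictor: the memoryless mode uses only the current frame's per-region scores, while the memory-dependent mode additionally biases toward regions near the previous gaze target. The scoring scheme and the exponential proximity prior below are illustrative assumptions, not the paper's graphical model:

```python
import numpy as np

def predict_gaze(scores, prev_gaze=None, smoothness=0.5):
    """Pick the next gaze region from a 1-D array of per-region scores.

    prev_gaze=None  -> memoryless mode: argmax of the current scores only.
    prev_gaze=index -> memory-dependent mode: scores are modulated by an
    exponential proximity prior centered on the previous gaze region.
    """
    scores = np.asarray(scores, dtype=float)
    if prev_gaze is None:                                   # memoryless
        return int(np.argmax(scores))
    idx = np.arange(len(scores))
    proximity = np.exp(-smoothness * np.abs(idx - prev_gaze))  # memory term
    return int(np.argmax(scores * proximity))
```

For example, with scores `[0.1, 0.9, 0.8]` the memoryless mode picks region 1, while with `prev_gaze=2` the proximity prior keeps the prediction at region 2.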

Human Attention Modelization and Data Reduction [chapter]

Matei Mancas, Dominique De, Nicolas Riche, Xavier Siebert
2012 Video Compression  
Attention modeling: what is saliency?  ...  Attention in computer science: ideas and approaches. There are two main approaches to attention modeling in computer science.  ...  It operates in two modes: an exploration mode, in which no task is provided, and a search mode, with a specified target. The bottom-up mode is based on an enhancement of the Itti model.  ... 
doi:10.5772/34942 fatcat:esq57asj2vf27gzfir4ycx36im

Towards Collaborative and Intelligent Learning Environments Based on Eye Tracking Data and Learning Analytics: A Survey

Yuehua Wang, Shulan Lu, Derek Harter
2021 IEEE Access  
All data, including raw multi-view eye images, 3D eye shape models with annotations, and synthesized eye images based on the 3D models, are contained in the UT multi-view gaze dataset.  ...  (table fragment) Sitting, laboratory setting; images, models (.obj), gaze data (.csv); 20 GB of data including 3D eye shape models with annotations and synthesized eye images; used for 3D gaze estimation and video coding.  ... 
doi:10.1109/access.2021.3117780 fatcat:lz6ngjaarrholjiscknw6ch63i

Eye guidance in natural vision: Reinterpreting salience

B. W. Tatler, M. M. Hayhoe, M. F. Land, D. H. Ballard
2011 Journal of Vision  
We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction.  ...  However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature  ...  Acknowledgments The authors would like to thank Werner Schneider and an anonymous reviewer for their helpful comments and suggestions. We thank Brian Sullivan for his comments on an earlier draft.  ... 
doi:10.1167/11.5.5 pmid:21622729 pmcid:PMC3134223 fatcat:tnaf5oczb5fonckodmfyggivue

Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos

Sophie Marat, Tien Ho Phuoc, Lionel Granjon, Nathalie Guyader, Denis Pellerin, Anne Guérin-Dugué
2009 International Journal of Computer Vision  
In parallel, the static and the dynamic pathways are analyzed to understand what is more or less salient and for what type of videos our model is a good or a poor predictor of eye movement.  ...  These feature maps are used to form two saliency maps: a static and a dynamic one. These maps are then fused into a spatio-temporal saliency map.  ...  For this model, we only concentrated on some basic features: signal orientations and spatial frequencies for the static saliency, and the magnitude of motion for the dynamic saliency.  ... 
doi:10.1007/s11263-009-0215-3 fatcat:4w5ubbbcsfbs5i3nuqsntiicje
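The two-pathway design described above (a static map and a dynamic map fused into a single spatio-temporal map) can be sketched as follows. The frame-difference motion estimate and the fixed fusion weight are illustrative stand-ins, not the fusion scheme of Marat et al.:

```python
import numpy as np

def dynamic_map(prev_frame, frame):
    """Dynamic pathway: motion strength approximated by an absolute
    frame difference (a crude stand-in for optical-flow magnitude)."""
    return np.abs(frame.astype(float) - prev_frame.astype(float))

def fuse(static_map, dyn_map, alpha=0.5):
    """Fuse the two pathways into one spatio-temporal saliency map.
    Each map is max-normalized first so the fusion weights are comparable."""
    def norm(m):
        peak = m.max()
        return m / peak if peak > 0 else m
    return alpha * norm(static_map) + (1.0 - alpha) * norm(dyn_map)
```

Raising `alpha` favors the static pathway (orientations, spatial frequencies); lowering it favors the motion pathway, which matters most for videos with strong camera or object motion.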
Showing results 1–15 of 1,875