
Multi-mode saliency dynamics model for analyzing gaze and attention

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama
2012 Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA '12  
Experimental results show the effectiveness of the proposed model in classifying the attentive states of users by learning the statistical differences of the local saliency dynamics on gaze paths at each level  ...  The MMSDM enables us to describe the relationship by the local saliency dynamics around gaze points, which is modeled by a set of distances between gaze points and salient regions characterized by the  ...  Acknowledgement This work is in part supported by the MEXT Global COE program "Informatics Education and Research Center for a Knowledge-Circulating Society".  ... 
doi:10.1145/2168556.2168574 dblp:conf/etra/YonetaniKM12 fatcat:4i6dtkwm7vc7rpn7go4zn2a6ze

Spatio-temporal Saliency Detection in Dynamic Scenes Using Local Binary Patterns

Satya M. Muddamsetty, Desire Sidibe, Alain Tremeau, Fabrice Meriaudeau
2014 22nd International Conference on Pattern Recognition
In this paper, we introduce a new spatio-temporal saliency detection method for dynamic scenes based on dynamic textures computed with local binary patterns.  ...  The algorithm is evaluated on a dataset with complex dynamic scenes and the results show that our proposed method outperforms state-of-art methods.  ...  CONCLUSION In this paper, we have proposed a spatio-temporal saliency detection method of dynamic scenes based on local binary patterns.  ... 
doi:10.1109/icpr.2014.408 dblp:conf/icpr/MuddamsettySTM14 fatcat:bkhyoosu4rbibncyt3l6itljvm
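The LBP-based dynamic texture idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it computes plain 8-neighbour LBP codes per frame and takes their temporal variance as a crude dynamic saliency map (the paper builds dynamic textures from LBP descriptors; all function names here are illustrative).

```python
import numpy as np

def lbp_codes(frame):
    """8-neighbour local binary pattern codes for a 2-D grayscale frame."""
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = frame[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = frame[1 + dy:frame.shape[0] - 1 + dy,
                          1 + dx:frame.shape[1] - 1 + dx]
        # Set bit if the neighbour is at least as bright as the center.
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def dynamic_saliency(frames):
    """Dynamic saliency as the temporal variance of per-pixel LBP codes."""
    stack = np.stack([lbp_codes(f) for f in frames]).astype(np.float64)
    var = stack.var(axis=0)
    return var / var.max() if var.max() > 0 else var
```

Pixels whose local texture pattern changes over time (moving regions) get high variance, while static backgrounds score zero.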

Salient objects detection in dynamic scenes using color and texture features

Satya M. Muddamsetty, Désiré Sidibé, Alain Trémeau, Fabrice Mériaudeau
2017 Multimedia tools and applications  
In our work, we model the dynamic textures in a dynamic scene with local binary patterns to compute the dynamic saliency map, and we use color features to compute the static saliency map.  ...  A common approach for obtaining a spatio-temporal saliency map is to combine a static saliency map and a dynamic saliency map.  ...  The authors in [20] proposed a dynamic saliency visual attention model based on the rarity of features.  ... 
doi:10.1007/s11042-017-4462-y fatcat:wdbdhppzvbh7vbptr7aqswc2k4

A performance evaluation of fusion techniques for spatio-temporal saliency detection in dynamic scenes

Satya M. Muddamsetty, Desire Sidibe, Alain Tremeau, Fabrice Meriaudeau
2013 IEEE International Conference on Image Processing
A spatio-temporal saliency map is usually obtained by the fusion of a static saliency map and a dynamic saliency map.  ...  In this paper, we evaluate the performances of different fusion techniques on a large and diverse dataset and the results show that a fusion method must be selected depending on the characteristics, in  ...  The nine fusion techniques are evaluated on a large dataset of twelve complex dynamic scenes.  ... 
doi:10.1109/icip.2013.6738808 dblp:conf/icip/MuddamsettySTM13 fatcat:5ekmdnfpubhxnntrr36unpzg24
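The static/dynamic map fusion this entry evaluates can be illustrated with a few common rules. A hedged sketch, not the paper's exact nine techniques: `dynamic_weight` is one plausible adaptive scheme, and all names are illustrative.

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; a flat map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse(static, dynamic, method="mean"):
    """Fuse a static and a dynamic saliency map into one spatio-temporal map."""
    s, d = normalize(static), normalize(dynamic)
    if method == "mean":
        return 0.5 * (s + d)
    if method == "max":
        return np.maximum(s, d)
    if method == "product":
        return s * d
    if method == "dynamic_weight":
        # Weight each map by its own mean activation, so the more
        # active modality dominates the fused result.
        ws, wd = s.mean(), d.mean()
        total = ws + wd
        return (ws * s + wd * d) / total if total > 0 else np.zeros_like(s)
    raise ValueError(f"unknown fusion method: {method}")
```

Which rule wins in practice depends on the scene, which is exactly the point the paper's evaluation makes.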

A Low-complexity Wavelet-based Visual Saliency Model to Predict Fixations

Manjula Narayanaswamy, Yafan Zhao, Wai Keung Fung, Nazila Fough
2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS)
CONCLUSION AND FUTURE WORK A low-complexity visual saliency model based on the Wavelet Transform (WT) is proposed.  ...  Predicting fixations based on bottom-up attention is a data-driven process which relies on sensory information of the input image [12], such as colour, luminance, motion, edges and so on.  ... 
doi:10.1109/icecs49266.2020.9294905 fatcat:i4wxd7rsbjfiddb7gkt3knp7bm

A Computing Model of Selective Attention for Service Robot Based on Spatial Data Fusion

Huanzhao Chen, Guohui Tian
2018 Journal of Robotics  
FOA is selected based on a winner-take-all (WTA) network and shifted by the inhibition-of-return (IOR) principle.  ...  We propose a computing model of selective attention, biologically inspired by the visual attention mechanism, which aims at predicting the focus of attention (FOA) in a domestic environment.  ...  A dynamical neural network is built on the WTA network and IOR mechanism to choose the focus of attention and to balance the saliency map.  ... 
doi:10.1155/2018/5368624 fatcat:6hlv3mrsizfydab7kupevw5cby
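The WTA-plus-IOR loop described above is straightforward to sketch: repeatedly pick the global maximum of the saliency map (winner-take-all), record it as the focus of attention, then suppress a disc around it (inhibition of return) so attention shifts elsewhere. A minimal sketch, not the paper's neural-network implementation; names are illustrative.

```python
import numpy as np

def scanpath(saliency, n_fixations=3, ior_radius=2):
    """Successive foci of attention via WTA selection and IOR suppression."""
    s = saliency.astype(np.float64).copy()
    h, w = s.shape
    yy, xx = np.mgrid[0:h, 0:w]
    foci = []
    for _ in range(n_fixations):
        # WTA: the current focus is the global maximum of the map.
        y, x = np.unravel_index(np.argmax(s), s.shape)
        foci.append((int(y), int(x)))
        # IOR: suppress a disc around the winner so it cannot win again.
        s[(yy - y) ** 2 + (xx - x) ** 2 <= ior_radius ** 2] = -np.inf
    return foci
```

Note how IOR forces the second fixation past nearby runner-up peaks that fall inside the suppressed disc.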

Detection of Moving Objects in Surveillance Video by Integrating Bottom-up Approach with Knowledge Base

Aarthi R., Amudha J., Boomika K., Anagha Varrier
2016 Procedia Computer Science  
This paper discusses a saliency detection method that aims to discover and localize the moving regions in indoor and outdoor surveillance videos.  ...  To decrease perceptual overload in CCTV monitoring, it is also necessary to automatically focus attention on significant events happening in overpopulated public scenes.  ...  The model is developed based on the observation in human vision that attention is driven by low-level stimuli.  ... 
doi:10.1016/j.procs.2016.02.026 fatcat:3jtkvtteanbxripbkiijlqoppa

Detecting Salient Objects of Natural Scenes in Videos Using Spatio-Temporal Saliency & Colour Map

2016 Zenodo  
This method computes similarity between corresponding regions of images, and visual summary data are derived accordingly.  ...  By combining the outputs of different methods, we develop a novel method that gives the best saliency detection results.  ...  In this paper, a novel saliency model based on a motion history map is proposed.  ... 
doi:10.5281/zenodo.1468351 fatcat:wlngwso3czam3nflumlyuspbtm
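A motion-history-map style temporal saliency, as mentioned in the snippet above, can be sketched with exponentially decayed frame differences: recent motion outweighs older motion. This is a generic illustration under assumed conventions, not the paper's model.

```python
import numpy as np

def temporal_saliency(frames, decay=0.5):
    """Motion-history temporal saliency: accumulate absolute frame
    differences, decaying older motion so recent motion dominates."""
    frames = [f.astype(np.float64) for f in frames]
    history = np.zeros_like(frames[0])
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr - prev)
        history = decay * history + motion
    m = history.max()
    return history / m if m > 0 else history
```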

On spatio-temporal saliency detection in videos using multilinear PCA

Desire Sidibe, Mojdeh Rastgoo, Fabrice Meriaudeau
2016 23rd International Conference on Pattern Recognition (ICPR)
[1] proposed a spatio-temporal saliency model based on information theory.  ...  A similar method is developed in [14] , where the video patches are modeled using dynamic textures and saliency is computed based on discriminant center-surround. Mancas et al.  ... 
doi:10.1109/icpr.2016.7899910 dblp:conf/icpr/SidibeRM16 fatcat:y2io6nwfx5bxbohmqrg3udfwx4

Research on Salient Object Detection using Deep Learning and Segmentation Methods

2019 International journal of recent technology and engineering  
It not only covers methods to detect salient objects, but also reviews work related to spatio-temporal video attention detection in video sequences.  ...  While many models have been proposed and several applications have emerged, a deep understanding of achievements and issues is still lacking.  ...  The spatial saliency map inherits the classical bottom-up spatial saliency map. For the temporal saliency map, a novel optical flow model is proposed based on the dynamic consistency of motion.  ... 
doi:10.35940/ijrte.b1046.0982s1119 fatcat:6ofq53vb7zhx7boq4ndpraphs4

Performance Comparison of Saliency Detection

Ning Li, Hongbo Bi, Zheng Zhang, Xiaoxue Kong, Di Lu
2018 Advances in Multimedia  
Saliency detection has attracted significant attention in the field of computer vision technology over years. At present, more than 100 saliency detection models have been proposed.  ...  In this paper, a relatively more detailed classification is proposed. Furthermore, we selected 25 models and evaluated their performance using four public image datasets.  ...  Salient Region Detection Based on Local Contrast.  ... 
doi:10.1155/2018/9497083 fatcat:oxnpzdhtebcpvlbq7ljowb3opu
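Local-contrast salient region detection, one of the model families surveyed above, can be illustrated by scoring each pixel by its absolute difference from the mean of its local neighbourhood. A naive, readable sketch for clarity, not any surveyed model's implementation.

```python
import numpy as np

def local_contrast_saliency(image, radius=1):
    """Per-pixel saliency as |pixel - neighbourhood mean|, normalized."""
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Clamp the neighbourhood window at image borders.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = abs(img[y, x] - img[y0:y1, x0:x1].mean())
    m = out.max()
    return out / m if m > 0 else out
```

Real detectors replace the double loop with box filters or integral images, but the contrast-against-surround principle is the same.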

Global and Local Sensitivity Guided Key Salient Object Re-augmentation for Video Saliency Detection [article]

Ziqi Zhou, Zheng Wang, Huchuan Lu, Song Wang, Meijun Sun
2018 arXiv   pre-print
Results on three benchmark datasets suggest that our model has the capability of improving the detection accuracy on complex scenes.  ...  In this paper, based on the fact that salient areas in videos are relatively small and concentrated, we propose a key salient object re-augmentation method (KSORA) using top-down semantic knowledge and  ...  [15] also designed a video saliency detection model based on similar ideas. [3] proposed a video saliency detection model based on multi-stream convLSTM.  ... 
arXiv:1811.07480v1 fatcat:xeypmi5u7zhzpay7s2rplnxcry

A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos

Zheng Wang, Jinchang Ren, Dong Zhang, Meijun Sun, Jianmin Jiang
2018 Neurocomputing  
Although research on detection of saliency and visual attention has been active over recent years, most of the existing work focuses on still images rather than video-based saliency.  ... 
doi:10.1016/j.neucom.2018.01.076 fatcat:nfzjix4kjzgihe5jw3wz2r5jta

Visual-Salience-Based Tone Mapping for High Dynamic Range Images

Zhengguo Li, Jinghong Zheng
2014 IEEE Transactions on Industrial Electronics
Visual saliency aims to predict the attentional gaze of observers viewing a scene, and it is thus highly demanded for tone mapping of high dynamic range (HDR) images.  ...  The saliency aware weighting and the proposed filter are applied to design a new local tone mapping algorithm for HDR images such that both extreme light and shadow regions can be reproduced on conventional  ...  Similar to the saliency model in [22] , [23] , our saliency model for the HDR image is based on the image cooccurrence histogram (ICH) of the HDR image.  ... 
doi:10.1109/tie.2014.2314066 fatcat:qjkikqfvxvaf5hrsb3po6x4rg4

A computer vision model for visual-object-based attention and eye movements

Yaoru Sun, Robert Fisher, Fang Wang, Herman Martins Gomes
2008 Computer Vision and Image Understanding  
This paper presents a new computational framework for modelling visual-object based attention and attention-driven eye movements within an integrated system in a biologically inspired approach.  ...  Attention operates at multiple levels of visual selection by space, feature, object and group depending on the nature of targets and visual tasks.  ...  a local area.  ... 
doi:10.1016/j.cviu.2008.01.005 fatcat:w6r4btbfsncftomvgmqwinu23q