Accumulative Computation Method for Motion Features Extraction in Active Selective Visual Attention
[chapter]
2005
Lecture Notes in Computer Science
The aim of this paper is to highlight the importance of the accumulative computation method for motion features extraction in the proposed active selective visual attention model. ...
A new method for active visual attention is briefly introduced in this paper. ...
Acknowledgements This work is supported in part by the Spanish CICYT TIN2004-07661-C01-01 and TIN2004-07661-C02-02 grants. ...
doi:10.1007/978-3-540-30572-9_16
fatcat:vprodlxc6jctxlphq4s5mhza2u
Neurally Inspired Mechanisms for the Dynamic Visual Attention Map Generation Task
[chapter]
2003
Lecture Notes in Computer Science
A model for dynamic visual attention is briefly introduced in this paper. ...
This paper mainly focuses on those subtasks of the model inspired by neuronal mechanisms, such as accumulative computation and lateral interaction. ...
Motion Features Extraction
Motion Interest Map
Shape Features Extraction
3 Accumulative computation and lateral interaction subtasks
3.1 Subtask "Working Memory Generation": The process of obtaining ...
doi:10.1007/3-540-44868-3_88
fatcat:h2s6zwccxzeobnpv3od3sp2t3e
Motion features to enhance scene segmentation in active visual attention
2006
Pattern Recognition Letters
A new computational model for active visual attention is introduced in this paper. ...
scene segmentation outputs in this dynamic visual attention method. ...
The authors are thankful to the anonymous reviewers for their very helpful comments. ...
doi:10.1016/j.patrec.2005.09.010
fatcat:bovocarcjbflxlcesokssqewgi
Dynamic stereoscopic selective visual attention (DSSVA): Integrating motion and shape with depth in video segmentation
2008
Expert systems with applications
Depth inclusion as an important parameter for dynamic selective visual attention is presented in this article. ...
The three models are based on the accumulative computation problem-solving method. ...
Acknowledgements This work is supported in part by the Spanish CICYT TIN2004-07661-C02-02 grant and the Junta de Comunidades de Castilla-La Mancha PBI06-0099 grant. ...
doi:10.1016/j.eswa.2007.01.007
fatcat:jqmwyimuxbh7fgmas4mohhimkq
Dynamic visual attention model in image sequences
2007
Image and Vision Computing
A new computational architecture of dynamic visual attention is introduced in this paper. ...
Thus, the three tasks involved in the attention model are introduced. The Feature-Extraction task obtains the features (color, motion and shape) necessary to perform object segmentation. ...
Acknowledgements This work is supported in part by the Spanish CICYT TIN2004-07661-C02-01 and TIN2004-07661-C02- ...
doi:10.1016/j.imavis.2006.05.004
fatcat:3w7k32n4szfezj5m56bflq633i
Algorithmic lateral inhibition method in dynamic and selective visual attention task: Application to moving objects detection and labelling
2006
Expert systems with applications
In this paper, the algorithmic lateral inhibition (ALI) method is now applied to the generic dynamic and selective visual attention (DSVA) task with the objective of moving-object detection, labelling ...
The four basic subtasks of our DSVA proposal, namely feature extraction, feature integration, attention building and attention reinforcement, are described in detail by inferential CommonKADS schemes ...
Acknowledgements This work is supported in part by the Spanish CICYT TIN2004-07661-C02-01 and TIN2004-07661-C02-02 grants. ...
doi:10.1016/j.eswa.2005.09.062
fatcat:sozlsnplknarhj6j5hj47vfe2m
Video Structuring: From Pixels To Visual Entities
2012
Zenodo
Publication in the conference proceedings of EUSIPCO, Bucharest, Romania, 2012 ...
The human brain and visual system actively seek out regions of interest by paying more attention to some specific parts of the image/video. ...
Stationary attention model: A powerful method of computing bottom-up visual cues is proposed in [13]. First, the input image is segmented into regions based on a graph partition strategy [19]. ... (a hedged sketch of this segment-then-score pipeline follows this entry)
doi:10.5281/zenodo.51978
fatcat:5tsrcrfs2zggjch4bxzcjzvfki
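The snippet above describes a segment-then-score pipeline: partition the image into regions with a graph-based method, then compute a bottom-up cue per region. The Python sketch below illustrates that pipeline under loud assumptions: references [13] and [19] are not identified in this listing, so the Felzenszwalb-Huttenlocher segmentation from scikit-image stands in for the graph partition strategy, and the region cue (distance of a region's mean colour from the global mean colour) is purely illustrative, not the cited paper's method.

import numpy as np
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb

# Step 1: graph-partition segmentation (Felzenszwalb-Huttenlocher, an assumed stand-in for [19]).
image = astronaut()                                   # built-in RGB test image
labels = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)

# Step 2: a crude bottom-up cue per region: distance of the region's mean colour
# from the global mean colour (illustrative only, not the cue proposed in [13]).
global_mean = image.reshape(-1, 3).mean(axis=0)
saliency_map = np.zeros(labels.shape, dtype=np.float64)
for r in np.unique(labels):
    mask = labels == r
    region_mean = image[mask].mean(axis=0)
    saliency_map[mask] = np.linalg.norm(region_mean - global_mean)
# saliency_map now holds one bottom-up score per pixel, constant within each region.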
Manipulation-skill Assessment from Videos with Spatial Attention Network
[article]
2019
arXiv
pre-print
In particular, we propose a novel RNN-based spatial attention model that considers accumulated attention state from previous frames as well as high-level knowledge about the progress of an ongoing task ...
Our motivation here is to estimate attention in videos that helps to focus on critically important video regions for better skill assessment. ...
elements: 1) instantaneous visual information in each frame (deep appearance-motion features); 2) high-level task-related knowledge; 3) accumulated attention information. ... (a loose sketch of these three ingredients follows this entry)
arXiv:1901.02579v2
fatcat:l2zebsl475c2pg7xnwz5rli6su
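The entry above lists three ingredients: per-frame appearance-motion features, high-level task knowledge, and an accumulated attention state. The sketch below is not the authors' RNN architecture; it is a loose NumPy illustration of how an attention map can be carried and accumulated across frames, with the decay factor, the softmax normalisation and the scalar "progress" signal all being assumptions introduced here for illustration.

import numpy as np

def softmax2d(x):
    """Normalise a 2-D score map into an attention map that sums to 1."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(feature_map, accumulated, progress, decay=0.9):
    """One illustrative step: blend current evidence with decayed past attention.

    feature_map : HxW per-frame scores (stand-in for deep appearance-motion features)
    accumulated : HxW attention accumulated over previous frames
    progress    : scalar in [0, 1] standing in for task-progress knowledge
    """
    scores = (1.0 - progress) * feature_map + progress * decay * accumulated
    attention = softmax2d(scores)
    new_accumulated = decay * accumulated + attention   # running attention state
    return attention, new_accumulated

# Toy usage over 20 random "frames".
H, W = 16, 16
accumulated = np.zeros((H, W))
for t in range(20):
    features = np.random.rand(H, W)
    attention, accumulated = attention_step(features, accumulated, progress=t / 19)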
Revisiting Algorithmic Lateral Inhibition and Accumulative Computation
[chapter]
2009
Lecture Notes in Computer Science
This paper is dedicated to the computational formulations of both methods, which have led to quite efficient solutions of problems related to motion-based computer vision. ...
called "algorithmic lateral inhibition", a generalization of lateral inhibition anatomical circuits, and "accumulative computation", a working memory related to the temporal evolution of the membrane ...
From the good results obtained by means of these methods in computer-visionbased motion analysis, the following step was the challenge of facing selective visual attention (dynamic) by means of a research ...
doi:10.1007/978-3-642-02264-7_7
fatcat:3vixstt3afbidpuctdollm4nfq
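For readers unfamiliar with the "accumulative computation" working memory named above, the following is a minimal Python sketch of the charge/discharge rule usually associated with it: a pixel's permanence value is driven to a maximum while motion is present and decays by a fixed step once motion disappears. The constants and the frame-difference motion detector are illustrative assumptions, not the authors' formulation.

import numpy as np

CHARGE_MAX = 255.0   # value assigned while motion is present at a pixel (illustrative)
DISCHARGE = 32.0     # amount subtracted per frame once motion disappears (illustrative)
CHARGE_MIN = 0.0

def update_permanence(memory, motion_mask):
    """One accumulative-computation step on a per-pixel permanence memory."""
    memory = np.where(motion_mask, CHARGE_MAX, memory - DISCHARGE)
    return np.clip(memory, CHARGE_MIN, CHARGE_MAX)

# Toy usage: the motion mask here is a simple thresholded frame difference.
memory = np.zeros((120, 160), dtype=np.float32)
prev = np.random.rand(120, 160).astype(np.float32)
for _ in range(10):
    frame = np.random.rand(120, 160).astype(np.float32)
    motion_mask = np.abs(frame - prev) > 0.5
    memory = update_permanence(memory, motion_mask)
    prev = frame
# High values in `memory` mark pixels with recent or persistent motion.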
Self-Supervised Learning of Audio-Visual Objects from Video
[article]
2020
arXiv
pre-print
We demonstrate the effectiveness of the audio-visual object embeddings that our model learns by using them for four downstream speech-oriented tasks: (a) multi-speaker sound source separation, (b) localizing and tracking speakers, (c) correcting misaligned audio-visual data, and (d) active speaker detection. ...
Thandavan for infrastructure support. This work is funded by the UK EPSRC CDT in AIMS, DARPA Medifor, and a Google-DeepMind Graduate Scholarship.
Bibliography ...
arXiv:2008.04237v1
fatcat:6qs4sxx3qfgzdcr77zlw7zzyvi
A conceptual frame with two neural mechanisms to model selective visual attention processes
2008
Neurocomputing
In this work we explore a way of bridging this gap for the case of attentional processes, consisting of (1) first proposing a conceptual model of attention's double bottom-up/top-down organization ...
computation) formulated at the symbolic level, and (5) assessing the validity of the proposal by accommodating the research team's work on diverse aspects of attention associated with visual surveillance ...
This work is also supported in part by Junta de Comunidades de Castilla-La Mancha PBI06-099 Grant. ...
doi:10.1016/j.neucom.2007.10.005
fatcat:lk7dck6fgrd5vlobaoee75qo2q
Speech/non-speech detection in meetings from automatically extracted low resolution visual features
2010
2010 IEEE International Conference on Acoustics, Speech and Signal Processing
In this paper we address the problem of estimating who is speaking from automatically extracted low resolution visual cues from group meetings. ...
Due to the high probability of losing the audio stream during video conferences, this work proposes methods for estimating speech using just low resolution visual cues. ...
Feature Extraction, 3.1 Estimating Motion: We estimate body motion in the close-view video streams by extracting visual activity features directly from the compressed domain. ...
doi:10.1109/icassp.2010.5494913
fatcat:wi6xzjgykfb37miey5r7jz67qa
Multiple Image Objects Detection, Tracking, and Classification using Human Articulated Visual Perception Capability
[chapter]
2008
Brain, Vision and AI
To do this, the motion region is skeletonized, and motion analysis is then performed to compute motion-feature variation using selected feature points (R. Cutler et al., 2000; H. ...
To extract multivariate feature vectors, shape and motion information are computed using Fourier descriptors, gradients, and motion-feature variation (A. J. Lipton et al., 1998; Y. ...
Furthermore, it works as a valuable resource for researchers interested in this field. ...
doi:10.5772/6040
fatcat:xmuljcpyzbbzvomxqsqfoc2jju
Driver hand activity analysis in naturalistic driving studies: challenges, algorithms, and experimental studies
2013
Journal of Electronic Imaging (JEI)
The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. ...
The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. ... (a hedged sketch of temporal edge accumulation follows this entry)
We also thank our UCSD LISA colleagues, in particular Dr. Cuong Tran and Mr. Minh Van Ly, and are also thankful to Mr. ...
doi:10.1117/1.jei.22.4.041119
fatcat:scjepctzurcedklore3awv2tiu
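As a companion to the "temporally accumulated edges" mentioned above, here is a hedged OpenCV/NumPy sketch: per-frame Canny edge maps are folded into a running accumulator with exponential decay, so recent and persistent edges dominate. The decay rate and Canny thresholds are illustrative, not the paper's parameters, and the synthetic frames stand in for the driver-facing video.

import cv2
import numpy as np

DECAY = 0.8   # fraction of the previous accumulator kept each frame (illustrative)

def accumulate_edges(frames):
    """Fold per-frame edge maps into a decaying temporal accumulator."""
    acc = None
    for frame in frames:                                    # uint8 grayscale frames
        edges = cv2.Canny(frame, 50, 150).astype(np.float32) / 255.0
        acc = edges if acc is None else DECAY * acc + (1.0 - DECAY) * edges
    return acc                                              # float map in [0, 1]

# Toy usage with synthetic frames; in practice these come from the recorded video.
frames = [np.random.randint(0, 256, (240, 320), dtype=np.uint8) for _ in range(30)]
motion_edge_map = accumulate_edges(frames)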
A Computing Model of Selective Attention for Service Robot Based on Spatial Data Fusion
2018
Journal of Robotics
Both static and dynamic features are combined in the attention-selection computing process. Information from sensor networks is transformed and incorporated into the model. ...
We propose a computing model of selective attention, biologically inspired by the visual attention mechanism, which aims at predicting the focus of attention (FOA) in a domestic environment. ...
A Saliency Computing Model of Selective Attention: The general procedure of the saliency-computing model for selective attention is illustrated in Figure 2. ... (a minimal fusion sketch follows this entry)
doi:10.1155/2018/5368624
fatcat:6hlv3mrsizfydab7kupevw5cby
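The entry above combines static and dynamic features to predict a focus of attention (FOA). The sketch below is a minimal NumPy illustration of one plausible fusion step: normalise each feature map, average the static and dynamic groups, take a weighted sum, and read the FOA off the saliency peak. The normalisation, the equal weights and the toy maps are assumptions; the paper's sensor-network fusion is not reproduced here.

import numpy as np

def normalise(m):
    """Rescale a feature map to [0, 1]; constant maps stay all-zero."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def fuse(static_maps, dynamic_maps, w_static=0.5, w_dynamic=0.5):
    """Combine static and dynamic feature maps into a saliency map and an FOA."""
    static = np.mean([normalise(m) for m in static_maps], axis=0)
    dynamic = np.mean([normalise(m) for m in dynamic_maps], axis=0)
    saliency = w_static * static + w_dynamic * dynamic
    foa = np.unravel_index(saliency.argmax(), saliency.shape)   # peak location
    return saliency, foa

# Toy usage: three static maps (e.g. colour, intensity, orientation) and one motion map.
static_maps = [np.random.rand(60, 80) for _ in range(3)]
dynamic_maps = [np.random.rand(60, 80)]
saliency, foa = fuse(static_maps, dynamic_maps)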