Attention please!
2011
Proceedings of the 1st International Conference on Learning Analytics and Knowledge - LAK '11
This paper presents the general goal of and inspiration for our work on learning analytics, which relies on attention metadata for visualization and recommendation. ...
Moreover, recommendation can help to deal with the "paradox of choice" and turn abundance from a problem into an asset for learning. ...
Much more importantly, the support, comments and feedback from my team and students have taught me much more than I will ever be able to teach them. ...
doi:10.1145/2090116.2090118
dblp:conf/lak/Duval11
fatcat:w35mvz5kojab3cjcwuqkik3s5y
Rapid learning in attention shifts: A review
2006
Visual Cognition
PRIMING IN A CONJUNCTIVE VISUAL SEARCH TASK In Kristjánsson, Wang, and Nakayama (2002) we have further investigated priming in visual search using a more challenging visual search task than the one ...
To tie these findings to the present topic of how previous task history, in the short run, influences deployments of visual attention, we argued that the efficient search we observed, where explicit guidance ...
doi:10.1080/13506280544000039
fatcat:xsjslbbvirdx5ptrompsnmca3m
Deep Multimodal Neural Architecture Search
[article]
2020
arXiv
pre-print
By using a gradient-based NAS algorithm, the optimal architectures for different tasks are learned efficiently. ...
In this paper, we devise a generalized deep multimodal neural architecture search (MMnas) framework for various multimodal learning tasks. ...
Early NAS methods used reinforcement learning to search neural architectures, which is computationally expensive [64, 65]. ...
arXiv:2004.12070v2
fatcat:424rhm5cknhpbklcn2obcgzvh4
Adaptive Feature Guidance: Modelling Visual Search with Graphical Layouts
2019
International Journal of Human-Computer Studies
The model suggests, for example, that (1) layouts that are visually homogeneous are harder to learn and more vulnerable to changes, (2) elements that are visually salient are easier to search and more ...
A B S T R A C T We present a computational model of visual search on graphical layouts. It assumes that the visual system is maximising expected utility when choosing where to fixate next. ...
Therefore, the main problem in visually searching graphical UIs becomes the problem of attention deployment: where to look next? ...
doi:10.1016/j.ijhcs.2019.102376
fatcat:bjk47k7iorfqnokwfj7v2lyf6a
Pose-Guided Multi-Granularity Attention Network for Text-Based Person Search
[article]
2019
arXiv
pre-print
To further capture the phrase-related visual body part, a fine-grained alignment network (FA) is proposed, which employs pose information to learn latent semantic alignment between visual body part and ...
To exploit the multilevel corresponding visual contents, we propose a pose-guided multi-granularity attention network (PMA). ...
Recently, attention has been widely used in person search, selecting either visual contents or textual information. ...
arXiv:1809.08440v3
fatcat:rb33zfv645at3nfh7qu7vcjqvi
Pose-Guided Multi-Granularity Attention Network for Text-Based Person Search
2020
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
To further capture the phrase-related visual body part, a fine-grained alignment network (FA) is proposed, which employs pose information to learn latent semantic alignment between visual body part and ...
To exploit the multilevel corresponding visual contents, we propose a pose-guided multi-granularity attention network (PMA). ...
Recently, attention has been widely used in person search, selecting either visual contents or textual information. ...
doi:10.1609/aaai.v34i07.6777
fatcat:e7o4adxewrb5ned33qnmjdisuq
Visual search and location probability learning from variable perspectives
2013
Journal of Vision
Do moving observers code attended locations relative to the external world or relative to themselves? To address this question we asked participants to conduct visual search on a tabletop. ...
Goal-driven attention can be deployed to prioritize an environment-rich region. ...
All authors contributed to the design. YVJ and CGC set up the experiments and collected the data. YVJ and KMS interpreted the data and wrote the paper. ...
doi:10.1167/13.6.13
pmid:23716120
fatcat:2p4tzf6k7rglzi47yi4ifulyrm
A Model for Calculating Saliency from Both Input Image and Memory
2002
IAPR International Workshop on Machine Vision Applications
As a first step toward implementing human functions related to visual attention in computer vision, we developed a computational model for calculating the saliency map of an input image. ...
an asymmetrical effect in visual search. ...
Although this asymmetrical effect in visual search implies that the degree to which attention can be easily directed to a certain area of the image is influenced by visual experience, the model in the ...
dblp:conf/mva/EndohGT02
fatcat:gvqqxcpxvbcufdor3yv5tgcefy
Visionary: Vision architecture discovery for robot learning
[article]
2021
arXiv
pre-print
This is the first approach to demonstrate a successful neural architecture search and attention connectivity search for a real-robot task. ...
We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low dimension action inputs and high dimensional visual inputs. ...
This is done within Reinforcement Learning (RL)-based robot learning context, where in addition to learning the main architecture to generate visual features and combine them with action inputs, the robot ...
arXiv:2103.14633v1
fatcat:7j5656qfojfrphdhleqwt5gvea
A User Study on User Attention for an Interactive Content-based Image Search System
2021
Conference on Human Information Interaction and Retrieval
For content-based image search systems, it is important to understand what users pay attention to, and thus engage users more in the search process. ...
User attention is one of the fundamental indications of users' interests in search. ...
The SS system used an active learning mechanism where data is abundant [28]. It enabled users to provide feedback as intents or preferences. ...
dblp:conf/chiir/Artemi021
fatcat:tcsago7gafcmrjokgtrimizmeq
Computational Models of Human Visual Attention and Their Implementations: A Survey
2013
IEICE transactions on information and systems
In particular, our objective is to carefully distinguish several types of studies related to human visual attention and saliency as a measure of attentiveness, and to provide a taxonomy from several viewpoints ...
Exploiting this pre-selection mechanism called visual attention for image and video processing systems would make them more sophisticated and therefore more useful. ...
Kazuhiko Kojima of Sanyo Electric Co., Ltd., and the anonymous associate editors and reviewers for their valuable discussions and useful comments, which helped to improve this work. ...
doi:10.1587/transinf.e96.d.562
fatcat:vkz3cdismbhadgzhickap3tzlm
Image Search With Text Feedback by Visiolinguistic Attention Learning
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
In this work, we tackle this task by a novel Visiolinguistic Attention Learning (VAL) framework. ...
Specifically, we propose a composite transformer that can be seamlessly plugged into a CNN to selectively preserve and transform the visual features conditioned on language semantics. ...
Acknowledgement: We would like to thank Maksim Lapin, Michael Donoser, Bojan Pepik, and Sabine Sternig for their helpful discussions. ...
doi:10.1109/cvpr42600.2020.00307
dblp:conf/cvpr/ChenGB20
fatcat:zpm32czgujfpvlvxnqhs3tt2re
Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation
2010
Psychological review
How do humans use target-predictive contextual information to facilitate visual search? ...
How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? ...
Acknowledgments This work was supported in part by CELEST, a National Science Foundation Science of Learning Center (NSF SBE-0354378) and HRL Laboratories LLC (subcontract #801881-BS under DARPA prime ...
doi:10.1037/a0020664
pmid:21038974
fatcat:7okysumkhnh2vncdkbuhgvvmnq
Learning to attend in a brain-inspired deep neural network
[article]
2018
arXiv
pre-print
Using deep reinforcement learning, ATTNet learned to shift its attention to the visual features of a target category in the context of a search task. ...
More fundamentally, ATTNet learned to shift its attention to target like objects and spatially route its visual inputs to accomplish the task. ...
inputs and shifting attention to be a useful thing to do. ...
arXiv:1811.09699v1
fatcat:v7fiud7ijzdrlpkx6nspagfvke
Reinforcement learning based visual attention with application to face detection
2012
2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
This paper introduces a novel approach to the problem of visual search by framing it as an adaptive learning process. ...
The mainstream approach to modeling focal visual attention involves identifying saliencies in the image and applying a search process to the salient regions. ...
Section II briefly reviews existing visual attention methods. Section III introduces the proposed learning-based approach to the visual search task. ...
doi:10.1109/cvprw.2012.6239177
dblp:conf/cvpr/GoodrichA12
fatcat:yf7btrlp4nhahd77psj2ro2w5m