7,017 Hits in 7.1 sec

Learning where to look

Brian D. Ehret
1999 CHI '99 Extended Abstracts on Human Factors in Computing Systems  
I dedicate this dissertation to my mother and father, Kathleen and Douglas Ehret, whose strength of character will always provide me with a source of inspiration, perspective and purpose.  ...  As Jones & Dumais (1986) put it, "It is not enough to know what we are looking for; we must also know where to look for it" (p. 43).  ...  The probability of retrieval will increase more quickly in tasks which require repeated shifts of attention to objects that remain in constant locations.  ... 
doi:10.1145/632716.632895 dblp:conf/chi/Ehret99 fatcat:2ok25ofpdraf5czaqls6cmh3qe

Where to look next? Eye movements reduce local uncertainty

Laura Walker Renninger, Preeti Verghese, James Coughlan
2007 Journal of Vision  
How do we decide where to look next? During natural, active vision, we move our eyes to gather task-relevant information from the visual scene.  ...  Information theory provides an elegant framework for investigating how visual stimulus information combines with prior knowledge and task goals to plan an eye movement.  ...  Kirchstein NRSA (#EY 14536-02) to L.W.R.; Air Force (#FA9550-05-1-0151) and NSF (#0347051) to P.V.; and NIDRR (#H133G030080), NSF (#IIS0415310), and NIH (#EY015187-01A2) to J.C.  ... 
doi:10.1167/7.3.6 pmid:17461684 fatcat:b6afv6xj4ffybelwlp3nnpq5ju
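
The information-theoretic framing mentioned in this entry can be illustrated with a small sketch: choose the next fixation that is expected to resolve the most local uncertainty. This is a generic toy implementation, not the authors' model; the 64×64 uncertainty grid, the Gaussian acuity kernel, and its width are all assumptions made for illustration.

```python
# Toy sketch: pick the next fixation that maximally reduces local uncertainty.
# Assumptions (not from the paper): a 64x64 grid of per-location uncertainty
# and a Gaussian "resolvability" kernel standing in for foveal acuity fall-off.
import numpy as np

rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, size=(64, 64))      # current per-location uncertainty (arbitrary units)

ys, xs = np.mgrid[0:64, 0:64]                 # grid coordinates of every cell

def expected_info_gain(fy, fx, sigma=6.0):
    """Uncertainty expected to be resolved by fixating grid cell (fy, fx)."""
    acuity = np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return float((H * acuity).sum())

# Evaluate every candidate fixation and pick the one with the largest expected gain.
gains = np.array([[expected_info_gain(y, x) for x in range(64)] for y in range(64)])
best = np.unravel_index(np.argmax(gains), gains.shape)
print("next fixation (row, col):", best, "expected gain:", round(float(gains[best]), 2))
```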

Serial attention mechanisms in visual search: A critical look at the evidence

Leonardo Chelazzi
1999 Psychological Research  
perception and visual selective attention.  ...  Until a few years ago, visual search tasks were of exclusive pertinence to psychophysicists and cognitive psychologists trying to understand the operating principles and computational constraints of visual  ...  I wish to thank Kia Nobre, Giovanni Berlucchi, Massimo Girelli, and Emanuela Bricolo for many helpful comments on preliminary versions of the manuscript, Marco Veronese for preparing the figures, and Manuela  ... 
doi:10.1007/s004260050051 pmid:10472200 fatcat:jgr5lqrr75dllhchxed2qippsu

6. Looking Up, Looking Down: A New Vision in Motion [chapter]

Jennifer Pranolo
2020 Screen Space Reconfigured  
The camera and the photograph are not used to replicate a pre-existing vision of reality but to explore the visual and cognitive terrain of a new spatial logic.  ...  This article traces a genealogy of what it means to 'see' photographically.  ...  They chart out mobile paths for our looking, viscerally displacing us from position to position, view to view.  ... 
doi:10.1515/9789048529056-008 fatcat:b3d4gdttafbyxpp6whsd2ven6i

Perceiving where another person is looking: the integration of head and body information in estimating another person's gaze

Pieter Moors, Filip Germeys, Iwona Pomianowska, Karl Verfaillie
2015 Frontiers in Psychology  
In summary, this study shows that body orientation is indeed used as a cue to determine where another person is looking.  ...  To be able to do this, the observer effectively has to compute where the other person is looking.  ...  The gaze cueing literature frequently relies on the classical Posner cueing paradigm in which a cue is presented either centrally or peripherally causing participants to respond faster to a target on the  ... 
doi:10.3389/fpsyg.2015.00909 pmid:26175711 pmcid:PMC4485307 fatcat:wuv3j5u3rzbrdb3y7jcdryjd2i

Looking at the center of the targets helps multiple object tracking

Hilda M. Fehd
2010 Journal of Vision  
Decreasing object size showed that peripheral visibility is necessary for tracking, but that center-looking continues up to the limits of peripheral visibility.  ...  This strategy of center-looking is in contrast to a target-looking strategy where participants would saccade from target to target.  ...  Thomas and two anonymous reviewers for helpful comments on the manuscript.  ... 
doi:10.1167/10.4.19 pmid:20465338 pmcid:PMC4150652 fatcat:3qjar5wrinexvheos2qz7k6x2y
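
The contrast between center-looking and target-looking described in this entry can be expressed as a simple distance comparison. The sketch below uses invented gaze and target coordinates and a naive nearest-vs-centroid rule; it is an illustration of the distinction, not the study's analysis.

```python
# Minimal sketch: contrast "center-looking" with "target-looking" for one gaze sample.
# Hypothetical data; the distance comparison is an illustration, not the study's method.
import numpy as np

targets = np.array([[120.0, 300.0], [480.0, 260.0], [300.0, 520.0], [250.0, 100.0]])  # target x, y (px)
gaze = np.array([290.0, 295.0])                                                       # current gaze sample (px)

centroid = targets.mean(axis=0)                             # center of the tracked set
d_centroid = np.linalg.norm(gaze - centroid)                # gaze-to-centroid distance
d_nearest = np.linalg.norm(targets - gaze, axis=1).min()    # gaze-to-nearest-target distance

strategy = "center-looking" if d_centroid < d_nearest else "target-looking"
print(f"d_centroid={d_centroid:.1f}px  d_nearest={d_nearest:.1f}px  -> {strategy}")
```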

Micro-analysis of infant looking in a naturalistic social setting: insights from biologically based models of attention

Kaya de Barbaro, Andrea Chiba, Gedeon O. Deák
2011 Developmental Science  
We examined the visual behaviors of n = 16 infants (6-7 months) while they attended to multiple spatially distributed targets in a naturalistic environment.  ...  We coded four measures of attentional vigilance, adapted from studies of norepinergic modulation of animal attention: rate of fixations, duration of fixations, latency to reorientation, and target 'hits'  ...  Deák and by NSF grant #SBE-0542013 to the Temporal Dynamics of Learning Center, an NSF Science of Learning Center.  ... 
doi:10.1111/j.1467-7687.2011.01066.x pmid:21884330 fatcat:xuhxnummpvg75dehz42zgkqjk4
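
The four vigilance measures listed in this entry (rate of fixations, duration of fixations, latency to reorientation, target hits) could be computed from a coded fixation log roughly as follows. The record fields, the toy data, and the particular latency definition are assumptions for illustration, not the paper's coding scheme.

```python
# Sketch of the four vigilance-style measures named above, computed from a
# hypothetical fixation log. Field names and the toy data are assumptions.
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_s: float      # fixation onset, seconds from session start
    duration_s: float   # fixation duration, seconds
    label: str          # coded gaze target, e.g. "toy", "face", "elsewhere"

log = [
    Fixation(0.0, 0.8, "toy"), Fixation(1.0, 0.4, "elsewhere"),
    Fixation(1.6, 1.2, "face"), Fixation(3.0, 0.6, "toy"),
]
session_s = 4.0
targets = {"toy", "face"}

rate = len(log) / session_s                                   # fixations per second
mean_duration = sum(f.duration_s for f in log) / len(log)     # mean fixation duration
# latency to reorientation: time from the end of an off-target fixation
# to the onset of the next on-target fixation (one possible definition)
latencies = [b.onset_s - (a.onset_s + a.duration_s)
             for a, b in zip(log, log[1:]) if a.label not in targets and b.label in targets]
hits = sum(f.label in targets for f in log)                   # fixations landing on a target

print(rate, mean_duration, latencies, hits)
```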

Task and context determine where you look

Constantin A. Rothkopf, Dana H. Ballard, Mary M. Hayhoe
2007 Journal of Vision  
The deployment of human gaze has been almost exclusively studied independent of any specific ongoing task and limited to two-dimensional picture viewing.  ...  Gaze distributions were compared to a random gaze allocation strategy as well as a specific "saliency model." Gaze distributions showed high similarity across subjects.  ...  Acknowledgments This research was supported by National Institutes of Health Grants EY05729 and RR09283.  ... 
doi:10.1167/7.14.16 pmid:18217811 fatcat:roqmxycpkfcsneqg6sqipidcuq
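
Comparing an observed gaze distribution against a saliency prediction and a random-allocation baseline, as this entry describes, can be sketched with a divergence measure. The synthetic 20×20 density maps and the use of KL divergence here are illustrative assumptions, not the study's actual model or metric.

```python
# Generic illustration of scoring a fixation distribution against a "saliency"
# prediction and a uniform (random-allocation) baseline with KL divergence.
# The 2D histograms here are synthetic stand-ins, not data from the study.
import numpy as np

rng = np.random.default_rng(1)

def normalize(p, eps=1e-9):
    p = p + eps                      # avoid log(0)
    return p / p.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

fixations = normalize(rng.gamma(2.0, size=(20, 20)))                     # observed gaze density (toy)
saliency  = normalize(fixations + 0.3 * rng.gamma(2.0, size=(20, 20)))   # imperfect model prediction
uniform   = normalize(np.ones((20, 20)))                                 # random-allocation baseline

print("KL(fixations || saliency):", kl(fixations, saliency))
print("KL(fixations || uniform): ", kl(fixations, uniform))
```

A lower divergence against the saliency prediction than against the uniform baseline would indicate that the model captures some of the structure in where people actually look.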

Looking without seeing or seeing without looking? Eye movements in sustained inattentional blindness

Vanessa Beanland, Kristen Pammer
2010 Vision Research  
In a high perceptual load task, IB was high (81%) and most participants did not allocate overt attention to the unexpected object.  ...  Few studies have explicitly investigated the role of eye movements in IB and the relative contributions of overt and covert attention.  ...  assistance; and Anne Aimola Davies for allowing recruitment for Experiment 1 in class.  ... 
doi:10.1016/j.visres.2010.02.024 pmid:20206648 fatcat:5pkmq2oasvda7ppvv76kmcy5oq

Components of Visual Orienting in Early Infancy: Contingency Learning, Anticipatory Looking, and Disengaging

Mark H. Johnson, Michael I. Posner, Mary K. Rothbart
1991 Journal of Cognitive Neuroscience  
their ability to use a central cue to predict the spatial location of a target stimulus.  ... 
doi:10.1162/jocn.1991.3.4.335 pmid:23967813 fatcat:vuplgy37aje3rnkzrsjqlni3bq

Getting directions from the hippocampus: The neural connection between looking and memory

Miriam L.R. Meister, Elizabeth A. Buffalo
2016 Neurobiology of Learning and Memory  
Probing how the hippocampus reflects and impacts motor output during looking behavior renders a practical path to advance our understanding of the hippocampal memory system.  ...  Here, we review how looking behavior is guided by memory in several ways, some of which have been shown to depend on the hippocampus, and how hippocampal neural signals are modulated by eye movements.  ...  Acknowledgments This work was supported by National Institute of Mental Health Grants MH080007 and MH093807 (to E.A.B.) and the National Institutes of Health, ORIP-0D010425.  ... 
doi:10.1016/j.nlm.2015.12.004 pmid:26743043 pmcid:PMC4927424 fatcat:pzxijfor2rh35kjsosqeipjune

Picture Changes During Blinks: Looking Without Seeing and Seeing Without Looking

J. Kevin O'Regan, Heiner Deubel, James J. Clark, Ronald A. Rensink
2000 Visual Cognition  
There were three kinds of changes: Appearance or disappearance of a picture element (this could be an object, part of object, surface, or region such as a shadow or the sky); a shift in the position of  ...  This idea relates to the notion that attention might be linked, not to locations in the visual field, but to aspects of the visual field that have been perceptually grouped, for example into objects (  ... 
doi:10.1080/135062800394766 fatcat:k4nsskmxyjfaxhz7zcicrimcgi

Why Do We Move Our Head to Look at an Object in Our Peripheral Region? Lateral Viewing Interferes with Attentive Search

Ryoichi Nakashima, Satoshi Shioiri, Robert J. van Beers
2014 PLoS ONE  
Why do we frequently fixate an object of interest presented peripherally by moving our head as well as our eyes, even when we are capable of fixating the object with an eye movement alone (lateral viewing  ...  Results show that lateral viewing increased the time required to detect a target in a search for the letter T among letter L distractors, a serial attentive search task, but not in a search for T among  ...  Acknowledgments We thank Mitsumasa Takahashi and Nobutaka Omori for assistance with data collection.  ... 
doi:10.1371/journal.pone.0092284 pmid:24647634 pmcid:PMC3960241 fatcat:nhuutzbhpfh4hobpbujffztk4u

What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video

Marianne Gullberg, Kenneth Holmqvist
2006 Pragmatics & Cognition  
, suggesting a social effect for overt gaze-following and visual joint attention.  ...  We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations.  ...  Acknowledgements We gratefully acknowledge the support of Birgit and Gad Rausing's Foundation for Research in the Humanities through a grant to the first author, as well as financial and technical support  ... 
doi:10.1075/pc.14.1.05gul fatcat:ts2iwklopfazzoz42mbdxoi5oe

When I Look into Your Eyes: A Survey on Computer Vision Contributions for Human Gaze Estimation and Tracking

Dario Cazzato, Marco Leo, Cosimo Distante, Holger Voos
2020 Sensors  
The automatic detection of eye positions, their temporal consistency, and their mapping into a line of sight in the real world (to find where a person is looking) is reported in the scientific literature  ...  task that aims at estimating gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder's, from a third-person view looking  ...  The problem of attention targets in videos has also been addressed in [98], where authors also try to correctly handle the case of out-of-frame gaze targets.  ... 
doi:10.3390/s20133739 pmid:32635375 pmcid:PMC7374327 fatcat:jwou6gv4f5dy7lrsxvtbnb2fly
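
One geometric building block behind the "mapping into a line of sight" mentioned in this entry is intersecting an estimated gaze ray with a known surface to obtain the point of regard. The sketch below shows that step for a planar screen; the coordinate values and frame conventions are invented for illustration.

```python
# Sketch: map an eye position and a unit gaze direction ("line of sight") to a
# point of regard on a planar surface via ray-plane intersection.
# All coordinates below are invented for illustration.
import numpy as np

def point_of_regard(eye, gaze_dir, plane_point, plane_normal):
    """Intersect the gaze ray eye + t*gaze_dir (t > 0) with a plane; None if parallel or behind."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = float(np.dot(plane_normal, gaze_dir))
    if abs(denom) < 1e-9:
        return None                       # gaze ray parallel to the plane
    t = float(np.dot(plane_normal, plane_point - eye)) / denom
    return eye + t * gaze_dir if t > 0 else None

eye = np.array([0.0, 0.0, 0.0])            # eye centre in some camera/world frame (metres)
gaze = np.array([0.1, -0.05, 1.0])         # estimated gaze direction
screen_origin = np.array([0.0, 0.0, 0.6])  # a point on the screen plane
screen_normal = np.array([0.0, 0.0, -1.0]) # screen plane normal, facing the viewer

print("point of regard:", point_of_regard(eye, gaze, screen_origin, screen_normal))
```
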
Showing results 1 — 15 out of 7,017 results