SWAET 2018: Scandinavian Workshop on Applied Eye Tracking Abstracts 23-24 August, 2018 Copenhagen Business School Solbjerg Plads 3, SP2.01 2000 Frederiksberg DENMARK Edited by Daniel Barratt, Raymond Bertram, & Marcus Nyström
Journal of Eye Movement Research
Reinhold Kliegl is professor of experimental psychology at the University of Potsdam, Germany. His research focuses on how the dynamics of language-related, perceptual, and oculomotor processes subserve attentional control, using reading, spatial attention, and working memory tasks as experimental venues. He also examines neural correlates and age-related differences in these processes. His research has been carried out in interdisciplinary projects with colleagues from linguistics as well as from theoretical physics and mathematics.

Monica Castelhano is an associate professor in psychology at Queen's University, Ontario, Canada. Her primary research interests are visual attention and visual memory, and how they function in our everyday lives.

Of all the tasks we perform every day, there is one that we engage in repeatedly and often without awareness: visual search. Whether looking for your car keys, your wallet, or simply where your mug is so you can have a sip of coffee, we engage in this simple task before almost every action. The visual information presented to us at any given moment from the real world is complex and ever-changing. Consequently, one of the most surprising feats of our cognitive system is the ease with which we can perceive, identify, and act upon the world around us. In my lecture, I will explore the various ways that scene context influences and guides eye movements. Rather than treating scene context as a singular influence, we will unpack what is meant by the term and examine the distinctions among spatial, semantic, object-scene relational, and object-function influences. Taken together, they improve our understanding of how we process complex information in the real world and how we are able to perform such complex visual search tasks with relative ease.

For evaluating whether the data from an eye tracker are precise enough for measuring microsaccades, Poletti and Rucci (2016) advocate that the measure "resolution" be used rather than the more established RMS-S2S. Resolution must be measured using an artificial eye that can be turned in very small steps; visual estimation is then used to assess whether the movements are visible in the data recorded by the eye tracker. As such, resolution cannot be measured with human data. Currently, resolution has an unclear and entirely uninvestigated relationship to the existing RMS-S2S and STD measures of precision (Holmqvist & Andersson, 2017, p. 190). Resolution measurements have only been made on the DPI and one other eye tracker.
We do not know the resolution values of the most widely used eye trackers. In this talk, we present a mechanism, the Stepperbox, for moving artificial eyes arbitrary distances from 1 arcmin and upward. We first present a validation of the mechanism showing that it is capable of reliably making these movements. We then use the Stepperbox to find the smallest reliably detectable movement in multiple eye trackers and empirically investigate how resolution relates to the extent (STD) and velocity (RMS-S2S) of the noise produced by these eye trackers. Figure 1 shows one of our recordings.

Figure 1. Increasingly larger steps, from 1 arcmin to 10 arcmin. Each staircase shape involved 10 movements of identical amplitude after a 4 s waiting period; stops between steps are 1 s long. The smaller movements clearly drown in the noise of this Tobii TX300 eye tracker, whose resolution is 6-7 arcmin.

A preliminary analysis indicates that the RMS-S2S values have a linear relationship to the resolution values. Eye trackers with filters (coloured noise) differ slightly from eye trackers with no filtering (white noise). We take our results to show that RMS-S2S can be used to assess the minimal movement amplitude that can be reliably detected with an eye tracker. We argue that Poletti and Rucci's criticism of RMS-S2S conflates two concepts of resolution: the amplitude at which events begin to drown in noise versus the quantization of the measurement space.
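The relationship between the two precision measures discussed above can be sketched numerically. The snippet below is an illustrative simulation, not the authors' analysis: the noise level (0.5 arcmin SD), sample count, and the 3×STD visibility cut-off are hypothetical assumptions chosen only to show why, for unfiltered white noise, RMS-S2S and STD stand in a fixed ratio, and why small staircase steps drown in noise.

```python
import numpy as np

rng = np.random.default_rng(42)

def rms_s2s(samples):
    """RMS of sample-to-sample differences: the 'velocity' of the noise."""
    diffs = np.diff(samples)
    return float(np.sqrt(np.mean(diffs ** 2)))

def std_precision(samples):
    """Standard deviation of the samples: the 'extent' of the noise."""
    return float(np.std(samples))

# Simulate a fixation recording with white noise of 0.5 arcmin SD.
# (Hypothetical value; real trackers differ, and many apply filters,
# which colours the noise and changes RMS-S2S relative to STD.)
noise_sd = 0.5
fixation = rng.normal(0.0, noise_sd, 2000)

noise_std = std_precision(fixation)
noise_rms = rms_s2s(fixation)

# For white noise the difference of two independent samples has variance
# 2*sigma^2, so RMS-S2S is about sqrt(2) times STD; filtering lowers this.
print(f"STD     = {noise_std:.3f} arcmin")
print(f"RMS-S2S = {noise_rms:.3f} arcmin")
print(f"ratio   = {noise_rms / noise_std:.2f}  (about 1.41 for white noise)")

# Crude detectability check, analogous to the staircase stimulus:
# call a step 'visible' if its amplitude exceeds 3 x STD (arbitrary cut-off).
for amp in range(1, 11):  # 1 to 10 arcmin, as in Figure 1
    label = "visible" if amp > 3 * noise_std else "drowned in noise"
    print(f"{amp:2d} arcmin step: {label}")
```

Under these assumptions, a filtered tracker (coloured noise) would show a ratio below √2, which is one way the white-noise/coloured-noise difference mentioned in the abstract could surface in data.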