Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture [article]

Scott A. Stone, Quinn A Boser, T Riley Dawson, Albert H Vette, Jacqueline S Hebert, Patrick M Pilarski, Craig S Chapman
2021 bioRxiv pre-print
Assessing gaze behaviour during real-world tasks is difficult; dynamic bodies moving through dynamic worlds make finding gaze fixations challenging. Current approaches involve laborious manual coding of pupil positions overlaid on video. One solution is to combine eye tracking with motion tracking to generate 3D gaze vectors. When combined with tracked or known object locations, fixation detection can be automated. Here we use combined eye and motion tracking and explore how linear regression models can generate accurate 3D gaze vectors. We compare the spatial accuracy of models derived from four short calibration routines across three data types: the performance of each calibration routine was assessed using calibration data, a validation task that demands short fixations on task-relevant locations, and an object-interaction task we used to bridge the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right) or binocular data. Our results suggest that all calibration routines perform similarly, with the best performance (i.e., sub-centimeter errors) coming from the task (i.e., the most "natural") trials, when the participant is looking at an object in front of them. Further, we found that spherical coordinate systems generate more accurate gaze vectors, with no difference in accuracy between monocular and binocular data. Overall, we recommend recording one-minute calibration datasets, using a binocular eye-tracking headset (for redundancy), using a spherical coordinate system when depth is not considered, and ensuring that data quality (i.e., tracker positioning) is high when recording datasets.
doi:10.1101/2021.10.22.465332 fatcat:dpwbdjab6fd7vfjq3invam2y4y
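
The abstract describes fitting linear regression models that map tracked pupil positions to a gaze direction expressed in spherical (or Cartesian) coordinates, using roughly one minute of calibration data. As a rough illustration of that idea (a minimal sketch, not the authors' code), the following Python snippet fits a least-squares map from binocular pupil coordinates to spherical gaze angles and converts a prediction to a unit 3D gaze vector. The feature layout, the 100 Hz sampling rate, and the azimuth/elevation convention are all assumptions, and the calibration targets here are synthetic stand-ins for the marker positions a motion-capture system would supply.

```python
import numpy as np

# Hypothetical calibration data: N samples of binocular pupil positions
# (left_x, left_y, right_x, right_y) plus the "ground truth" gaze angles
# (azimuth, elevation) toward a motion-tracked calibration marker.
N = 6000                                   # ~1 minute at an assumed 100 Hz
rng = np.random.default_rng(0)
pupils = rng.uniform(-1.0, 1.0, size=(N, 4))
X = np.hstack([pupils, np.ones((N, 1))])   # append a bias column

# Synthetic targets generated from an unknown linear map, plus noise;
# in practice these angles would come from marker positions relative
# to the tracked head.
true_W = rng.normal(size=(5, 2))
angles = X @ true_W + rng.normal(scale=0.01, size=(N, 2))

# Fit the regression by ordinary least squares.
W, *_ = np.linalg.lstsq(X, angles, rcond=None)

def gaze_vector(pupil_sample: np.ndarray) -> np.ndarray:
    """Predict a unit 3D gaze vector from one 4-element pupil sample."""
    az, el = np.append(pupil_sample, 1.0) @ W
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

print(gaze_vector(pupils[0]))
```

A Cartesian variant of the model would instead regress directly onto the (x, y, z) components of the gaze vector; the abstract reports that the spherical formulation produced more accurate vectors when depth is not considered.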