Gaze patterns predicting successful collision avoidance in patients with homonymous visual field defects

Eleni Papageorgiou, Gregor Hardiess, Hanspeter A. Mallot, Ulrich Schiefer
2012 Vision Research  
The aim of the present study was to identify efficient compensatory gaze patterns applied by patients with homonymous visual field defects (HVFDs) under virtual reality (VR) conditions in a dynamic collision avoidance task. Thirty patients with HVFDs due to vascular brain lesions and 30 normal subjects performed a collision avoidance task with moving objects at an intersection under two difficulty levels. Based on their performance (i.e. the number of collisions), patients were assigned to either the "adequate" (HVFD_A) or the "inadequate" (HVFD_I) subgroup by the median split method. Eye and head tracking data were available for 14 patients and 19 normal subjects. Saccades, fixations, mean number of gaze shifts, scanpath length, and mean gaze eccentricity were compared between HVFD_A patients, HVFD_I patients, and normal subjects. For both difficulty levels, the gaze patterns of HVFD_A patients (N = 5), compared to HVFD_I patients (N = 9), were characterized by longer saccadic amplitudes towards both the affected and the intact side, larger mean gaze eccentricity, more gaze shifts, longer scanpaths, and more fixations on vehicles but fewer fixations on the intersection. Both patient groups displayed more fixations in the affected than in the intact hemifield. Fixation number, fixation duration, scanpath length, and number of gaze shifts were similar between HVFD_A patients and normal subjects. Patients with HVFDs who adapt successfully to their visual deficit display distinct gaze patterns characterized by increased exploratory eye and head movements, particularly towards moving objects of interest on their blind side. In the context of a dynamic environment, efficient compensation in patients with HVFDs is possible by means of gaze scanning. This strategy allows continuous updating of the moving objects' spatial locations and selection of the task-relevant ones, which are represented in visual working memory.
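The median split grouping described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the patient labels and collision counts are hypothetical, and ties at the median are assigned to the adequate subgroup as one possible convention:

```python
from statistics import median

def median_split(collisions):
    """Split patients into an 'adequate' (HVFD_A) and an 'inadequate'
    (HVFD_I) subgroup by comparing each patient's collision count to
    the group median (fewer collisions = better performance)."""
    m = median(collisions.values())
    adequate = {p for p, c in collisions.items() if c <= m}
    inadequate = set(collisions) - adequate
    return adequate, inadequate

# Hypothetical collision counts for illustration only
counts = {"P1": 2, "P2": 7, "P3": 3, "P4": 9}
hvfd_a, hvfd_i = median_split(counts)
# Patients at or below the median collision count fall into HVFD_A
```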
doi:10.1016/j.visres.2012.06.004 pmid:22721638