Recognizing Gaze-Motor Behavioral Patterns in Manual Grinding Tasks
This paper reports our progress in developing techniques for "parsing" raw gaze and force data from manual grinding tasks into a principled model. A grinding task, though simple, requires the practitioner to combine elements from the large repertoire of her skillset. Based on the joint, gaze, and force data collected in a series of experiments, and by extending existing scanpath methods, we develop a visualization method called the Gaze-Motor Space-Time Cube (GMSTC), which helps us gain insight into the joint gaze-motor routines present in complex manual tasks. For instance, there is a strong correlation between the spectra of a subject's fixation and force distributions; such insight would be hard to extract by examining either the gaze or the force data separately. Furthermore, by comparing data obtained from operators with different levels of skill, we can quantitatively describe characteristics of human manual skill. For instance, we find that an experienced subject exhibits longer fixation durations and smaller fixation variation than an intermediate one. A detailed understanding of gaze-motor behavior broadens our knowledge of how a manual task is executed. Our results help provide this extra insight, and have implications for how knowledge and manual expertise are transferred from one generation of practitioners to the next.

† It should be pointed out that gaze, which can be tracked by an eye-tracker, corresponds to the overt movements of the eyes, not the covert movements of visual attention. A key assumption commonly accepted in visual-attention research is therefore that attention is linked to gaze direction, even though this may not always hold (Duchowski, 2007).
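The correlation between fixation and force spectra mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual analysis pipeline: the synthetic signals, sampling rate, and shared 2 Hz rhythm are assumptions chosen purely to show how one might correlate the magnitude spectra of two time series.

```python
import numpy as np

def spectral_correlation(sig_a, sig_b):
    """Pearson correlation between the magnitude spectra of two
    equal-length, uniformly sampled signals (means removed first)."""
    spec_a = np.abs(np.fft.rfft(sig_a - np.mean(sig_a)))
    spec_b = np.abs(np.fft.rfft(sig_b - np.mean(sig_b)))
    return np.corrcoef(spec_a, spec_b)[0, 1]

# Illustrative synthetic data: a shared 2 Hz rhythm plus noise,
# standing in for fixation and grinding-force time series.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)  # 100 Hz sampling, 10 s
fixation = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(t.size)
force = 0.8 * np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(t.size)

r = spectral_correlation(fixation, force)
print(f"spectral correlation: {r:.2f}")
```

With real data, the fixation signal would come from an eye-tracker and the force signal from the grinding tool's sensor; a high spectral correlation would indicate that the two modalities share rhythmic structure even when their raw traces look dissimilar.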