An efficient audiovisual saliency model to predict eye positions when looking at conversations

Antoine Coutrot, Nathalie Guyader
2015 23rd European Signal Processing Conference (EUSIPCO)
Classic models of visual attention dramatically fail at predicting eye positions on visual scenes involving faces. While some recent models combine faces with low-level features, none of them consider sound as an input. Yet sound is crucial in conversation or meeting scenes. In this paper, we describe and refine an audiovisual saliency model for conversation scenes. This model includes a speaker diarization algorithm that automatically modulates the saliency of conversation partners' faces and bodies according to their speaking-or-not status. To merge our different features into a master saliency map, we use an efficient statistical method (Lasso) that allows a straightforward interpretation of feature relevance. To train and evaluate our model, we ran an eye tracking experiment on a publicly available meeting video database. We show that increasing the saliency of speakers' faces (but not bodies) greatly improves the predictions of our model, compared with previous models that give an equal and constant weight to each conversation partner.
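The Lasso-based fusion described above can be illustrated with a minimal sketch. This is not the authors' implementation; the feature maps, shapes, and regularization strength below are illustrative assumptions. The key property shown is that Lasso shrinks the weights of uninformative feature maps toward zero, which is what makes feature relevance directly readable from the fitted coefficients.

```python
# Hypothetical sketch of Lasso feature-map fusion (not the authors' code).
# Each feature map is flattened into one column; the target is an
# eye-position density map from eye tracking (synthetic here).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
h, w = 36, 64                    # assumed downsampled frame size
n_pix = h * w

low_level = rng.random(n_pix)    # e.g. luminance/contrast saliency
faces = rng.random(n_pix)        # face-region confidence map
bodies = rng.random(n_pix)       # body-region confidence map
X = np.column_stack([low_level, faces, bodies])

# Synthetic "ground truth": gaze density dominated by faces, as the
# paper's conversation-scene findings would suggest.
y = 0.2 * low_level + 0.9 * faces + 0.05 * rng.random(n_pix)

# Lasso drives irrelevant feature weights toward zero; positive weights
# keep the master map interpretable as a saliency combination.
model = Lasso(alpha=0.01, positive=True).fit(X, y)
master_map = model.predict(X).reshape(h, w)
print(dict(zip(["low_level", "faces", "bodies"], model.coef_.round(3))))
```

With these synthetic maps the fitted weight for faces dominates and the weight for bodies collapses toward zero, mirroring the paper's conclusion that boosting speakers' faces (but not bodies) is what improves prediction.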
doi:10.1109/eusipco.2015.7362640 dblp:conf/eusipco/CoutrotG15