Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems

Frédéric Landragin
2006 Signal Processing  
The way we see the objects around us determines the speech and gestures we use to refer to them. The gestures we produce structure our visual perception. The words we use influence the way we see. Visual perception, language and gesture thus interact with one another in multiple ways. The problem is global and has to be tackled as a whole in order to understand the complexity of reference phenomena and to derive a formal model. Such a model may be useful for any kind of
human-machine dialogue system that aims at deep comprehension. We show how a referring act takes place within a contextual subset of objects. This implicit subset, called a 'reference domain', can be deduced from numerous clues, some coming from the visual context and some from the multimodal utterance. We present the 'multimodal reference domain' model, which takes these clues into account and can be exploited by a multimodal dialogue system during interpretation.
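The abstract's notion of a reference domain — an implicit subset of scene objects narrowed down by linguistic and gestural clues — can be illustrated with a minimal sketch. This is not the paper's formalism; all names (`SceneObject`, `build_domain`, `resolve`), the clue types, and the salience-based tie-breaking are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    category: str          # linguistic clue target, e.g. "triangle"
    position: tuple        # (x, y) screen coordinates
    salience: float = 0.0  # assumed visual prominence in [0, 1]

@dataclass
class ReferenceDomain:
    """Implicit contextual subset of objects a referring act operates on."""
    objects: list

def build_domain(scene, utterance_category=None, gesture_point=None, radius=100.0):
    """Deduce a reference domain from two kinds of clues (illustrative only):
    the category word in the utterance, and the region around a pointing
    gesture. Objects compatible with every available clue form the domain."""
    candidates = scene
    if utterance_category is not None:
        candidates = [o for o in candidates if o.category == utterance_category]
    if gesture_point is not None:
        gx, gy = gesture_point
        candidates = [o for o in candidates
                      if (o.position[0] - gx) ** 2 + (o.position[1] - gy) ** 2
                      <= radius ** 2]
    return ReferenceDomain(objects=candidates)

def resolve(domain):
    """Pick the most salient object in the domain as the referent."""
    return max(domain.objects, key=lambda o: o.salience) if domain.objects else None
```

For example, the utterance "this triangle" with a pointing gesture near (100, 50) first restricts the domain to triangles inside the pointed region, then resolution selects the most salient one:

```python
scene = [SceneObject("t1", "triangle", (10, 10), 0.3),
         SceneObject("t2", "triangle", (120, 40), 0.8),
         SceneObject("c1", "circle", (15, 20), 0.9)]
domain = build_domain(scene, utterance_category="triangle", gesture_point=(100, 50))
referent = resolve(domain)  # → t2, the salient triangle in the pointed region
```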
doi:10.1016/j.sigpro.2006.02.046