A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2011. The file type is application/pdf.
Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems
2006
Signal Processing
The way we see the objects around us determines the speech and gestures we use to refer to them. The gestures we produce structure our visual perception, and the words we use influence the way we see. Visual perception, language and gesture thus interact with one another in multiple ways. The problem is global and has to be tackled as a whole in order to understand the complexity of reference phenomena and to deduce a formal model. This model may be useful for any kind of …
doi:10.1016/j.sigpro.2006.02.046
fatcat:3yygaktsgfhqxirdrqw3a5xzsq