A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2013.
This paper presents the design of a multimodal sign-language-enabled dialogue system. Its functionality was tested on a prototype of an information kiosk for deaf people that provides information about train connections. We use automatic computer-vision-based sign language recognition, automatic speech recognition, and a touchscreen as input modalities. The outputs are shown on a screen displaying a 3D signing avatar and on a touchscreen displaying a graphical user interface. The information kiosk

doi:10.1145/2049536.2049599 dblp:conf/assets/HruzCKZAS11 fatcat:fheaty2vibgpfmydiwnlxjftve
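The abstract describes an architecture in which several input modalities (sign language recognition, speech recognition, touchscreen) feed a single dialogue manager, whose replies are rendered on two outputs (a signing avatar and a graphical user interface). The paper's actual interfaces are not given in this abstract; the following is a minimal sketch of that fan-in/fan-out pattern, with all names (`InputEvent`, `DialogueManager`, the render callbacks) invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InputEvent:
    """A recognized user input from any modality (hypothetical type)."""
    modality: str   # "sign", "speech", or "touch"
    utterance: str  # recognized query, e.g. a train-connection request

class DialogueManager:
    """Toy dialogue manager: any modality feeds the same handler,
    and each reply is rendered on every registered output."""
    def __init__(self) -> None:
        self.outputs: list[Callable[[str], None]] = []

    def add_output(self, render: Callable[[str], None]) -> None:
        self.outputs.append(render)

    def handle(self, event: InputEvent) -> str:
        # A real kiosk would query a train-connection database here;
        # we just echo the understood request.
        reply = f"[{event.modality}] query understood: {event.utterance}"
        for render in self.outputs:
            render(reply)
        return reply

dm = DialogueManager()
log: list[str] = []
dm.add_output(lambda text: log.append("avatar: " + text))  # signing avatar
dm.add_output(lambda text: log.append("gui: " + text))     # touchscreen GUI
dm.handle(InputEvent("sign", "next train to Prague"))
```

The point of the sketch is that modality fusion happens before dialogue logic: once recognition produces an `InputEvent`, the dialogue manager is indifferent to whether the query arrived as sign, speech, or touch.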