Designing Speech and Multimodal Interactions for Mobile, Wearable, and Pervasive Applications

Cosmin Munteanu, Keisuke Nakamura, Kazuhiro Nakadai, Pourang Irani, Sharon Oviatt, Matthew Aylett, Gerald Penn, Shimei Pan, Nikhil Sharma, Frank Rudzicz, Randy Gomez
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '16
Traditional interfaces are continuously being replaced by mobile, wearable, or pervasive interfaces. Yet when it comes to the input and output modalities through which we interact with such interfaces, we have yet to fully embrace some of the most natural forms of communication and information processing that humans possess: speech, language, gestures, thoughts. Very little HCI attention has been dedicated to designing and developing spoken language and multimodal interaction techniques, particularly for mobile and wearable devices. Independent of engineering progress in processing such modalities, there is now sufficient evidence that many real-life applications do not require 100% accuracy in processing multimodal input to be useful, particularly if such modalities complement each other. This multidisciplinary, two-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions to take in designing more natural interactions with mobile and wearable devices, and to look at how we can leverage recent advances in speech and multimodal processing.
doi:10.1145/2851581.2856506 dblp:conf/chi/MunteanuIOAPPSR16