MoodMusic
2011
Proceedings of the 24th annual ACM symposium adjunct on User interface software and technology - UIST '11 Adjunct
This work contributes a novel method for dynamically creating music playlists for groups based on their music preferences and current mood. ...
In this paper, we present MoodMusic, a method to dynamically generate contextually appropriate music playlists for groups of people. ...
The interface provides real-time feedback on the current conversation as well as the corresponding mood within Thayer's mood model (see Figure 2). ...
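As a rough illustration of the mood-mapping idea (not the authors' implementation), the sketch below maps hypothetical conversation-derived energy and stress estimates onto the four quadrants of Thayer's two-dimensional mood model and picks a playlist tag; all thresholds, quadrant labels, and tag names are assumptions.

# Hypothetical sketch: map conversation-derived mood estimates onto
# Thayer's two-dimensional mood model (energy vs. stress/tension).
def thayer_quadrant(energy, stress):
    """Both inputs are assumed to be normalized to [0, 1]."""
    if energy >= 0.5 and stress < 0.5:
        return "exuberance"    # high energy, low stress
    if energy >= 0.5:
        return "anxious"       # high energy, high stress
    if stress < 0.5:
        return "contentment"   # low energy, low stress
    return "depression"        # low energy, high stress

# Hypothetical playlist tags per quadrant.
PLAYLIST_FOR_MOOD = {
    "exuberance": "upbeat-pop",
    "anxious": "calming-ambient",
    "contentment": "acoustic-chill",
    "depression": "gentle-uplift",
}

mood = thayer_quadrant(energy=0.8, stress=0.2)   # a lively, relaxed group chat
print(mood, "->", PLAYLIST_FOR_MOOD[mood])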
doi:10.1145/2046396.2046435
dblp:conf/uist/BauerJC11
fatcat:ihkhoibwhbbazogofl7gahpfmm
CSCW '92 formal video program
1992
Proceedings of the 1992 ACM conference on Computer-supported cooperative work - CSCW '92
The application is a real-time simulation of objects modeled as point masses and springs. ...
conversing over a distance, using a phone or a video phone, for a period of ten weeks [9]. ...
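A minimal sketch of the kind of point-mass/spring simulation described above, using explicit Euler integration of Hooke's law; the constants and setup are illustrative assumptions, not the original application.

import math

class Particle:
    def __init__(self, x, y, mass=1.0):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.mass = mass

class Spring:
    def __init__(self, a, b, rest_length, k=10.0):
        self.a, self.b = a, b
        self.rest_length, self.k = rest_length, k

def step(particles, springs, dt=0.016, damping=0.98):
    # Accumulate spring forces, then integrate each particle one time step.
    forces = {id(p): [0.0, 0.0] for p in particles}
    for s in springs:
        dx, dy = s.b.x - s.a.x, s.b.y - s.a.y
        dist = math.hypot(dx, dy) or 1e-9
        f = s.k * (dist - s.rest_length)            # Hooke's law magnitude
        fx, fy = f * dx / dist, f * dy / dist
        forces[id(s.a)][0] += fx; forces[id(s.a)][1] += fy
        forces[id(s.b)][0] -= fx; forces[id(s.b)][1] -= fy
    for p in particles:
        fx, fy = forces[id(p)]
        p.vx = (p.vx + fx / p.mass * dt) * damping  # explicit Euler step
        p.vy = (p.vy + fy / p.mass * dt) * damping
        p.x += p.vx * dt
        p.y += p.vy * dt

a, b = Particle(0.0, 0.0), Particle(2.0, 0.0)       # stretched past rest length
step([a, b], [Spring(a, b, rest_length=1.0)])
print(round(a.x, 4), round(b.x, 4))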
doi:10.1145/143457.371591
fatcat:zar6flqibngn5awdn27jnvzjse
From multimodal analysis to real-time interactions with virtual agents
2014
Journal on Multimodal User Interfaces
This is even more important for virtual agents that communicate with humans in a real-time face-to-face setting. ...
Introduction One of the aims in building multimodal user interfaces is to make the interaction between user and systems as natural as possible. ...
and recognizing relevant multimodal features, crafting or learning models from these features, generating the appropriate behavior in real-time based on these models and evaluating the system in a methodologically ...
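The pipeline the authors outline (recognizing multimodal features, building models over them, generating behavior in real time) can be pictured with a toy sketch like the one below; the feature names, thresholds, and behaviors are hypothetical placeholders, not the paper's system.

# Illustrative pipeline sketch with hypothetical names:
# extract multimodal features, infer the user's state, generate agent behavior.
def extract_features(frame):
    # A real system would combine audio, gaze, posture, facial expression, etc.
    return {"speech_energy": frame.get("audio_rms", 0.0),
            "gaze_on_agent": frame.get("gaze_on_agent", False)}

def infer_state(features):
    if features["speech_energy"] > 0.3 and features["gaze_on_agent"]:
        return "addressing_agent"
    return "not_addressing"

def generate_behavior(state):
    return {"addressing_agent": "nod_and_listen",
            "not_addressing": "idle_gaze"}[state]

def realtime_loop(frames):
    # One behavior decision per incoming sensor frame.
    return [generate_behavior(infer_state(extract_features(f))) for f in frames]

print(realtime_loop([{"audio_rms": 0.5, "gaze_on_agent": True},
                     {"audio_rms": 0.1, "gaze_on_agent": False}]))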
doi:10.1007/s12193-014-0152-5
fatcat:kj5kyxyxfbg7peh5igjidb3xcm
Effect of Machine Translation in Interlingual Conversation
2015
Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15
Language barrier is the primary challenge for effective cross-lingual conversations. ...
We conducted two sets of studies with a total of 23 pairs (46 participants). Participants worked on storytelling tasks to simulate natural conversations with 3 different interface settings. ...
ACKNOWLEDGEMENTS We thank the Skype Translator team at Microsoft for many discussions and the feedback on this research. ...
doi:10.1145/2702123.2702407
dblp:conf/chi/HaraI15
fatcat:vv2ge2eoqbcljitdu6i6nfrrnu
A Scalable Avatar for Conversational User Interfaces
[chapter]
2003
Lecture Notes in Computer Science
Today's challenge is to build a suitable visualization architecture for anthropomorphic conversational user interfaces that will run on different devices such as laptops, PDAs, and mobile phones. ...
Concrete implementations within conversational interfaces are User-Interface Avatars, anthropomorphic representatives based on artificial 2D or 3D characters. ...
Instead, real-time lip sync with the audio output generated by a speech synthesis module is developed. ...
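One common way to realize real-time lip sync from a speech synthesis module is to schedule visemes (mouth shapes) from the phoneme timings the synthesizer reports; the sketch below illustrates that idea with a hypothetical phoneme-to-viseme table and is not taken from the paper.

# Hypothetical phoneme-to-viseme table; unknown phonemes fall back to "neutral".
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "lip-teeth", "V": "lip-teeth",
}

def viseme_track(phoneme_timings):
    """phoneme_timings: list of (phoneme, start_sec, end_sec) from the TTS."""
    track = []
    for phoneme, start, end in phoneme_timings:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        track.append((start, end, viseme))
    return track

# e.g. a short /M AA P/ syllable with synthesizer-reported timings
print(viseme_track([("M", 0.00, 0.08), ("AA", 0.08, 0.22), ("P", 0.22, 0.30)]))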
doi:10.1007/3-540-36572-9_27
fatcat:pwfbo7oda5g2dk4wmwpwfwkyu4
Requirements for an Architecture for Embodied Conversational Characters
[chapter]
1999
Eurographics
In this paper we describe the computational and architectural requirements for systems which support real-time multimodal interaction with an embodied conversational character. ...
We argue that the three primary design drivers are real-time multithreaded entrainment, processing of both interactional and propositional information, and an approach based on a functional understanding ...
We demonstrated our approach with the Rea system. ...
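A toy sketch of the multithreading idea described above, assuming a fast interactional path (immediate feedback such as nods) alongside a slower propositional path (content planning); all names and timings are hypothetical, and this is not the Rea implementation.

import queue, threading, time

events = queue.Queue()
behaviors = []            # behaviors the character would render
lock = threading.Lock()

def interactional_worker():
    # Reacts within milliseconds so the character never seems unresponsive.
    while True:
        event = events.get()
        if event is None:                    # sentinel to stop the thread
            break
        with lock:
            behaviors.append(("nod", event)) # immediate interactional feedback

def propositional_worker(utterance):
    time.sleep(0.2)                          # stands in for NLU and planning
    with lock:
        behaviors.append(("reply", f"response to: {utterance}"))

t = threading.Thread(target=interactional_worker)
t.start()
events.put("user started speaking")
propositional_worker("where is the kitchen?")
events.put(None)
t.join()
print(behaviors)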
doi:10.1007/978-3-7091-6423-5_11
fatcat:i5x5pfndbngproyxcx2s7j6f3m
Embodiment in conversational interfaces
1999
Proceedings of the SIGCHI conference on Human factors in computing systems the CHI is the limit - CHI '99
In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. ...
We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational ...
Such models have been described for other conversational systems: for example, Brennan and Hulteen describe a general framework for applying conversational theory to speech interfaces [7]. ...
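The function-versus-behavior distinction can be illustrated with a small lookup in which a conversational function is realized by whichever surface behaviors the interface's output channels allow; the function and behavior names below are hypothetical examples, not the paper's taxonomy.

# Hypothetical mapping from conversational functions to surface behaviors.
FUNCTION_TO_BEHAVIORS = {
    "take_turn": ["raise_hands", "look_at_user", "start_speaking"],
    "give_feedback": ["head_nod", "say_uh_huh"],
    "yield_turn": ["pause", "look_at_user", "raise_eyebrows"],
    "greet": ["wave", "smile", "say_hello"],
}

def realize(function, channels_available):
    """Pick behaviors for a conversational function, restricted to the output
    channels available (e.g. a voice-only interface cannot nod)."""
    return [b for b in FUNCTION_TO_BEHAVIORS[function] if b in channels_available]

print(realize("give_feedback", channels_available={"say_uh_huh"}))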
doi:10.1145/302979.303150
dblp:conf/chi/CassellBBCCVY99
fatcat:vjinbkrgrjdjzjlbq4ujrbmmpy
Powering interactive intelligent systems with the crowd
2014
Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology - UIST'14 Adjunct
My work focuses on a new model of continuous, real-time crowdsourcing that enables interactive crowd-powered systems. ...
But fully-automated intelligent systems are a far-off goal; currently, machines struggle in many real-world settings because problems can be almost entirely unconstrained and can vary greatly between instances ...
ACKNOWLEDGEMENTS My work is supported by a Microsoft Research Ph.D. Fellowship, Google, and the National Science Foundation. I would also like to thank all of my collaborators on these projects. ...
doi:10.1145/2658779.2661168
dblp:conf/uist/Lasecki14
fatcat:fpgep3uxgfg45pjthjrjjoliei
Untethered gesture acquisition and recognition for a multimodal conversational system
2003
Proceedings of the 5th international conference on Multimodal interfaces - ICMI '03
We present a system that incorporates body tracking and gesture recognition for an untethered human-computer interface. ...
Humans use a combination of gesture and speech to convey meaning, and usually do so without holding a device or pointer. ...
This algorithm recovers the articulated pose of a user in real-time. The system then uses the pose of the 3D body model to recognize full-body gestures of a user. ...
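A minimal sketch of the second stage (gesture recognition from a recovered articulated pose), using hypothetical joint names and rules in place of the paper's algorithm.

# Illustrative only: classify a full-body gesture from named 3D joint positions.
def classify_gesture(pose):
    """pose: dict mapping joint name -> (x, y, z), with y pointing up, in meters."""
    head_y = pose["head"][1]
    left_wrist_y = pose["left_wrist"][1]
    right_wrist_y = pose["right_wrist"][1]
    if left_wrist_y > head_y and right_wrist_y > head_y:
        return "both_hands_raised"
    if right_wrist_y > head_y:
        return "right_hand_raised"   # e.g. usable as a pointing/selection cue
    return "no_gesture"

sample_pose = {"head": (0.0, 1.7, 0.0),
               "left_wrist": (-0.3, 1.1, 0.2),
               "right_wrist": (0.4, 1.9, 0.2)}
print(classify_gesture(sample_pose))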
doi:10.1145/958456.958461
fatcat:kraximkevfe3fgcdqzwrqywxhq
Macaw: An Extensible Conversational Information Seeking Platform
[article]
2019
arXiv
pre-print
Such research will require data and tools, to allow the implementation and study of conversational systems. ...
This paper introduces Macaw, an open-source framework with a modular architecture for CIS research. ...
ACKNOWLEDGEMENTS The authors wish to thank Ahmed Hassan Awadallah, Krisztian Balog, and Arjen P. de Vries for their invaluable feedback. ...
arXiv:1912.08904v1
fatcat:wfdzexyxbrbcnppxjfchbizcqe
On creating multimodal virtual humans—real time speech driven facial gesturing
2010
Multimedia tools and applications
In this paper we present a novel method for automatic speech driven facial gesturing for virtual humans capable of real time performance. ...
Further, we test the method using an application prototype-a system for speech driven facial gesturing suitable for virtual presenters. ...
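As a rough illustration of speech-driven gesturing (not the authors' method), the sketch below triggers facial gestures from simple per-frame prosodic cues; the feature names, thresholds, and gesture choices are assumptions.

def facial_gestures(frames):
    """frames: list of dicts with 'energy' (0..1) and 'pitch_rise' (bool)."""
    gestures = []
    for i, f in enumerate(frames):
        if f["pitch_rise"]:
            gestures.append((i, "eyebrow_raise"))   # emphasize rising intonation
        elif f["energy"] < 0.05:
            gestures.append((i, "blink"))           # blink during pauses
        elif f["energy"] > 0.7:
            gestures.append((i, "head_nod"))        # nod on stressed speech
    return gestures

print(facial_gestures([{"energy": 0.8, "pitch_rise": False},
                       {"energy": 0.02, "pitch_rise": False},
                       {"energy": 0.4, "pitch_rise": True}]))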
Acknowledgments The work was partly carried out within the research project "Embodied Conversational Agents as interface for networked and mobile services" supported by the Ministry of Science, Education ...
doi:10.1007/s11042-010-0526-y
fatcat:ipjwyqaxuneu7esmph4s4slu5a
Design of a Multimodal Input Interface for a Dialogue System
[chapter]
2006
Lecture Notes in Computer Science
The system supports speech input through an ASR and speech output through a TTS, synchronized with an animated face. ...
At the present stage, our main focus is on the development of a multimodal input interface to the system. ...
Two of them are responsible for the interfaces with the user and the centralized system for control of devices (based on a web server). The other block is responsible for the dialogue management. ...
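The three-block decomposition described above might be pictured as in the following sketch, with hypothetical class names standing in for the user-interface block, the device-control block (fronted by a web server in the described system), and the dialogue manager.

class UserInterfaceBlock:
    def recognize(self, user_input):
        # Stands in for ASR plus parsing of the multimodal input.
        return {"intent": "turn_on", "device": "lamp"}
    def speak(self, text):
        print("TTS + animated face:", text)

class DeviceControlBlock:
    def execute(self, command):
        # In the described system this would go through the web server.
        return f"{command['device']} is now on"

class DialogueManagerBlock:
    def __init__(self, ui, devices):
        self.ui, self.devices = ui, devices
    def handle(self, user_input):
        command = self.ui.recognize(user_input)
        result = self.devices.execute(command)
        self.ui.speak(result)

DialogueManagerBlock(UserInterfaceBlock(), DeviceControlBlock()).handle("turn on the lamp")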
doi:10.1007/11751984_18
fatcat:7waojtw6x5gjxo4dbncg2zxqzq
Untethered gesture acquisition and recognition for a multimodal conversational system
2003
Proceedings of the 5th international conference on Multimodal interfaces - ICMI '03
We present a system that incorporates body tracking and gesture recognition for an untethered human-computer interface. ...
Humans use a combination of gesture and speech to convey meaning, and usually do so without holding a device or pointer. ...
This algorithm recovers the articulated pose of a user in real-time. The system then uses the pose of the 3D body model to recognize full-body gestures of a user. ...
doi:10.1145/958432.958461
dblp:conf/icmi/KoDD03
fatcat:7uicmnrki5fktpuzekhulm4fwi
A Russian Keyword Spotting System Based on Large Vocabulary Continuous Speech Recognition and Linguistic Knowledge
2016
Journal of Electrical and Computer Engineering
The paper describes the key concepts of a word spotting system for Russian based on large vocabulary continuous speech recognition. ...
The system is based on CMU Sphinx open-source speech recognition platform and on the linguistic models and algorithms developed by Speech Drive LLC. ...
Acknowledgments The authors would like to thank SpRecord LLC authorities for providing real-world telephone-quality data used in training and testing of the keyword spotting system described in this paper ...
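A toy sketch of keyword spotting on top of LVCSR output: the recognized word sequence is searched for inflected forms of each keyword, with a tiny morphology table standing in for the linguistic knowledge mentioned above; this is not the described system, and the example forms are assumptions.

KEYWORD_FORMS = {
    # keyword -> surface forms (real Russian morphology has many more)
    "договор": {"договор", "договора", "договору", "договором"},
}

def spot_keywords(recognized_words, keyword_forms=KEYWORD_FORMS):
    hits = []
    for position, word in enumerate(recognized_words):
        for keyword, forms in keyword_forms.items():
            if word.lower() in forms:
                hits.append((position, keyword))
    return hits

# e.g. an LVCSR transcript of a short telephone utterance
print(spot_keywords(["мы", "подписали", "договор", "вчера"]))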
doi:10.1155/2016/4062786
fatcat:7jhohy6kerbuln7drrwcqfizcq
Effects of public vs. private automated transcripts on multiparty communication between native and non-native english speakers
2014
Proceedings of the 32nd annual ACM conference on Human factors in computing systems - CHI '14
Previous studies of ASR have focused on how transcripts aid NNS speech comprehension. In this study, we examine whether transcripts benefit multiparty real-time conversation between NS and NNS. ...
Real-time transcripts generated by automated speech recognition (ASR) technologies have the potential to facilitate communication between native speakers (NS) and non-native speakers (NNS). ...
We also thank the NTT development team for their technical support and the anonymous reviewers for their valuable comments. ...
doi:10.1145/2556288.2557303
dblp:conf/chi/GaoYHEF14
fatcat:z4sol7k7ifgd5j5txqwxdiejtm