
Introducing multimodal character agents into existing web applications

Kimihito Ito
2005 Special interest tracks and posters of the 14th international conference on World Wide Web - WWW '05  
This paper proposes a framework in which end-users can instantaneously modify existing Web applications by introducing a multimodal user interface.  ...  The authors use the IntelligentPad architecture and MPML as the basis of the framework. Example applications include character agents that read the latest news on a news Web site.  ...  Tanaka at Hokkaido University, for his kind help and appropriate advice on this research.  ...
doi:10.1145/1062745.1062821 dblp:conf/www/Ito05 fatcat:geicck5kendzlkyebtq5wci2ye

A Natural Conversational Virtual Human with Multimodal Dialog System

Itimad Raheem Ali, Ghazali Sulong, Ahmad Hoirul Basori
2014 Jurnal Teknologi  
This paper specifically introduces the concept of multimodal dialog systems for virtual characters and focuses on the output part of such systems.  ...  Making the virtual human character realistic and credible in a real-time automated dialog animation system is necessary.  ...  Multi-Agent Systems Architectures Multi-agent systems research suggests several concrete architectures for controlling intelligent agents.  ...
doi:10.11113/jt.v71.3859 fatcat:rtvoojoulrhargjk6xbrkmslwe

VOILA: An Optimised Dialogue System for Interactively Learning Visually-Grounded Word Meanings (Demonstration System)

Yanchao Yu, Arash Eshghi, Oliver Lemon
2017 Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue  
We present VOILA: an optimised, multimodal dialogue agent for interactive learning of visually grounded word meanings from a human user.  ...  VOILA is deployed on Furhat, a human-like, multi-modal robot head with back-projection of the face, and a graphical virtual character.  ...  Acknowledgements This research is supported by the EPSRC, under grant number EP/M01553X/1 (BABBLE project).  ...
doi:10.18653/v1/w17-5524 dblp:conf/sigdial/YuEL17 fatcat:e3euu6jywnajxcikviemlq442y

Conversational User Interfaces

Wolfgang Wahlster
2004 it - Information Technology  
The paper presents a flexible framework for such a multi-application dialogue system and an application-independent scheme for dialogue processing.  ...  A blackboard architecture is described for the fusion of the speech and gesture analysis results. The paper by André and Rist presents new research in the area of embodied conversational characters.  ...
doi:10.1524/itit.46.6.289.54685 fatcat:7ygwthu5hvfuvcw6trinyb6hkm

The Neem Platform: An Extensible Framework for the Development of Perceptual Collaborative Applications [chapter]

P. Barthelmess, C. A. Ellis
2002 Lecture Notes in Computer Science  
The Neem Platform is a research test bed for Project Neem, concerned with the development of socially and culturally aware group systems.  ...  The Neem Platform is a generic framework for the development of augmented collaborative applications, mainly targeting synchronous distributed collaboration over the Internet.  ...  The distributed collaboration environment provides support for participants' interaction through NICs, and the multi-agent environment supports back-end augmentation functionality, such as multimodal processing  ...
doi:10.1007/3-540-45785-2_44 fatcat:d6cqmi5dxjhtpcihkywcctxuwa

Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application

Nikolaus Bee, Johannes Wagner, Elisabeth André, Thurid Vogt, Fred Charles, David Pizzi, Marc Cavazza
2010 International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction on - ICMI-MLMI '10  
We present an interactive eye gaze model for embodied conversational agents in order to improve the experience of users participating in Interactive Storytelling.  ...  The interactive model achieved a higher score in user ratings than the non-interactive model. In addition, we analyzed the users' gaze behavior during the conversation with the virtual character.  ...  The focus of this work is the regulation of conversational flow in a multi-agent environment.  ...
doi:10.1145/1891903.1891915 dblp:conf/icmi/BeeWAVCPC10 fatcat:wxpodsicpncnlie2ozgi6vcmca

Toward a Universal Platform for Integrating Embodied Conversational Agent Components [chapter]

Hung-Hsuan Huang, Tsuyoshi Masuda, Aleksandra Cerekovic, Kateryna Tarasenko, Igor S. Pandzic, Yukiko Nakano, Toyoaki Nishida
2006 Lecture Notes in Computer Science  
Embodied Conversational Agents (ECAs) are computer-generated life-like characters that interact with human users in face-to-face conversations.  ...  To achieve natural multi-modal conversations, ECA systems are very sophisticated, require many building assemblies, and are thus difficult for individual research groups to develop.  ...  We plan to produce a preliminary release for a field test in our proposed project at the eNTERFACE'06 [3] summer workshop on multimodal interfaces.  ...
doi:10.1007/11893004_28 fatcat:pyhifkz6jzf7zmv3vvsdxa5ese

Towards a Multimedia Knowledge-Based Agent with Social Competence and Human Interaction Capabilities

Leo Wanner, Ioannis Kompatsiaris, Elisabeth André, Florian Lingenfelser, Gregor Mehlmann, Andries Stam, Ludo Stellingwerff, Bianca Vieru, Lori Lamel, Wolfgang Minker, Louisa Pragst, Josep Blat (+8 others)
2016 Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction - MARMI '16  
from the web needed for conducting a conversation on a given topic.  ...  We present work in progress on an intelligent embodied conversation agent in the basic care and healthcare domain.  ...  Multimodal Knowledge Representation Our multimodal knowledge representation framework in the Knowledge Base (KB) includes a number of ontologies designed to support the dialogue with the user and to represent  ... 
doi:10.1145/2927006.2927011 dblp:conf/mir/WannerBDDLMSKVK16 fatcat:h5znrszjszcujenxbw7zl2w7ui

Conversational Characters that Support Interactive Play and Learning for Children [chapter]

Andrea Corradini, Manish Mehta, Klaus Robering
2009 Multiagent Systems  
Acknowledgements We would like to thank Marcela Charfuelan, Dymtro Kupkin, Holmer Hemsen and Mykola Kolodnytsky for design and programming support and Svend Killerich for data entry.  ...  Finally, we think that such a complex system could have been implemented only within the framework of a multi-agent architecture.  ...  These objects had a central role in the writer's life and thus offer a topic of conversation to the user and form the basis for multimodal interaction with the character.  ... 
doi:10.5772/6610 fatcat:ldcyusm5bjfspnlsecspir5jlu

Towards Symmetric Multimodality: Fusion and Fission of Speech, Gesture, and Facial Expression [chapter]

Wolfgang Wahlster
2003 Lecture Notes in Computer Science  
We present the SmartKom system, which provides full symmetric multimodality in a mixed-initiative dialogue system with an embodied conversational agent.  ...  We introduce the notion of symmetric multimodality for dialogue systems in which all input modes (e.g. speech, gesture, facial expression) are also available for output, and vice versa.  ...  As a resource-adaptive multimodal system, the SmartKom architecture supports a flexible embodiment of the life-like character that is used as a conversational partner in multimodal dialogue.  ...
doi:10.1007/978-3-540-39451-8_1 fatcat:sqdnkrur2zcpdmiainiy2ydybm

Richly Connected Systems and Multi-Device Worlds

Bill Tomlinson, Man Lok Yau, Eric Baumer, Joel Ross, Andrew Correa, Gang Ji
2009 Presence - Teleoperators and Virtual Environments  
agents that inhabit the multi-device system.  ...  The core contribution of this paper is a novel framework for collocated multi-device systems; by presenting this framework, this paper lays the groundwork for a wide range of potential applications.  ...  It was supported by the California Institute for Telecommunications and Information Technology (Calit2) and the Donald Bren School of Information and Computer Sciences.  ... 
doi:10.1162/pres.18.1.54 fatcat:562g4chiljbqrlzcbci3iiil34

All Together Now [chapter]

Arno Hartholt, David Traum, Stacy C. Marsella, Ari Shapiro, Giota Stratou, Anton Leuski, Louis-Philippe Morency, Jonathan Gratch
2013 Lecture Notes in Computer Science  
We help address this challenge by introducing the ICT Virtual Human Toolkit, which offers a flexible framework for exploring a variety of different types of virtual human systems, from virtual listeners  ...  into a single framework that allows us to efficiently create characters that can engage users in meaningful and realistic social interactions.  ...  Rather than focusing on a specific type of agent, the Toolkit offers a flexible framework for exploring the vast space of different types of agent systems.  ...
doi:10.1007/978-3-642-40415-3_33 fatcat:2dcigna2xfbqhgiyi42233ztxu

VirtualHuman

Norbert Reithinger, Patrick Gebhard, Markus Löckelt, Alassane Ndiaye, Norbert Pfleger, Martin Klesen
2006 Proceedings of the 8th international conference on Multimodal interfaces - ICMI '06  
It provides a knowledge-based framework to create interactive applications in a multi-user, multi-agent setting.  ...  Natural multimodal interaction with realistic virtual characters provides rich opportunities for entertainment and education. In this paper we present the current VirtualHuman demonstrator system.  ...  CONCLUSIONS In this paper, we presented an overview of the VirtualHuman system.  ...
doi:10.1145/1180995.1181007 dblp:conf/icmi/ReithingerGLNPK06 fatcat:yipx3n2xgvdvjjfsoqi6nh72ju

A flexible platform for building applications with life-like characters

Thomas Rist, Elisabeth André, Stephan Baldes
2003 Proceedings of the 8th international conference on Intelligent user interfaces - IUI '03  
In recent years, an increasing number of R&D projects have started to deploy life-like characters for presentation tasks in a diverse range of application areas, including, for example, E-Commerce  ...  In this contribution, we first analyse a number of existing user interfaces with presentation characters from an architectural point of view.  ...  Thanks to Peter Rist for designing the characters shown in Fig. 1-3.  ...
doi:10.1145/604050.604071 fatcat:2tm6wmzd75hghmitrq5maijcgu

A Review of the Development of Embodied Presentation Agents and Their Application Fields [chapter]

Thomas Rist, Elisabeth André, Stephan Baldes, Patrick Gebhard, Martin Klesen, Michael Kipp, Peter Rist, Markus Schmitt
2004 Cognitive Technologies  
Embodied conversational agents provide a promising option for presenting information to users.  ...  While in all systems the purpose of using characters is to convey information to the user, there are significant variations in the style of presentation and the assumed conversational setting.  ...  than a mere trace log-file as known from multi-agent expert systems.  ... 
doi:10.1007/978-3-662-08373-4_16 fatcat:kq4bsnoyfreshclfrkav6447uy