5,399 Hits in 6.4 sec

References to graphical objects in interactive multimodal queries

Daqing He, Graeme Ritchie, John Lee
2008 Knowledge-Based Systems  
In a system where graphical objects such as diagrams may be on the screen during the language interaction, there is a possibility that the user may want to allude to these visual entities.  ...  Traditionally, interactive natural language systems assume a semantic model in which the entities referred to are in some abstract representation of a real or imagined world.  ...  [6] discuss natural language expressions using the graphical features of objects to refer to those objects. Their scenarios involve querying and displaying a local area network.  ... 
doi:10.1016/j.knosys.2008.03.023 fatcat:gas6oxcyz5gqdhjutcv5yqt6dy

Two multimodal interfaces to military simulations

Kenneth Wauchope
1997 Proceedings of the fifth conference on Applied natural language processing Descriptions of system demonstrations and videos -  
We also sought to have the two modalities interact in a number of ways, such as deictic reference (combined pointing and speaking) and NL interaction with graphical dialog windows.  ...  Since with LACE there was no graphical command interface to mirror in natural language, we opted instead to focus on database query (also included in Eucalyptus) and the issuing of verbal on-road route  ... 
doi:10.3115/974281.974294 dblp:conf/anlp/Wauchope97 fatcat:vbpdwj5q6vagpnrdwwufsyote4

MIAMM — A Multimodal Dialogue System Using Haptics [chapter]

Norbert Reithinger, Dirk Fedeler, Ashwani Kumar, Christoph Lauer, Elsa Pecourt, Laurent Romary
2005 Text, Speech and Language Technology  
Its objective is the development of new concepts and techniques for user interfaces employing graphics, haptics and speech to allow fast and easy navigation in large amounts of data.  ...  In this chapter we describe the MIAMM project.  ...  Typical individuals in the MIAMM environment are the user, multimedia objects and graphical objects.  ... 
doi:10.1007/1-4020-3933-6_14 fatcat:smx26ejhwnasxorzgjf7f5xrhy

Resolving References to Graphical Objects in Multimodal Queries by Constraint Satisfaction [chapter]

Daqing He, Graeme Ritchie, John Lee
2000 Lecture Notes in Computer Science  
In natural language queries to an intelligent multimodal system, ambiguities related to referring expressions (source ambiguities) can occur between items in the visual display and objects in the domain.  ...  [2] mentioned the issue of referring to objects by using their graphical features, and indicated that, to handle this type of reference, the graphical attributes on the screen should, like those in  ... 
doi:10.1007/3-540-40063-x_2 fatcat:oammjea7knernm55r6znuy5a7e
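The constraint-satisfaction approach summarized in the snippet above can be illustrated with a minimal sketch (not taken from the paper; all entities, attribute names, and constraints here are hypothetical): candidate referents drawn from both the visual display and the domain model are filtered against the constraints contributed by the referring expression, which can resolve the source ambiguity as a side effect.

```python
# Illustrative sketch of resolving a referring expression ("the red circle")
# against candidates drawn from two sources: the visual display and the
# underlying domain model. All data and attribute names are hypothetical.

candidates = [
    {"id": "d1", "source": "display", "shape": "circle", "colour": "red"},
    {"id": "d2", "source": "display", "shape": "square", "colour": "red"},
    {"id": "m1", "source": "domain",  "type": "router",  "colour": None},
]

def satisfies(entity, constraints):
    """An entity satisfies a constraint set if every attribute matches."""
    return all(entity.get(attr) == value for attr, value in constraints.items())

def resolve(constraints, candidates):
    """Keep only candidates (from either source) meeting all constraints."""
    return [e for e in candidates if satisfies(e, constraints)]

# "the red circle" contributes two constraints; only d1 survives, which
# also settles the source ambiguity (display vs. domain) for this query.
print(resolve({"shape": "circle", "colour": "red"}, candidates))
```

A real resolver would of course propagate richer constraints (spatial relations, discourse salience), but the filtering step above is the core of the constraint-satisfaction view.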

I-SEARCH: A Unified Framework for Multimodal Search and Retrieval [chapter]

Apostolos Axenopoulos, Petros Daras, Sotiris Malassiotis, Vincenzo Croce, Marilena Lazzaro, Jonas Etzold, Paul Grimm, Alberto Massari, Antonio Camurri, Thomas Steiner, Dimitrios Tzovaras
2012 Lecture Notes in Computer Science  
The I-SEARCH multimodal search engine adapts dynamically to the end user's device, which can vary from a simple mobile phone to a high-performance PC.  ...  In this article, a unified framework for multimodal search and retrieval is introduced. The framework is an outcome of the research that took place within the I-SEARCH European Project.  ...  This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original  ... 
doi:10.1007/978-3-642-30241-1_12 fatcat:e7mfkqvdjrgiplbujatfhzwv4u

Synergistic use of direct manipulation and natural language

P. R. Cohen, M. Dalrymple, D. B. Moran, F. C. Pereira, J. W. Sullivan
1989 Proceedings of the SIGCHI conference on Human factors in computing systems Wings for the mind - CHI '89  
Natural language helps direct manipulation in being able to specify objects and actions by description, while direct manipulation enables users to learn which objects and actions are available in the system  ...  Furthermore, graphical rendering and manipulation of context provides a partial solution to difficult problems of natural language anaphora.  ...  We would like to thank Bill Mark and Martha Pollack for valuable commentary on the paper.  ... 
doi:10.1145/67449.67494 dblp:conf/chi/CohenDMPSGST89 fatcat:waesh26ucbgi3cewafolpgsqz4

Synergistic use of direct manipulation and natural language

P. R. Cohen, M. Dalrymple, D. B. Moran, F. C. Pereira, J. W. Sullivan
1989 ACM SIGCHI Bulletin  
Natural language helps direct manipulation in being able to specify objects and actions by description, while direct manipulation enables users to learn which objects and actions are available in the system  ...  Furthermore, graphical rendering and manipulation of context provides a partial solution to difficult problems of natural language anaphora.  ...  We would like to thank Bill Mark and Martha Pollack for valuable commentary on the paper.  ... 
doi:10.1145/67450.67494 fatcat:hmo545p5tnd4hceldfujo2owpi

A user interface framework for multimodal VR interactions

Marc Erich Latoschik
2005 Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05  
...  multimodally initialized gestural interactions.  ...  This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments.  ...  Implementation and acknowledgment: the core framework is implemented in C++. Additionally, all objects have bindings to the SCHEME scripting language.  ... 
doi:10.1145/1088463.1088479 dblp:conf/icmi/Latoschik05 fatcat:u6l5l7zqyzdszosbxd4qrne6di

Multimodal maps: An agent-based approach [chapter]

Adam Cheyer, Luc Julia
1998 Lecture Notes in Computer Science  
In this paper, we discuss how multiple input modalities may be combined to produce more natural user interfaces.  ...  To implement the described application, a hierarchical distributed network of heterogeneous software agents was augmented by appropriate functionality for developing synergistic multimodal applications  ...  The PAC-Amodeus systems such as VoicePaint and Notebook allow the user to synergistically combine vocal or mouse-click commands when interacting with notes or graphical objects.  ... 
doi:10.1007/bfb0052316 fatcat:ftponhj4hzdyvhp6itioozflnu

Integration and synchronization of input modes during multimodal human-computer interaction

Sharon Oviatt, Antonella DeAngeli, Karen Kuhn
1997 Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '97  
To provide a foundation for theory and design, the present research analyzed multimodal interaction while people spoke and wrote to a simulated dynamic map system.  ...  Keywords multimodal interaction, integration and synchronization, speech and pen input, dynamic interactive maps, spatial location information, predictive modeling  ...  ACKNOWLEDGMENTS Thanks to A. Cheyer, M. Reinfreid, and L. Waugh for assistance with programming the simulation and acting as simulation assistant, and to P. Schmidt and D.  ... 
doi:10.1145/258549.258821 dblp:conf/chi/OviattDK97 fatcat:juegvbftkbh2vnobyvzoa7enmq
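The integration-and-synchronization question studied above (how speech and pen input line up in time) is often handled in practice by grouping events that fall within a temporal window. The sketch below is illustrative only, not the paper's method or data; the window size and event format are assumptions.

```python
# Illustrative sketch (not the paper's method): fusing speech and pen input
# events into multimodal commands when they fall within a time window.

def fuse(speech_events, pen_events, window=2.0):
    """Pair each speech event with pen events occurring within `window`
    seconds of it; speech with no nearby pen input stays unimodal."""
    fused = []
    for s in speech_events:
        nearby = [p for p in pen_events if abs(p["t"] - s["t"]) <= window]
        fused.append({"speech": s["text"], "pen": [p["point"] for p in nearby]})
    return fused

speech = [{"t": 1.0, "text": "put a hospital here"}]
pen = [{"t": 1.4, "point": (120, 88)}, {"t": 9.0, "point": (5, 5)}]
print(fuse(speech, pen))
```

One empirical point the paper's genre of research makes is that users do not always overlap modes exactly, which is why a tolerance window rather than strict simultaneity is the usual design choice.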

A Multimodal Search Engine for Medical Imaging Studies

Eduardo Pinho, Tiago Godinho, Frederico Valente, Carlos Costa
2016 Journal of digital imaging  
In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied  ...  Keywords: Graphical user interface (GUI); Information storage and retrieval; PACS; Reproducibility of results; Software design; Multimodal information retrieval; Query fusion; Web services  ...  /multimodal/stash provides the means to store media objects for use in future queries.  ... 
doi:10.1007/s10278-016-9903-z pmid:27561754 pmcid:PMC5267596 fatcat:6zplir2l6zdn3ktpfuwpr2ykna
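The stash mechanism mentioned in the snippet (a service that stores media objects so that later queries can reference them, e.g. for query-by-example) can be mimicked with a small in-memory sketch. The class and method names below are illustrative stand-ins, not the system's actual API.

```python
import uuid

class MediaStash:
    """Minimal in-memory stand-in for a media-stash service: store a media
    object once, then reference it by id in later query requests.
    Names are illustrative, not the actual service interface."""

    def __init__(self):
        self._store = {}

    def stash(self, content: bytes, mime_type: str) -> str:
        """Store a media object and return an id usable in future queries."""
        object_id = uuid.uuid4().hex
        self._store[object_id] = {"content": content, "mime_type": mime_type}
        return object_id

    def fetch(self, object_id: str) -> dict:
        """Retrieve a previously stashed media object by its id."""
        return self._store[object_id]

stash = MediaStash()
oid = stash.stash(b"\x89PNG...", "image/png")
print(stash.fetch(oid)["mime_type"])  # image/png
```

The design point is decoupling upload from querying: a client stashes a large object once and then refers to it cheaply by id across several multimodal queries.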

Context-Aware Querying for Multimodal Search Engines [chapter]

Jonas Etzold, Arnaud Brousseau, Paul Grimm, Thomas Steiner
2012 Lecture Notes in Computer Science  
In this paper, we present our work in the context of the I-SEARCH project, which aims at enabling context-aware querying of a multimodal search framework including real-world data such as user location  ...  We introduce the concepts of MuSe-Bag for multimodal query interfaces, UIIFace for multimodal interaction handling, and CoFind for collaborative search as the core components behind the I-SEARCH multimodal  ...  Acknowledgments This work is partly funded by the EU FP7 I-SEARCH project under project reference 248296. We would like to thank all of the partners in the I-SEARCH project for their support.  ... 
doi:10.1007/978-3-642-27355-1_77 fatcat:dg23c6oudvb6zdmifj4y5lxwke

Multimodal Dialogue Systems: A Case Study for Interactive TV [chapter]

Aseel Ibrahim, Pontus Johansson
2003 Lecture Notes in Computer Science  
In this case study we have shown the advantages of combining natural language and a graphical interface in the interactive TV domain.  ...  In this paper we describe a multimodal dialogue TV program guide system that is a research prototype built for the case study by adding speech interaction to an already existing TV program guide.  ...  Acknowledgements This work is a result from a project on multimodal interaction for information services supported by Santa Anna IT Research/SITI and VINNOVA [13] .  ... 
doi:10.1007/3-540-36572-9_17 fatcat:5m3umqxs5zfivbi7wdaz7ivb7i

SmartKom

Norbert Reithinger, Michael Streit, Valentin Tschernomas, Jan Alexandersson, Tilman Becker, Anselm Blocher, Ralf Engel, Markus Löckelt, Jochen Müller, Norbert Pfleger, Peter Poller
2003 Proceedings of the 5th international conference on Multimodal interfaces - ICMI '03  
In this paper we present a generic multimodal interface system where the user interacts with an anthropomorphic personalized interface agent using speech and natural gestures.  ...  We demonstrate the main ideas in a walk through the main processing steps from modality fusion to modality fission.  ...  ACKNOWLEDGEMENTS The system was developed in the context of the "Human Computer Interaction" research program funded by the German Federal Ministry of Education and Research from 1999 to 2003 under grant  ... 
doi:10.1145/958432.958454 dblp:conf/icmi/ReithingerABBELMPPST03 fatcat:apfqppgainbw3kox45xzeyhhja

Showing results 1 — 15 out of 5,399 results