
COMPASS2008: Multimodal, Multilingual and Crosslingual Interaction for Mobile Tourist Guide Applications [chapter]

Ilhan Aslan, Feiyu Xu, Hans Uszkoreit, Antonio Krüger, Jörg Steffen
2005 Lecture Notes in Computer Science  
The main goals of COMPASS2008 are to help foreigners overcome language barriers in Beijing and assist them in finding information anywhere and anytime they need it.  ...  Novel strategies have been developed to exploit the interaction of multimodality, multilinguality and cross-linguality for intelligent information service access and information presentation via mobile  ...  In this paper we would like to bring together both lines of research by introducing translation techniques, multilinguality and cross-linguality into the design of mobile multimodal interfaces.  ...
doi:10.1007/11590323_1 fatcat:biwzi2cihnauplyep3j5it3ybm

Testing Two Tools for Multimodal Navigation

Mats Liljedahl, Stefan Lindberg, Katarina Delsing, Mikko Polojärvi, Timo Saloranta, Ismo Alakärppä
2012 Advances in Human-Computer Interaction  
This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval.  ...  Rather than interpreting maps, users can search for information by pointing in a direction and database queries can be created from GPS location and compass data.  ...  Acknowledgments The II City project was funded by EU Interreg IV A North, the County Administrative Board of Norrbotten (Länsstyrelsen i Norrbotten), Sweden, the Regional Council of Lapland (Lapin Liitto  ... 
doi:10.1155/2012/251384 fatcat:perwtgugpjfotiwo5g2vltgv2q
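The entry above builds database queries from GPS position and compass heading. As a rough illustration only (not from the paper; the sector width, range and coordinates are arbitrary assumptions), a pointing gesture can be turned into a simple geographic filter:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine)."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pointing_query(user_lat, user_lon, heading_deg, pois,
                   sector_half_angle=20.0, max_range_m=500.0):
    """Return points of interest inside the sector the user is pointing at."""
    hits = []
    for name, lat, lon in pois:
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon) - heading_deg + 180) % 360 - 180)
        if diff <= sector_half_angle and distance_m(user_lat, user_lon, lat, lon) <= max_range_m:
            hits.append(name)
    return hits

# Example: a user at a street corner pointing roughly north-east (45 degrees).
pois = [("cafe", 65.5850, 22.1560), ("museum", 65.5830, 22.1500)]
print(pointing_query(65.5840, 22.1540, 45.0, pois))
```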

MEMODULES as Tangible Shortcuts to Multimedia Information [chapter]

Elena Mugellini, Denis Lalanne, Bruno Dumas, Florian Evéquoz, Sandro Gerardi, Anne Le Calvé, Alexandre Boder, Rolf Ingold, Omar Abou Khaled
2009 Lecture Notes in Computer Science  
This center of competence will have the mission to place humans at the center of technology design and further to disseminate knowledge through the creation of a course on "Multimodal Interfaces" both  ...  This course will give students a wide overview of existing techniques for designing and implementing multimodal interfaces.  ... 
doi:10.1007/978-3-642-00437-7_5 fatcat:djnjnib2oreczf37gxmihkfqwe

I-SEARCH: A Unified Framework for Multimodal Search and Retrieval [chapter]

Apostolos Axenopoulos, Petros Daras, Sotiris Malassiotis, Vincenzo Croce, Marilena Lazzaro, Jonas Etzold, Paul Grimm, Alberto Massari, Antonio Camurri, Thomas Steiner, Dimitrios Tzovaras
2012 Lecture Notes in Computer Science  
In this article, a unified framework for multimodal search and retrieval is introduced. The framework is an outcome of the research that took place within the I-SEARCH European Project.  ...  All I-SEARCH components advance the state of the art in the corresponding scientific fields.  ...  This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original  ... 
doi:10.1007/978-3-642-30241-1_12 fatcat:e7mfkqvdjrgiplbujatfhzwv4u

Intelligent Multimedia Communication [chapter]

Mark Maybury, Oliviero Stock, Wolfgang Wahlster
1998 Lecture Notes in Computer Science  
Multimedia communication is a part of everyday life and its appearance in computer applications is increasing in frequency and diversity.  ...  This article defines the area of intelligent multimedia communication, outlines fundamental research questions, summarizes the associated scientific and technical history, identifies current challenges  ...  The generation system, part of the output presentation system, is influenced by the user's interest model that develops in the course of the multimodal interaction.  ... 
doi:10.1007/3-540-49653-x_1 fatcat:jbd4454lnzc23nrhoma4v3u2ge

PicSOM Experiments in TRECVID 2005

Markus Koskela, Jorma Laaksonen, Mats Sjöberg, Hannes Muurinen
2005 TREC Video Retrieval Evaluation  
In the high-level feature extraction task, we applied a method of representing semantic concepts as class models on a set of parallel Self-Organizing Maps (SOMs).  ...  Our small-scale interactive search experiments were performed with our prototype retrieval interface supporting only relevance feedback-based retrieval.  ...  ACKNOWLEDGEMENTS This work was supported by the Academy of Finland in the projects Neural methods in information retrieval based on automatic content analysis and relevance feedback and New information  ...
dblp:conf/trecvid/KoskelaLSM05 fatcat:ywlqpkobyva3hlfqhwf5mtelvi
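The PicSOM entry represents semantic concepts as class models on Self-Organizing Maps and retrieves with relevance feedback. The sketch below is not the PicSOM implementation; assuming only that each image is a feature vector mapped to its best-matching SOM unit, it scores unseen images by how often relevant examples landed on the same unit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" SOM codebook: a 10x10 grid of 8-dimensional unit prototypes.
# (Real codebooks are trained on large image collections; these are random,
# purely to keep the sketch self-contained.)
grid_h, grid_w, dim = 10, 10, 8
codebook = rng.normal(size=(grid_h * grid_w, dim))

def bmu(vector):
    """Index of the best-matching unit for a feature vector."""
    return int(np.argmin(np.linalg.norm(codebook - vector, axis=1)))

def class_model(positive_vectors):
    """Class model = relative hit frequency of relevant examples per SOM unit."""
    hits = np.zeros(grid_h * grid_w)
    for v in positive_vectors:
        hits[bmu(v)] += 1.0
    return hits / max(hits.sum(), 1.0)

def score(model, vector):
    """Score an unseen image by the class-model value at its best-matching unit."""
    return model[bmu(vector)]

positives = rng.normal(loc=1.0, size=(20, dim))   # images the user marked relevant
candidates = rng.normal(size=(5, dim))            # unseen database images
model = class_model(positives)
print([round(score(model, c), 3) for c in candidates])
```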

Perlustration on Image Processing under Free Hand Sketch Based Image Retrieval

S. Amarnadh, P.V.G.D. Reddy, N.V.E.S. Murthy
2018 EAI Endorsed Transactions on Internet of Things  
In general, information retrieval has taken vast diversions in visualizing content presentation for users who generate queries for the system, where it includes the concept of content-based  ...  Image Retrieval to provide the results in a better way by adapting approaches like Text Based Image Retrieval (TBIR) and Sketch Based Image Retrieval (SBIR).  ...  In earlier days the user interfaces were designed with respect to the conventional use of traditional desktops or laptops.  ...
doi:10.4108/eai.21-12-2018.159334 fatcat:2wjongwrhrfflm2amb3zyd52b4

Design of a Tourist Driven Bandwidth Determined MultiModal Mobile Presentation System [chapter]

Anthony Solon, Paul Mc Kevitt, Kevin Curran
2004 Lecture Notes in Computer Science  
This paper concentrates on the motivation for, and the issues surrounding, such intelligent systems.  ...  TeleMorph is a tourist information system which aims to dynamically generate multimedia presentations using output modalities that are determined by the bandwidth available on a mobile device's connection  ...  The objectives of TeleMorph are: (1) receive and interpret questions from the user, (2) map questions to a multimodal semantic representation, (3) match the multimodal representation to a knowledge base to retrieve  ...
doi:10.1007/978-3-540-30178-3_32 fatcat:v7ug5g4hpff33p3v3xwfwgjcri
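TeleMorph selects output modalities from the bandwidth available on the mobile connection. A purely illustrative decision rule (the thresholds and modality names are assumptions, not taken from the paper):

```python
def choose_modalities(bandwidth_kbps):
    """Pick presentation modalities that fit the currently available bandwidth.
    Thresholds and modality names are illustrative assumptions only."""
    if bandwidth_kbps >= 512:
        return ["video", "speech", "text"]
    if bandwidth_kbps >= 128:
        return ["still_images", "speech", "text"]
    if bandwidth_kbps >= 32:
        return ["speech", "text"]
    return ["text"]

for bw in (1000, 200, 40, 8):
    print(bw, "kbps ->", choose_modalities(bw))
```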

Personalizing Virtual and Augmented Reality for Cultural Heritage Indoor and Outdoor Experiences [article]

Fotis Liarokapis, Stella Sylaiou, David Mountain
2008 VAST: International Symposium on Virtual Reality  
Different case studies illustrate the majority of the capabilities of the multimodal interfaces used and also how personalisation and customisation can be performed in both kiosk and mobile guide exhibitions  ...  Our solution takes into account the diverse needs of visitors to heritage and mobile guide exhibitions allowing for multimedia representations of the same content but using diverse interfaces including  ...  Acknowledgments Part of the work presented in this paper was conducted within the EU FP5 ARCO project as well as the LOCUS project, funded by EPSRC, and the EU FP5 WebPark project.  ... 
doi:10.2312/vast/vast08/055-062 fatcat:zujerdkebnbdnelrlebfv6n65m

MVA: The Multimodal Virtual Assistant

Michael Johnston, John Chen, Patrick Ehlen, Hyuckchul Jung, Jay Lieske, Aarthi Reddy, Ethan Selfridge, Svetlana Stoyanchev, Brant Vasilieff, Jay Wilpon
2014 Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)  
This demonstration will highlight incremental recognition, multimodal speech and gesture input, contextually-aware language understanding, and the targeted clarification of potentially incorrect segments  ...  The Multimodal Virtual Assistant (MVA) is an application that enables users to plan an outing through an interactive multimodal dialog with a mobile device.  ...  Acknowledgements Thanks to Mike Kai and to Deepak Talesra for their work on the MVA project.  ... 
doi:10.3115/v1/w14-4335 dblp:conf/sigdial/JohnstonCEJLRSS14 fatcat:qm2nress6vdc3k7sj7xeq27woe

Context-Aware Querying for Multimodal Search Engines [chapter]

Jonas Etzold, Arnaud Brousseau, Paul Grimm, Thomas Steiner
2012 Lecture Notes in Computer Science  
In this paper, we present our work in the context of the I-SEARCH project, which aims at enabling context-aware querying of a multimodal search framework including real-world data such as user location  ...  We introduce the concepts of MuSe-Bag for multimodal query interfaces, UIIFace for multimodal interaction handling, and CoFind for collaborative search as the core components behind the I-SEARCH multimodal  ...  Acknowledgments This work is partly funded by the EU FP7 I-SEARCH project under project reference 248296. We would like to thank all of the partners in the I-SEARCH project for their support.  ... 
doi:10.1007/978-3-642-27355-1_77 fatcat:dg23c6oudvb6zdmifj4y5lxwke

Haptic Interfaces for Individuals with Visual Impairments

Benjamin Vercellone, John Shelestak, Yaser Dhaher, Robert Clements
2018 G|A|M|E The Italian Journal of Game Studies  
Here we present the rationale and benefits for using this type of multimodal interaction for individuals with visual impairments as well as the current state of the art in haptic interfaces for gaming,  ...  Clearly, the inclusion of these additional modalities will provide a method for individuals with visual impairments to enjoy and interact with the virtual worlds and content in general.  ...  In addition, the abstraction layers are designed for interaction using the inherent interface and methods built into the controlling software (for example, a device can only be used to control typical  ... 
doaj:cc3d34cf8c0540a78a8c27ec681fc134 fatcat:zushnpfcnffrvajqvouygaeu2a

Construction of Garden Landscape Design System Based on Multimodal Intelligent Computing and Deep Neural Network

Xueyong Yu, Heng Yu, Chunjing Liu, Gengxin Sun
2022 Computational Intelligence and Neuroscience  
The problem of module discrimination and identification in the field of landscape design is the focus of researchers.  ...  This method uses only 15% of the features of the original feature set, which also reduces the complexity of the recognition system and the recognition error rate.  ...  method obtains the target motion information by calculating the temporal change of the corresponding pixels of two adjacent frames in the garden video stream. The feature matrix is sent to a CNN for learning  ...
doi:10.1155/2022/8332180 pmid:35845884 pmcid:PMC9283027 fatcat:ufwxgmtui5ekjdilmvtliu6qge
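The snippet above computes motion information as the temporal change of corresponding pixels across two adjacent frames before a feature matrix is sent to a CNN. A bare-bones frame-difference sketch (NumPy only; the threshold is an assumption and the CNN stage is omitted):

```python
import numpy as np

def motion_feature(prev_frame, next_frame, threshold=25):
    """Absolute temporal change between two adjacent grayscale frames.

    Returns the per-pixel difference and a binary motion mask; either could
    serve as the feature matrix handed on to a CNN."""
    diff = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16)).astype(np.uint8)
    mask = (diff > threshold).astype(np.uint8)
    return diff, mask

# Two synthetic 64x64 frames standing in for adjacent frames of a video stream.
rng = np.random.default_rng(1)
frame_a = rng.integers(0, 200, size=(64, 64), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[20:40, 20:40] += 50          # a "moving" patch; stays within uint8 range
diff, mask = motion_feature(frame_a, frame_b)
print(int(mask.sum()), "pixels flagged as motion")
```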

MirBot: A Multimodal Interactive Image Retrieval System [chapter]

Antonio Pertusa, Antonio-Javier Gallego, Marisa Bernabeu
2013 Lecture Notes in Computer Science  
After taking a picture, the region of interest of the target can be selected, and the image information is sent with a set of metadata to a server in order to classify the object.  ...  This study presents a multimodal interactive image retrieval system for smartphones (MirBot).  ...  This study was supported by the Consolider Ingenio 2010 program (MIPRCV, CSD2007-00018), the PASCAL2 Network of Excellence IST-2007-216886, and the Spanish CICyT TIN2009-14205-C04-C1.  ... 
doi:10.1007/978-3-642-38628-2_23 fatcat:bpksc3gtdzc6jb4fctewafsknm
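MirBot's client flow, as summarised above, crops a region of interest from the photo and sends the image plus metadata to a server for classification. The sketch below uses the requests library against a placeholder URL; the endpoint, field names and metadata keys are invented for illustration and are not MirBot's actual API.

```python
import json
import requests  # third-party; pip install requests

def send_for_classification(image_path, bbox, server_url="https://example.org/classify"):
    """Upload a photo plus metadata describing the selected region of interest.

    `bbox` is (left, top, right, bottom) in pixels; all names here are
    placeholders, not the real MirBot protocol."""
    metadata = {
        "roi": bbox,
        "device": "smartphone",
        "orientation_deg": 90,
    }
    with open(image_path, "rb") as f:
        response = requests.post(
            server_url,
            files={"image": f},
            data={"metadata": json.dumps(metadata)},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()  # e.g. a ranked list of candidate object classes

# Example call (needs a real server and image to run end to end):
# print(send_for_classification("photo.jpg", bbox=(120, 80, 430, 390)))
```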

Knowledge in the Loop: Semantics Representation for Multimodal Simulative Environments [chapter]

Marc Erich Latoschik, Peter Biermann, Ipke Wachsmuth
2005 Lecture Notes in Computer Science  
Second, the KRL's expressiveness is demonstrated in the design of multimodal interactions.  ...  The KRL supports two different implementation methods. The first method uses XSLT processing to transform the external KRL format into the representation formats of the diverse target systems.  ...  Acknowledgement: This work is partially supported by the Deutsche Forschungsgemeinschaft (DFG  ... 
doi:10.1007/11536482_3 fatcat:pdxbkkveofhzffwmojhmnerxyq
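The entry above notes that one implementation path uses XSLT to transform the external KRL format into the representation formats of the target systems. To make that mechanism concrete, here is a tiny lxml example; the element names and the stylesheet are invented and do not reflect the actual KRL.

```python
from lxml import etree  # third-party; pip install lxml

# Hypothetical external KRL fragment and a stylesheet mapping it to a
# made-up target format; neither reflects the paper's actual KRL.
krl_xml = etree.XML("""
<krl>
  <entity name="lever" type="interactive"/>
</krl>
""")

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/krl">
    <scene>
      <xsl:for-each select="entity">
        <node id="{@name}" behaviour="{@type}"/>
      </xsl:for-each>
    </scene>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
print(etree.tostring(transform(krl_xml), pretty_print=True).decode())
```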