A domain-specific textual language for rapid prototyping of multimodal interactive systems
2014
Proceedings of the 2014 ACM SIGCHI symposium on Engineering interactive computing systems - EICS '14
Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. ...
We propose a domain-specific textual language and its supporting toolkit; they both overcome the shortcomings of the existing approaches while retaining their strengths. ...
Other toolkits can interpret textual specifications [1, 15, 24, 13] written in existing languages, e.g. XML or CLIPS, that are not specialized for multimodal systems. ...
doi:10.1145/2607023.2607036
dblp:conf/eics/CuencaBLC14
fatcat:64dlhkagtvbtdfliymuuys56aq
Assessing the support provided by a toolkit for rapid prototyping of multimodal systems
2013
Proceedings of the 5th ACM SIGCHI symposium on Engineering interactive computing systems - EICS '13
Choosing an appropriate toolkit for creating a multimodal interface is a cumbersome task. ...
Unfortunately, the extent to which these toolkits can facilitate the creation of a multimodal interface is hard or impossible to estimate, due to the absence of a scale where the toolkit's capabilities ...
ACKNOWLEDGMENTS We want to thank the BOF financing of Hasselt University for supporting this research, and our colleague, Jan Van den Bergh, for his valuable feedback. ...
doi:10.1145/2494603.2480326
fatcat:2oo4okgzlzge3kmr4hyaznoley
A user study for comparing the programming efficiency of modifying executable multimodal interaction descriptions: a domain-specific language versus equivalent event-callback code
2015
Proceedings of the 6th Workshop on Evaluation and Usability of Programming Languages and Tools - PLATEAU 2015
The present paper describes an empirical user study intended to compare the programming efficiency of our proposed domain-specific language versus a mainstream event language when it comes to modifying multimodal ...
The paper also discusses the considerations we took into account when designing a multimodal interaction description language that is intended to be well regarded by its users. ...
Hasselt, a Language for Rapid Prototyping Multimodal Systems: Hasselt is a declarative language aimed at describing multimodal interactions. ...
doi:10.1145/2846680.2846686
dblp:conf/oopsla/CuencaBLC15
fatcat:o6isbzac2jhploh2tryuewhm2q
A Discourse and Dialogue Infrastructure for Industrial Dissemination
[chapter]
2010
Lecture Notes in Computer Science
We think that modern speech dialogue systems need a prior usability analysis to identify the requirements for industrial applications. ...
These requirements can then be met by multimodal semantic processing, semantic navigation, interactive semantic mediation, user adaptation/personalisation, interactive service composition, and semantic ...
the implementation and evaluation of the dialogue infrastructure. ...
doi:10.1007/978-3-642-16202-2_12
fatcat:scwpfviocbf77ordj53rbngao4
Towards Sonification in Multimodal and User-friendly Explainable Artificial Intelligence
2021
Proceedings of the 2021 International Conference on Multimodal Interaction
Sonification also has great potential for explainable AI (XAI) in systems that deal with non-audio data - for example, because it does not require visual contact or active attention of a user. ...
Today's Artificial Intelligence (AI), however, largely provides explanations of decisions - if at all - in a visual or textual manner. ...
from AI-systems, particularly for complex human-system interactions. ...
doi:10.1145/3462244.3479879
fatcat:m2gtqpihabgtded2bdjskwyvs4
Spoken language and multimodal applications for electronic realities
1999
Virtual Reality
In this article, we describe our efforts to apply multimodal and spoken language interfaces to a number of ER applications, with the goal of creating an even more 'realistic' or natural experience for ...
In contrast, when people typically interact with computers or appliances, interactions are unimodal, with a single method of communication such as the click of a mouse or a set of keystrokes serving to ...
Although our current prototype runs on a laptop equipped with a touch screen instead of a wearable computer, the system is able to provide multimodal interactions to a team of wireless robots. ...
doi:10.1007/bf01408590
fatcat:7pmli3y4bndzjobcnvtlqv3iay
The Neem Platform: An Extensible Framework for the Development of Perceptual Collaborative Applications
[chapter]
2002
Lecture Notes in Computer Science
It supports rapid prototyping, as well as Wizard of Oz experiments to ease development and evolution of such applications. ...
The Neem Platform is a research test bed for Project Neem, concerned with the development of socially and culturally aware group systems. ...
A multimodal system strives for meaning [17]. ...
doi:10.1007/3-540-45785-2_44
fatcat:d6cqmi5dxjhtpcihkywcctxuwa
ELLE the EndLess LEarner: Exploring Second Language Acquisition Through an Endless Runner-style Video Game
2017
Digital Humanities Conference
The system is designed so that different game features, specifically auditory, visual, and textual cues, can be modified easily by a researcher and the efficacy of each studied in relation to language ...
Such socio-cognitive variables include questions about how the gaming environment influences learners' motivation and how the interaction within the multimodal space of a video game could be affected by ...
dblp:conf/dihu/MerrittJG17
fatcat:f7remmlwwva7zm4nrzz2kderim
Multimodal output specification / simulation platform
2005
Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05
The design of an output multimodal system is a complex task due to the richness of today's interaction contexts. ...
The diversity of environments, systems and user profiles requires a new generation of software tools to specify complete and valid output interactions. ...
However, the design of such a system implies specific constraints from the output multimodality domain, which might call the interest of the model into question. ...
doi:10.1145/1088463.1088480
dblp:conf/icmi/RousseauBV05
fatcat:t2zlir72hnbrxcpt43a6nbcpnq
A multimodal guide for the augmented campus
2007
Proceedings of the 35th annual ACM SIGUCCS conference on User services - SIGUCCS '07
In this work we propose and discuss a user-friendly, multi-modal guide system for pervasive context-aware service provision within augmented environments. ...
The auto-localization service relies on an RFID-based framework, which resides partly on the mobile side of the system (PDAs) and partly on the environment side. ...
Processing Analysis, Preservation and Retrieval of Spoken Natural Language Archives". ...
doi:10.1145/1294046.1294123
dblp:conf/siguccs/SorceASPGGG07
fatcat:ljcnfhsbxjdibp36gjgiabzwyy
Large-scale software integration for spoken language and multimodal dialog systems
2004
Natural Language Engineering
This contribution presents a general framework for building integrated natural-language and multimodal dialog systems. ...
The development of large-scale dialog systems requires a flexible architecture model and adequate software support to cope with the challenge of system integration. ...
It does, however, provide a scalable and efficient platform for the kind of real-time interaction needed within a multimodal dialog system. ...
doi:10.1017/s1351324904003444
fatcat:7agcdxactbddbgytxf3ux6qe5u
Semantic framework for interactive animation generation and its application in virtual shadow play performance
2018
Virtual Reality
Finally, a prototype of an interactive Chinese shadow play performance system using a deep motion sensor device is presented as a usage example. ...
In this paper, a semantic framework is proposed to model the construction of interactive animation and promote animation assets reuse in a systematic and standardized way. ...
doi:10.1007/s10055-018-0333-8
fatcat:ourl42j7kfairnlypuxfsthcq4
Generic Dialogue Modeling for Multi-application Dialogue Systems
[chapter]
2006
Lecture Notes in Computer Science
We present a novel approach to developing interfaces for multi-application dialogue systems. ...
The approach, based on the Rapid Dialogue Prototyping Methodology (RDPM) and the Vector Space Model techniques, is composed of three main steps: (1) producing finalized dialogue models for applications ...
writing, and the two anonymous reviewers for their useful comments on the first version of this paper. ...
doi:10.1007/11677482_15
fatcat:imyszjl7nbgf5mkp4res2pwdca
Gesture recognition corpora and tools: A scripted ground truthing method
2015
Computer Vision and Image Understanding
This article presents a framework supporting rapid prototyping of multimodal applications, the creation and management of datasets and the quantitative evaluation of classification algorithms for the specific ...
A review of the available corpora for gesture recognition highlights their main features and characteristics. ...
A framework supporting rapid prototyping, the creation and management of datasets and the evaluation of algorithms in the context of multimodal gesture recognition. ...
doi:10.1016/j.cviu.2014.07.004
fatcat:2hg2kjhzzzcqhfhwakiaxvwo64
Interactive design of multimodal user interfaces
2010
Journal on Multimodal User Interfaces
Third, we not only support the interactive design and rapid prototyping of multimodal interfaces but also provide advanced development and debugging techniques to improve technical and conceptual solutions ...
In contrast to the pioneers of multimodal interaction, e.g. Richard Bolt in the late seventies, today's researchers can benefit from various existing hardware devices and software toolkits. ...
ICARE [5] is a conceptual component model and a software toolkit for the rapid development of multimodal interfaces. ...
doi:10.1007/s12193-010-0044-2
fatcat:smjfb4df3zczfikln4fcdjz5kq