563 Hits in 5.3 sec

The information-theoretic analysis of unimodal interfaces and their multimodal counterparts

Melanie Baljko
2005 Proceedings of the 7th international ACM SIGACCESS conference on Computers and accessibility - Assets '05  
A reinterpretation of Keates and Robinson's empirical data (1998) shows that their criticism of multimodal interfaces was, in part, unfounded.  ...  In this paper, the hypothesized benefits of semantically redundant multimodal input actions are described formally and are quantified using the formalisms provided by Information Theory.  ...  needed of the multimodal counterparts of unimodal systems (so that their information rates might be equivalent to or greater than the unimodal versions).  ... 
doi:10.1145/1090785.1090793 dblp:conf/assets/Baljko05 fatcat:kqfab6h5x5gwljfolgcgcifwpm
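
As an illustrative aside on the kind of quantity the Baljko paper formalizes, the short Python sketch below computes the Shannon entropy of a distribution over input actions and the mutual information (redundancy) between two input channels. The joint distribution, the channel names and the numbers are invented for the example; the sketch does not reproduce the paper's actual formalism or data.

    # Illustrative sketch only: entropy and mutual information for a hypothetical
    # pair of semantically redundant input channels. The joint distribution below
    # is invented for the example and is not taken from Baljko (2005).
    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a discrete distribution p."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Hypothetical joint distribution P(speech action, gesture action)
    # (rows: speech actions, columns: gesture actions).
    joint = np.array([[0.30, 0.05],
                      [0.05, 0.60]])

    p_speech = joint.sum(axis=1)    # marginal over speech actions
    p_gesture = joint.sum(axis=0)   # marginal over gesture actions

    # Mutual information I(S;G) = H(S) + H(G) - H(S,G), in bits.
    mi = entropy(p_speech) + entropy(p_gesture) - entropy(joint.ravel())
    print(f"H(speech)  = {entropy(p_speech):.3f} bits")
    print(f"H(gesture) = {entropy(p_gesture):.3f} bits")
    print(f"I(S;G)     = {mi:.3f} bits (redundancy between the two channels)")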

Multimodal interfaces for dynamic interactive maps

Sharon Oviatt
1996 Proceedings of the SIGCHI conference on Human factors in computing systems: Common Ground - CHI '96  
In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyzed for their potential effectiveness in interacting with this new generation of map systems.  ...  Implications of this research are discussed for the design of high-performance multimodal interfaces for future map systems.  ...  modes, and that these spoken constructions averaged longer than their multimodal counterparts.  ... 
doi:10.1145/238386.238438 dblp:conf/chi/Oviatt96 fatcat:ila7oulginfz3i7xjd2tlgeclu

Foundations of multimodal representations: a taxonomy of representational modalities

Niels Ole Bernsen
1994 Interacting with computers  
The paper presents a generative approach to the analysis of output modality types and their combinations and takes some steps towards its implementation, departing from a taxonomy of generic unimodal  ...  Advances in information technologies are producing a very large number of possible interface modality combinations which are potentially useful for the expression and exchange of information in human-computer  ...  Well-known atomic types (if any) of each of the generic unimodal modalities. mes better known, multimodal counterparts.  ... 
doi:10.1016/0953-5438(94)90008-6 fatcat:aha5dtx5fba2deq2jqmhtj6glu

Multimodality in Language and Speech Systems — From Theory to Design Support Tool [chapter]

Niels Ole Bernsen
2002 Text, Speech and Language Technology  
Finally, empirical and theoretical approaches to the combinatorial explosion of modality combinations in multimodal systems are discussed.  ...  The solutions cover the generation, at descending levels of abstraction, of taxonomies of unimodal input and output modalities from basic properties in the media of graphics, acoustics and haptics.  ...  taxonomy and systematic analysis of the unimodal modalities which go into the creation of multimodal output representations of information for HHSI.  ... 
doi:10.1007/978-94-017-2367-1_6 fatcat:kw4b7cm7inbetgqgsfdt4lh2xu

Bringing the analysis of animal orientation data full circle: model-based approaches with maximum likelihood

Robert R. Fitak, Sönke Johnsen
2017 Journal of Experimental Biology  
Here, we discuss some of the assumptions and limitations of common circular tests and report a new R package called CircMLE to implement the maximum likelihood analysis of circular data.  ...  Our software provides a convenient interface that facilitates the use of model-based approaches in animal orientation studies.  ...  Acknowledgements We thank the Duke Shared Cluster Resource for providing computational resources, N. Putman for providing the example dataset, and L. Schweikert and E.  ... 
doi:10.1242/jeb.167056 pmid:28860118 fatcat:2e2jpu2dwzdrraemfwapmsuggi
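
CircMLE itself is an R package, so the following is only a language-neutral sketch of the same model-based idea: fitting a von Mises distribution to circular heading data by maximum likelihood, here with SciPy on simulated angles rather than the example dataset mentioned in the acknowledgements.

    # Illustrative sketch: maximum likelihood fit of a von Mises distribution to
    # simulated animal headings (in radians). This mirrors the model-based approach
    # implemented in CircMLE (an R package) but uses SciPy; the data are simulated.
    import numpy as np
    from scipy.stats import vonmises

    rng = np.random.default_rng(42)
    headings = vonmises.rvs(kappa=2.0, loc=np.pi / 4, size=200, random_state=rng)

    # Fit mean direction (loc) and concentration (kappa) by maximum likelihood,
    # holding scale fixed at 1 as is conventional for circular data.
    kappa_hat, loc_hat, _ = vonmises.fit(headings, fscale=1)
    print(f"estimated mean direction = {loc_hat:.3f} rad")
    print(f"estimated concentration  = {kappa_hat:.3f}")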

A review of affective computing: From unimodal analysis to multimodal fusion

Soujanya Poria, Erik Cambria, Rajiv Bajpai, Amir Hussain
2017 Information Fusion  
In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities.  ...  from conventional unimodal analysis to more complex forms of multimodal analysis.  ...  In this study, for example, it is reported that multimodal "systems were consistently (85% of systems) more accurate than their best unimodal counterparts, with an average improvement of 9.83% (median  ... 
doi:10.1016/j.inffus.2017.02.003 fatcat:ytebhjxlz5bvxcdghg4wxbvr6a
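
For readers unfamiliar with the fusion strategies such reviews compare, the sketch below contrasts feature-level (early) fusion, which concatenates modality features before classification, with decision-level (late) fusion, which averages per-modality classifier scores. The feature dimensions, the random data and the logistic-regression classifier are arbitrary placeholders, not the systems surveyed in the review.

    # Minimal sketch of feature-level (early) vs. decision-level (late) fusion of
    # audio, visual and text features for affect classification. Dimensions, data
    # and classifier choice are placeholders for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    audio  = rng.normal(size=(n, 16))   # hypothetical audio features
    visual = rng.normal(size=(n, 32))   # hypothetical visual features
    text   = rng.normal(size=(n, 24))   # hypothetical text features
    y = rng.integers(0, 2, size=n)      # binary affect label

    # Feature-level fusion: concatenate modality features, train one classifier.
    early = LogisticRegression(max_iter=1000).fit(np.hstack([audio, visual, text]), y)

    # Decision-level fusion: train one classifier per modality, average their scores.
    clfs = [LogisticRegression(max_iter=1000).fit(X, y) for X in (audio, visual, text)]
    late_scores = np.mean([c.predict_proba(X)[:, 1] for c, X in zip(clfs, (audio, visual, text))], axis=0)
    late_pred = (late_scores >= 0.5).astype(int)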

Multimodal Grounding for Language Processing [article]

Lisa Beinborn, Teresa Botschen, Iryna Gurevych
2019 arXiv   pre-print
We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations.  ...  Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise.  ...  We thank Faraz Saeedan for his assistance with the computation of the visual embeddings for the imSitu images. We thank the anonymous reviewers for their insightful comments.  ... 
arXiv:1806.06371v2 fatcat:ucqjg2uhabf3vfkgjdfoa5z5yy

Multimodal vs. Unimodal Physiological Control in Videogames for Enhanced Realism and Depth [article]

Gonçalo Amaral da Silva
2014 arXiv   pre-print
Both versions were praised differently: the unimodal version for its simplicity of use, and the multimodal for its realism, activation safety of game mechanics and depth added to the game.  ...  three interaction flavours (no biofeedback/vanilla, unimodal and multimodal).  ...  Ratings regarding the Fun, Ease of Use and Originality aspects were processed using One-way Analysis of Variance (ANOVA) tests with the Vanilla, Unimodal and Multimodal versions of the game as the within-subjects  ... 
arXiv:1406.0532v1 fatcat:oumhxnq5ejfefliuccpbvkr7me
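
The abstract refers to one-way ANOVA with the Vanilla, Unimodal and Multimodal versions as a within-subjects factor; a repeated-measures ANOVA of that shape could be run as sketched below with statsmodels. The subject IDs, column names and ratings are fabricated for illustration and are not data from the study.

    # Illustrative sketch: one-way repeated-measures ANOVA with game version
    # (vanilla / unimodal / multimodal) as the within-subjects factor. The ratings
    # below are fabricated placeholders, not data from the cited study.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    data = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "version": ["vanilla", "unimodal", "multimodal"] * 4,
        "fun":     [3, 4, 5, 2, 4, 4, 3, 3, 5, 4, 4, 5],
    })

    result = AnovaRM(data, depvar="fun", subject="subject", within=["version"]).fit()
    print(result)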

The efficiency of multimedia learning into old age

Pascal W. M. van Gerven, Fred Paas, Jeroen J. G. van Merriënboer, Maaike Hendriks, Henk G. Schmidt
2003 British Journal of Educational Psychology  
worked examples (unimodal condition) and solving conventional problems.  ...  On the basis of a multimodal model of working memory, cognitive load theory predicts that a multimedia-based instructional format leads to a better acquisition of complex subject matter than a purely visual  ...  We also wish to thank the staff and students of the Sintermeertencollege in Heerlen, The Netherlands, for their co-operation.  ... 
doi:10.1348/000709903322591208 pmid:14713374 fatcat:t3awmlgzrbgxxjxbn5lgoo6qrm

Audiovisual Information Fusion in Human–Computer Interfaces and Intelligent Environments: A Survey

Shankar T. Shivappa, Mohan Manubhai Trivedi, Bhaskar D. Rao
2010 Proceedings of the IEEE  
In this paper, we describe the fusion strategies and the corresponding models used in audiovisual tasks such as speech recognition, tracking, biometrics, affective state recognition and meeting scene analysis  ...  The human brain processes the audio and video modalities, extracting complementary and robust information from them.  ...  We sincerely thank the reviewers for their valuable advice, which has helped us enhance the content as well as the presentation of the paper.  ... 
doi:10.1109/jproc.2010.2057231 fatcat:lfzgfmn2hjdq7h6o5txva3oapq

A Review on Explainability in Multimodal Deep Neural Nets

Gargi Joshi, Rahee Walambe, Ketan Kotecha
2021 IEEE Access  
Despite their outstanding performance, the complex, opaque and black-box nature of the deep neural nets limits their social acceptance and usability.  ...  Several topics on multimodal AI and its applications for generic domains have been covered in this paper, including the significance, datasets, fundamental building blocks of the methods and techniques  ...  systems, giving rise to multimodal multisensory interfaces and multimodal information retrieval systems.  ... 
doi:10.1109/access.2021.3070212 fatcat:5wtxr4nf7rbshk5zx7lzbtcram

A review of speech-based bimodal recognition

C.C. Chibelushi, F. Deravi, J.S.D. Mason
2002 IEEE transactions on multimedia  
Multimodal recognition is therefore acknowledged as a vital component of the next generation of spoken language systems.  ...  The combination of auditory and visual modalities promises higher recognition accuracy and robustness than can be obtained with a single modality.  ...  The extraction and analysis of such information is the target of research on audio-visual signal processing, which is gaining momentum in areas such as recognition, synthesis, and compression.  ... 
doi:10.1109/6046.985551 fatcat:6fezo5zovbdtti3lzxh24ksaii

Multiscale three-dimensional scaffolds for soft tissue engineering via multimodal electrospinning

Sherif Soliman, Stefania Pagliari, Antonio Rinaldi, Giancarlo Forte, Roberta Fiaccavento, Francesca Pagliari, Ornella Franzese, Marilena Minieri, Paolo Di Nardo, Silvia Licoccia
2010 Acta Biomaterialia  
Three conventional unimodal scaffolds with mean diameters of 300 nm and 2.6 and 5.2 µm, respectively, were used as controls to evaluate the new materials.  ...  Characterization of the microstructure (i.e. porosity, fiber distribution and pore structure) and mechanical properties (i.e. stiffness, strength and failure mode) indicated that the multimodal scaffold  ...  Acknowledgements A.R. and the other authors acknowledge William Sampson for his gracious support with the implementation of the pore size model and for insightful discussions.  ... 
doi:10.1016/j.actbio.2009.10.051 pmid:19887125 fatcat:beio2zs3trhxhotvtxwiwmdm6u

Sensitive Talking Heads [Applications Corner]

T.S. Huang, M.A. Hasegawa-Johnson, S.M. Chu, Zhihong Zeng, Hao Tang
2009 IEEE Signal Processing Magazine  
We find that both recognition accuracy and synthesis quality are improved when one takes advantage of multimodal information, synthesizing and recognizing information in both the audio and video modalities  ...  Automatic recognition and synthesis of emotionally nuanced speech, on the other hand, are still topics of active research. This column describes experiments in an emotive spoken language user interface.  ...  Speech: Recognition by humans. Human speech perception is a multimodal process, in which one's interpretation of the audio signal is constrained by many types of context information.  ... 
doi:10.1109/msp.2009.932562 fatcat:4bxlfcydwfasvhcd56oehoh7s4

Training of Procedural Tasks Through the Use of Virtual Reality and Direct Aids [chapter]

Jorge Rodríguez, Teresa Gutiérrez, Emilio J., Sara Casado, Iker Aguinaga
2012 Virtual Reality and Environments  
analysis of the data.  ...  Nowadays, owing to recent advances in computer technologies that increase their capabilities for processing and managing diverse information in parallel, multimodal systems are seeing increasing use.  ... 
doi:10.5772/36650 fatcat:l54easn5wjgvpg5e6hvlls76vy
Showing results 1 — 15 out of 563 results