10,471 Hits in 3.7 sec

Comparing and Reusing Visualisation and Sonification Designs using the MS-Taxonomy

Keith V. Nesbitt
2004 International Conference on Auditory Display  
Yet the designer of multi-sensory displays would like to make sensible decisions about when to use each modality.  ...  This paper describes a classification of abstract data displays that is general for all senses. This allows the same terminology to be used for describing both visualisations and sonifications.  ...  DISCUSSION The MS-Taxonomy is a structured group of concepts that describes the multi-sensory design space for abstract data display.  ... 
dblp:conf/icad/Nesbitt04 fatcat:zdbwo2pjl5e7hos35bhewl5mku

Grounding Semantics in Olfactory Perception

Douwe Kiela, Luana Bulat, Stephen Clark
2015 Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)  
Multi-modal semantics has relied on feature norms or raw image data for perceptual input.  ...  We use standard evaluations for multi-modal semantics, including measuring conceptual similarity and cross-modal zero-shot learning.  ...  We thank the anonymous reviewers for their helpful comments and Flaviu Bulat for providing useful feedback.  ... 
doi:10.3115/v1/p15-2038 dblp:conf/acl/KielaBC15 fatcat:lerou7iaobffhcophigoarqx7a
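The entry above mentions "measuring conceptual similarity" as a standard evaluation for multi-modal semantics. As a generic illustration only (not the paper's actual pipeline, and with made-up toy vectors), conceptual similarity between two multi-modal word representations is commonly scored with cosine similarity over concatenated linguistic and perceptual feature vectors:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multimodal_vector(linguistic, perceptual, alpha=0.5):
    # Weighted concatenation of a linguistic and a perceptual
    # (e.g. olfactory) feature vector -- one common fusion scheme.
    return [alpha * x for x in linguistic] + [(1 - alpha) * x for x in perceptual]

# Toy vectors for two hypothetical concepts (numbers are illustrative only).
rose = multimodal_vector([0.9, 0.1], [0.8, 0.2])
lily = multimodal_vector([0.8, 0.2], [0.7, 0.3])
similarity = cosine(rose, lily)
```

Model-predicted similarities like this are then typically compared against human similarity judgments (e.g. via Spearman correlation) to evaluate the representation.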

IMMIView: a multi-user solution for design review in real-time

Ricardo Jota, Bruno R. de Araújo, Luís C. Bruno, João M. Pereira, Joaquim A. Jorge
2009 Journal of Real-Time Image Processing  
The results show that users feel comfortable with the system and suggest that users prefer the multi-modal approach to more conventional interactions, such as mouse and menus, for the architectural tasks  ...  We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e., with two or more users interacting at the same time on the same system  ...  of each multi-modal metaphor used by user.  ... 
doi:10.1007/s11554-009-0141-1 fatcat:hnh4a7xsj5hife6vo7l677v3ni

Sensor Fusion and Environmental Modelling for Multimodal Sentient Computing

Christopher Town, Zhigang Zhu
2007 2007 IEEE Conference on Computer Vision and Pattern Recognition  
Adaptive Multi-modal Fusion of Tracking Hypotheses The dynamic component of the world model benefits from a high-level fusion of the visual and ultrasonic modalities for robust multi-object tracking and  ...  Integration is achieved at the system level through the metaphor of shared perceptions in the sense that the different modalities are guided by and provide updates for a shared internal model.  ... 
doi:10.1109/cvpr.2007.383526 dblp:conf/cvpr/TownZ07 fatcat:dfkkliujlnfxdconr5su6infhm

MMFeat: A Toolkit for Extracting Multi-Modal Features

Douwe Kiela
2016 Proceedings of ACL-2016 System Demonstrations  
We introduce a toolkit that can be used to obtain feature representations for visual and auditory information.  ...  Research at the intersection of language and other modalities, most notably vision, is becoming increasingly important in natural language processing.  ...  Acknowledgments The author was supported by EPSRC grant EP/I037512/1 and would like to thank Anita Verö, Stephen Clark and the reviewers for helpful suggestions.  ... 
doi:10.18653/v1/p16-4010 dblp:conf/acl/Kiela16 fatcat:qkq5gmchdnhmtcm5wpr7nekf7y

Interactive Visual Analysis of Transcribed Multi-Party Discourse

Mennatallah El-Assady, Annette Hautli-Janisz, Valentin Gold, Miriam Butt, Katharina Holzinger, Daniel Keim
2017 Proceedings of ACL 2017, System Demonstrations  
We present the first web-based Visual Analytics framework for the analysis of multi-party discourse data using verbatim text transcripts.  ...  On the client-side, browser-based Visual Analytics components enable multiple perspectives on the analyzed data.  ...  Summary The VisArgue framework provides a novel visual analytics toolbox for exploratory and confirmatory analyses of multi-party discourse data.  ... 
doi:10.18653/v1/p17-4009 dblp:conf/acl/El-AssadyHGBHK17 fatcat:fn2bv5vr4ngadct4qmy7tdlto4

TAPAS: A tangible End-User Development tool supporting the repurposing of Pervasive Displays

Tommaso Turchi, Alessio Malizia, Alan Dix
2017 Journal of Visual Languages and Computing  
These days we are witnessing a spread of many new digital systems in public spaces featuring easy to use and engaging interaction modalities, such as multi-touch, gestures, tangible, and voice.  ...  The aim of TUIs is to give bits a directly accessible and manipulable interface by employing the real world, both as a medium and as a display for manipulation; indeed by connecting data with physical  ...  We propose to consider elements of human-centered information visualization in the redesign of the widgets for the next interaction prototype; for instance, by following visual metaphors that incorporate  ... 
doi:10.1016/j.jvlc.2016.11.002 fatcat:pvecw6acivfuvphwnw3hliww2q

Exploiting Image Generality for Lexical Entailment Detection

Douwe Kiela, Laura Rimell, Ivan Vulić, Stephen Clark
2015 Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)  
We exploit the visual properties of concepts for lexical entailment detection by examining a concept's generality.  ...  We introduce three unsupervised methods for determining a concept's generality, based on its related images, and obtain state-ofthe-art performance on two standard semantic evaluation datasets.  ...  We thank the anonymous reviewers for their helpful comments.  ... 
doi:10.3115/v1/p15-2020 dblp:conf/acl/KielaRVC15 fatcat:uzpvladfw5efjlfcenqhfwpzsu

Conceptual Metaphors for Designing Smart Environments: Device, Robot, and Friend

Jingoog Kim, Mary Lou Maher
2020 Frontiers in Psychology  
We posit that conceptual metaphors of device, robot, and friend can open up new design spaces for the interaction design of smart environments.  ...  Digital technologies embedded in built environments provide an opportunity for environments to be more intelligent and interactive.  ...  FUNDING This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sector.  ... 
doi:10.3389/fpsyg.2020.00198 pmid:32256421 pmcid:PMC7090236 fatcat:xg5nbn6e2ndl3p3ptrjyvu3yx4

Use of Intelligent Agents in Home Entertainment

Mehrdad Jalali-Sohi
2002 Agent-Oriented Information Systems Workshop  
In this paper the approach of the EMBASSI project in the field of easy content access and retrieval for home usage is introduced.  ...  The transition from analog to digital video is about to bring the expected convergence of television, computer and communication; however, a suitable platform for delivery of multimedia interactive services  ...  EMBASSI is sponsored by BMBF [3], the German Ministry for Education and Research.  ... 
dblp:conf/aois/Jalali-Sohi02 fatcat:7veq5uuwkfce3fkgspk4glppqy

Multi-Media Access and Presentation in a Theatre Information Environment [chapter]

Anton Nijholt
2000 Eurographics  
This paper discusses a virtual world for presenting multi-media information and for natural interactions with the environment to get access to this information.  ...  These faces are shaded to visualize a three-dimensional virtual face. The 3D data is converted to VRML-data that can be used for real-time viewing of the virtual face.  ...  Multi-modality has two directions.  ... 
doi:10.1007/978-3-7091-6771-7_22 fatcat:glw5ofjcxjhetm4hjpf3dg4lgm

Perceptually Grounded Selectional Preferences

Ekaterina Shutova, Niket Tandon, Gerard de Melo
2015 Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)  
While SPs have been traditionally induced from textual data, human lexical acquisition is known to rely on both linguistic and perceptual experience.  ...  Our results show that it outperforms linguistic and visual models in isolation, as well as the existing SP induction approaches.  ...  We are grateful to the ACL reviewers for their insightful feedback.  ... 
doi:10.3115/v1/p15-1092 dblp:conf/acl/ShutovaTM15 fatcat:6thhjtgirnam3a2upkdii363pq

Visual Images as Tools of Teacher Inquiry

Nancy M. Bailey, Elizabeth M. Van Harken
2014 Journal of Teacher Education  
Therefore, much remains unknown about the use of multi-modality as a research methodology for teacher inquiry.  ...  Using combinations of visual and verbal language, they were able to arrive at understandings that they might otherwise have missed without the facilitation of multi-modal text.  ... 
doi:10.1177/0022487113519130 fatcat:pahxg75wqbcangv24bxuncdyxq

Discovering Potentials in Enterprise Interface Design - A Review of Our Latest Case Studies in the Enterprise Domain

Christian Lambeck, Dietrich Kammer, Rainer Groh
2013 Proceedings of the 15th International Conference on Enterprise Information Systems  
Hence, this contribution presents four case studies, which aim to establish innovative visualization and interaction modalities in the field of enterprise systems.  ...  The authors argue that these deficiencies are a major reason for existing usability problems related to the graphical user interface.  ...  Special thanks are also due to Christian Leyh, Dirk Schmalzried and Bettina Kirchner for their enthusiastic participation.  ... 
doi:10.5220/0004442000990104 dblp:conf/iceis/LambeckKG13 fatcat:b4bn5ee3trgarc7kx2gnwffzr4

The Kiki-Bouba Paradigm: Where Senses Meet And Greet

Aditya Shukla
2016 Indian Journal of Mental Health(IJMH)  
Applications include creating treatments and training methods to compensate for poor abstract thinking abilities caused by disorders like schizophrenia and autism, for example.  ...  Nonetheless, certain findings, on face value, correspond with implicit theoretical constructs. In an experiment controlling for sensory inputs in cats via uni-modal and multi-modal stimuli, Auditory, Visual  ...  Future research can be conducted with respect to utilizing congruence and incongruence along with multi-modal scaffolds (for example, use touch as a mode to acquire information that inherently has audio  ... 
doi:10.30877/ijmh.3.3.2016.240-252 fatcat:axgjbbuazjcjlpktnasnsyom7y