The Role of Gestures in the Construction of Multimodal Metaphors: Analysis of a Political-Electoral Debate
2015
Revista Brasileira de Linguística Aplicada
This paper intends to analyze the role of gestures in the construction of multimodal metaphors in the "political-electoral debate" genre. ...
Starting from the operational concept of gesture excursion, we specifically observed the multimodal metaphoricity in speech and gesture compounds. ...
Predominantly verbal-gestural metaphors: In the first two gesture samples (Sample #4 and Sample #5), both candidates denounce contradictions in their opponent's behavior. ...
doi:10.1590/1984-639820156105
fatcat:zlfbg4oxercyborxh3xdbrc2oq
The role of voice input for human-machine communication
1995
Proceedings of the National Academy of Sciences of the United States of America
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. ...
Perhaps the most challenging potential application of telephone-based spoken language technology is interpreting telephony (13, 14), in which two callers speaking different languages can engage ...
Below, we consider information needed about spontaneous speech, spoken natural language, spoken dialogue, and multimodal interaction. ...
doi:10.1073/pnas.92.22.9921
pmid:7479803
pmcid:PMC40712
fatcat:js3j6ovthzcujhfhumwke5tqfi
Towards an Embodied Conversational Agent Talking in Croatian
2007
2007 9th International Conference on Telecommunications
Differences in their conversation go beyond the languages they speak to the non-verbal behaviors they express while talking. ...
Japanese, Croatian and general western cultures speaking in English. ...
doi:10.1109/contel.2007.381848
fatcat:thyjmvykqfhkxltgbvksy4twza
User-centered modeling for spoken language and multimodal interfaces
1996
IEEE Multimedia
Such work is yielding more user-centered and robust interfaces for next-generation spoken language and multimodal systems. ...
The present article summarizes recent research on user-centered modeling of human language and performance during spoken and multimodal interaction, as well as interface design aimed at next-generation ...
Acknowledgments This research has been supported in part by Grants IRI-9213472 and IRI-9530666 from the National Science Foundation. ...
doi:10.1109/93.556458
fatcat:s4wrmd2ifbglnbihttjnitgtry
Natural communication with information systems
2000
Proceedings of the IEEE
Software agents fuse the sensory signals to estimate and interpret user intent. ...
An experimental multimodal system is developed to study several aspects of natural style human-computer communication. ...
This implies that the interaction can benefit from forms of communication other than spoken language, such as gesture and the manipulation and transformation of geometric objects. ...
doi:10.1109/5.880088
fatcat:zziia4ixb5gsxpzflpi6wbl5ma
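The intent-fusion step mentioned in the entry above ("Software agents fuse the sensory signals to estimate and interpret user intent") can be illustrated with a minimal sketch. The modality names, confidence scores, and the weighted late-fusion rule below are illustrative assumptions, not the system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One interpretation produced by a single-modality recognizer."""
    intent: str        # e.g. "select_region", "zoom_in"
    confidence: float  # recognizer score in [0, 1]

def fuse(speech, gesture, w_speech=0.6, w_gesture=0.4):
    """Weighted late fusion: sum per-modality scores for each candidate
    intent and return the highest-scoring one."""
    scores = {}
    for hyp in speech:
        scores[hyp.intent] = scores.get(hyp.intent, 0.0) + w_speech * hyp.confidence
    for hyp in gesture:
        scores[hyp.intent] = scores.get(hyp.intent, 0.0) + w_gesture * hyp.confidence
    return max(scores, key=scores.get)

# Speech alone is ambiguous; the pointing gesture disambiguates.
speech_hyps = [Hypothesis("select_region", 0.55), Hypothesis("zoom_in", 0.45)]
gesture_hyps = [Hypothesis("select_region", 0.90)]
print(fuse(speech_hyps, gesture_hyps))  # -> select_region
```

In this toy example the speech recognizer cannot decide between two intents, and the higher-confidence pointing gesture tips the combined score toward "select_region".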
Effects of Lips and Hands on Auditory Learning of Second-Language Speech Sounds
2010
Journal of Speech, Language and Hearing Research
Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English ...
The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning. ...
Picker Institute for Interdisciplinary Studies in the Sciences and Mathematics at Colgate University. We thank Emily Cullings, Jason Demakakos, Jackie Burch, Jen Simester, and Grace Baik for ...
doi:10.1044/1092-4388(2009/08-0243)
pmid:20220023
fatcat:kayh55yejvh7thch7f6lmixbyq
Natural Conversational Interfaces to Geospatial Databases
2005
Transactions on GIS
Natural (spoken) language, combined with gestures and other human modalities, provides a promising alternative for interacting with computers, but such benefit has not been explored for interactions with ...
GeoDialogue serves as a semantic 'bridge' between the human language and the formal language that a GIS understands. ...
..., 1998 was the first multimodal system that incorporated natural language and gestures into a GIS query interface. ...
doi:10.1111/j.1467-9671.2005.00213.x
fatcat:kqdfkrmwsjcd3grwooyuc4jn2m
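The semantic "bridge" role described in the GeoDialogue entry above can be sketched as a translation from a spoken request plus a map gesture into a formal query. The keyword lexicon, the bounding-box gesture representation, and the PostGIS-style target query below are assumptions chosen for illustration; they are not GeoDialogue's actual parser or query language.

```python
import re

def to_gis_query(utterance, gesture_bbox):
    """Translate a natural-language request plus a sketched bounding box
    into an SQL-like spatial query (hypothetical target language)."""
    # Tiny keyword lexicon standing in for a real semantic parser.
    layers = {"school": "schools", "road": "roads", "river": "rivers"}
    layer = next((table for word, table in layers.items()
                  if re.search(rf"\b{word}s?\b", utterance.lower())), None)
    if layer is None:
        raise ValueError("No known feature type in the utterance")
    xmin, ymin, xmax, ymax = gesture_bbox
    return (f"SELECT * FROM {layer} WHERE ST_Intersects(geom, "
            f"ST_MakeEnvelope({xmin}, {ymin}, {xmax}, {ymax}))")

# "Show me the schools around here" plus a region circled on the map:
print(to_gis_query("Show me the schools around here", (-77.1, 38.8, -76.9, 39.0)))
```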
Multimodal behavior realization for embodied conversational agents
2010
Multimedia tools and applications
In this paper we discuss realization of ECA multimodal behaviors which include speech and nonverbal behaviors. ...
We devise RealActor, an open-source, multi-platform animation system for real-time multimodal behavior realization for ECAs. ...
The synchronization process in realizing multimodal behaviors for Embodied Conversational Agents, in general, depends on description of behaviors through multimodal representation languages and the temporal ...
doi:10.1007/s11042-010-0530-2
fatcat:lrndcekyivekvgdnz4jjjhlrki
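The synchronization problem mentioned in the RealActor entry above (behaviors described through a multimodal representation language, then aligned in time) can be illustrated with a small sketch. The XML dialect is loosely modeled on BML-style markup; the tag names, sync-point scheme, and timing value are illustrative assumptions, not RealActor's actual input format or API.

```python
import xml.etree.ElementTree as ET

# A toy behavior description: the gesture stroke should coincide with a
# named sync point inside the speech (loosely BML-flavoured markup).
behavior_xml = """
<act>
  <speech id="s1" start="0.0">
    <text>Take <sync id="tm1"/> this one.</text>
  </speech>
  <gesture id="g1" type="deictic" stroke="s1:tm1"/>
</act>
"""

def resolve_schedule(xml_text, sync_times):
    """Return absolute stroke times for gestures whose 'stroke' attribute
    refers to a speech sync point, written as 'speechId:syncId'."""
    root = ET.fromstring(xml_text)
    schedule = {}
    for gesture in root.findall("gesture"):
        speech_id, sync_id = gesture.get("stroke").split(":")
        schedule[gesture.get("id")] = sync_times[(speech_id, sync_id)]
    return schedule

# Suppose a TTS engine reports that sync point tm1 of speech s1 falls at 0.42 s.
print(resolve_schedule(behavior_xml, {("s1", "tm1"): 0.42}))  # {'g1': 0.42}
```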
Leveraging Multimodal Behavioral Analytics for Automated Job Interview Performance Assessment and Feedback
[article]
2020
arXiv
pre-print
Such analysis is then used to provide constructive feedback to the interviewee for their behavioral cues and body language. ...
Behavioral cues play a significant part in human communication and cognitive perception. ...
Research on automated analysis of both verbal and non-verbal behavior cues in the case of job interviews has recently gained momentum, which we aim to analyze in this work. ...
arXiv:2006.07909v2
fatcat:jd2jdmjmxve35m62iy3dp7bmfy
Multimodal Interaction in Architectural Design Applications
[chapter]
2004
Lecture Notes in Computer Science
In this paper we report on ongoing experiments with an advanced multimodal system for applications in architectural design. ...
We explain how the work in COMIC goes beyond previous research in multimodal interaction for eWork and eCommerce applications that combine speech and pen input with speech and graphics output: in design ...
interpreted with the help of verbal explanations. ...
doi:10.1007/978-3-540-30111-0_33
fatcat:c7dkl3fjkbefpg66u3irlhrk6e
The Neem Platform: An Extensible Framework for the Development of Perceptual Collaborative Applications
[chapter]
2002
Lecture Notes in Computer Science
Participants' multimodal interactions such as voice exchanges, textual messages, widget operations and eventually gestures, eye gaze and facial expressions are made available to applications, which apply situated reasoning, using this rich contextual information to dynamically adapt their behavior. ...
Perceptual interfaces explore verbal and nonverbal human behaviors through the use of multiple modalities, such as speech, gestures, gaze, both for collection and for presentation. ...
doi:10.1007/3-540-45785-2_44
fatcat:d6cqmi5dxjhtpcihkywcctxuwa
Measuring the Quality of Service and Quality of Experience of multimodal human–machine interaction
2012
Journal on Multimodal User Interfaces
In order to guide the assessment and evaluation of such services, we first develop a taxonomy of the most relevant QoS and QoE aspects which result from multimodal human-machine interactions. ...
The taxonomy consists of three layers: (1) the quality factors influencing QoS and QoE related to the user, the system, and the context of use; (2) the QoS interaction performance aspects describing user and system behavior ...
By interaction modality, we mean the sensory channel used by a communicating agent to convey information to a communication partner, e.g. spoken language, intonation, gaze, hand gestures, body gestures ...
doi:10.1007/s12193-011-0088-y
fatcat:vjdy4azjgjbozjqz6m2zpdkyt4
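The layered taxonomy summarized in the entry above (quality factors for user, system, and context of use; interaction performance aspects; interaction modalities as sensory channels) can be sketched as plain data structures. The class and field names below are assumptions chosen for illustration; the excerpt truncates the taxonomy's third layer, so it is omitted here.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Modality(Enum):
    """Sensory channels named in the excerpt as interaction modalities."""
    SPOKEN_LANGUAGE = auto()
    INTONATION = auto()
    GAZE = auto()
    HAND_GESTURE = auto()
    BODY_GESTURE = auto()

@dataclass
class QualityFactors:
    """Layer 1: factors influencing QoS and QoE."""
    user: dict = field(default_factory=dict)     # e.g. experience, expectations
    system: dict = field(default_factory=dict)   # e.g. recognizer accuracy
    context: dict = field(default_factory=dict)  # e.g. ambient noise, mobility

@dataclass
class InteractionPerformance:
    """Layer 2: QoS interaction performance aspects (user and system behavior)."""
    modalities_used: list = field(default_factory=list)
    system_response_time_s: float = 0.0
    recognition_error_rate: float = 0.0

# Example record for one logged interaction:
perf = InteractionPerformance([Modality.SPOKEN_LANGUAGE, Modality.HAND_GESTURE], 0.8, 0.12)
print(perf)
```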
Assistive Technology and Affective Mediation
2006
Human Technology: An Interdisciplinary Journal on Humans in ICT Environments
Finally, we describe our experience with Gestele, an application that incorporates multimodal elements of affective mediation for people with mobility impairments, as well as the results of an empirical ...
We also present several affective mediation technologies that are being applied or may be integrated in assistive technologies in order to improve affective communication for a range of disabilities. ...
systems: the subjective, or verbal; the behavioral; and the physiological. ...
doi:10.17011/ht/urn.2006159
fatcat:sygvpcn7prgppo5h7uajz7v2ee
Art-science and verbal articulation in hyper-visual techno-culture
2014
Journal of Professional Communication
This article outlines a quasi-analytical process for better comprehension of current artistic and scientific representations in visual media representing language ...
Hyper-visual focus or the lack of neuro-typical visual focus can induce or indicate cognitive problems involving verbal competency. ...
and spoken communications. ...
doi:10.15173/jpc.v3i2.158
fatcat:qbaqxmclf5h4nmasw2hq7ivr5e
Remote Interpreting: Issues of Multi-Sensory Integration in a Multilingual Task
2005
Meta : Journal des traducteurs
ABSTRACT This article seeks to present evidence for the pivotal role of multi-sensory integration in simultaneous interpreting. ...
The lack of virtual presence has emerged as one of the major factors determining poorer performance in remote as opposed to live simultaneous interpreting. ...
Since face-to-face communication is a multimodal process it involves complex interactions between verbal and visual behaviors. ...
doi:10.7202/011014ar
fatcat:uzohkstaszgjnmgpyts643crh4