Affect Recognition for Multimodal Natural Language Processing
2020
Cognitive Computation
Acknowledgments The guest editors are grateful to the Editor-in-Chief, Amir Hussain, and to the many reviewers who kindly agreed to serve for this special issue and submitted their insightful reviews in ...
Computational analysis of human multimodal language is an emerging research area in natural language processing (NLP). ...
1. Multimodal affect recognition in monologues. 2. Effective multimodal fusion. 3. Detecting affect in dyadic and multiparty multimodal conversations. ...
doi:10.1007/s12559-020-09738-0
fatcat:lmospfzn3barvk6fwnnk2rvw3i
Face and Body Gesture Analysis for Multimodal HCI
[chapter]
2004
Lecture Notes in Computer Science
Accordingly, in this paper we present a vision-based framework that combines face and body gesture for multimodal HCI. ...
In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way ...
These are systems mostly combining auditory and visual information by processing facial expression and vocal cues for affective emotion recognition. ...
doi:10.1007/978-3-540-27795-8_59
fatcat:ib3d3ek6fjbyzceinv73k5baxi
3rd international workshop on affective interaction in natural environments (AFFINE)
2010
Proceedings of the international conference on Multimedia - MM '10
The 3rd International Workshop on Affective Interaction in Natural Environments, AFFINE, follows a number of successful AFFINE workshops and events commencing in 2008. ...
A key aim of AFFINE is the identification and investigation of significant open issues in real-time, affect-aware applications 'in the wild' and especially in embodied interaction, for example, with robots ...
(HCI) • Multimedia HCI • Multimodal and emotional corpora (naturally evoked or induced emotion) • Recognition of human behaviour for implicit tagging • Applications to interactive games, robots and virtual ...
doi:10.1145/1873951.1874357
dblp:conf/mm/CastellanoKMMPR10
fatcat:rkw5hvwc75hzfntjdogonvogdi
Guest Editorial: Special Section on Naturalistic Affect Resources for System Building and Evaluation
2012
IEEE Transactions on Affective Computing
ACKNOWLEDGMENTS We would like to thank the editor in chief, Jonathan Gratch, for his help with this special issue and the 35 reviewers who not only helped with the decision process but contributed with ...
The last article, "A Multimodal Affective Database for Affect Recognition and Implicit Tagging" by Mohammad Soleymani, Jeroen Lichtenauer, and Maja Pantic, introduces MAHNOB-HCI, a multimodal affective ...
Articles were invited in the area of mono and multimodal resources for research on emotion and affect. ...
doi:10.1109/t-affc.2012.10
fatcat:dtrp4z3jfbgidib6d6sff5kbme
Toward multimodal fusion of affective cues
2006
Proceedings of the 1st ACM international workshop on Human-centered multimedia - HCM '06
In this paper we provide a state of the art of multimodal fusion and describe one way to implement a generic framework for multimodal emotion recognition. ...
The system is developed within the MAUI framework [31] and Scherer's Component Process Theory (CPT) [49, 50, 51, 24, 52], with the goal of being modular and adaptive. ...
Multimodal Affective Cues Fusion: More recently, some works have described how multimodal fusion mechanisms can be used for emotion/affect recognition; see for example works from Pantic, Sebe, Li, Busso and ...
doi:10.1145/1178745.1178762
fatcat:vmgb4bommnd4dd5bpz3t55rxxm
Multimodal Sentiment Analysis: A Comparison Study
2018
Journal of Computer Science
The automatic analysis of multimodal opinion involves a deep understanding of natural language, audio, and video processing, all of which researchers are continuing to improve. ...
Sentiment analysis is mainly focused on the automatic recognition of opinions' polarity, as positive or negative. ...
Ethics We testify that this research paper submitted to the Journal of Computer Science, title: "Multimodal Sentiment Analysis: A Comparison Study" has not been published in whole or in part elsewhere. ...
doi:10.3844/jcssp.2018.804.818
fatcat:wgchjlvjavenxptlitnpgktb7q
A Multimodal Emotion Sensing Platform for Building Emotion-Aware Applications
[article]
2019
arXiv
pre-print
We present a multimodal affect and context sensing platform. ...
This paper describes the different audio, visual and application processing components and explains how the data is stored and/or broadcast for other applications to consume. ...
Acknowledgments The authors would like to thank Michael Gamon, Mark Encarnacion, Ivan Tashev, Cha Zhang, Emad Barsoum, Dan Bohus and Nick Saw for the contribution of models and PSI components that are ...
arXiv:1903.12133v1
fatcat:xn33rcxypzg33gv4fwzpltpk2i
Construction of Spontaneous Emotion Corpus from Indonesian TV Talk Shows and Its Application on Multimodal Emotion Recognition
2018
IEICE transactions on information and systems
We perform multimodal emotion recognition utilizing the predictions of three modalities: acoustic, semantic, and visual. ...
When compared to the unimodal result, in the multimodal feature combination we attain identical accuracy for arousal at 92.6%, and a significant improvement for the valence classification task at ...
This means that language-specific research is necessary for application in a certain language. In Asian languages, findings in affective computing continue to emerge. ...
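The entry above combines the predictions of three modalities (acoustic, semantic, visual). One common way to do this is decision-level (late) fusion; a minimal sketch follows, where the class labels and probability values are hypothetical placeholders, not the paper's actual figures:

```python
# Minimal sketch of late (decision-level) fusion over the three modalities
# named in the abstract: acoustic, semantic, and visual. The probabilities
# below are made-up placeholders; a real system would obtain them from
# trained unimodal classifiers.

def fuse_predictions(modality_probs):
    """Average per-class probabilities across modalities and pick the argmax."""
    n = len(modality_probs)
    classes = modality_probs[0].keys()
    fused = {c: sum(p[c] for p in modality_probs) / n for c in classes}
    return max(fused, key=fused.get), fused

acoustic = {"high_arousal": 0.7, "low_arousal": 0.3}
semantic = {"high_arousal": 0.4, "low_arousal": 0.6}
visual   = {"high_arousal": 0.8, "low_arousal": 0.2}

label, fused = fuse_predictions([acoustic, semantic, visual])
print(label)  # high_arousal
```

Averaging probabilities is only one fusion rule; majority voting or a learned meta-classifier over the unimodal outputs are common alternatives.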
doi:10.1587/transinf.2017edp7362
fatcat:sxkvqnmp7fcp7av3avhp2iqmge
A MultiModal Social Robot Toward Personalized Emotion Interaction
[article]
2021
arXiv
pre-print
Moreover, the affective states of human users can be an indicator of the level of engagement and successful interaction, suitable for the robot to use as a reward factor to optimize robotic behaviors ...
This study demonstrates a multimodal human-robot interaction (HRI) framework with reinforcement learning to enhance the robotic interaction policy and personalize emotional interaction for a human user ...
Methodology This section will first formulate the natural language processing (NLP) problem we will solve with the RL agent based on physiological rewards. ...
arXiv:2110.05186v1
fatcat:dtvb5mcv35glhco7dhydyapmqy
Multimodal Interaction System for Home Appliances Control
2020
International Journal of Interactive Mobile Technologies
Speech recognition is performed with Google Cloud Speech, gesture recognition with K-Means clustering, and the dialogue system with a finite state machine. ...
The sensor used to capture speech (in the Indonesian language) and gestures from users is the Kinect v2. ...
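The pipeline in this entry drives its dialogue system with a finite state machine. A minimal sketch of such a dialogue FSM follows; the state names, intents, and transition table are hypothetical illustrations, not the system's actual design:

```python
# Minimal sketch of a finite-state dialogue manager, as described in the
# entry above. States, intents, and transitions are invented examples.

class DialogueFSM:
    """Finite state machine mapping (state, recognized intent) to a next state."""

    def __init__(self):
        # Hypothetical transition table: (state, intent) -> next state
        self.transitions = {
            ("idle", "wake"): "listening",
            ("listening", "turn_on_lamp"): "confirm",
            ("confirm", "yes"): "execute",
            ("confirm", "no"): "listening",
            ("execute", "done"): "idle",
        }
        self.state = "idle"

    def step(self, intent):
        """Advance the dialogue on an intent (from speech or gesture recognition);
        unrecognized intents leave the state unchanged."""
        self.state = self.transitions.get((self.state, intent), self.state)
        return self.state


fsm = DialogueFSM()
print(fsm.step("wake"))          # listening
print(fsm.step("turn_on_lamp"))  # confirm
print(fsm.step("yes"))           # execute
```

In a multimodal setting, the same `step` call can consume intents produced by either the speech recognizer or the gesture classifier, which is what makes the FSM a convenient shared dialogue backbone.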
Acknowledgement The first author acknowledges support from the Lembaga Pengelola Dana Pendidikan (Indonesia Endowment Fund for Education) scholarship, Ministry of Finance, The Republic of Indonesia. ...
doi:10.3991/ijim.v14i15.13563
fatcat:cwxr7yv7vbbklpqtrtwxjs3qaq
A Proposal for Processing and Fusioning Multiple Information Sources in Multimodal Dialog Systems
[chapter]
2014
Communications in Computer and Information Science
Multimodal dialog systems can be defined as computer systems that process two or more user input modes and combine them with multimedia system output. ...
We describe an application of our technique to build multimodal systems that process user's spoken utterances, tactile and keyboard inputs, and information related to the context of the interaction. ...
such as speech interaction and natural language processing [1, 2]. ...
doi:10.1007/978-3-319-07767-3_16
fatcat:addywh5y3bfgvb3hyggmlr3npu
Multimodal Approach for Emotion Recognition Using a Formal Computational Model
2013
International Journal of Applied Evolutionary Computation
We elaborate a multimodal emotion recognition method from physiological data based on signal-processing algorithms. ...
In this paper, we present a multimodal approach for the emotion recognition from many sources of information. ...
Jennifer Healey of the Affective Computing Group at MIT for providing the experimental data employed in this research. ...
doi:10.4018/jaec.2013070102
fatcat:enevytkrubb3vpztlilviae2su
A review of affective computing: From unimodal analysis to multimodal fusion
2017
Information Fusion
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence, natural language processing ...
In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. ...
associated with natural language opinions. ...
doi:10.1016/j.inffus.2017.02.003
fatcat:ytebhjxlz5bvxcdghg4wxbvr6a
From VoiceXML to multimodal mobile Apps: development of practical conversational interfaces
2016
Advances in Distributed Computing and Artificial Intelligence Journal
Keywords: Conversational interfaces; VoiceXML; Mobile devices; Android. Speech Technologies and Language Processing have made possible the development of a number of new applications which are based on conversational ...
multimodal services by means of mobile devices (for instance, using the facilities provided by the Android OS). ...
Natural language generation is the process of obtaining texts in natural language from a non-linguistic representation. ...
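The definition of natural language generation above can be illustrated with a toy template-based generator; the semantic-frame slots and the template are invented for illustration, not taken from the paper:

```python
# Toy template-based sketch of natural language generation as defined above:
# turning a non-linguistic representation (a dict of slots) into text.
# Slot names and template are hypothetical.

def generate(frame):
    """Realize a semantic frame as a sentence via a fixed template."""
    template = "There are {count} flights from {origin} to {destination}."
    return template.format(**frame)

frame = {"count": 3, "origin": "Madrid", "destination": "Granada"}
print(generate(frame))  # There are 3 flights from Madrid to Granada.
```

Template filling is the simplest NLG strategy; grammar-based realizers and neural generators start from the same kind of non-linguistic input but produce more varied surface text.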
doi:10.14201/adcaij2016534353
fatcat:n7bovgvlgfgwtinbqvytv4eq7q
Multimodal interaction: A review
2014
Pattern Recognition Letters
Multimodal human-computer interaction has sought for decades to endow computers with similar capabilities, in order to provide more natural, powerful, and compelling interactive experiences. ...
Finally, we list challenges that lie ahead for research in multimodal human-computer interaction. ...
Advantages of multimodal interaction Multimodal interaction systems aim to support the recognition of naturally occurring forms of human language and behavior through the use of recognition-based technologies ...
doi:10.1016/j.patrec.2013.07.003
fatcat:xhbzycgarbd3vjnptybvdoezcy