
Vibrotactile Rendering of Human Emotions on the Manifold of Facial Expressions

Shafiq Ur Réhman, Li Liu
2008 Journal of Multimedia  
Facial expressions play an important role in everyday social interaction.  ...  The Locally Linear Embedding (LLE) algorithm is extended to compute the manifold of facial expressions, which is used to control the vibration of motors to render emotions.  ...  Facial features can be used not only in face identification but also in emotion recognition [2], [3].  ...  (A brief code sketch follows this entry.)
doi:10.4304/jmm.3.3.18-25 fatcat:352jzexyergvbakszxapp4tvgu
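
The Réhman & Liu entry above extends Locally Linear Embedding (LLE) to place facial-expression images on a low-dimensional manifold whose coordinates then drive vibration motors. A minimal sketch of the embedding step, using stock LLE from scikit-learn rather than the authors' extended variant; the dataset and the coordinate-to-motor mapping are placeholders:

```python
# Minimal sketch: embed vectorized face images with stock LLE from
# scikit-learn. The paper extends LLE; this uses the standard algorithm
# only, and `faces` is a random placeholder for real face images.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
faces = rng.random((500, 64 * 64))   # 500 hypothetical 64x64 faces, flattened

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
coords = lle.fit_transform(faces)    # 2-D manifold coordinate per image

# Hypothetical mapping: rescale manifold coordinates into motor duty
# cycles in [0, 1]; the real calibration is application-specific.
duty = (coords - coords.min(axis=0)) / (np.ptp(coords, axis=0) + 1e-9)
print(duty[:3])
```

A real pipeline would replace the random array with aligned, vectorized face images and calibrate the motor mapping; only the LLE call reflects the named technique.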

A 3-D Audio-Visual Corpus of Affective Communication

Gabriele Fanelli, Juergen Gall, Harald Romsdorfer, Thibaut Weise, Luc Van Gool
2010 IEEE transactions on multimedia  
In this work, we present a new audio-visual corpus for possibly the two most important modalities used by humans to communicate their emotional states, namely speech and facial expression in the form of  ...  The corpus is a valuable tool for applications like affective visual speech synthesis or view-independent facial expression recognition.  ...  The proposed corpus is valuable for applications like emotional visual speech modeling, but also for view-independent facial expression recognition, or audio-visual emotion recognition.  ... 
doi:10.1109/tmm.2010.2052239 fatcat:yrpiyqkmvjf3nioh37q5lecsii

Multimodal emotion estimation and emotional synthesize for interaction virtual agent

Minghao Yang, Jianhua Tao, Hao Li, Kaihui Mu
2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems  
In this study, we create a 3D interactive virtual character based on multi-modal emotion recognition and rule-based emotion synthesis techniques.  ...  For the output module of the agent, the voice is generated by a TTS (Text-to-Speech) system from freely given text.  ...  without any emotion recognition for users and without emotion output for the agent; 2) emotion-based speech conversation with bimodal emotion recognition from users' facial expressions and audio input.  ...  (A toy rule-table sketch follows this entry.)
doi:10.1109/ccis.2012.6664394 dblp:conf/ccis/YangTLM12 fatcat:fiyjm7wdojfyhflr2lnrnuczwa
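
The Yang et al. entry describes rule-based synthesis driven by bimodal (facial plus audio) emotion recognition. A toy sketch of such a rule table; the label set, the reactions, and the facial-channel fallback are illustrative assumptions, not the paper's actual rules:

```python
# Toy rule-based mapping from bimodal recognition results to an agent
# reaction. Labels, reactions, and the fallback rule are illustrative
# assumptions, not taken from the paper.
RULES = {
    ("happy", "happy"): "smile_and_cheerful_voice",
    ("sad", "sad"): "concerned_face_soft_voice",
    ("happy", "neutral"): "mild_smile",
    ("angry", "angry"): "calming_gesture",
}

def agent_reaction(face_emotion: str, speech_emotion: str) -> str:
    # Fall back to the facial channel alone when the pair is uncovered.
    return RULES.get((face_emotion, speech_emotion),
                     RULES.get((face_emotion, face_emotion), "neutral_idle"))

print(agent_reaction("happy", "happy"))   # smile_and_cheerful_voice
print(agent_reaction("sad", "neutral"))   # facial fallback: concerned_face_soft_voice
```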

Face to virtual face

N.M. Thalmann, P. Kalra, M. Escher
1998 Proceedings of the IEEE  
It includes the analysis of the facial expression and speech of the cloned face, which can be used to elicit a response from the autonomous virtual human with both verbal and nonverbal facial movements  ...  The objective is for one's representative to look, talk, and behave like oneself in the virtual world.  ...  Facial-Expression Recognition: Accurate recognition of facial expressions from a sequence of images is complex. The difficulty is greatly increased when the task must be done in real time.  ... 
doi:10.1109/5.664277 fatcat:xpadzpwckjbxbmhm5eko55xvly

Emotion Expression of Avatar through Eye Behaviors, Lip Synchronization and MPEG4 in Virtual Reality based on Xface Toolkit: Present and Future

Ahmad Hoirul Basori, Itimad Raheem Ali
2013 Procedia - Social and Behavioral Sciences  
In this paper, recent advances in 3D facial expression are introduced, focusing on the Xface platform toolkit, which supports 3D talking-avatar synthesis by implementing text-to-speech  ...  Eye movement combined with lip synchronization and emotional facial expression reveals an interesting research field that gives information about verbal and nonverbal behaviors occurring  ...  Fig. 2: Part of the facial feature points defined in the MPEG-4 standard [7]. Fig. 3: Avatar expression process of Xface [3]. Table 1: Xface emotional expression results.  ...  (A schematic FAP sketch follows this entry.)
doi:10.1016/j.sbspro.2013.10.290 fatcat:bjbsqftfl5bfnh3zc26v56wao4
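
The Xface entry relies on MPEG-4 facial animation, where expressions are driven by numeric Facial Animation Parameters (FAPs) attached to standardized feature points. A schematic sketch of keyframing a smile as FAP amplitudes; the parameter names resemble real MPEG-4 FAPs, but the values and the keyframe format are illustrative, not taken from Xface:

```python
# Schematic MPEG-4-style keyframing: map an emotion onto amplitudes of
# named facial animation parameters. Names echo real FAPs; the values
# and the track format are placeholders, not Xface's actual tables.
from dataclasses import dataclass, field

@dataclass
class FapKeyframe:
    time_ms: int
    faps: dict = field(default_factory=dict)  # FAP name -> amplitude

SMILE = [
    FapKeyframe(0,   {"stretch_l_cornerlip": 0.0, "stretch_r_cornerlip": 0.0}),
    FapKeyframe(300, {"stretch_l_cornerlip": 0.8, "stretch_r_cornerlip": 0.8,
                      "raise_l_cornerlip": 0.4, "raise_r_cornerlip": 0.4}),
]

def sample(track, t_ms):
    """Linear interpolation between first and last keyframe (toy case)."""
    a, b = track[0], track[-1]
    w = min(max((t_ms - a.time_ms) / (b.time_ms - a.time_ms), 0.0), 1.0)
    return {k: a.faps.get(k, 0.0) * (1 - w) + v * w for k, v in b.faps.items()}

print(sample(SMILE, 150))  # amplitudes halfway into the smile
```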

3D shape estimation in video sequences provides high precision evaluation of facial expressions

László A. Jeni, András Lőrincz, Tamás Nagy, Zsolt Palotai, Judit Sebők, Zoltán Szabó, Dániel Takács
2012 Image and Vision Computing  
Person-independent and pose-invariant estimation of facial expressions and action unit (AU) intensities is important for situation analysis and for automated video annotation.  ...  PMS from 3D CLM offers pose-invariant emotion estimation, which we studied by rendering a 3D emotional database for different poses and different subjects from the BU-4DFE database.  ...  Acknowledgments: We are grateful to Jason Saragih for providing his CLM code for our work.  ... 
doi:10.1016/j.imavis.2012.02.003 fatcat:7dnrt5yccbe2daj6gmul52pe2a

Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation [article]

Lincheng Li, Suzhen Wang, Zhimeng Zhang, Yu Ding, Yixing Zheng, Xin Yu, Changjie Fan
2021 arXiv   pre-print
In this paper, we propose a novel text-based talking-head video generation framework that synthesizes high-fidelity facial expressions and head motions in accordance with contextual sentiments as well  ...  It takes the animation parameters as input and exploits an attention mask to manipulate facial expression changes for the input individuals.  ...  For example, humans unconsciously use facial expressions and head movements to express their emotions (Mignault  ...  Fig.: input text "Get out! You shouldn't be here!"  ... 
arXiv:2104.07995v2 fatcat:foltetmbjzhsdk2pgfs4x7j5ky

Visual Emotion-Aware Cloud Localization User Experience Framework Based on Mobile Location Services

Aiman Mamdouh Ayyal Awwad
2021 International Journal of Interactive Mobile Technologies  
Recently, the study of emotion recognition models has increased in the human-computer interaction field.  ...  Harnessing emotion recognition in mobile apps can dramatically enhance users' experience.  ...  We also thank the students who participated in the study. Finally, we would like to point out that the figures in this work were created using the Creately app.  ... 
doi:10.3991/ijim.v15i14.20061 fatcat:ytwj36ponvcrtpkmeoultldv6a

Neural Emotion Director: Speech-preserving semantic control of facial expressions in "in-the-wild" videos [article]

Foivos Paraperas Papantoniou, Panagiotis P. Filntisis, Petros Maragos, Anastasios Roussos
2021 arXiv   pre-print
In this paper, we introduce a novel deep learning method for photo-realistic manipulation of the emotional state of actors in "in-the-wild" videos.  ...  Finally, the altered facial expressions are used to photo-realistically manipulate the facial region in the input scene based on an especially-designed neural face renderer.  ...  3D-based Emotion Manipulator: Following the 3D Face Analysis step, information related to the facial expression in a frame is encoded in the expression vector e ∈ R^50 and the 3 jaw parameters  ...  (A schematic manipulator sketch follows this entry.)
arXiv:2112.00585v1 fatcat:llumjxtldffbromkjq5mrplavq
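
The snippet above states that per-frame expression information is encoded as a vector e ∈ R^50 plus 3 jaw parameters. A schematic sketch of a label-conditioned manipulator over that 53-D code; the MLP shape, the one-hot conditioning, and the 7-emotion target set are assumptions for illustration, not the paper's architecture:

```python
# Schematic emotion manipulator over a 53-D per-frame code (50
# expression + 3 jaw parameters, per the snippet). Network shape and
# one-hot emotion conditioning are illustrative assumptions.
import torch
import torch.nn as nn

N_EXPR, N_JAW, N_EMOTIONS = 50, 3, 7  # 7 target emotions is an assumption

manipulator = nn.Sequential(
    nn.Linear(N_EXPR + N_JAW + N_EMOTIONS, 128),
    nn.ReLU(),
    nn.Linear(128, N_EXPR + N_JAW),   # edited expression + jaw code
)

frame_code = torch.randn(1, N_EXPR + N_JAW)              # from 3D face analysis
target = nn.functional.one_hot(torch.tensor([3]), N_EMOTIONS).float()
edited = manipulator(torch.cat([frame_code, target], dim=1))
print(edited.shape)  # torch.Size([1, 53])
```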

Performance-Driven Facial Animation: Basic Research on Human Judgments of Emotional State in Facial Avatars

A.A. Rizzo, U. Neumann, R. Enciso, D. Fidaleo, J.Y. Noh
2001 CyberPsychology & Behavior  
The emotional stimulus induction involved presenting text-based, still-image, and video clips, previously rated to induce facial expressions for the six universals of facial expression, to subjects  ...  facial expression-rendering methods (www.  ...  This was done to maximize the range of facial expressions that we had to choose from for presentation to human facial-expression raters in phase 3.  ... 
doi:10.1089/109493101750527033 pmid:11708727 fatcat:tesz2tknxnfxzofmklqtuiqhiq

A Real-Time Facial Expression Recognition System for Online Games

Ce Zhan, Wanqing Li, Philip Ogunbona, Farzad Safaei
2008 International Journal of Computer Games Technology  
In this paper, we propose an automatic expression recognition system that can be integrated into an MOG to control the facial expressions of avatars.  ...  Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars.  ...  Thus, the facial expression recognition system allows rendering of the appropriate avatar with the required emotion in clients' world views.  ...  (A minimal client-side sketch follows this entry.)
doi:10.1155/2008/542918 fatcat:jyoixkkx3zfbjhyren5vvw465u
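
The Zhan et al. entry proposes client-side expression recognition whose output label drives the avatar's face in the game world. A minimal sketch of that loop using OpenCV's stock Haar-cascade face detector; the expression classifier is a stub, and the JSON message format is a placeholder, not the paper's protocol:

```python
# Minimal client-side loop: detect the face, classify its expression
# (stub), and emit a label for the game server to drive the avatar.
# The classifier and message format are placeholders.
import json
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_img) -> str:
    return "neutral"  # stub: a trained expression classifier goes here

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:
        label = classify_expression(gray[y:y + h, x:x + w])
        # Placeholder payload; a real client would send this to the server.
        print(json.dumps({"avatar_expression": label}))
```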

Perception of congruent facial and kinesthetic expressions of emotions

Yoren Gaffary, Jean-Claude Martin, Mehdi Ammi
2015 International Conference on Affective Computing and Intelligent Interaction (ACII)  
The use of virtual avatars, through facial or gestural expressions, is considered to be a main support for affective communication.  ...  We also observed a link between the recognition rate of emotions expressed with the visual modality (resp. kinesthetic modality) and the magnitude of that emotion's pleasure dimension (resp. arousal dimension  ...  Future research will explore how to use these results to improve the recognition of emotions that are close along the pleasure dimension and present similar visual expressions.  ... 
doi:10.1109/acii.2015.7344697 dblp:conf/acii/GaffaryMA15 fatcat:wzqnmjxkqzf6zlf74pltojjb2y

Sensitive Talking Heads [Applications Corner]

T.S. Huang, M.A. Hasegawa-Johnson, S.M. Chu, Zhihong Zeng, Hao Tang
2009 IEEE Signal Processing Magazine  
Automatic recognition and synthesis of emotionally nuanced speech, on the other hand, are still topics of active research. This column describes experiments in emotive spoken-language user interfaces.  ...  In order to give the user interface a fighting chance, why not give it a certain amount of emotional sensitivity?  ...  For example, the image intensity of the mouth region may reflect aspects of the 3-D shape of the lips that are not represented in the 2-D geometric features. (A small feature-fusion sketch follows this entry.)  ... 
doi:10.1109/msp.2009.932562 fatcat:4bxlfcydwfasvhcd56oehoh7s4
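
The last fragment above observes that raw mouth-region intensity can carry 3-D lip-shape cues that 2-D geometric (landmark) features miss. A small sketch of augmenting a geometric feature vector with a normalized intensity histogram of the mouth region; the ROI coordinates, histogram size, and placeholder landmark vector are assumptions:

```python
# Augment 2-D geometric features with mouth-region intensity, per the
# snippet's observation. ROI location and histogram size are assumptions.
import numpy as np

def appearance_features(gray_face: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized intensity histogram of a fixed lower-center mouth ROI."""
    h, w = gray_face.shape
    mouth = gray_face[int(0.65 * h):int(0.95 * h), int(0.25 * w):int(0.75 * w)]
    hist, _ = np.histogram(mouth, bins=bins, range=(0, 255), density=True)
    return hist

rng = np.random.default_rng(1)
face = rng.integers(0, 256, (128, 128)).astype(np.uint8)  # stand-in face crop
geometric = rng.random(20)                  # placeholder landmark features
fused = np.concatenate([geometric, appearance_features(face)])
print(fused.shape)  # (36,)
```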

Joint Audio-Text Model for Expressive Speech-Driven 3D Facial Animation [article]

Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura
2021 arXiv   pre-print
In this work, we present a joint audio-text model to capture the contextual information for expressive speech-driven 3D facial animation.  ...  In contrast to prior approaches which learn phoneme-level features from the text, we investigate high-level contextual text features for speech-driven 3D facial animation.  ...  the emotional state, as such text representations would be useful for facial expression synthesis. To this end, we build a fusion layer, named Tensor Fusion (Zadeh et al. 2017), in our model to  ...  (A minimal Tensor Fusion sketch follows this entry.)
arXiv:2112.02214v2 fatcat:77tyq4cslfatrghj7aypwnmnuy
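
The fragment above names a Tensor Fusion layer (Zadeh et al. 2017), which fuses modalities by augmenting each embedding with a constant 1 and taking their outer product, so both unimodal and bimodal interaction terms survive. A minimal two-modality sketch; the embedding dimensions are arbitrary placeholders:

```python
# Minimal two-modality Tensor Fusion (Zadeh et al. 2017): append 1 to
# each embedding, take the outer product, flatten. Dimensions are
# arbitrary placeholders.
import numpy as np

def tensor_fusion(audio_emb: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    a = np.concatenate([audio_emb, [1.0]])  # the 1 keeps unimodal audio terms
    t = np.concatenate([text_emb, [1.0]])   # the 1 keeps unimodal text terms
    return np.outer(a, t).ravel()           # unimodal + bimodal interactions

fused = tensor_fusion(np.random.rand(8), np.random.rand(16))
print(fused.shape)  # ((8 + 1) * (16 + 1),) = (153,)
```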

Special Issue on Multimodal Affective Interaction

Nicu Sebe, Hamid Aghajan, Thomas Huang, Nadia Magnenat-Thalmann, Caifeng Shan
2010 IEEE transactions on multimedia  
Calix et al. study automatic emotion detection in descriptive sentences and how this can be used to tune facial expression parameters for 3-D character generation in "Emotion Recognition in Text for 3-D Facial Expression Rendering", where mutual information is adopted for word feature selection. (A small MI feature-selection sketch follows this entry.)  ...  Caifeng Shan received the B.Eng. degree in computer science from the University of Science and Technology of China (USTC) and the M.Eng. degree in pattern recognition and intelligent systems from the Institute  ... 
doi:10.1109/tmm.2010.2052315 fatcat:kvyrbb4kubazdjfns3w3dn442m
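
The special-issue summary mentions Calix et al.'s use of mutual information for word feature selection before tuning 3-D facial-expression parameters. A small sketch of MI-based word selection with scikit-learn; the toy corpus and emotion labels are invented for illustration:

```python
# MI-based word feature selection, as mentioned for Calix et al.;
# the tiny corpus and emotion labels are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

sentences = ["she smiled warmly", "he slammed the door",
             "tears ran down her face", "they laughed together"]
labels = ["joy", "anger", "sadness", "joy"]

vec = CountVectorizer()
X = vec.fit_transform(sentences)                       # word-count features
mi = mutual_info_classif(X, labels, discrete_features=True, random_state=0)

top = np.argsort(mi)[::-1][:5]                         # highest-MI words
print([vec.get_feature_names_out()[i] for i in top])
```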
Showing results 1 – 15 of 8,872