
Statistical synthesis of facial expressions for the portrayal of emotion

Lisa Gralewski, Neill Campbell, Barry Thomas, Colin Dalton, David Gibson, University of Bristol
2004 Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia - GRAPHITE '04  
This paper presents a novel technique for the generation of 'video textures' to display human emotion.  ...  This is achieved by a method which uses existing video footage to synthesise new sequences of coherent facial expression and head motions.  ...  Acknowledgements The SOM code used here is based on the SOM toolbox for matlab: URL:  ... 
doi:10.1145/988834.988867 dblp:conf/graphite/GralewskiCTDG04 fatcat:4yv4yodefrbinp5g4mqlakwdna

Vocal communication of emotion: A review of research paradigms

K. Scherer
2003 Speech Communication  
In addition, the advantages and disadvantages of research paradigms for the induction or observation of emotional expression in voice and speech and the experimental manipulation of vocal cues are discussed  ...  In particular, it is suggested to use the Brunswikian lens model as a base for research on the vocal communication of emotion.  ...  The development of synthesis and copy synthesis methods in the domain of speech technology has provided researchers in the area of vocal expression of emotion with a remarkably effective tool, allowing  ... 
doi:10.1016/s0167-6393(02)00084-5 fatcat:tdwwdj2pvjgldmyddina6lpz4u

Human and machine validation of 14 databases of dynamic facial expressions

Eva G. Krumhuber, Dennis Küster, Shushi Namba, Lina Skora
2020 Behavior Research Methods  
The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior.  ...  For this, 14 dynamic databases were selected that featured facial expressions of the basic six emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form.  ...  Acknowledgements The authors would like to thank Jasmine Or, Sylvie Simons, and Gerda Storpirstyte for their help with data collection.  ... 
doi:10.3758/s13428-020-01443-y pmid:32804342 fatcat:bgfjw75umfcxrgp2dwakpgvf6a

FACSGen: A Tool to Synthesize Emotional Facial Expressions Through Systematic Manipulation of Facial Action Units

Etienne B. Roesch, Lucas Tamarit, Lionel Reveret, Didier Grandjean, David Sander, Klaus R. Scherer
2010 Journal of nonverbal behavior  
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals.  ...  Keywords Emotion · Facial expression · Software · Research material · Facial action coding system · FACS FACSGen is a software developed at the Swiss Centre for Affective Sciences for research purposes  ...  2009 for the facial expression of pain).  ... 
doi:10.1007/s10919-010-0095-9 fatcat:rcjftnvz5nalvb74us2mfnivqi

Reliable facial muscle activation enhances recognizability and credibility of emotional expression

Marc Mehu, Marcello Mortillaro, Tanja Bänziger, Klaus R. Scherer
2012 Emotion  
Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed.  ...  Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges.  ...  We thank Eva Krumhuber for her contribution to the reliability coding of emotional expressions.  ... 
doi:10.1037/a0026717 pmid:22642350 fatcat:kvx2qdfrjngmnkkplv7m7tagom

In the eye of the beholder? Universality and cultural specificity in the expression and perception of emotion

Klaus R. Scherer, Elizabeth Clark-Polner, Marcello Mortillaro
2011 International Journal of Psychology  
The question of cultural universality versus specificity in emotional expression has been a hot topic of debate for more than half a century, but, despite a sizeable amount of empirical research produced  ...  Do members of different cultures express (or "encode") emotions in the same fashion? How well can members of distinct cultures recognize (or "decode") each other's emotion expressions?  ...  The table summarizes data from studies using realistic (not drawn or animated) portrayals of facial expressions of emotion to compare identification accuracy for more than one emotion, using at least  ... 
doi:10.1080/00207594.2011.626049 pmid:22126090 fatcat:uv233nd65nfdba4d3mx46gsw6m

Automated recognition of complex categorical emotions from facial expressions and head motions

Andra Adams, Peter Robinson
2015 2015 International Conference on Affective Computing and Intelligent Interaction (ACII)  
The classifier has been integrated into an expression training interface which gives meaningful feedback to humans on their portrayal of complex emotions through face and head movements.  ...  On a simplified 6-choice classification problem, the classifier had an accuracy of 0.64 compared with the validated human accuracy of 0.74.  ...  The authors would like to thank Erroll Wood for his eye gaze code, and David Dobias, Tadas Baltrušaitis and Marwa Mahmoud for many helpful discussions on this research.  ... 
doi:10.1109/acii.2015.7344595 dblp:conf/acii/Adams015 fatcat:pybyufvhb5hrflxwily3buy3mm

Laughter Research: A Review of the ILHAIRE Project [chapter]

Stéphane Dupont, Hüseyin Çakmak, Will Curran, Thierry Dutoit, Jennifer Hofmann, Gary McKeown, Olivier Pietquin, Tracey Platt, Willibald Ruch, Jérôme Urbain
2016 Intelligent Systems Reference Library  
Acknowledgements We would like to acknowledge all colleagues within the ILHAIRE project, from the following partner organisations: University of Mons (Belgium), Télécom Paris-Tech / Centre National de  ...  expressive behaviour to the emotional context of the interaction.  ...  For schadenfreude laughter, two hypotheses were put forward [48]: Schadenfreude may either be a blend of a positive and negative emotion (entailing facial features of both), or expressed by a joy display  ... 
doi:10.1007/978-3-319-31056-5_9 fatcat:46ef33zknba4jjuasd3tqinpam

Expression of emotion in voice and music

Klaus R. Scherer
1995 Journal of Voice  
Finally, based on speculations about the joint origin of speech and vocal music in nonlinguistic affect vocalizations, similarities of emotion expression in speech and music are discussed.  ...  Vocal communication of emotion is biologically adaptive for socially living species and has therefore evolved in a phylogenetically continuous manner.  ...  and the rich information content of emotion in facial expressions (8).  ... 
doi:10.1016/s0892-1997(05)80231-0 pmid:8541967 fatcat:hwbpup4hdzaehilqdqasnlwzvy

Studying the dynamics of emotional expression using synthesized facial muscle movements

Thomas Wehrle, Susanne Kaiser, Susanne Schmidt, Klaus R. Scherer
2000 Journal of Personality and Social Psychology  
Synthetic images of facial expression were used to assess whether judges can correctly recognize emotions exclusively on the basis of configurations of facial muscle movements.  ...  In addition, the effect of static versus dynamic presentation of the expressions was studied.  ...  of the facial expression of emotion.  ... 
doi:10.1037//0022-3514.78.1.105 pmid:10653509 fatcat:6q7rgp4habb65cgbvb6gyuwolm

Appraisal-driven facial actions as building blocks for emotion inference

Klaus R. Scherer, Marcello Mortillaro, Irene Rotondi, Ilaria Sergi, Stéphanie Trznadel
2018 Journal of Personality and Social Psychology  
As a strong case can be made for an appraisal theory account of emotional expression, which holds that appraisal results directly determine the nature of facial muscle actions, we claim that observers  ...  We conclude by highlighting the importance of adopting a theory-based experimental approach in future research, focusing on the dynamic unfolding of facial expressions of emotion.  ...  Emotion portrayals. Banse and Scherer (1996) asked professional actors to portray the expressions of 14 major emotions.  ... 
doi:10.1037/pspa0000107 pmid:29461080 fatcat:vx2pg4ewibf5find6inznki6vi

Communication of emotions in vocal expression and music performance: Different channels, same code?

Patrik N. Juslin, Petri Laukka
2003 Psychological bulletin  
The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions.  ...  This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated  ...  Types of method included portrayal (P), manipulated portrayal (M), synthesis (S), natural speech sample (N), and induction of emotion (I).  ... 
doi:10.1037/0033-2909.129.5.770 pmid:12956543 fatcat:ysixirngw5ezjn5onrbko22wye

Recent developments in social signal processing

Albert Ali Salah, Maja Pantic, Alessandro Vinciarelli
2011 2011 IEEE International Conference on Systems, Man, and Cybernetics  
Nowadays, computers are not only the new interaction partners of humans, but also a privileged interaction medium for social exchange between humans.  ...  Social signal processing has the ambitious goal of bridging the social intelligence gap between computers and humans.  ...  ACKNOWLEDGMENT The research that has led to this work has been supported by the European Community's Seventh Framework Program (FP7/2007-2013), under grant agreement no. 231287 (SSPNet).  ... 
doi:10.1109/icsmc.2011.6083695 dblp:conf/smc/SalahPV11 fatcat:f44zwpkgjfgevlxgdh6cv4w74a

Can a Humanoid Face be Expressive? A Psychophysiological Investigation

Nicole Lazzeri, Daniele Mazzei, Alberto Greco, Annalisa Rotesi, Antonio Lanatà, Danilo Emilio De Rossi
2015 Frontiers in Bioengineering and Biotechnology  
This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions.  ...  Indeed, in all cultures, facial expressions are the most universal and direct signs to express innate emotional cues.  ...  Funding: this work was partially funded by the European Commission under the 7th Framework Program projects EASEL, "Expressive Agents for Symbiotic Education and Learning," under Grant 611971-FP7-ICT-  ... 
doi:10.3389/fbioe.2015.00064 pmid:26075199 pmcid:PMC4443734 fatcat:svjlvvypw5hqthswkpbfnhuwlm

Path Models of Vocal Emotion Communication

Tanja Bänziger, Georg Hosoya, Klaus R. Scherer, David Reby
2015 PLoS ONE  
The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of  ...  More recently, Scherer [36] has formalized the earlier suggestion for an extension of the lens model as a tripartite emotion expression and perception (TEEP) model (see Fig 1) .  ...  case of facial expression and arousal in the case of vocal expression (see Table A in S1 File-Appendix).  ... 
doi:10.1371/journal.pone.0136675 pmid:26325076 pmcid:PMC4556609 fatcat:6uxek2lnijcsxdyn77ejfwujii