
Convergence of behavioral and cardiac indicators of distress in toddlerhood: A systematic review and narrative synthesis

Jordana A. Waxman, Miranda G. DiLorenzo, Rebecca R. Pillai Riddell
2020 International Journal of Behavioral Development  
HR was consistently positively (D = .05 to .54) related to expressed emotion behaviors. No other cardiac and behavioral indicators were consistently related.  ...  rate [HR], heart period, respiratory sinus arrhythmia, pre-ejection period) and type of behavior measured (i.e., coding for expressed emotion behaviors vs. emotion regulation behaviors).  ...  Effect sizes ranged from D = .05 to D = .54.  ...
doi:10.1177/0165025420922618 fatcat:pbfoaaucivbyjcu7xolpbwoffi
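If, as is common in such syntheses, the reported D values are Cohen's d converted from correlation coefficients, the standard conversion is d = 2r/√(1−r²). A minimal sketch with illustrative values only (not data from the review):

```python
import math

def r_to_cohens_d(r: float) -> float:
    """Convert a Pearson correlation r to a Cohen's d effect size.

    Standard conversion (Cohen, 1988): d = 2r / sqrt(1 - r^2).
    """
    return 2 * r / math.sqrt(1 - r ** 2)

# Illustrative r values chosen to land near the review's D range.
for r in (0.025, 0.26):
    print(f"r = {r:.3f} -> d = {r_to_cohens_d(r):.2f}")  # 0.05 and 0.54
```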

EAVA: A 3D Emotive Audio-Visual Avatar

Hao Tang, Yun Fu, Jilin Tu, Thomas S. Huang, Mark Hasegawa-Johnson
2008 2008 IEEE Workshop on Applications of Computer Vision  
Primary work is focused on 3D face modeling, realistic emotional facial expression animation, emotive speech synthesis, and the co-articulation of speech gestures (i.e., lip movements due to speech production) and facial expressions.  ...  Figure 3. Diagram for emotive speech synthesis. Figure 4. Examples of animation of emotive audio-visual avatar with associated synthetic emotive speech waveform.  ...
doi:10.1109/wacv.2008.4544003 dblp:conf/wacv/TangFTHH08 fatcat:ojxu52bvjve5vj6z36fh5cgt5m
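The co-articulation problem above (combining speech-driven lip movement with emotional expressions) is often handled by blending displacement bases over a neutral mesh. A minimal sketch of that general idea; the names, bases, and weights below are hypothetical, not the EAVA system's actual model:

```python
import numpy as np

def blend_face(neutral, viseme_bases, viseme_w, expr_bases, expr_w):
    """Weighted-sum morph-target blending.

    neutral: (V, 3) vertices; *_bases: (K, V, 3); *_w: (K,) weights.
    """
    offset = np.tensordot(viseme_w, viseme_bases, axes=1)   # speech layer
    offset += np.tensordot(expr_w, expr_bases, axes=1)      # emotion layer
    return neutral + offset

V = 4                                        # toy mesh with 4 vertices
neutral = np.zeros((V, 3))
visemes = np.random.randn(3, V, 3) * 0.01    # e.g. /a/, /m/, /o/
exprs = np.random.randn(2, V, 3) * 0.01      # e.g. joy, sadness
frame = blend_face(neutral, visemes, np.array([0.7, 0.3, 0.0]),
                   exprs, np.array([0.5, 0.0]))
print(frame.shape)  # (4, 3)
```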

Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions [article]

Bonny Banerjee, Masoumeh Heidari Kapourchali, Murchana Baruah, Mousumi Deb, Kenneth Sakauye, Mette Olufsen
2021 arXiv   pre-print
This will allow generation of potentially infinite amounts of shareable data from an individual as a function of his actions, interactions and emotions in a care facility or at home, with no risk of confidentiality  ...  Round-the-clock monitoring of human behavior and emotions is required in many healthcare applications; it is very expensive, but can be automated using machine learning (ML) and sensor technologies.  ...  Figure 1: Emotionally-expressive skeletal action synthesis (see Fig. 3); (c) ECG, BP signal synthesis (6 sec duration shown); (d) SCR signal synthesis (60 sec duration shown); (e) Respiration signal synthesis.  ...
arXiv:2102.04548v2 fatcat:7vq37n3zvbcbdeds6ahlh5rhfu
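As a toy illustration of emotion-conditioned physiological synthesis (my assumption of the general mechanism, not the paper's model), one can modulate a synthetic heart rate with an arousal parameter and emit an ECG-like impulse train:

```python
import numpy as np

def synth_ecg(arousal: float, duration_s: float = 6.0, fs: int = 250):
    """Toy ECG: heart rate rises with arousal; one Gaussian per R-peak."""
    base_hr = 60.0 + 40.0 * arousal        # bpm as a function of arousal
    rr = 60.0 / base_hr                    # seconds between beats
    t = np.arange(0, duration_s, 1 / fs)
    ecg = np.zeros_like(t)
    for bt in np.arange(0, duration_s, rr):
        ecg += np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2))
    return t, ecg

t, calm = synth_ecg(arousal=0.1)       # ~64 bpm
_, excited = synth_ecg(arousal=0.9)    # ~96 bpm
print(calm.shape, excited.shape)       # (1500,) each: 6 s at 250 Hz
```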

Sleep and Emotional Memory Processing

Els van der Helm, Matthew P. Walker
2011 Sleep Medicine Clinics  
This review provides a synthesis of these findings, describing an intimate relationship between sleep, emotional brain function, and clinical mood disorders, and offers a tentative first theoretical framework  ...  Stages 3 and 4 are often grouped together under the term slow wave sleep (SWS), reflecting the occurrence of low-frequency waves (0.5-4 Hz), representing an expression of underlying mass cortical synchrony [7, 8].  ...  Specific predictions emerge from this model.  ...
doi:10.1016/j.jsmc.2010.12.010 pmid:25285060 pmcid:PMC4182440 fatcat:zvppx3zjojb2jo3bof3pkthneq
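The 0.5-4 Hz slow-wave band mentioned in the snippet can be isolated from an EEG trace with a standard bandpass filter; the review itself contains no code, so this is only the textbook recipe:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def delta_band(eeg: np.ndarray, fs: float) -> np.ndarray:
    """Zero-phase Butterworth bandpass over the 0.5-4 Hz SWS band."""
    b, a = butter(N=4, Wn=[0.5, 4.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg)

fs = 128.0
t = np.arange(0, 30, 1 / fs)
# Toy trace: a 1.5 Hz slow wave plus a 12 Hz spindle-range component.
eeg = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
sws = delta_band(eeg, fs)
print(np.round(np.std(sws), 2))  # ~0.71: only the 1.5 Hz component survives
```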

Mixed feelings: expression of non-basic emotions in a muscle-based talking head

Irene Albrecht, Marc Schröder, Jörg Haber, Hans-Peter Seidel
2005 Virtual Reality  
A physics-based facial animation system was combined with an equally flexible and expressive text-to-speech synthesis system, based upon the same emotion model, to form a talking head capable of expressing  ...  With a variety of life-like intermediate facial expressions captured as snapshots from the system, we demonstrate the appropriateness of our approach.  ...  Procedural methods try to synthesise lip movement for speech from scratch [27, 26].  ...
doi:10.1007/s10055-005-0153-5 fatcat:h5looxugwrdjvgd62m35hrndpi
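One plausible reading of the "mixed feelings" approach (assumed here, not the paper's exact algorithm) is to place basic emotions in a 2D activation-evaluation space and weight their facial parameters by proximity to the target non-basic emotion:

```python
import numpy as np

# (activation, evaluation) coordinates and facial parameters are made up.
BASIC = {
    "joy":     (np.array([0.8, 0.9]),   {"smile": 1.0,  "brow_raise": 0.3}),
    "sadness": (np.array([-0.7, -0.5]), {"smile": -0.6, "brow_raise": -0.4}),
    "anger":   (np.array([-0.6, 0.8]),  {"smile": -0.8, "brow_raise": -0.9}),
}

def mixed_expression(target_xy):
    """Inverse-distance-weighted blend of basic-emotion face parameters."""
    target = np.asarray(target_xy, dtype=float)
    weighted, params = [], {}
    for _, (xy, p) in BASIC.items():
        w = 1.0 / (np.linalg.norm(target - xy) + 1e-6)
        weighted.append((w, p))
    total = sum(w for w, _ in weighted)
    for w, p in weighted:
        for k, v in p.items():
            params[k] = params.get(k, 0.0) + (w / total) * v
    return params

print(mixed_expression([-0.2, 0.6]))  # a non-basic blend, e.g. "frustration"
```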

Emotional Speech Synthesis for a Radio DJ: Corpus Design and Expression Modeling

Martí Umbert, Jordi Janer, Jordi Bonada
2010 Zenodo  
This master thesis concerns the design of a corpus for speech synthesis as well as the modeling of different emotions in the context of a Radio DJ speaker.  ...  By labeling the phonemes of the recorded sentences, control parameters have been extracted from these sentences in order to transform or synthesize them under other emotion and speech-rate conditions, and  ...  Acoustic correlates of emotion dimensions in view of speech synthesis. In: Proceedings of Eurospeech, 1:87-90, 2001. [Sch95] K. R. Scherer. Expression of emotion in voice and music.  ...
doi:10.5281/zenodo.3753080 fatcat:wioalcgc5zcsbchqf5ah3vouey
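A hedged sketch of the control-parameter extraction described above: given phoneme labels and an F0 track, compute per-phoneme duration and mean pitch, which can later be rescaled toward a target emotion or speech rate. All labels and values below are made up:

```python
import numpy as np

labels = [(0.00, 0.12, "h"), (0.12, 0.31, "e"), (0.31, 0.48, "l")]
fs_f0 = 100.0                                    # one F0 value every 10 ms
f0 = 120 + 10 * np.sin(np.linspace(0, 3, 48))    # toy pitch contour

for start, end, ph in labels:
    i0, i1 = int(start * fs_f0), int(end * fs_f0)
    dur = end - start
    mean_f0 = float(np.mean(f0[i0:i1]))
    # An "excited" target might, e.g., scale dur by 0.8 and F0 by 1.2.
    print(f"{ph}: dur={dur:.2f}s  mean F0={mean_f0:.1f} Hz")
```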

Norepinephrine in the brain is associated with aversion to financial loss

H Takahashi, S Fujie, C Camerer, R Arakawa, H Takano, F Kodaka, H Matsui, T Ideno, S Okubo, K Takemura, M Yamada, Y Eguchi (+5 others)
2012 Molecular Psychiatry  
Central NE blockade by propranolol reduced sensitivity to the magnitude of possible losses from gambles [7]. A recent psychophysiological study demonstrated that arousal is associated with loss aversion [8].  ...  This means that losses are more emotionally laden and salient than equivalent gains.  ...
doi:10.1038/mp.2012.7 pmid:22349782 fatcat:p27pkp3n3rci5ciieviwk5huze
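Loss aversion is conventionally modeled with a prospect-theory value function in which losses are weighted by a factor λ > 1; a one-line illustration (the λ below is illustrative, not a value estimated in the paper):

```python
# v(x) = x for gains, v(x) = lambda * x for losses, with lambda > 1.
def value(x: float, loss_aversion: float = 2.0) -> float:
    return x if x >= 0 else loss_aversion * x

# A 50/50 gamble of +$10 / -$10 has negative subjective value when
# losses loom twice as large as gains:
ev = 0.5 * value(10) + 0.5 * value(-10)
print(ev)  # -5.0
```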

Dysfunction in the Neural Circuitry of Emotion Regulation--A Possible Prelude to Violence

R. J. Davidson
2000 Science  
We posit that impulsive aggression and violence arise as a consequence of faulty emotion regulation.  ...  Individuals vulnerable to faulty regulation of negative emotion are at risk for violence and aggression.  ...  Figure panel (D): Anterior cingulate cortex.  ...
doi:10.1126/science.289.5479.591 pmid:10915615 fatcat:aejunhqxbnellph7vv375n7qnm

Multimodal emotion estimation and emotional synthesize for interaction virtual agent

Minghao Yang, Jianhua Tao, Hao Li, Kaihui Mu
2012 2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems  
This agent estimates users' emotional state by combining the information from the audio and facial expression with CART and boosting.  ...  The synchronous visual information of the agent, including facial expression, head motion, gesture and body animation, is generated by multi-modal mapping from a motion capture database.  ...  Equations (3) and (4) define distances d_1 and d_2 from facial feature points (p_13, p_4, p_9, p_3 and p_19, p_17, respectively), where θ, γ, α and β are angles defined in Fig. 3.  ...
doi:10.1109/ccis.2012.6664394 dblp:conf/ccis/YangTLM12 fatcat:fiyjm7wdojfyhflr2lnrnuczwa
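The CART-plus-boosting fusion named in the snippet can be sketched as a boosted decision-tree ensemble over concatenated audio and facial features; the feature dimensions and data here are synthetic stand-ins, not the paper's descriptors:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
audio_feats = rng.normal(size=(n, 12))    # e.g. prosodic/MFCC statistics
face_feats = rng.normal(size=(n, 8))      # e.g. landmark distances/angles
X = np.hstack([audio_feats, face_feats])  # early (feature-level) fusion
y = rng.integers(0, 4, size=n)            # 4 emotion classes (toy labels)

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # CART weak learner
    n_estimators=50,                                # sklearn >= 1.2 keyword
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```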

A Model Based Method for Automatic Facial Expression Recognition [chapter]

Hans van Kuilenburg, Marco Wiering, Marten den Uyl
2005 Lecture Notes in Computer Science  
A system will be described that can classify expressions from one of the emotional categories joy, anger, sadness, surprise, fear and disgust with remarkable accuracy.  ...  Finally, we show how the system can be used for expression analysis and synthesis.  ...  Fig. 3. Artificially created expressions; original images from [15].  ...
doi:10.1007/11564096_22 fatcat:uvs42uvenjg2lnr3uirhmue66u
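A minimal sketch of the model-based pipeline: fit a face model to obtain a compact parameter vector, then classify it into one of the six categories. The parameters below are random stand-ins for the paper's Active-Appearance-Model-style coefficients:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["joy", "anger", "sadness", "surprise", "fear", "disgust"]
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 30))   # 30 face-model parameters per image (toy)
y = rng.integers(0, 6, size=300)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
clf.fit(X, y)
print(EMOTIONS[clf.predict(X[:1])[0]])
```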

The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach [article]

Noé Tits, Kevin El Haddad, Thierry Dutoit
2019 arXiv   pre-print
From the recording of expressive speech to its modeling, the reader will have an overview of the main paradigms used in this field, through some of the most prominent systems and methods.  ...  As part of the Human-Computer Interaction field, expressive speech synthesis is a very rich domain, as it requires knowledge in areas such as machine learning, signal processing, sociology, and psychology.  ...  Acknowledgments Noé Tits is funded through a PhD grant from the Fonds pour la Formation à la Recherche dans l'Industrie et l'Agriculture (FRIA), Belgium.  ...
arXiv:1910.06234v1 fatcat:bktitvmpt5fapgcutrp3ncspwi

ExpNet: Landmark-Free, Deep, 3D Facial Expressions [article]

Feng-Ju Chang, Anh Tuan Tran, Tal Hassner, Iacopo Masi, Ram Nevatia, Gerard Medioni
2018 arXiv   pre-print
We further offer a novel means of evaluating the accuracy of estimated expression coefficients: by measuring how well they capture facial emotions on the CK+ and EmotiW-17 emotion recognition benchmarks.  ...  Finally, at the same level of accuracy, our ExpNet is orders of magnitude faster than its alternatives.  ...  Here, this is not the case and expression parameters vary from one image to the next, regardless of subject identity.  ...
arXiv:1802.00542v1 fatcat:zbdyklherjd3hktmgvxhk774jm
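The "landmark-free" idea is to regress 3DMM expression coefficients directly from pixels rather than fitting landmarks first. A deep CNN plays that role in ExpNet; a ridge regressor over flattened pixels stands in below purely to show the input/output shapes, and the 29-coefficient size is an assumption:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
images = rng.normal(size=(500, 32 * 32))   # flattened face crops (toy)
expr_coeffs = rng.normal(size=(500, 29))   # e.g. 29 expression coefficients

reg = Ridge(alpha=1.0).fit(images, expr_coeffs)
pred = reg.predict(images[:1])
print(pred.shape)  # (1, 29): one coefficient vector per face image
```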

Deep Encoder-Decoder Models for Unsupervised Learning of Controllable Speech Synthesis [article]

Gustav Eje Henter, Jaime Lorenzo-Trueba, Xin Wang, Junichi Yamagishi
2018 arXiv   pre-print
We illustrate the utility of the various approaches with an application to acoustic modelling for emotional speech synthesis, where the unsupervised methods for learning expression control (without access  ...  Generating versatile and appropriate synthetic speech requires control over the output expression separate from the spoken text.  ...  Emotional Speech Synthesis The experiments in this paper consider speech synthesis from a large corpus of acted emotional speech, described in [63] .  ... 
arXiv:1807.11470v3 fatcat:2hqdissmirbrvoh4iu353fhzmi
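As the simplest stand-in for an unsupervised expression-control space (not the paper's encoder-decoder models), one can fit a low-dimensional code over acoustic features and treat movement along a code axis as a control knob:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
acoustic = rng.normal(size=(1000, 40))   # toy utterance-level features

pca = PCA(n_components=2).fit(acoustic)  # learned 2-D "control space"
code = pca.transform(acoustic[:1])       # encode one utterance
code[0, 0] += 2.0                        # nudge the first control dimension
modified = pca.inverse_transform(code)   # decode back to feature space
print(modified.shape)  # (1, 40)
```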

ET-GAN: Cross-Language Emotion Transfer Based on Cycle-Consistent Generative Adversarial Networks [article]

Xiaoqi Jia, Jianwei Tai, Hang Zhou, Yakai Li, Weijuan Zhang, Haichao Du, Qingjia Huang
2020 arXiv   pre-print
Despite the remarkable progress made in synthesizing emotional speech from text, it is still challenging to provide emotion information to existing speech segments.  ...  To cope with such problems, we propose an emotion transfer system named ET-GAN, for learning language-independent emotion transfer from one emotion to another without parallel training samples.  ...  The purpose of transfer learning is to map data from different domains (e.g., D_a, D_b) into an emotion space E, and make it as close as possible in the space.  ...
arXiv:1905.11173v3 fatcat:y2ufl7zu3nczzmqpszrbxbjsaq
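The cycle-consistency constraint at the heart of a CycleGAN-style system like ET-GAN: transferring an utterance's emotion A → B and back B → A should reconstruct the original features. The toy linear "generators" below show only the loss computation, not the adversarial training:

```python
import numpy as np

rng = np.random.default_rng(4)
W_ab = rng.normal(size=(16, 16)) * 0.1 + np.eye(16)  # G: A -> B (toy)
W_ba = np.linalg.inv(W_ab)                           # F: B -> A (toy)

x_a = rng.normal(size=(8, 16))     # features of emotion-A speech
x_ab = x_a @ W_ab                  # transferred to emotion B
x_aba = x_ab @ W_ba                # cycled back to emotion A

cycle_loss = np.mean(np.abs(x_aba - x_a))  # L1 cycle-consistency loss
print(round(float(cycle_loss), 6))         # ~0 for this invertible toy
```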

Musical Emotion Recognition with Spectral Feature Extraction Based on a Sinusoidal Model with Model-Based and Deep-Learning Approaches

Baijun Xie, Jonathan C. Kim, Chung Hyuk Park
2020 Applied Sciences  
The extracted features are evaluated for predicting the levels of emotional dimensions, namely arousal and valence.  ...  the prediction error rate.  ...  Given a source domain D_S and learning task T_S, and a target domain D_T and learning task T_T, transfer learning aims to better learn the target predictive function f_T(·) in D_T using the knowledge from  ...
doi:10.3390/app10030902 fatcat:asl5gzdtijcu5m45e25gosgwhy
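Sinusoidal-model spectral features in the spirit of the paper amount to picking prominent spectral peaks (partials) per frame and using their frequencies and amplitudes as inputs to the emotion predictor; a minimal sketch with a synthetic two-partial frame (signal and thresholds illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 16000
t = np.arange(0, 0.064, 1 / fs)                  # one 64 ms frame
frame = (np.sin(2 * np.pi * 220 * t)             # fundamental
         + 0.5 * np.sin(2 * np.pi * 440 * t))    # one partial

spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
freqs = np.fft.rfftfreq(len(frame), 1 / fs)
peaks, _ = find_peaks(spec, height=0.1 * spec.max())
print(np.round(freqs[peaks]))  # ~[219. 438.]: the 220/440 Hz partials, bin-quantized
```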
Showing results 1-15 of 15,428