12 Hits in 5.4 sec

EmoNets: Multimodal deep learning approaches for emotion recognition in video [article]

Samira Ebrahimi Kahou, Xavier Bouthillier, Pascal Lamblin, Caglar Gulcehre, Vincent Michalski, Kishore Konda, Sébastien Jean, Pierre Froumenty, Yann Dauphin, Nicolas Boulanger-Lewandowski, Raul Chandias Ferrari, Mehdi Mirza (+5 others)
2015 arXiv   pre-print
In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality.  ...  The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies.  ...  We thank NSERC, Ubisoft, the German BMBF, project 01GQ0841 and CIFAR for their support. We also thank Abhishek Aggarwal, Emmanuel Bengio, Jörg Bornschein, Pierre-Luc Carrier,  ...
arXiv:1503.01800v2 fatcat:e2ounsvhnzctnmw5ehqobn6nwq
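
The abstract above describes late fusion of modality-specific "specialist" models. As a minimal sketch of that idea, assuming per-modality logits over the seven EmotiW classes and illustrative fusion weights (not the authors' released code), the predictions can be combined by weighted averaging:

```python
# Minimal late-fusion sketch: each specialist model emits logits over the
# seven EmotiW emotion classes; their probabilities are averaged.
import torch
import torch.nn.functional as F

NUM_EMOTIONS = 7  # EmotiW: angry, disgust, fear, happy, sad, surprise, neutral

def fuse_predictions(logits_per_modality, weights=None):
    """Weighted average of per-modality class probabilities (assumed scheme)."""
    probs = [F.softmax(logits, dim=-1) for logits in logits_per_modality]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)  # uniform by default
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=-1)

# Illustrative logits from hypothetical face, audio, and activity specialists
face_logits = torch.randn(1, NUM_EMOTIONS)
audio_logits = torch.randn(1, NUM_EMOTIONS)
activity_logits = torch.randn(1, NUM_EMOTIONS)
predicted_class = fuse_predictions([face_logits, audio_logits, activity_logits])
```

A natural refinement is to tune the per-model weights on validation data; uniform weights here are just a placeholder.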

Emotion Recognition System from Speech and Visual Information based on Convolutional Neural Networks

Nicolae-Catalin Ristea, Liviu Cristian Dutu, Anamaria Radoi
2019 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)
In this paper, we propose a system that is able to recognize emotions with a high accuracy rate and in real time, based on deep Convolutional Neural Networks.  ...  Experimental results show the effectiveness of the proposed scheme for emotion recognition and the importance of combining visual with audio data.  ...  An interesting approach towards emotion recognition is to use multimodal systems of recognition.  ... 
doi:10.1109/sped.2019.8906538 dblp:conf/sped/RisteaDR19 fatcat:ivjhman7ybfntpv2akmvfaqsua
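
The visual-plus-audio CNN combination mentioned above can be sketched as two small convolutional branches, one over RGB face crops and one over audio spectrograms, whose pooled features are concatenated before a shared classifier. The architecture below is an assumption for illustration, not the paper's network:

```python
# Assumed two-branch CNN: visual branch on face crops, audio branch on
# mel-spectrograms, features concatenated before the classification head.
import torch
import torch.nn as nn

class AudioVisualEmotionNet(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        def branch(in_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.visual = branch(3)  # RGB face crop
        self.audio = branch(1)   # single-channel spectrogram
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, face, spectrogram):
        features = torch.cat([self.visual(face), self.audio(spectrogram)], dim=1)
        return self.head(features)

model = AudioVisualEmotionNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
```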

A Review Paper on Emotion Recognition Analysis on Real Time Video by Using the Concept of Computer Vision

Rewati Saha
2021 International Journal for Research in Applied Science and Engineering Technology  
Keywords: Computer Vision, Machine Learning, Emotion Analysis, Deep Learning  ...  which works on the concept of emotion analysis; in this paper we study the previous existing work on emotion analysis and try to identify the research gaps and their future directions  ...  In this paper we present a deep learning based approach to modeling different input modalities and to combining them in order to infer emotion labels from a given video sequence.  ...
doi:10.22214/ijraset.2021.39424 fatcat:3zgbfzrurjegdkw4tnvyztgwgu

A Survey on Databases for Multimodal Emotion Recognition and an Introduction to the VIRI (Visible and InfraRed Image) Database

Mohammad Faridul Haque Siddiqui, Parashar Dhakal, Xiaoli Yang, Ahmad Y. Javaid
2022 Multimodal Technologies and Interaction  
The prodigious use of affective identification in e-learning, marketing, security, health sciences, etc., has increased demand for high-precision emotion recognition systems.  ...  A few unimodal DBs are also discussed that work in conjunction with other DBs for affect recognition.  ...  The corpus was used to train and test EmoNets, the multimodal deep learning method for detecting emotions in videos, using CNNs, DBNs, K-means, and relational auto-encoders for classifying the seven emotions and  ...
doi:10.3390/mti6060047 fatcat:xeempccu3zdavm3am3t3w3suda

Emotion schemas are embedded in the human visual system

Philip A. Kragel, Marianne C. Reddan, Kevin S. LaBar, Tor D. Wager
2019 Science Advances  
In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional  ...  These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.  ...  The model is named EmoNet, as it is based on a seminal deep neural net model of object recognition called AlexNet (28) and has been adapted to identify emotional situations rather than objects.  ... 
doi:10.1126/sciadv.aaw4358 pmid:31355334 pmcid:PMC6656543 fatcat:tpdwbbgwyzdk5fhuaxjtqbh2ha
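
The adaptation described in this abstract, basing EmoNet on AlexNet but retargeting it to emotional situations, can be sketched as replacing the final object-classification layer of a pretrained AlexNet with a new emotion output layer. The snippet assumes torchvision's pretrained AlexNet and the 20 emotion categories reported in the paper; retraining details are omitted:

```python
# Sketch: repurpose a pretrained AlexNet object-recognition network to
# predict emotion categories by swapping only the final linear layer.
import torch.nn as nn
from torchvision import models

NUM_EMOTION_CATEGORIES = 20  # category count reported for EmoNet

net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for param in net.parameters():
    param.requires_grad = False  # keep object-recognition features fixed
# Replace the 1000-way ImageNet classifier with an emotion-category output
net.classifier[6] = nn.Linear(net.classifier[6].in_features,
                              NUM_EMOTION_CATEGORIES)
```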

AttendAffectNet–Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention

Ha Thi Phuong Thao, B T Balamurali, Gemma Roig, Dorien Herremans
2021 Sensors  
Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect.  ...  Particularly, visual, audio, and text features are considered for predicting emotions (and expressed in terms of valence and arousal).  ...  EmoNets: Multimodal deep learning approaches for emotion recognition in video. J. Multimodal User Interfaces 2016, 10, 99–111. [CrossRef]  ...
doi:10.3390/s21248356 pmid:34960450 pmcid:PMC8704548 fatcat:dhqibcpfozgm3as4rzp2n4qa4u
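
The self-attention fusion named in the title can be illustrated by treating each modality's feature vector (visual, audio, text) as a token, attending across modalities, and regressing valence and arousal from the pooled output. Dimensions, head count, and pooling below are assumptions, not the AttendAffectNet configuration:

```python
# Assumed self-attention fusion over per-modality feature vectors, with a
# linear head regressing valence and arousal.
import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads,
                                          batch_first=True)
        self.regressor = nn.Linear(dim, 2)  # outputs: (valence, arousal)

    def forward(self, modality_feats):  # shape: (batch, num_modalities, dim)
        attended, _ = self.attn(modality_feats, modality_feats, modality_feats)
        return self.regressor(attended.mean(dim=1))  # pool over modalities

fusion = SelfAttentionFusion()
# one 128-d feature vector each for visual, audio, and text
valence_arousal = fusion(torch.randn(1, 3, 128))
```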

Group Emotion Detection Based on Social Robot Perception

Marco Quiroz, Raquel Patiño, José Diaz-Amado, Yudith Cardinale
2022 Sensors  
Additionally, this work proposes a strategy to create datasets with images/videos in order to validate the estimation of emotions in scenes and personal emotions.  ...  However, in social environments in which it is common to find a group of persons, new approaches are needed in order to make robots able to recognise groups of people and the emotion of the groups, which  ...  Data Availability Statement: Data available in a publicly accessible repository. Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/s22103749 pmid:35632160 fatcat:n563potqe5bdvp3djv4bc2qqiu

DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music

Misha Sra, Prashanth Vijayaraghavan, Ognjen Rudovic, Pattie Maes, Deb Roy
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
The main novelty of our DeepSpace approach is that it uses music to automatically create kaleidoscopic textures for virtual environments designed to elicit emotional responses in users.  ...  Affective virtual spaces are of interest for many VR applications in areas of wellbeing, art, education, and entertainment.  ...  In Emonets [20] , the authors perform sentiment analysis using multimodal deep learning techniques to predict emotions in videos.  ... 
doi:10.1109/cvprw.2017.283 dblp:conf/cvpr/SraVRMR17 fatcat:dbfd4sn4jnelvnxfyo6pr23lp4

Multi-view representation learning for natural language processing applications [article]

Nikolaos Papasarantopoulos, University Of Edinburgh, Shay Cohen, Stephen Renals
2020
The pervasion of machine learning in a vast number of applications has given rise to an increasing demand for the effective processing of complex, diverse and variable datasets.  ...  The nature of multi-view datasets calls for special treatment in terms of representation.  ...  EmoNets: Multimodal deep learning approaches for emotion recognition in video. Journal on Multimodal User Interfaces, 10:99–111, 2016.
doi:10.7488/era/267 fatcat:f7tjkwf6prc3zb74oh653zav5a

30th Annual Computational Neuroscience Meeting: CNS*2021–Meeting Abstracts

2021 Journal of Computational Neuroscience  
Currently, most functional models of neural activity are based on firing rates, while the most relevant signals for inter-neuron communication are spikes.  ...  One of the goals of neuroscience is to understand the computational principles that describe the formation of behaviorally relevant signals in the brain, as well as how these computations are realized  ...  Acknowledgements We thank NIH for supporting N. Kadakia with grants 1F32MH118700 and 1K99DC019397 and supporting T. Emonet with grant 2R01GM106189.  ...
doi:10.1007/s10827-021-00801-9 pmid:34931275 pmcid:PMC8687879 fatcat:evpmmfpaivgpxdqpive5xdgmwu

Abstracts from the Thirty-fourth Annual Meeting of the Association for Chemoreception Sciences

2013 Chemical Senses  
Here we show that the de novo DNA methyltransferase Dnmt3a is expressed in postnatal neural stem cells (NSCs) and is required for neurogenesis.  ...  Thus, non-promoter DNA methylation by Dnmt3a may be utilized for maintaining active chromatin states of genes critical for development.  ...  Lastly, no changes in questionnaires, psychophysiology, or ratings of videos were observed in the non-olfactory control condition.  ... 
doi:10.1093/chemse/bjs091 pmid:23300212 fatcat:or55rqfvszfqdmmdlusi5jfrbi

Peripheral and Central Mechanisms of Limb Position Sense and Body Representation

Anthony John Tsay
2017
For instance, when we feel hungry, we know where our mouth is relative to our hands and how to get food from the plate into the mouth with our fingers.  ...  Little is known regarding how the brain processes sensory information in order to build a coherent central representation of the body.  ...  Acknowledgements We would like to thank Prof Uwe Proske for his feedback in improving this manuscript.  ...
doi:10.4225/03/5874659bca4de fatcat:c454mx7ij5cbnipq47cxdry5mi