10,769 Hits in 6.9 sec

Cross-lingual speech emotion recognition system based on a three-layer model for human perception

Reda Elbarougy, Masato Akagi
2013 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference  
Most previous studies of automatic speech emotion recognition detected the emotional state within a single language.  ...  The experimental results reveal that the proposed method is effective for selecting acoustic features representing emotion dimensions, working with two different databases, one in Japanese and the other  ...  Therefore, using the proposed model to build an automatic speech emotion recognition system allows us to find many acoustic features, which in turn allow us to investigate the cross-language mode.  ... 
doi:10.1109/apsipa.2013.6694137 dblp:conf/apsipa/ElbarougyA13 fatcat:dbfxu3rx4fcgvpfxmk6b53tcbq

Discriminating Emotions in the Valence Dimension from Speech Using Timbre Features

Anvarjon Tursunov, Soonil Kwon, Hee-Suk Pang
2019 Applied Sciences  
From extensive experiments, it was found that timbre acoustic features could sufficiently characterize emotions in speech in the valence dimension.  ...  of emotions for the Berlin Emotional Speech Database.  ...  Methodology: General Speech Emotion Recognition System. To automatically recognize emotions from a speech signal, there must be an SER system.  ... 
doi:10.3390/app9122470 fatcat:6srw77oqgnedvnhskikaav6gl4

Improving speech emotion dimensions estimation using a three-layer model of human perception

Reda Elbarougy, Masato Akagi
2014 Acoustical Science and Technology  
First, a top-down acoustic feature selection method based on this model was conducted to select the most relevant acoustic features for each emotion dimension.  ...  The purpose of this research is to construct a speech emotion recognition system that can precisely estimate values of emotion dimensions, especially valence.  ...  the most related acoustic features for each emotion dimension, (2) whether using these selected acoustic features as inputs to an automatic emotion recognition system will improve the accuracy of all  ... 
doi:10.1250/ast.35.86 fatcat:tuixzuulebde3hxlmtrelfzusm

User Identity Protection in Automatic Emotion Recognition through Disguised Speech

Fasih Haider, Pierre Albert, Saturnino Luz
2021 AI  
In this article, acoustic features extracted from non-disguised and disguised speech are evaluated in an affect recognition task using six different machine learning classification methods.  ...  Transfer learning from non-disguised to disguised speech results in a reduced UAR (65.13%); however, feature selection improves the UAR (68.32%).  ...  extraction of features from speech, widely used for emotion and affect recognition in speech [23] .  ... 
doi:10.3390/ai2040038 fatcat:zegmpfk66za2tn5q4t5bhmbyre

Toward detecting emotions in spoken dialogs

Chul Min Lee, S.S. Narayanan
2005 IEEE Transactions on Speech and Audio Processing  
Optimization of the acoustic correlates of emotion with respect to classification error was accomplished by investigating different feature sets obtained from feature selection, followed by principal component  ...  Most previous studies in emotion recognition have used only the acoustic information contained in speech.  ...  Lee for their valuable help. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.  ... 
doi:10.1109/tsa.2004.838534 fatcat:nwqy6pnxizgbzc4q4nnf4zskn4

A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition [article]

Sitong Zhou, Homayoon Beigi
2020 arXiv   pre-print
The proposed method resolves this problem by applying transfer learning techniques to leverage data from the automatic speech recognition (ASR) task, for which ample data are available.  ...  Using only speech, we obtain an accuracy of 71.7% for anger, excitement, sadness, and neutrality emotion content.  ...  Another challenge for emotion recognition is that speakers express emotions in different ways; in addition, environments can affect acoustic features.  ... 
arXiv:2008.02863v2 fatcat:ryklvv5r5rfllp7ovloaw4frse
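The transfer-learning pattern this entry describes — reuse a representation learned on a data-rich task (ASR) and train only a small head for the data-poor task (emotion) — can be sketched as follows. This is a minimal illustration: the "encoder" is a fixed toy function and the data are invented, not the authors' model or corpus.

```python
# Transfer-learning sketch: a frozen, "pretrained" feature extractor plus a
# small trainable head. Everything below is illustrative, not from the paper.

def encoder(x):
    """Stand-in for a frozen ASR-pretrained feature extractor."""
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, epochs=20, lr=0.1):
    """Perceptron-style training of a linear head on frozen features;
    labels are in {-1, +1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            z = encoder(x)                      # encoder weights never change
            pred = 1 if w[0] * z[0] + w[1] * z[1] + b > 0 else -1
            if pred != label:                   # update only the head
                w = [wi + lr * label * zi for wi, zi in zip(w, z)]
                b += lr * label
    return w, b

# Toy "emotion" data: two classes, linearly separable in encoder space.
data = [([1, 1], 1), ([2, 2], 1), ([-1, -1], -1), ([-2, -1], -1)]
w, b = train_head(data)
correct = sum(
    (1 if w[0] * encoder(x)[0] + w[1] * encoder(x)[1] + b > 0 else -1) == y
    for x, y in data
)
```

The design point is that only `w` and `b` are updated; in a real system the frozen encoder would be the ASR network's intermediate layers.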

Approbation of a method for studying the reflection of emotional state in children's speech and pilot psychophysiological experimental data

Elena Lyakso
2020 International Journal of Advanced Trends in Computer Science and Engineering  
Two approaches to emotional speech recognition are planned: recognition by human listeners and automatic recognition; moreover, for automatic recognition, different analysis algorithms will be applied.  ...  The use of different algorithms for automated assessment of emotional states from speech features that were previously used in analyzing a specific language is new and relevant.  ...  Series 2 - Automatic recognition of emotional child speech. The method of automatic recognition of emotional child speech was developed and tested in our study [51] .  ... 
doi:10.30534/ijatcse/2020/91912020 fatcat:t2ti6sw3j5ch3l4wovmuej3lyu

Vocal Emotion Recognition with Log-Gabor Filters

Yu Gu, Eric Postma, Hai-Xiang Lin
2015 Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge - AVEC '15  
This leads us to conclude that tuned log-Gabor filters support the automatic recognition of emotions from speech and may be beneficial to other speech-related tasks.  ...  We compared the emotion-recognition performances of tuned log-Gabor filters with standard acoustic features.  ...  Features for Automatic Emotion Recognition Current automatic emotion recognition systems rely on machine learning.  ... 
doi:10.1145/2808196.2811635 dblp:conf/mm/GuPL15 fatcat:rb6y3aejdzf4tmg7g3mdn2fgvq

Problems of the Automatic Emotion Recognitions in Spontaneous Speech; An Example for the Recognition in a Dispatcher Center [chapter]

Klára Vicsi, Dávid Sztahó
2011 Lecture Notes in Computer Science  
In a testing experiment, we examined which acoustic features are the most important for characterizing emotions, using a spontaneous speech database.  ...  In a real-life experiment, an automatic recognition system was prepared for a telecommunication call center.  ...  We would like to thank SPSS Hungary Ltd. and INVITEL Telecom Zrt. for giving us free run of the 1000 recorded dialogues.  ... 
doi:10.1007/978-3-642-18184-9_28 fatcat:fnf4iz66nnbsroischwpdloi6m

Spoken affect classification using neural networks

D. Morrison, R. Wang, L.C. De Silva
2005 2005 IEEE International Conference on Granular Computing  
This paper aims to build an affect recognition system by analysing acoustic speech signals. A database of 391 authentic emotional utterances was collected from 11 speakers.  ...  Two emotions, angry and neutral, were considered. Features relating to pitch, energy and rhythm were extracted and used as feature vectors for a neural network.  ...  ACKNOWLEDGEMENTS This study was funded by the Technology for Industry Fellowships (TIF), New Zealand. The authors are grateful for the use of the speech database provided by Mabix International.  ... 
doi:10.1109/grc.2005.1547359 dblp:conf/grc/MorrisonWS05 fatcat:65xwt2afizgh7ibb5xxdfvhuom

Support Vector Regression for Automatic Recognition of Spontaneous Emotions in Speech

Michael Grimm, Kristian Kroschel, Shrikanth Narayanan
2007 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07  
Feature selection and parameter optimization are studied. The data was recorded from 47 speakers in a German talk-show on TV.  ...  We present novel methods for estimating spontaneously expressed emotions in speech.  ...  Feature selection To reduce the large amount of acoustic features, we used the Sequential Forward Selection (SFS) technique for feature selection [15] .  ... 
doi:10.1109/icassp.2007.367262 dblp:conf/icassp/GrimmKN07 fatcat:4crg6mnqtnbellg3pla6z4hzqq
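The Sequential Forward Selection (SFS) technique this entry cites can be sketched as follows. This is a minimal greedy illustration, not the authors' implementation: the scoring function and feature names are invented stand-ins, whereas a real system would score each subset by cross-validated recognition accuracy.

```python
# Minimal sketch of Sequential Forward Selection (SFS): start empty and,
# at each step, add the one remaining feature that maximizes the score
# of the growing subset.

def sfs(features, score, k):
    """Greedy forward selection of up to k features under `score`."""
    selected, remaining = [], list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: each hypothetical acoustic "feature" contributes a fixed
# amount to the score, so SFS picks the heaviest ones first.
weights = {"f0_mean": 4, "energy_rms": 3, "mfcc1": 2, "jitter": 1}
chosen = sfs(list(weights), lambda subset: sum(weights[f] for f in subset), k=2)
```

SFS is greedy, so it is cheap (O(k·n) subset evaluations) but can miss feature pairs that only score well together.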

Comparative Study on Feature Selection and Fusion Schemes for Emotion Recognition from Speech

Santiago Planet, Ignasi Iriondo
2012 International Journal of Interactive Multimedia and Artificial Intelligence  
The automatic analysis of speech to detect affective states may improve the way users interact with electronic devices.  ...  The acoustic set was reduced by a greedy procedure selecting the most relevant features to optimize the learning stage.  ...  The task of emotion recognition from speech can be tackled from different perspectives [3] .  ... 
doi:10.9781/ijimai.2012.166 fatcat:4ft7p4uionfjho57cc77fazxma
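The two fusion families compared in studies like this entry can be sketched as follows. The vectors below are illustrative, not from the paper: early (feature-level) fusion concatenates feature vectors before a single classifier; late (decision-level) fusion combines the class posteriors of separately trained classifiers.

```python
# Sketch of feature-level vs. decision-level fusion; inputs are invented.

def early_fusion(acoustic, linguistic):
    """Feature-level fusion: one concatenated vector for one classifier."""
    return acoustic + linguistic

def late_fusion(post_a, post_b, weight=0.5):
    """Decision-level fusion: weighted average of per-class posteriors
    from two independently trained classifiers."""
    return [weight * p + (1 - weight) * q for p, q in zip(post_a, post_b)]

fused_vec = early_fusion([0.2, 1.3], [0.7])          # goes to one classifier
fused_post = late_fusion([0.6, 0.4], [0.2, 0.8])     # combined decision
```

Early fusion lets one model learn cross-modal interactions; late fusion keeps the classifiers independent, which helps when one feature stream is unreliable.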

Automatic Re-Formulation of user's Irrational Behavior in Speech Recognition using Acoustic Nudging Model

Lydia Kehinde Ajayi, Ambrose Azeta, Isaac Odun-Ayo, Felix Chidozie, Ajayi Peter Taiwo
2020 Journal of Computer Science  
In the development of automatic speech recognition applications, there have been numerous claims about the presence of speech recognition errors, classified into lexical  ...  of users' irrational acoustic behavior in the speech signal automatically, thereby making the model applicable to any speech recognition application.  ...  Acknowledgement: In this study, the researchers would like to express their deep appreciation for the support rendered by the Covenant University Centre for Research, Innovation and Discovery (CUCRID).  ... 
doi:10.3844/jcssp.2020.1731.1741 fatcat:muwvb2fuevgxfhai3a2hd26j7y

A Novel Approach for Classification of Speech Emotions Based on Deep and Acoustic Features

Mehmet Bilal Er
2020 IEEE Access  
One of the main advantages of deep learning techniques is the automatic selection of important features inherent in audio files with a particular emotion in the task of recognizing speech  ...  In conventional approaches to recognition of speech emotions, features representing the acoustic content of speech are extracted.  ...  The work in [58] presents a speech emotion recognition system using a recurrent neural network (RNN) model.  ... 
doi:10.1109/access.2020.3043201 fatcat:khpnyeyqvjej5mg7scdexbplfu

Speech Emotion Analysis in Noisy Real-World Environment

Ashish Tawari, Mohan M. Trivedi
2010 2010 20th International Conference on Pattern Recognition  
Automatic recognition of emotional states via speech signal has attracted increasing attention in recent years.  ...  In this paper, we present a framework with adaptive noise cancellation as front end to speech emotion recognizer.  ...  We are thankful to our colleagues at CVRR lab for useful discussions and assistances.  ... 
doi:10.1109/icpr.2010.1132 dblp:conf/icpr/TawariT10 fatcat:3gs6tnbpqvdcjma6lp3mozsblq
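An adaptive-noise-cancellation front end of the kind this entry describes can be sketched with a least-mean-squares (LMS) filter: a reference noise signal is filtered and subtracted from the corrupted input, and the filter weights adapt online. The signals, tap count, and step size below are illustrative, not the paper's configuration.

```python
import math

# LMS adaptive noise cancellation sketch: e[n] = primary[n] - w . x[n],
# with w updated by the standard LMS rule w <- w + 2*mu*e*x.

def lms_cancel(primary, reference, taps=4, mu=0.05):
    """Return the error (cleaned) signal while adapting the filter weights."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        # Tap-delay line over the reference noise (zeros before t=0).
        x = [reference[n - i] if n - i >= 0 else 0.0 for i in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # filtered reference
        e = primary[n] - y                         # "cleaned" sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        out.append(e)
    return out

# Sanity check: if the primary input is pure correlated noise, the error
# should shrink toward zero as the filter converges.
noise = [math.cos(0.3 * n) for n in range(2000)]
cleaned = lms_cancel(noise, noise)
```

In a speech emotion pipeline the converged error signal (speech plus residual noise) would then be passed to feature extraction and the emotion recognizer.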