Adaptive Domain-Aware Representation Learning for Speech Emotion Recognition
2020
Interspeech 2020
However, representation learning is challenging because speech emotion signals vary considerably across domains such as gender, age, language, and socio-cultural context. ...
Many approaches focus on domain-invariant representation learning, which discards domain-specific knowledge and results in unsatisfactory speech emotion recognition across domains. ...
With the domain-aware attention, the performance of the model is further improved significantly. ...
doi:10.21437/interspeech.2020-2572
dblp:conf/interspeech/FanXXH20
fatcat:z4eel4633zcytdmtt2fqtrob64
Accounting for Variations in Speech Emotion Recognition with Nonparametric Hierarchical Neural Network
[article]
2021
arXiv
pre-print
However, existing models face a few constraints: 1) they rely on a clear definition of domains (e.g. gender, noise condition, etc.) and the availability of domain labels; 2) they often attempt to learn ...
In recent years, deep-learning-based speech emotion recognition models have outperformed classical machine learning models. ...
Multitask CNN (MTL-CNN). Multitask Learning attempts to learn models that perform well on multiple tasks simultaneously. ...
arXiv:2109.04316v1
fatcat:74k6kwobwzg5vptsbi4clw6uby
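A minimal sketch of the multitask-CNN idea described in the entry above, written in Keras (the stack named elsewhere in these results): a shared convolutional encoder over log-mel spectrograms feeds separate task-specific heads. The layer sizes, the auxiliary gender task, and the class counts are illustrative assumptions, not the architecture from the paper.
import tensorflow as tf
from tensorflow.keras import layers

# Shared encoder: gradients from every head update these weights.
spec = tf.keras.Input(shape=(64, 100, 1), name="log_mel")  # (mels, frames, channel)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(spec)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
shared = layers.GlobalAveragePooling2D()(x)

# Task-specific heads: each is trained only by its own loss term.
emotion = layers.Dense(4, activation="softmax", name="emotion")(shared)
gender = layers.Dense(2, activation="softmax", name="gender")(shared)

model = tf.keras.Model(spec, [emotion, gender])
model.compile(optimizer="adam",
              loss={"emotion": "sparse_categorical_crossentropy",
                    "gender": "sparse_categorical_crossentropy"})
Because both heads backpropagate through the same encoder, the auxiliary task acts as a regulariser on the representation used for emotion recognition.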
The consequences of media multitasking for youth: A review
2015
Computers in Human Behavior
The consequences of media multitasking for youth: a review. van der Schuur, W.A.; Baumgartner, S.E.; Sumter, S.R.; Valkenburg, P.M. ...
To improve the comparability across studies and to examine the consequences of each type of media multitasking, both types should be examined. ...
switch between tasks); (2) their academic performance (e.g., perceived academic learning and course grades); and, more recently, (3) their socioemotional functioning (e.g., depression and social anxiety ...
doi:10.1016/j.chb.2015.06.035
fatcat:aawhmzbobbfehmvcafzistreh4
Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition
[article]
2020
arXiv
pre-print
In particular, we use gender identifications and speaker recognition as auxiliary tasks, which allow the use of very large datasets, e.g., speaker classification datasets. ...
Despite the emerging importance of Speech Emotion Recognition (SER), the state-of-the-art accuracy is quite low and needs improvement to make commercial applications of SER viable. ...
[27] used gender and naturalness (natural or acted corpus) recognition as auxiliary tasks to improve the performance of emotion recognition using different emotional databases. Zhang et al. ...
arXiv:1907.06078v5
fatcat:drhb3dfjsraxxkt73dnncqhsqa
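The entry above treats gender identification and speaker recognition as auxiliary tasks alongside emotion recognition. A hedged sketch of the weighted joint objective is below; the weights are placeholder values, and the adversarial and semi-supervised components of the paper are not reproduced.
import tensorflow as tf

# Cross-entropy over logits for each task.
sce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def joint_loss(y, logits, w_emotion=1.0, w_gender=0.3, w_speaker=0.3):
    """Weighted sum of per-task losses; y and logits are dicts keyed by task."""
    return (w_emotion * sce(y["emotion"], logits["emotion"])
            + w_gender * sce(y["gender"], logits["gender"])
            + w_speaker * sce(y["speaker"], logits["speaker"]))

# Dummy batch of two utterances; 4 emotions, 2 genders, 10 speakers are assumed counts.
y = {"emotion": [0, 2], "gender": [1, 0], "speaker": [3, 7]}
logits = {"emotion": tf.random.normal((2, 4)),
          "gender": tf.random.normal((2, 2)),
          "speaker": tf.random.normal((2, 10))}
print(float(joint_loss(y, logits)))
Down-weighting the auxiliary terms keeps very large gender and speaker datasets from dominating the emotion objective.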
MindLink-Eumpy: An Open-Source Python Toolbox for Multimodal Emotion Recognition
2021
Frontiers in Human Neuroscience
In the detection of facial expressions, the algorithm used by MindLink-Eumpy is a multitask convolutional neural network (CNN) based on a transfer learning technique. ...
The feasibility and efficiency of MindLink-Eumpy for emotion recognition are thus demonstrated. ...
We gratefully acknowledge the developers of Python, Tensorflow, Keras, scikit-learn, NumPy, Pandas, MNE, and other software packages that MindLink-Eumpy builds upon. ...
doi:10.3389/fnhum.2021.621493
pmid:33679348
pmcid:PMC7933462
fatcat:7vdb3nslibdmrk3dtlglc64pau
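The transfer-learning component mentioned in the MindLink-Eumpy entry above can be sketched generically in Keras: a backbone pretrained on ImageNet is frozen and only a new facial-expression head is trained. The backbone choice (MobileNetV2), input size, and class count are assumptions for illustration; the toolbox's actual network is not reproduced here.
import tensorflow as tf

NUM_EXPRESSIONS = 7  # assumed number of expression classes

# Pretrained backbone, frozen so that only the new head is trained.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False,
    weights="imagenet", pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_EXPRESSIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
Fine-tuning can follow by unfreezing the top of the backbone with a lower learning rate once the new head has converged.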
Ten Recent Trends in Computational Paralinguistics
[chapter]
2012
Lecture Notes in Computer Science
more and more types of features, fusing linguistic and non-linguistic phenomena, devoting more effort to optimisation of the machine learning aspects, standardising the whole processing chain, addressing ...
The field of computational paralinguistics is currently emerging from loosely connected research in speech analysis, including speaker classification and emotion recognition. ...
In the future, enhanced modelling of multiple correlated target variables could be performed through multitask learning [15]. ...
doi:10.1007/978-3-642-34584-5_3
fatcat:2kacfc7f6fbyfhvo2yha2iunzm
Guest Editorial: The Computational Face
2018
IEEE Transactions on Pattern Analysis and Machine Intelligence
Acknowledgments: This project has been partially supported by the Spanish projects TIN2015-66951-C2-2-R and TIN2016-74946-P (MINECO/FEDER, UE) and CERCA Programme/Generalitat de Catalunya and by INAOE, ...
We thank ChaLearn Looking at People sponsors for their support, including Microsoft Research, Google, NVIDIA Corporation, Amazon, Facebook, and Disney Research. ...
Most articles on facial markers and facial attribute estimation adopted a deep learning methodology for feature learning. A deep learning model was also most often used for recognition problems. ...
doi:10.1109/tpami.2018.2869610
fatcat:izmdxwpzujdv3ctx63lrselk24
A Multitask Deep Learning Approach for User Depression Detection on Sina Weibo
[article]
2020
arXiv
pre-print
However, existing studies of depression detection based on machine learning still show relatively low classification performance, suggesting that there is significant potential for improvement ...
It includes more than 20,000 normal users and more than 10,000 depressed users, both of which are manually labeled and rechecked by professionals. ...
The user domain contains the user's gender, birthday, profile (a short text of the user's self-description), the number of followers, the number of followings, and the list of tweets. ...
arXiv:2008.11708v1
fatcat:cyyip7bczfchplskgpteyiijma
Analysis of Facial Information for Healthcare Applications: A Survey on Computer Vision-Based Approaches
2020
Information
This paper gives an overview of the cutting-edge approaches that perform facial cue analysis in the healthcare area. ...
A research taxonomy is introduced by dividing the face in its main features: eyes, mouth, muscles, skin, and shape. ...
In [209], the authors present a range of applications of deep learning in healthcare, and they also focus on speech recognition. ...
doi:10.3390/info11030128
fatcat:yx7izg2jlvhsjpppf6ektkmlye
SPEECH EMOTION RECOGNITION SURVEY
2020
JOURNAL OF MECHANICS OF CONTINUA AND MATHEMATICAL SCIENCES
The speech emotion recognition (SER) research field extends back to 1996, but one main obstacle still exists: achieving real-time SER systems. ...
Multitask learning was used to make use of the mutual features between gender and emotion classification. ...
doi:10.26782/jmcms.2020.09.00016
fatcat:ejnl2rlatnhhzeotno2ymmjjke
Toward a multimodal multitask model for neurodegenerative diseases diagnosis and progression prediction
[article]
2021
arXiv
pre-print
This article surveys various categories of models used for Alzheimer's disease prediction, together with their respective learning methods, through a comparative study of early prediction and detection ...
Recent studies on modelling the progression of Alzheimer's disease use a single modality for their predictions while ignoring the time dimension. ...
Multitask learning has been used successfully across many applications of machine learning, including natural language processing (Worsham and Kalita, 2020), speech recognition (Pironkov et al., 2016), computer vision ...
arXiv:2110.09309v1
fatcat:52n6u73lefgndlkp72ljimh5pu
Distributing Recognition in Computational Paralinguistics
2014
IEEE Transactions on Affective Computing
, pathology, age and gender. ...
In order to preliminarily investigate the feasibility and reliability of the proposed system, we focus on the trade-off between transmission bandwidth and recognition accuracy. ...
The authors would also like to thank Jürgen Geiger for his feedback on an early version of this paper. ...
doi:10.1109/taffc.2014.2359655
fatcat:olqvz67r7nbwpmp34eslj7j33u
Deep Learning in Human Activity Recognition with Wearable Sensors: A Review on Advances
[article]
2022
arXiv
pre-print
Many of these applications are made possible by leveraging the rich collection of low-power sensors found in many mobile and wearable devices to perform human activity recognition (HAR). ...
Recently, deep learning has greatly pushed the boundaries of HAR on mobile and wearable devices. ...
Acknowledgments Special thanks to Haik Kalamtarian and Krystina Neuman for their valuable feedback. ...
arXiv:2111.00418v5
fatcat:wylhzwkndjar7fc3esvhca2axi
Multilabel convolution neural network for facial expression recognition and ordinal intensity estimation
2021
PeerJ Computer Science
Facial Expression Recognition (FER) has gained considerable attention in affective computing due to its vast area of applications. ...
We also carried out a comparative study of our model with some popularly used multilabel algorithms using standard multilabel metrics. ...
Xu et al. (2020) proposed a multitask learning system using a cascaded CNN, with objectives aimed at incorporating student attentiveness, student emotion recognition, and intensity estimation ...
doi:10.7717/peerj-cs.736
pmid:34909462
pmcid:PMC8641570
fatcat:jsv3tmfejfadxdzc7umux4qal4
Abstracts Presented at the International Neuropsychological Society, British Neuropsychological Society and the Division of Neuropsychology of the British Psychological Society Joint Mid–Year Meeting, July 6–9, 2005, Dublin, Ireland
2005
Journal of the International Neuropsychological Society
Using structural equation modeling and adjusting for age, gender, years of education, depression, and injecting drug use, results showed that increased levels of cortisol in response to stress mediate ...
Comparison of baseline and intervention phases showed a significant improvement on anxiety and depression scores. ...
Recall and recognition were obtained immediately, and 30 minutes following initial learning. ...
doi:10.1017/s1355617705059941
fatcat:7iyw5adwpjgavcwjbpfocarzkm