57,243 Hits in 9.0 sec

Continuing Pre-trained Model with Multiple Training Strategies for Emotional Classification

Bin Li, Yixuan Weng, Qiya Song, Bin Sun, Shutao Li
2022 Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis   unpublished
This paper describes a continual pre-training method for the masked language model (MLM) to enhance the DeBERTa pre-trained language model.  ...  Moreover, our submission ranked Top-1 on all metrics in the evaluation phase of the Emotion Classification task.  ...  ., 2020) model with a continual pre-training method for processing this classification task, where the main method structure is shown in Figure 1.  ... 
doi:10.18653/v1/2022.wassa-1.22 fatcat:dwd2qgyznjazbh5kvajf2frlgm
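The continual MLM pre-training this entry describes follows a standard recipe: keep masking tokens in in-domain text and updating the checkpoint before fine-tuning. A minimal sketch with Hugging Face transformers; the "microsoft/deberta-v3-base" checkpoint and the "task_corpus.txt" file are illustrative assumptions, not the authors' exact setup:

```python
# Sketch of continual MLM pre-training on in-domain text.
# Checkpoint and corpus path are placeholders, not from the paper.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-v3-base")

dataset = load_dataset("text", data_files={"train": "task_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

# Standard MLM objective: mask 15% of tokens at random.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deberta-continued", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```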

Codified audio language modeling learns useful representations for music information retrieval

Rodrigo Castellon, Chris Donahue, Percy Liang
2021 Zenodo  
four MIR tasks: tagging, genre classification, emotion recognition, and key detection.  ...  For key detection, we observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling  ...  We also thank all reviewers for their helpful feedback.  ... 
doi:10.5281/zenodo.5624605 fatcat:trinlottffdvpizt5r43be7ajq
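The evaluation pattern behind this entry is probing: the audio model stays frozen and only a shallow classifier is trained on its activations. A minimal sketch of a linear probe under that assumption; the random arrays stand in for pooled Jukebox features, which in the paper are much higher-dimensional:

```python
# Linear probe over precomputed, frozen audio-model features.
# Random arrays stand in for pooled Jukebox activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
train_X, test_X = rng.normal(size=(800, 512)), rng.normal(size=(200, 512))
train_y, test_y = rng.integers(0, 10, 800), rng.integers(0, 10, 200)

# Only this shallow probe is trained; the feature extractor is never updated.
probe = LogisticRegression(max_iter=1000).fit(train_X, train_y)
print("genre-probe accuracy:", accuracy_score(test_y, probe.predict(test_X)))
```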

Codified audio language modeling learns useful representations for music information retrieval [article]

Rodrigo Castellon and Chris Donahue and Percy Liang
2021 arXiv   pre-print
four MIR tasks: tagging, genre classification, emotion recognition, and key detection.  ...  For key detection, we observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling  ...  We also thank all reviewers for their helpful feedback.  ... 
arXiv:2107.05677v1 fatcat:26yvxejc7vdlnph66gd6rnrcze

Sentiment Analysis for Spanish Tweets based on Continual Pre-training and Data Augmentation

Yingwen Fu, Ziyu Yang, Nankai Lin, Lianxi Wang, Feng Chen
2021 Annual Conference of the Spanish Society for Natural Language Processing  
In addition, we leverage two augmentation strategies to enhance the classic fine-tuned model, namely continual pre-training and data augmentation, to improve its generalization capability.  ...  Experimental results demonstrate the effectiveness of the BERT model and the two augmentation strategies.  ...  In addition, continual pre-training on the training set and back translation of a low proportion of the data respectively outperform continual pre-training on a general corpus and back translation of the whole data.  ... 
dblp:conf/sepln/FuYLWC21 fatcat:74q72ppbu5b55gtm3gvkjpk53y
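The back-translation augmentation mentioned in this entry round-trips text through a pivot language to generate paraphrases. A minimal sketch with MarianMT; the Helsinki-NLP checkpoints are an assumption, since the paper does not specify its translation system:

```python
# Back-translation augmentation sketch: es -> en -> es.
# The Helsinki-NLP checkpoints are illustrative choices.
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch, max_length=128)
    return [tok.decode(t, skip_special_tokens=True) for t in out]

tweets = ["Me encanta este nuevo disco, es increíble."]
english = translate(tweets, "Helsinki-NLP/opus-mt-es-en")
augmented = translate(english, "Helsinki-NLP/opus-mt-en-es")
print(augmented)  # paraphrased Spanish tweet, appended to the training set
```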

Speech Emotion Recognition with Heterogeneous Feature Unification of Deep Neural Network

Wei Jiang, Zheng Wang, Jesse S. Jin, Xianfeng Han, Chunguang Li
2019 Sensors  
Automatic speech emotion recognition is a challenging task due to the gap between acoustic features and human emotions, which relies strongly on the discriminative acoustic features extracted for a given  ...  for the recognition task.  ...  For example, Lakomkin et al. [29] proposed two models that use a pre-trained automatic speech recognition (ASR) network for speech emotion recognition.  ... 
doi:10.3390/s19122730 fatcat:n4pgdcbcnzannd5p3ystsyuclm
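Unifying heterogeneous acoustic features is, at its core, a fusion problem; one common pattern is projecting each feature stream into a shared space before concatenation. A hedged PyTorch sketch of that pattern only; the dimensions and head are invented, not the paper's architecture:

```python
# Generic heterogeneous-feature fusion sketch; dimensions are illustrative.
import torch
import torch.nn as nn

class FeatureUnifier(nn.Module):
    """Project each acoustic feature stream (e.g., MFCCs, spectrogram
    embeddings) into one space, concatenate, then classify."""
    def __init__(self, dims=(39, 128), hidden=64, n_emotions=7):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.classifier = nn.Linear(hidden * len(dims), n_emotions)

    def forward(self, feats):  # feats: list of per-stream tensors
        unified = torch.cat(
            [torch.relu(p(f)) for p, f in zip(self.proj, feats)], dim=-1)
        return self.classifier(unified)

model = FeatureUnifier()
logits = model([torch.randn(4, 39), torch.randn(4, 128)])
print(logits.shape)  # torch.Size([4, 7])
```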

Joint Deep Cross-Domain Transfer Learning for Emotion Recognition [article]

Dung Nguyen, Sridha Sridharan, Duc Thanh Nguyen, Simon Denman, Son N. Tran, Rui Zeng, Clinton Fookes
2020 arXiv   pre-print
Despite such substantial progress, existing approaches are still hindered by insufficient training data, and the resulting models do not generalize well under mismatched conditions.  ...  Deep learning has been applied to achieve significant progress in emotion recognition.  ...  fusing the pre/post-trained models with a classification loss.  ... 
arXiv:2003.11136v1 fatcat:rtlw75elrvcope6zfkkz2qvku4

CAiRE: An Empathetic Neural Chatbot [article]

Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, Pascale Fung
2020 arXiv   pre-print
., 2019) learning approach that fine-tunes a large-scale pre-trained language model with multi-task objectives: response language modeling, response prediction, and dialogue emotion detection.  ...  We evaluate our model on the recently proposed EmpatheticDialogues dataset (Rashkin et al., 2019); the experimental results show that CAiRE achieves state-of-the-art performance on dialogue emotion detection  ...  Cross-entropy is applied as the emotion classification loss L_E.  ... 
arXiv:1907.12108v4 fatcat:4qh7cs5un5chjp3wrxy3uqlthe
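The multi-task objective in this entry sums three cross-entropy terms over shared model outputs. A minimal sketch of that combination; the head shapes, equal loss weights, and the 32-emotion label space (from EmpatheticDialogues) are assumptions:

```python
# Sketch of a combined multi-task loss: response LM + response prediction
# + emotion detection. Weights and dummy shapes are illustrative.
import torch
import torch.nn.functional as F

def multitask_loss(lm_logits, lm_targets, pred_logits, pred_targets,
                   emo_logits, emo_targets, w=(1.0, 1.0, 1.0)):
    l_lm = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                           lm_targets.view(-1), ignore_index=-100)
    l_pred = F.cross_entropy(pred_logits, pred_targets)  # next-response choice
    l_e = F.cross_entropy(emo_logits, emo_targets)       # emotion loss L_E
    return w[0] * l_lm + w[1] * l_pred + w[2] * l_e

# Dummy batch: vocab 100, 4 candidate responses, 32 emotion classes.
loss = multitask_loss(
    torch.randn(2, 10, 100, requires_grad=True), torch.randint(0, 100, (2, 10)),
    torch.randn(2, 4, requires_grad=True), torch.randint(0, 4, (2,)),
    torch.randn(2, 32, requires_grad=True), torch.randint(0, 32, (2,)))
loss.backward()
```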

A Data-Driven Adaptive Emotion Recognition Model for College Students Using an Improved Multifeature Deep Neural Network Technology

Li Liu, Yunfeng Ji, Yun Gao, Tao Li, Wei Xu, Xin Ning
2022 Computational Intelligence and Neuroscience  
Second, feature fusion is performed on multiple features using the autosklearn model integration technique.  ...  With increasing pressure on college students in terms of study, work, emotion, and life, their emotional changes are becoming more and more evident.  ...  Typical continuous emotional expression models include the Wundt emotional space [30], the Schlosberg 3D cone emotional space [31], the 3D PAD emotional space [32], and the Plutchik emotion wheel continuous  ... 
doi:10.1155/2022/1343358 pmid:35665293 pmcid:PMC9162810 fatcat:neiez4r2f5fldik4isszengbci

Emotion Classification for Spanish with XLM-RoBERTa and TextCNN

Suidong Qu, Yanhua Yang, Qinyu Que
2021 Annual Conference of the Spanish Society for Natural Language Processing  
Finally, the output of the model is fed into a fully connected layer for classification. Our model ranked 14th in this task. The weighted-average F1 is 0.5570, and the accuracy is 0.5368.  ...  Our team (team name: Dong) first uses XLM-RoBERTa for embedding.  ...  Acknowledgements We would like to thank the organizers for organizing this task and providing data support, and the review experts for their patience.  ... 
dblp:conf/sepln/QuYQ21 fatcat:svb7w2kymjckliqfjqsbalfl2y
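The pipeline this entry describes (XLM-RoBERTa embeddings into a TextCNN, then a fully connected layer) is a common composition. A minimal PyTorch sketch of that composition; the filter sizes and the 7-class output are assumptions, not the team's reported configuration:

```python
# XLM-RoBERTa token embeddings -> TextCNN -> fully connected classifier.
# Filter sizes and class count are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class XlmrTextCNN(nn.Module):
    def __init__(self, n_classes=7, n_filters=64, sizes=(2, 3, 4)):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-base")
        hid = self.encoder.config.hidden_size
        self.convs = nn.ModuleList(nn.Conv1d(hid, n_filters, k) for k in sizes)
        self.fc = nn.Linear(n_filters * len(sizes), n_classes)

    def forward(self, **inputs):
        h = self.encoder(**inputs).last_hidden_state.transpose(1, 2)  # B,H,T
        pooled = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XlmrTextCNN()
batch = tok(["No puedo creer lo que pasó"], return_tensors="pt")
print(model(**batch).shape)  # torch.Size([1, 7])
```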

Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network

Yuxiang Zhang, Jiamei Fu, Dongyu She, Ying Zhang, Senzhang Wang, Jufeng Yang
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
However, emotion analysis carries an inherent ambiguity, since a single sentence can evoke multiple emotions with different intensities.  ...  the emotion classification.  ...  [Wang and Pal, 2015] propose a model with several constraints based on an emotion lexicon for emotion classification.  ... 
doi:10.24963/ijcai.2018/639 dblp:conf/ijcai/ZhangFSZWY18 fatcat:mh6gdl2m6vdyjbcdhfzr6chhei
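Emotion distribution learning of this kind is typically trained with a divergence loss against a label distribution, optionally combined with classification on the dominant label as a second task. A hedged sketch of that objective; the 8-way label space and the mixing weight are assumptions, not the paper's exact formulation:

```python
# Joint distribution-learning + classification loss sketch:
# KL divergence to the label distribution plus cross-entropy on the
# dominant emotion, sharing one set of logits.
import torch
import torch.nn.functional as F

def distribution_cls_loss(logits, target_dist, alpha=0.5):
    log_probs = F.log_softmax(logits, dim=-1)
    kl = F.kl_div(log_probs, target_dist, reduction="batchmean")
    ce = F.cross_entropy(logits, target_dist.argmax(dim=-1))  # dominant label
    return alpha * kl + (1 - alpha) * ce

logits = torch.randn(2, 8, requires_grad=True)  # 8 emotion categories
target = torch.tensor([[0.6, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0],
                       [0.1, 0.7, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0]])
distribution_cls_loss(logits, target).backward()
```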

EmotionX-IDEA: Emotion BERT – an Affectional Model for Conversation [article]

Yen-Hao Huang, Ssu-Rui Lee, Mau-Yun Ma, Yi-Hsin Chen, Ya-Wen Yu, Yi-Shin Chen
2019 arXiv   pre-print
In this paper, we investigate the emotion recognition ability of the pre-trained language model BERT.  ...  The experiments show that by mapping the continuous dialogue into causal utterance pairs, each constructed from an utterance and its reply, models can better capture the emotions of the reply  ...  The pre-training strategies are described below.  ... 
arXiv:1908.06264v1 fatcat:37m4nyb2xbbkjkzygqttb43oyy
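The causal utterance-pair construction maps each (utterance, reply) pair onto BERT's native sentence-pair input format, so the model can condition the reply's emotion on its context. A minimal sketch; the dialogue text and emotion target are invented:

```python
# Build (utterance, reply) pairs and encode them as BERT sentence pairs:
# "[CLS] utterance [SEP] reply [SEP]". The model then predicts the
# emotion of the reply. Example text is invented.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

dialogue = ["I failed my driving test again.",
            "Oh no, I'm so sorry. You'll get it next time!"]

pairs = [(dialogue[i], dialogue[i + 1]) for i in range(len(dialogue) - 1)]
enc = tok([p[0] for p in pairs], [p[1] for p in pairs],
          return_tensors="pt", padding=True, truncation=True)
print(tok.decode(enc["input_ids"][0]))
```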

Emotion Embedding Spaces for Matching Music to Stories

Minz Won, Justin Salamon, Nicholas J. Bryan, Gautham Mysore, Xavier Serra
2021 Zenodo  
Both the music and text domains have existing datasets with emotion labels, but mismatched emotion vocabularies prevent us from using mood or emotion annotations directly for matching.  ...  ., books), use multiple sentences as input queries, and automatically retrieve matching music. We formalize this task as a cross-modal text-to-music retrieval problem.  ...  Embedding Models to Bridge the Modality Gap. Classification. As a starting point, we train two separate mood classification models for text and music (Figure 2-(a)).  ... 
doi:10.5281/zenodo.5624482 fatcat:uqlm3s5korb5rm2ybkbvr42qpi
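Once text and music live in a shared emotion embedding space, retrieval reduces to nearest-neighbor search. A minimal sketch of that final step with cosine similarity; the random vectors stand in for outputs of the trained text and music branches:

```python
# Cross-modal retrieval sketch: rank music tracks by cosine similarity
# to a story embedding. Embeddings here are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
story_emb = rng.normal(size=(1, 128))    # one multi-sentence story query
music_emb = rng.normal(size=(500, 128))  # catalog of music-track embeddings

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

scores = cosine(story_emb, music_emb)[0]
print("top-5 matching tracks:", np.argsort(-scores)[:5])
```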

Dimensional Emotion Detection from Categorical Emotion [article]

Sungjoon Park, Jiseon Kim, Seonghyeon Ye, Jaeyeol Jeon, Hee Young Park, Alice Oh
2021 arXiv   pre-print
We present a model to predict fine-grained emotions along the continuous dimensions of valence, arousal, and dominance (VAD) using a corpus with categorical emotion annotations.  ...  We use pre-trained RoBERTa-Large, fine-tune on three different corpora with categorical labels, and evaluate on the EmoBank corpus with VAD scores.  ...  We use the pre-defined train, validation, and test splits of the dataset. EmoBank: sentences paired with continuous VAD scores as labels.  ... 
arXiv:1911.02499v2 fatcat:yedt7nfdnnhijkle5e32fwth7u
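Predicting continuous VAD values is a three-output regression on top of the encoder. A minimal sketch using transformers' built-in regression head; roberta-base stands in for RoBERTa-Large to keep it light, and the example VAD target is invented:

```python
# Fine-tuning sketch: RoBERTa with a 3-output regression head for
# valence/arousal/dominance. problem_type="regression" selects MSE loss.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3, problem_type="regression")

batch = tok(["I can't stop smiling today!"], return_tensors="pt")
vad_target = torch.tensor([[0.9, 0.7, 0.6]])  # invented V, A, D scores

out = model(**batch, labels=vad_target)  # MSE loss against the VAD target
print(out.loss.item(), out.logits)       # logits = predicted VAD values
```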

Image based Static Facial Expression Recognition with Multiple Deep Network Learning

Zhiding Yu, Cha Zhang
2015 Proceedings of the 2015 ACM on International Conference on Multimodal Interaction - ICMI '15  
The pre-trained models are then fine-tuned on the training set of SFEW 2.0.  ...  Each CNN model is initialized randomly and pre-trained on a larger dataset provided by the Facial Expression Recognition (FER) Challenge 2013.  ...  Network Pre-training on FER. We pre-train our CNN model on the combined FER dataset formed by the train, validation, and test sets.  ... 
doi:10.1145/2818346.2830595 dblp:conf/icmi/YuZ15 fatcat:t5v5qcpj65crdhmmmzzytl3vqq
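The recipe here is pre-train on the large FER-2013 set, then fine-tune on the smaller SFEW 2.0 set, typically at a lower learning rate. A hedged sketch of that two-stage loop; a torchvision ResNet stands in for the authors' custom CNN, and both datasets are random placeholders:

```python
# Two-stage training sketch: "pre-train" on FER-like data, then fine-tune
# on SFEW-like data at a lower learning rate. Data is random placeholder.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=7)  # 7 basic facial expressions

def run_epoch(model, images, labels, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.functional.cross_entropy(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

fer_x, fer_y = torch.randn(8, 3, 224, 224), torch.randint(0, 7, (8,))
sfew_x, sfew_y = torch.randn(4, 3, 224, 224), torch.randint(0, 7, (4,))

run_epoch(model, fer_x, fer_y, lr=1e-2)    # pre-training stage
run_epoch(model, sfew_x, sfew_y, lr=1e-3)  # fine-tuning stage, lower lr
```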

BERT-based Acronym Disambiguation with Multiple Training Strategies [article]

Chunguang Pan, Bingyan Song, Shengguang Wang, Zhipeng Luo
2021 arXiv   pre-print
Since few works have been done for AD in the scientific field, we propose in this paper a binary classification model incorporating BERT and several training strategies, including dynamic negative sample selection, task-adaptive pretraining, adversarial training, and pseudo labeling.  ...  Combining the advantages of the above works, we propose a binary classification model utilizing BERT and several training strategies, such as adversarial training.  ... 
arXiv:2103.00488v2 fatcat:rnllmywavjbl5onjhhxgejdm54
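Of the strategies listed, adversarial training is often implemented in the FGM style: perturb the embedding matrix along its gradient, add a second loss pass, then restore the weights. A minimal sketch under that assumption; the tiny classifier is a toy stand-in for BERT, and the paper may use a different adversarial scheme:

```python
# FGM-style adversarial training sketch. TinyClassifier is a toy stand-in
# for a BERT classifier, just to make the sketch self-contained.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.word_embeddings = nn.Embedding(100, 16)
        self.fc = nn.Linear(16, 2)
    def forward(self, ids):
        return self.fc(self.word_embeddings(ids).mean(dim=1))

def fgm_adversarial_step(model, compute_loss, epsilon=1.0,
                         emb_name="word_embeddings"):
    compute_loss(model).backward()  # clean pass: grads for the perturbation
    backup = {}
    # Perturb embedding weights along their gradient direction (FGM).
    for name, p in model.named_parameters():
        if emb_name in name and p.grad is not None:
            backup[name] = p.data.clone()
            p.data.add_(epsilon * p.grad / (p.grad.norm() + 1e-12))
    compute_loss(model).backward()  # adversarial pass; grads accumulate
    for name, p in model.named_parameters():
        if name in backup:
            p.data = backup[name]   # restore embeddings before the step

model = TinyClassifier()
ids = torch.randint(0, 100, (4, 12))
labels = torch.randint(0, 2, (4,))
fgm_adversarial_step(model, lambda m: nn.functional.cross_entropy(m(ids), labels))
torch.optim.SGD(model.parameters(), lr=1e-3).step()
```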
Showing results 1 — 15 out of 57,243 results