708 Hits in 6.4 sec

Sentiment analysis in non-fixed length audios using a Fully Convolutional Neural Network

María Teresa García-Ordás, Héctor Alaiz-Moretón, José Alberto Benítez-Andrades, Isaías García-Rodríguez, Oscar García-Olalla, Carmen Benavides
2021 Biomedical Signal Processing and Control  
In this work, a sentiment analysis method capable of accepting audio of any length, not fixed a priori, is proposed.  ...  Mel spectrograms and Mel Frequency Cepstral Coefficients are used as audio descriptors, and a Fully Convolutional Neural Network architecture is proposed as the classifier.  ... 
doi:10.1016/j.bspc.2021.102946 fatcat:wdlxldvtubeu5j76qtkex74fwi
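The variable-length capability described above comes from keeping the network fully convolutional: no dense layer fixes the input size, and a global pooling step collapses the time axis at the end. A minimal PyTorch sketch of that idea (layer counts, filter sizes, and the two-class head are illustrative assumptions, not the authors' published configuration):

```python
import torch
import torch.nn as nn

class FCNSentiment(nn.Module):
    """Fully convolutional classifier over a Mel spectrogram of any length.

    Global average pooling collapses the variable time axis, so no fixed
    input length has to be chosen a priori.
    """
    def __init__(self, n_mels=64, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A 1x1 convolution serves as the classification head, keeping the
        # whole network convolutional.
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, x):              # x: (batch, 1, n_mels, time), time varies
        x = self.classifier(self.features(x))
        return x.mean(dim=(2, 3))      # global average pool -> (batch, n_classes)

# Two clips of different durations pass through the same network unchanged.
model = FCNSentiment()
print(model(torch.randn(1, 1, 64, 200)).shape)  # torch.Size([1, 2])
print(model(torch.randn(1, 1, 64, 587)).shape)  # torch.Size([1, 2])
```

The Mel spectrogram and MFCC inputs themselves can be computed with, e.g., librosa.feature.melspectrogram and librosa.feature.mfcc.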

Real-Time Speech Emotion and Sentiment Recognition for Interactive Dialogue Systems

Dario Bertero, Farhad Bin Siddique, Chien-Sheng Wu, Yan Wan, Ricky Ho Yin Chan, Pascale Fung
2016 Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing  
A separate, CNN-based sentiment analysis module recognizes sentiment from speech recognition results, with 82.5 F-measure on human-machine dialogues when trained with out-of-domain data.  ...  In this paper, we describe our approach to enabling an interactive dialogue system to recognize user emotion and sentiment in real time.  ...  The segments have an average length slightly above 13 seconds. The Convolutional Neural Network (CNN) model using raw audio as input is shown in Figure 1.  ... 
doi:10.18653/v1/d16-1110 dblp:conf/emnlp/BerteroSWWCF16 fatcat:bgooboql3zd2ramzaneouax5eq
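For the CNN over raw audio mentioned in the snippet, the usual pattern is a wide first 1-D convolution that acts as a learned filterbank, followed by pooling over time. A hedged sketch, with filter widths and the 16 kHz sampling rate as assumptions rather than the paper's exact Figure 1 settings:

```python
import torch
import torch.nn as nn

class RawAudioCNN(nn.Module):
    """1-D CNN over the raw waveform; max pooling over time makes the
    utterance-level representation length-independent."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=200, stride=80), nn.ReLU(),  # ~12.5 ms frames at 16 kHz
            nn.Conv1d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, wav):                    # wav: (batch, 1, samples)
        h = self.conv(wav).max(dim=2).values   # max pool over the time axis
        return self.head(h)

model = RawAudioCNN()
wav = torch.randn(2, 1, 16000 * 13)  # two ~13 s segments, the average length cited
print(model(wav).shape)              # torch.Size([2, 3])
```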

Sentiment analysis by deep learning approaches

Sreevidya P., O. V. Ramana Murthy, S. Veni
2020 TELKOMNIKA (Telecommunication Computing Electronics and Control)  
We propose a model for carrying out deep-learning-based multimodal sentiment analysis. The MOUD dataset is used for experimentation.  ...  We developed two parallel text-based and audio-based models and then fused these heterogeneous feature maps, taken from intermediate layers, to complete the architecture.  ...  Li and Li [7] used SVM for classifying sentiments in microblogs. Neural networks and SVM were applied to sentiment analysis and compared by Moraes et al. [8].  ... 
doi:10.12928/telkomnika.v18i2.13912 fatcat:5s5ghjwwv5eclf23yzyg73evby
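The fusion of heterogeneous feature maps from intermediate layers that the snippet describes can be as simple as projecting each modality to a common size and concatenating. A sketch under assumed feature dimensions (the 300-d text and 40-d audio sizes are placeholders, not MOUD specifics):

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Project text and audio features separately, concatenate the
    intermediate representations, and classify the fused vector."""
    def __init__(self, text_dim=300, audio_dim=40, hidden=128, n_classes=2):
        super().__init__()
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, text_feats, audio_feats):
        fused = torch.cat([self.text_branch(text_feats),
                           self.audio_branch(audio_feats)], dim=1)
        return self.classifier(fused)

model = FusionClassifier()
print(model(torch.randn(4, 300), torch.randn(4, 40)).shape)  # torch.Size([4, 2])
```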

Cyclic Autoencoder for Multimodal Data Alignment Using Custom Datasets

Zhenyu Tang, Jin Liu, Chao Yu, Y. Ken Wang
2021 Computer systems science and engineering  
To this end, we propose: (i) a novel cyclic autoencoder based on a convolutional neural network.  ...  convolutional network for video subtitle recognition.  ...  (0, l); l ∈ [1, T × P]}; C_i ∈ D (3). For preprocessing, the audio is converted to mono and the sampling frequency of the audio is fixed; the audio is then converted to a spectrum using a sliding window  ... 
doi:10.32604/csse.2021.017230 fatcat:nxj75oyf4jaavohw5aprjnm53q
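The preprocessing chain quoted in the snippet (downmix to mono, fix the sampling frequency, then slide a window to get a spectrum) maps directly onto librosa; the concrete rate, window, and hop values below are assumptions:

```python
import librosa
import numpy as np

def audio_to_spectrogram(path, sr=16000, n_fft=512, hop_length=160):
    """Load audio as mono at a fixed sampling rate, then compute a
    log-magnitude spectrogram with a sliding STFT window."""
    y, _ = librosa.load(path, sr=sr, mono=True)   # mono + resample in one step
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    return librosa.amplitude_to_db(spec, ref=np.max)

# spec = audio_to_spectrogram("clip.wav")  # shape: (1 + n_fft // 2, n_frames)
```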

Towards Empathetic Human-Robot Interactions [article]

Pascale Fung, Dario Bertero, Yan Wan, Anik Dey, Ricky Ho Yin Chan, Farhad Bin Siddique, Yang Yang, Chien-Sheng Wu, Ruixi Lin
2016 arXiv   pre-print
Although research on empathetic robots is still at an early stage, we describe our approach of using signal processing techniques, sentiment analysis, and machine learning algorithms to make robots that  ...  In this paper, we present our work so far in the areas of deep learning for emotion and sentiment recognition, as well as humor recognition.  ...  The first model we use is the Convolutional Neural Network [9], which is useful for obtaining a fixed-length vector representation of an utterance, an audio signal, or an image.  ... 
arXiv:1605.04072v1 fatcat:vav36uondncudj7n2aefedsl5e

Enterprise Strategic Management From the Perspective of Business Ecosystem Construction Based on Multimodal Emotion Recognition

Wei Bi, Yongzhen Xie, Zheng Dong, Hongshen Li
2022 Frontiers in Psychology  
Through a comparative analysis of the accuracy of single-modal and multimodal ER, the self-attention mechanism is applied in the experiments.  ...  This paper aims to study a multimodal ER method based on an attention mechanism.  ...  ZD and HL: data analysis. All authors contributed to the article and approved the submitted version.  ... 
doi:10.3389/fpsyg.2022.857891 pmid:35310264 pmcid:PMC8927019 doaj:82cf2c71b7bf4e4f9bdeda763b6e1939 fatcat:hssh4dpwzbahvpv5vyupuuoxuu
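One common way to apply a self-attention mechanism across modalities, as this abstract describes, is to treat each modality embedding as one token and let attention weight their interactions. A minimal sketch using PyTorch's built-in MultiheadAttention; the 128-d embeddings and three modalities are assumptions:

```python
import torch
import torch.nn as nn

d_model = 128  # assumed shared embedding size for text, audio, and visual
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

# (batch, 3 modality tokens, features): each token is one modality embedding.
modalities = torch.randn(8, 3, d_model)
fused, weights = attn(modalities, modalities, modalities)  # self-attention
utterance = fused.mean(dim=1)  # pooled multimodal representation

print(utterance.shape)  # torch.Size([8, 128])
print(weights.shape)    # torch.Size([8, 3, 3]) -- cross-modal attention weights
```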

Sentiment Detection from Speech Recognition Output

Iv. Tashev, D. Emmanouilidou
2020 Engineering Sciences  
Sentiment detection from text was one of the first text analysis applications. Recently it has made serious progress using deep learning algorithms.  ...  In this article we review established and novel features for text analysis, combine them with the latest deep learning algorithms, and evaluate the proposed models.  ...  ACKNOWLEDGEMENTS The authors would like to thank our colleagues Ashley Chang, Bryan Li, Dimitrios Dimitriadis, and Andreas Stolcke for labeling the data and for fruitful discussions on various aspects of sentiment  ... 
doi:10.7546/engsci.lvii.20.02.01 fatcat:wkatypbw7fcqxczb43spbox3mi

Speech Emotion Recognition Using Spectrogram and Phoneme Embedding

Promod Yenigalla, Abhay Kumar, Suraj Tripathi, Chirag Singh, Sibsambhu Kar, Jithendra Vepa
2018 Interspeech 2018  
We performed various experiments with different kinds of deep neural networks using phonemes and spectrograms as inputs.  ...  A combined phoneme and spectrogram CNN model proved the most accurate at recognizing emotions on IEMOCAP data.  ...  Acknowledgements The authors would like to acknowledge the support of Samsung R&D Institute-India, Bangalore in this work.  ... 
doi:10.21437/interspeech.2018-1811 dblp:conf/interspeech/YenigallaKTSKV18 fatcat:vr55svaxkjed5dlxhuqxtuqn6q
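A combined phoneme-and-spectrogram model of the kind the snippet summarizes is typically two branches, a 2-D CNN over the spectrogram and a 1-D CNN over phoneme embeddings, concatenated before the classifier. A hedged sketch; the vocabulary size, channel counts, and four emotion classes are placeholders, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SpecPhonemeCNN(nn.Module):
    """Two-branch CNN: spectrogram branch (2-D) plus phoneme-embedding
    branch (1-D), fused by concatenation."""
    def __init__(self, n_phonemes=50, emb_dim=32, n_classes=4):
        super().__init__()
        self.spec_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.phoneme_emb = nn.Embedding(n_phonemes, emb_dim)
        self.phone_cnn = nn.Sequential(
            nn.Conv1d(emb_dim, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(32, n_classes)

    def forward(self, spec, phonemes):
        s = self.spec_cnn(spec)                                          # (B, 16)
        p = self.phone_cnn(self.phoneme_emb(phonemes).transpose(1, 2))   # (B, 16)
        return self.head(torch.cat([s, p], dim=1))

model = SpecPhonemeCNN()
out = model(torch.randn(2, 1, 64, 120),      # spectrogram batch
            torch.randint(0, 50, (2, 40)))   # phoneme-ID sequences
print(out.shape)  # torch.Size([2, 4])
```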

Convolutional Attention Networks for Multimodal Emotion Recognition from Speech and Text Data [article]

Chan Woo Lee, Kyu Ye Song, Jihoon Jeong, Woo Yong Choi
2019 arXiv   pre-print
In this paper, we propose a new method of learning hidden representations from just speech and text data using convolutional attention networks.  ...  Emotion recognition has become a popular topic of interest, especially in the field of human-computer interaction.  ...  The work of Tzirakis et al. uses deep residual networks to extract features from facial expressions, convolutional neural networks to extract features from speech, and concatenates them as input to an LSTM  ... 
arXiv:1805.06606v2 fatcat:jg7ixtuc2rb3pauhcvzrj2xwti

A Transfer Learning End-to-End Arabic Text-To-Speech (TTS) Deep Architecture [article]

Fady Fahmy, Mahmoud Khalil, Hazem Abbas
2020 arXiv   pre-print
This work describes how to generate high-quality, natural, human-like Arabic speech using an end-to-end neural deep network architecture.  ...  This work uses just ⟨text, audio⟩ pairs with a relatively small amount of recorded audio, totaling 2.41 hours.  ...  It offers a unified, purely neural network approach and eliminates the non-neural parts previously used by Tacotron, such as the Griffin-Lim reconstruction algorithm to synthesize speech.  ... 
arXiv:2007.11541v1 fatcat:zfknuaxmo5a2xopaajbuqneemq
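The Griffin-Lim step that the snippet says the newer model eliminates reconstructs a waveform from a magnitude spectrogram by iteratively re-estimating phase. librosa ships an implementation, so the non-neural baseline is easy to demonstrate; the test tone and STFT parameters below are arbitrary choices:

```python
import librosa
import numpy as np

# Synthesize a 1 s 440 Hz test tone, discard its phase, and let
# Griffin-Lim recover a waveform from the magnitude spectrogram alone.
sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))   # magnitude only
y_hat = librosa.griffinlim(S, n_iter=32, hop_length=256)  # iterative phase estimation

print(y.shape, y_hat.shape)  # reconstructed length is close to the original
```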

Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling [article]

N. Majumder, D. Hazarika, A. Gelbukh, E. Cambria, S. Poria
2018 arXiv   pre-print
Multimodal sentiment analysis is a very actively growing field of research. A promising area of opportunity in this field is improving the multimodal fusion mechanism.  ...  On multimodal sentiment analysis of individual utterances, our strategy outperforms conventional concatenation of features by 1%, which amounts to a 5% reduction in error rate.  ...  Acknowledgement The work was partially supported by the Instituto Politécnico Nacional via grant SIP 20172008 to A. Gelbukh.  ... 
arXiv:1806.06228v1 fatcat:5ovpbqltrremdmeiogwyqbez4q
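The context modeling half of the title usually means conditioning each utterance on its neighbors in the same video before classification. A hedged sketch of that step with a bidirectional GRU over already-fused utterance vectors; the dimensions are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

utter_dim, hidden, n_classes = 100, 64, 2  # assumed sizes
context_rnn = nn.GRU(utter_dim, hidden, batch_first=True, bidirectional=True)
classifier = nn.Linear(2 * hidden, n_classes)

video = torch.randn(1, 20, utter_dim)  # 20 fused utterance vectors from one video
ctx, _ = context_rnn(video)            # each position now sees its context
logits = classifier(ctx)               # per-utterance sentiment logits

print(logits.shape)  # torch.Size([1, 20, 2])
```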

Ensemble Deep Learning for Multilabel Binary Classification of User-Generated Content

Giannis Haralabopoulos, Ioannis Anagnostopoulos, Derek McAuley
2020 Algorithms  
Sentiment analysis usually refers to the analysis of human-generated content through a polarity filter; affective computing deals with the exact emotions conveyed by that content.  ...  Emotional information most often cannot be accurately described by a single emotion class. Multilabel classifiers can categorize human-generated content into multiple emotional classes.  ...  Text generation is addressed via a recurrent neural network in [36], and an extensively trained model outperforms the best non-NN models.  ... 
doi:10.3390/a13040083 fatcat:vdcqiyqjvvevjlgrfqbigbtbze
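Multilabel binary classification of the kind described here gives each emotion class its own sigmoid output trained with binary cross-entropy, so one text can carry several emotions at once. A minimal sketch with assumed feature and class counts:

```python
import torch
import torch.nn as nn

n_features, n_emotions = 300, 6  # illustrative sizes
model = nn.Linear(n_features, n_emotions)  # one logit per emotion class
criterion = nn.BCEWithLogitsLoss()         # independent binary targets

x = torch.randn(4, n_features)               # document embeddings (toy data)
y = torch.tensor([[1., 0., 1., 0., 0., 0.],  # each row may have several 1s
                  [0., 1., 0., 0., 1., 0.],
                  [0., 0., 0., 1., 0., 0.],
                  [1., 1., 0., 0., 0., 1.]])

loss = criterion(model(x), y)
predicted = (torch.sigmoid(model(x)) > 0.5).int()  # per-class thresholding
print(loss.item(), predicted.shape)                # scalar, torch.Size([4, 6])
```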

Predicting the pandemic: sentiment evaluation and predictive analysis from large-scale tweets on Covid-19 by deep convolutional neural network

Sourav Das, Anup Kumar Kolya
2021 Evolutionary Intelligence  
In this paper, we propose a novel approach to accurate sentiment evaluation, using a deep neural network on live-streamed Coronavirus tweets, together with prediction of future case growth.  ...  Applying deep neural networks to textual sentiment analysis is an extensively practiced domain of research.  ...  Acknowledgments We dedicate our work to the countless people who lost their lives, or lost close ones, in this catastrophe.  ... 
doi:10.1007/s12065-021-00598-7 pmid:33815622 pmcid:PMC8007226 fatcat:rrijxtbe65aiflbf2ccxqlklsm

AttendAffectNet–Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention

Ha Thi Phuong Thao, B T Balamurali, Gemma Roig, Dorien Herremans
2021 Sensors  
To explore these correlations, a neural network architecture, AttendAffectNet (AAN), uses the self-attention mechanism to predict the emotions of movie viewers from different input modalities.  ...  The models that use all visual, audio, and text features simultaneously as inputs performed better than those using features extracted from each modality separately.  ...  Twitter sentiment analysis with deep convolutional neural networks.  ... 
doi:10.3390/s21248356 pmid:34960450 pmcid:PMC8704548 fatcat:dhqibcpfozgm3as4rzp2n4qa4u

Deep Cross-Modal Correlation Learning for Audio and Lyrics in Music Retrieval [article]

Yi Yu, Suhua Tang, Francisco Raposo, Lei Chen
2017 arXiv   pre-print
A pre-trained Doc2vec model followed by fully-connected layers (a fully-connected deep neural network) is used to represent lyrics.  ...  In this work, we propose a deep cross-modal correlation learning architecture involving two-branch deep neural networks for the audio modality and the text modality (lyrics).  ...  Research motivation and background are introduced in Sec. II. Sec. III gives the preliminaries of Convolutional Neural Networks (CNNs) and Deep Canonical Correlation Analysis (DCCA).  ... 
arXiv:1711.08976v2 fatcat:m5uk6lbadrcanpb3prxfv7lueu
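The lyric branch described above, a Doc2vec representation fed into fully-connected layers, can be sketched with gensim; the toy corpus and hyperparameters are assumptions for illustration only:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Train a tiny Doc2vec model and infer one fixed-length vector per lyric;
# that vector would then feed the fully-connected lyric branch.
lyrics = [
    "hello darkness my old friend".split(),
    "we will rock you".split(),
    "let it be let it be".split(),
]
corpus = [TaggedDocument(words, [i]) for i, words in enumerate(lyrics)]
model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=40)

vec = model.infer_vector("hello my friend".split())
print(vec.shape)  # (32,) -- a fixed-length lyric embedding
```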
Showing results 1 — 15 out of 708 results