51,042 Hits in 3.9 sec

Self-training from labeled features for sentiment analysis

Yulan He, Deyu Zhou
2011 Information Processing & Management  
The word-class distributions of such self-learned features are estimated from the pseudo-labeled examples and are used to train another classifier by constraining the model's predictions on unlabeled instances  ...  Documents classified with high confidence are then used as pseudo-labeled examples for automatic domain-specific feature acquisition.  ...  Self-training from labeled features for sentiment analysis.  ... 
doi:10.1016/j.ipm.2010.11.003 fatcat:d2xopnal5fgxtpn5p2qi6ete5a
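The He & Zhou snippet above mentions estimating word-class distributions from pseudo-labeled examples. The sketch below only illustrates that counting step under simple assumptions (document-level word presence, plain normalisation); it is not the paper's actual formulation, which uses these distributions to constrain another classifier's predictions.

```python
# Hedged sketch: word-class distributions from pseudo-labeled documents.
# The counting scheme and data layout are illustrative assumptions.
from collections import Counter, defaultdict

def word_class_distributions(pseudo_labeled_docs):
    """pseudo_labeled_docs: iterable of (tokens, label) pairs produced by a
    high-confidence classifier. Returns {word: {label: P(label | word)}}."""
    counts = defaultdict(Counter)
    for tokens, label in pseudo_labeled_docs:
        for w in set(tokens):          # document-level presence counts
            counts[w][label] += 1
    dists = {}
    for w, c in counts.items():
        total = sum(c.values())
        dists[w] = {lab: n / total for lab, n in c.items()}
    return dists

# Example with two pseudo-labeled documents
docs = [(["great", "plot", "acting"], "pos"), (["dull", "plot"], "neg")]
print(word_class_distributions(docs)["plot"])   # {'pos': 0.5, 'neg': 0.5}
```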

Domain Adaptation for Opinion Classification: A Self-Training Approach

Ning Yu
2013 Journal of Information Science Theory and Practice  
Findings of this study suggest that, when there are limited labeled data, self-training is a promising approach for opinion classification, although the contributions vary across data domains.  ...  Specifically, self-training is evaluated for effectiveness in sparse data situations and feasibility for domain adaptation in opinion classification.  ...  Since information used for sentiment analysis is typically lexical and lexical means of expressing sentiments may vary not only from domain to domain but also from register to register, a sentiment analysis  ... 
doi:10.1633/jistap.2013.1.1.1 fatcat:wj2hazhsnfbfnfkvoqmh63xfju

LeSSA: A Unified Framework based on Lexicons and Semi-Supervised Learning Approaches for Textual Sentiment Classification

Jawad Khan, Young-Koo Lee
2019 Applied Sciences  
Reliable training data is vital for learning a sentiment classifier for textual sentiment classification, but due to domain heterogeneity, manual construction of reliable labeled sentiment corpora is a  ...  (b) training classification models based on a high-quality training dataset generated by using k-means clustering, active learning, self-learning, and co-training algorithms.  ...  Self-training-S [61] : A self-training approach in which multiple feature subspace-based classifiers are used to explore a set of good features and select informative samples for automatic labeling.  ... 
doi:10.3390/app9245562 fatcat:adzlvshbmbfklew457auwrh7ue
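Co-training is one of the semi-supervised components the LeSSA snippet lists. Below is a minimal, simplified sketch of the idea, assuming two feature "views" given as non-negative count matrices and Naive Bayes base learners; the per-round sizes, the single shared pool of pseudo-labels, and all names are illustrative assumptions, not the framework's actual components.

```python
# Simplified co-training sketch: two classifiers on different feature views
# pseudo-label the unlabeled examples they are jointly most confident about.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def co_train(X1, X2, y, U1, U2, rounds=5, per_round=2):
    """X1/X2: labeled views (arrays), y: labels, U1/U2: unlabeled views."""
    X1, X2, y = X1.copy(), X2.copy(), np.asarray(y)
    for _ in range(rounds):
        if U1.shape[0] == 0:
            break
        c1, c2 = MultinomialNB().fit(X1, y), MultinomialNB().fit(X2, y)
        p1, p2 = c1.predict_proba(U1), c2.predict_proba(U2)
        # pick the examples where either view is most confident
        pick = np.argsort(np.maximum(p1.max(1), p2.max(1)))[-per_round:]
        new_y = np.where(p1.max(1)[pick] >= p2.max(1)[pick],
                         c1.predict(U1)[pick], c2.predict(U2)[pick])
        X1, X2 = np.vstack([X1, U1[pick]]), np.vstack([X2, U2[pick]])
        y = np.concatenate([y, new_y])
        keep = np.setdiff1d(np.arange(U1.shape[0]), pick)
        U1, U2 = U1[keep], U2[keep]
    return MultinomialNB().fit(X1, y), MultinomialNB().fit(X2, y)
```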

The identification of indicators of sentiment using a multi-view self-training algorithm

Brett Drury, Alneu De Andrade Lopes
2015 Oslo Studies in Language  
This article presents a "multi-view self-training" algorithm that identifies indicators of sentiment by: 1. extracting causal relations, 2.  ...  One semi-supervised strategy for sentiment classification is self-training (He & Zhou 2011). Self-training induces a model from labelled instances and unlabelled data in an iterative way.  ...  The guided self-training algorithm for sentiment classification is described in Algorithm 2.  ... 
doi:10.5617/osla.1446 fatcat:wq3sq5nc7fbqtdnjrt5wgqdkv4
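The snippet above defines self-training as inducing a model from labelled instances and unlabelled data iteratively. The following is a minimal, generic sketch of such a loop, assuming dense feature matrices, a logistic-regression base learner, and a 0.9 confidence threshold; none of these choices come from the cited papers.

```python
# Generic self-training loop: fit on labeled data, pseudo-label the unlabeled
# pool, keep only high-confidence predictions, and repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, max_iter=10):
    X, y, pool = X_lab, np.asarray(y_lab), X_unlab
    for _ in range(max_iter):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        if pool.shape[0] == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break                      # nothing left to pseudo-label
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.predict(pool[confident])])
        pool = pool[~confident]
    return clf
```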

Speech Sentiment Analysis via Pre-trained Features from End-to-end ASR Models [article]

Zhiyun Lu, Liangliang Cao, Yu Zhang, Chung-Cheng Chiu, James Fan
2020 arXiv   pre-print
In this paper, we propose to use pre-trained features from end-to-end ASR models to solve speech sentiment analysis as a downstream task.  ...  We use the well-benchmarked IEMOCAP dataset and a new large-scale speech sentiment dataset, SWBD-sentiment, for evaluation.  ...  Acknowledgment We are grateful to Rohit Prabhavalkar, Ruoming Pang, Wei Han, Bo Li, Gary Wang, and Shuyuan Zhang for their fruitful discussions and suggestions.  ... 
arXiv:1911.09762v2 fatcat:mh57xcoz7bbhxaauuvkhgt3dya

Opinion Sentence Extraction and Sentiment Analysis for Chinese Microblogs [chapter]

Hanxiao Shi, Wei Chen, Xiaojun Li
2013 Communications in Computer and Information Science  
First, we manually label the sample of the microblog corpus supplied by the organizers and expand the sentiment lexicon by introducing Internet sentiment words; second, we construct the different feature  ...  Sentiment analysis of Chinese microblogs is important for scientific research in public opinion supervision, personalized recommendation and social computing.  ...  extract 451 microblogs labeled as opinion sentences from the 1,219 manually labeled microblogs as the training corpus.  ... 
doi:10.1007/978-3-642-41644-6_41 fatcat:6jwfvqlzave65g6mltl3v6yvi4

AVAYA: Sentiment Analysis on Twitter with Self-Training and Polarity Lexicon Expansion

Lee Becker, George Erhart, David Skiba, Valentine Matula
2013 International Workshop on Semantic Evaluation  
These automatically labeled data are used for two purposes: 1) to discover prior polarities of words and 2) to provide additional training examples for self-training.  ...  This paper describes the systems submitted by Avaya Labs (AVAYA) to SemEval-2013 Task 2 -Sentiment Analysis in Twitter.  ...  Acknowledgments We would like to thank the organizers of SemEval 2013 and the Sentiment Analysis in Twitter task for their time and energy.  ... 
dblp:conf/semeval/BeckerESM13 fatcat:sf6mzmdiarfhzj7lvfcokndtei
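The AVAYA snippet describes deriving prior word polarities from automatically labeled tweets. A minimal sketch of that general idea is below, using a smoothed log-odds score; the smoothing constant, tokenisation, and scoring function are assumptions for illustration and are not the system's exact lexicon-expansion method.

```python
# Score each word by its smoothed log-odds of appearing in automatically
# labeled positive versus negative tweets.
import math
from collections import Counter

def polarity_lexicon(pos_tweets, neg_tweets, alpha=1.0):
    pos, neg = Counter(), Counter()
    for toks in pos_tweets:
        pos.update(set(toks))
    for toks in neg_tweets:
        neg.update(set(toks))
    n_pos, n_neg = len(pos_tweets), len(neg_tweets)
    vocab = set(pos) | set(neg)
    return {w: math.log((pos[w] + alpha) / (n_pos + 2 * alpha))
             - math.log((neg[w] + alpha) / (n_neg + 2 * alpha))
            for w in vocab}

lex = polarity_lexicon([["love", "it"], ["great"]], [["hate", "it"]])
# "love" and "great" receive positive scores, "hate" a negative score
```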

Lost in Translation: Viability of Machine Translation for Cross Language Sentiment Analysis [chapter]

Balamurali A.R., Mitesh M. Khapra, Pushpak Bhattacharyya
2013 Lecture Notes in Computer Science  
The idea is to use the annotated resources of one language (say, L1) for performing Sentiment Analysis in another language (say, L2) which does not have annotated resources.  ...  Based on our study, we take the stand that languages which have a genuine need for a Sentiment Analysis engine should focus on collecting a few polarity annotated documents in their language instead of  ...  The feature set comprises unigrams extracted from the seed labeled data. We also experimented with bigram features but did not find much difference in performance.  ... 
doi:10.1007/978-3-642-37256-8_4 fatcat:xfjzx2z4ovhinfjxhb7katlhbu

Self-Reflective Sentiment Analysis

Benjamin Shickel, Martin Heesacker, Sherry Benton, Ashkan Ebadi, Paul Nickerson, Parisa Rashidi
2016 Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology  
In this paper, we automatically categorize patients' internal sentiment and emotions using machine learning classifiers based on n-grams, syntactic patterns, sentiment lexicon features, and distributed  ...  As self-directed online anxiety treatment and e-mental health programs become more prevalent and begin to rapidly scale to a large number of users, the need to develop automated techniques for monitoring  ...  Using all features from the previously outlined extraction process, we train a separate model on each of the five existing sentiment analysis corpora.  ... 
doi:10.18653/v1/w16-0303 dblp:conf/naacl/ShickelHBENR16 fatcat:h5cdfpmmyjh3vi7sntd7pquoem

Semi-Stacking for Semi-supervised Sentiment Classification

Shoushan Li, Lei Huang, Jingjing Wang, Guodong Zhou
2015 Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)  
Specifically, we apply meta-learning to predict the unlabeled data given the outputs from the member algorithms and propose N-fold cross validation to guarantee a suitable size of the data for training  ...  In this paper, we address semi-supervised sentiment learning via semi-stacking, which integrates two or more semi-supervised learning algorithms from an ensemble learning perspective.  ...  More recently, Gao et al. (2014) propose a feature subspace-based self-training approach for semi-supervised sentiment classification.  ... 
doi:10.3115/v1/p15-2005 dblp:conf/acl/LiHWZ15a fatcat:rztox6tbl5bann33u2poj2aqke
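The semi-stacking snippet describes a meta-learner trained on the cross-validated outputs of member algorithms. The sketch below shows that general pattern under stated assumptions: the members (Naive Bayes and logistic regression), the five folds, and the use of plain supervised members instead of full semi-supervised learners are all illustrative simplifications.

```python
# Meta-learning over member classifiers' cross-validated probability outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import MultinomialNB

def semi_stack(X_lab, y_lab, X_unlab, n_folds=5):
    members = [MultinomialNB(), LogisticRegression(max_iter=1000)]
    # N-fold CV keeps the meta-features unbiased on the labeled set
    meta_lab = np.hstack([cross_val_predict(m, X_lab, y_lab, cv=n_folds,
                                            method="predict_proba")
                          for m in members])
    meta_clf = LogisticRegression(max_iter=1000).fit(meta_lab, y_lab)
    meta_unlab = np.hstack([m.fit(X_lab, y_lab).predict_proba(X_unlab)
                            for m in members])
    return meta_clf.predict(meta_unlab)      # pseudo-labels for X_unlab
```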

bwbaugh : Hierarchical sentiment analysis with partial self-training

Wesley Baugh
2013 International Workshop on Semantic Evaluation  
Using additional unlabeled data that is believed to contain sentiment, we allow the polarity classifier to continue learning using self-training.  ...  Using labeled Twitter training data from SemEval-2013, we train both a subjectivity classifier and a polarity classifier separately, and then combine the two into a single hierarchical classifier.  ...  The sentiment analysis in Twitter task of SemEval-2013 [Wilson et al., 2013] provides 9,864 labeled tweets from Twitter to be used as a training dataset.  ... 
dblp:conf/semeval/Baugh13 fatcat:v2iqerkdhrfepblvkptz3plhxy
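The bwbaugh snippet describes training a subjectivity classifier and a polarity classifier separately and combining them hierarchically. A minimal two-stage sketch follows; the TF-IDF/logistic-regression pipeline and the label names are assumptions, not the submitted system's configuration.

```python
# Hierarchical classification: a subjectivity model filters neutral tweets,
# then a separate polarity model labels the remainder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_hierarchical(subj_texts, subj_labels, polar_texts, polar_labels):
    subj = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    subj.fit(subj_texts, subj_labels)          # "subjective" vs "neutral"
    polar = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    polar.fit(polar_texts, polar_labels)       # "positive" vs "negative"

    def predict(tweet):
        if subj.predict([tweet])[0] == "neutral":
            return "neutral"
        return polar.predict([tweet])[0]
    return predict
```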

Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis

Wenmeng Yu, Hua Xu, Ziqi Yuan, Jiele Wu
2021 Proceedings of the AAAI Conference on Artificial Intelligence  
On the SIMS dataset, our method achieves performance comparable to human-annotated unimodal labels. The full codes are available at https://github.com/thuiar/Self-MM.  ...  The multimodal and unimodal tasks are then trained jointly to learn the consistency and the difference, respectively.  ...  ., China Joint Research Center for Industrial Intelligence and Internet of Things.  ... 
doi:10.1609/aaai.v35i12.17289 fatcat:xdyfy6bqozbrrff7xuu3hivzkq

Performing sentiment analysis in Bangla microblog posts

Shaika Chowdhury, Wasifa Chowdhury
2014 2014 International Conference on Informatics, Electronics & Vision (ICIEV)  
SELF-TRAINING BOOTSTRAPPING: Self-training bootstrapping is performed to develop our labeled training data set.  ...  In this way, we repeat the self-training bootstrapping until all 1,000 unlabeled tweets are labeled.  ... 
doi:10.1109/iciev.2014.6850712 fatcat:nt4v2so43bb5zbvd5goo52ynyi
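Unlike the threshold-based loop sketched earlier, the bootstrapping described in this snippet continues until the whole unlabeled pool (1,000 tweets in the paper) has been labeled. A short sketch of that variant follows; the per-round batch size and the Naive Bayes classifier are illustrative assumptions.

```python
# Bootstrapping in fixed batches until the unlabeled pool is exhausted:
# each round the classifier labels the tweets it is most confident about.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def bootstrap_all(X_lab, y_lab, X_pool, batch=50):
    X, y, pool = X_lab, np.asarray(y_lab), X_pool
    while pool.shape[0] > 0:
        clf = MultinomialNB().fit(X, y)
        conf = clf.predict_proba(pool).max(axis=1)
        pick = np.argsort(conf)[-batch:]          # most confident this round
        X = np.vstack([X, pool[pick]])
        y = np.concatenate([y, clf.predict(pool[pick])])
        pool = np.delete(pool, pick, axis=0)
    return MultinomialNB().fit(X, y)
```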

Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors [article]

Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
2022 arXiv   pre-print
The refined embeddings are taken as the textual inputs of the multimodal feature fusion module to predict the sentiment labels.  ...  Furthermore, our approach can easily be adapted to other multimodal feature fusion models. Data and code are available at https://github.com/albertwy/SWRM.  ...  Self-MM (Yu et al., 2021) first generates pseudo unimodal sentiment labels and then adopts them to train the model in a multi-task learning manner.  ... 
arXiv:2203.00257v1 fatcat:jipbjsxg3jdqhilaqpmapo6z4q

Incorporating Context and Knowledge for Better Sentiment Analysis of Narrative Text

Chenyang Lyu, Tianbo Ji, Yvette Graham
2020 European Conference on Information Retrieval  
However, for the purpose of sentiment analysis of narrative text in particular, we introduce two new features: a contextual feature and an extra-knowledge feature that prove to aid text understanding for  ...  In this paper, we present an approach to sentiment analysis of narrative text that employs a pre-trained language model, an approach already proven effective for a range of other NLP tasks.  ...  The authors would like to thank Jennifer Foster and three anonymous reviewers for their helpful comments.  ... 
dblp:conf/ecir/LyuJG20 fatcat:sanr3i5apnelnodxxb7ttura3u
Showing results 1 — 15 of 51,042