Sentiment Analysis on Multi-View Social Data [chapter]

Teng Niu, Shiai Zhu, Lei Pang, Abdulmotaleb El Saddik
2016 · Lecture Notes in Computer Science (Springer International Publishing)
With the proliferation of social networks, people are likely to share their opinions about news, social events, and products on the Web. There is increasing interest in understanding users' attitudes or sentiment from this large repository of opinion-rich data, which can benefit many commercial and political applications. Early research concentrated on documents such as users' comments on purchased products. Recent work shows that visual appearance also conveys human affect that can be predicted. While great effort has been devoted to single media, either text or image, few attempts have been made at the joint analysis of multi-view data, which is becoming a prevalent form in social media. For example, paired with the textual messages posted on Twitter, users are likely to upload images and videos that may carry their affective states. One common obstacle is the lack of sufficient manually annotated instances for model learning and performance evaluation. To promote research on this problem, we introduce a multi-view sentiment analysis dataset (MVSA) consisting of manually annotated image-text pairs collected from Twitter. The dataset can be used as a valuable benchmark for both single-view and multi-view sentiment analysis. In this thesis, we further conduct a comprehensive study on the computational analysis of sentiment from multi-view data. State-of-the-art approaches on single-view (image or text) and multi-view (image and text) data are introduced and compared through extensive experiments conducted on our constructed dataset and other public datasets. More importantly, the effectiveness of the correlation between different views is studied using widely used fusion strategies and advanced multi-view feature extraction methods.

Index Terms: Sentiment analysis, social media, multi-view data, textual feature, visual feature, joint feature learning.

Acknowledgements

I would like to give my sincerest gratitude and appreciation to my supervisor, Prof. Abdulmotaleb El Saddik, for his continuous guidance and support, not only in the academic domain but also in my personal life. Unique and sincere thanks go to Dr. Shiai Zhu for the precious assistance, invaluable guidance, and feedback he supplied throughout my research, as well as his review and revision of this thesis. I would also like to thank Mr. Lei Pang for helping me with the implementation of some algorithms.
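The two widely used fusion strategies the abstract refers to are commonly understood as early (feature-level) and late (decision-level) fusion. A minimal sketch of both, assuming pre-extracted text and image feature vectors; the dimensions, classifier scores, and weights below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Hypothetical pre-extracted features for one image-text pair
# (dimensions are illustrative, e.g. a text embedding and a CNN activation).
text_feat = np.random.rand(300)
image_feat = np.random.rand(4096)

def l2_normalize(v):
    """Scale a vector to unit length so neither view dominates by magnitude."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Early fusion: normalize each view, then concatenate into a single vector
# that one classifier would consume.
early = np.concatenate([l2_normalize(text_feat), l2_normalize(image_feat)])

# Late fusion: train a separate classifier per view and combine their
# sentiment scores, here with a simple weighted average.
def late_fusion(text_score, image_score, w_text=0.5):
    return w_text * text_score + (1 - w_text) * image_score

score = late_fusion(0.8, 0.4, w_text=0.6)  # -> 0.64
```

Early fusion lets a single model exploit cross-view correlations directly, while late fusion keeps the per-view models independent and only combines their decisions.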
I would also like to thank all my colleagues in the MCRLab for their suggestions and contributions throughout the research, and all my friends for their help during my campus life.
doi:10.1007/978-3-319-27674-8_2 · fatcat:qwi4nsyelbhxdh64ekeq7dyscy