6,408 Hits in 9.4 sec

Multimodal Entity Linking for Tweets [chapter]

Omar Adjali, Romaric Besançon, Olivier Ferret, Hervé Le Borgne, Brigitte Grau
2020 Lecture Notes in Computer Science  
In this paper, we address the task of multimodal entity linking (MEL), an emerging research field in which textual and visual information is used to map an ambiguous mention to an entity in a knowledge  ...  Then, we propose a model for jointly learning a representation of both mentions and entities from their textual and visual contexts.  ...  different approaches, e.g., Canonical Correlation Analysis [46], linear ranking-based models and non-linear deep learning models that learn to project image features and text features into a joint space  ... 
doi:10.1007/978-3-030-45439-5_31 fatcat:fkiblwkcb5dtzorakuszz3vj6i
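The joint text–image embedding idea mentioned in this snippet can be illustrated with a minimal sketch. Note this is not the paper's actual model: the dimensions, the random projection matrices (which would be learned in practice), and the candidate count are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 300-d text features and 512-d image features,
# projected into a shared 64-d space. Random matrices stand in for
# projections that a real system would learn jointly.
W_text = rng.standard_normal((64, 300))
W_image = rng.standard_normal((64, 512))

def joint_embed(text_feat, image_feat):
    """Project both modalities into the joint space and L2-normalize."""
    z = W_text @ text_feat + W_image @ image_feat
    return z / np.linalg.norm(z)

# Link a mention to the closest candidate entity by cosine similarity
# (the dot product of unit vectors).
mention = joint_embed(rng.standard_normal(300), rng.standard_normal(512))
candidates = [joint_embed(rng.standard_normal(300), rng.standard_normal(512))
              for _ in range(3)]
scores = [float(mention @ c) for c in candidates]
best = int(np.argmax(scores))
```

Once the projections are learned, disambiguation reduces to a nearest-neighbor search in the joint space, which is why such models scale to large candidate sets.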

A Survey on Accuracy-oriented Neural Recommendation: From Collaborative Filtering to Information-rich Recommendation [article]

Le Wu, Xiangnan He, Xiang Wang, Kun Zhang, Meng Wang
2021 arXiv   pre-print
In this survey paper, we conduct a systematic review of neural recommender models from the perspective of recommendation modeling with the accuracy goal, aiming to summarize the field to facilitate researchers  ...  Influenced by the great success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks.  ...  These models apply CNNs to extract visual information from images, and content embedding models to obtain textual embeddings.  ... 
arXiv:2104.13030v3 fatcat:7bzwaxcarrgbhe36teik2rhl6e

Advances in Emotion Recognition: Link to Depressive Disorder [chapter]

Xiaotong Cheng, Xiaoxia Wang, Tante Ouyang, Zhengzhi Feng
2020 Mental Disorders [Working Title]  
text on social network platforms.  ...  Emotion recognition enables real-time analysis, tagging, and inference of cognitive affective states from human facial expressions, speech and tone, body posture, and physiological signals, as well as social  ...  With the advent of social media, social media platforms are becoming a rich source of multimodal affective information, including text, videos, images, and audio. One of them is textual analysis.  ... 
doi:10.5772/intechopen.92019 fatcat:jmss4llbpnfrxcue6bzebsgmby

Learning Social Image Embedding with Deep Multimodal Attention Networks

Feiran Huang, Xiaoming Zhang, Zhoujun Li, Tao Mei, Yueying He, Zhonghua Zhao
2017 Proceedings of the on Thematic Workshops of ACM Multimedia 2017 - Thematic Workshops '17  
In this paper, we propose a novel social image embedding approach called Deep Multimodal Attention Networks (DMAN), which employs a deep model to jointly embed multimodal contents and link information.  ...  With the joint deep model, the learnt embedding can capture both the multimodal contents and the nonlinear network information.  ...  the challenges of combining content and links for embedding learning, where two models are proposed to capture multimodal contents and network structure respectively, with a deep model to integrate them  ... 
doi:10.1145/3126686.3126720 dblp:conf/mm/HuangZLMHZ17 fatcat:uuj6zj2ahjhlnexauciynqslya

Linking brain structure, activity and cognitive function through computation

Katrin Amunts, Javier DeFelipe, Cyriel Pennartz, Alain Destexhe, Michele Migliore, Philippe Ryvlin, Steve Furber, Alois Knoll, Lise Bitsch, Jan G. Bjaalie, Yannis Ioannidis, Thomas Lippert (+3 others)
2022 eNeuro  
Dynamic generative multiscale models, which enable causation across scales and are guided by principles and theories of brain function, are instrumental to link brain structure and function.  ...  The novel HBP-style neuroscience is characterized by transparent domain boundaries and deep integration of highly heterogeneous data, models, and information technologies.  ...  Being on EBRAINS allows, for example, directly linking information from the atlases with models and simulation.  ... 
doi:10.1523/eneuro.0316-21.2022 pmid:35217544 pmcid:PMC8925650 fatcat:k7obyuqb5vcenossdwrlcbg4eu

On the link between conscious function and general intelligence in humans and machines [article]

Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, Ryota Kanai
2022 arXiv   pre-print
With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to  ...  model.  ...  Acknowledgments We would like to thank Leonardo Barbosa, Hiro Hamada, Andrew Cohen, Laura Graesser, Yoshua Bengio, and our anonymous reviewers for their helpful feedback on earlier versions of this text  ... 
arXiv:2204.05133v2 fatcat:x6lkenutj5fxzgc4hnmpm3loki

A Multimodal Approach to Predict Social Media Popularity [article]

Mayank Meghawat, Satyendra Yadav, Debanjan Mahata, Yifang Yin, Rajiv Ratn Shah, Roger Zimmermann
2018 arXiv   pre-print
Specifically, we augment the SMPT1 dataset for social media prediction in ACM Multimedia grand challenge 2017 with image content, titles, descriptions, and tags.  ...  To the best of our knowledge, no such multimodal dataset exists for the prediction of social media photos.  ...  For this model, we needed to cut the training input size from 432K to 200K, since the images for most URLs were broken and our model uses images as an input feature.  ... 
arXiv:1807.05959v1 fatcat:lj5vj42uvvejdm2ajhkrqjmnuq

A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods [article]

Gullal S. Cheema and Sherzod Hakimov and Eric Müller-Budack and Ralph Ewerth
2021 arXiv   pre-print
In addition, we investigate different textual and visual feature embeddings that cover different aspects of the content, as well as the recently introduced multimodal CLIP embeddings.  ...  Opinion and sentiment analysis is a vital task to characterize subjective information in social media posts.  ...  With the evolution of the Internet, social media sites, in particular, have become multimodal in nature with content including text, audio, images, and videos to engage different senses of a user.  ... 
arXiv:2106.08829v1 fatcat:aoi3caa3t5gopelhlofadhyayy

The Joint Framework for Dynamic Topic Semantic Link Network Prediction

Anping Zhao, Lingling Zhao, Yu Yu
2019 IEEE Access  
INDEX TERMS Semantic link network, Bayesian network, Gaussian mixture models.  ...  The proposed framework combines the Gaussian mixture model and the Bayesian network to conduct inference and prediction of topic relationships in a dynamic topic semantic link network.  ...  [6] proposed a mixture model-based semantic image segmentation (OBSIS) approach to partition images into non-overlapping regions. Liu et al.  ... 
doi:10.1109/access.2018.2889993 fatcat:dpqh5aducffwbi6ni44jrvrozi

Multimodal Categorization of Crisis Events in Social Media

Mahdi Abavisani, Liwei Wu, Shengli Hu, Joel Tetreault, Alejandro Jaimes
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In addition, we employ a multimodal graph-based approach to stochastically transition between embeddings of different multimodal pairs during training to better regularize the learning process as well  ...  Recent developments in image classification and natural language processing, coupled with the rapid growth in social media usage, have enabled fundamental advances in detecting breaking events around the  ...  [41] propose a multimodal deep learning framework to identify damage related information on social media posts with texts, images, and video.  ... 
doi:10.1109/cvpr42600.2020.01469 dblp:conf/cvpr/AbavisaniWHTJ20 fatcat:ioj4qpl7ynd2rmehadhaqfkdwq

Fake News Detection via Knowledge-driven Multimodal Graph Convolutional Networks

Youze Wang, Shengsheng Qian, Jun Hu, Quan Fang, Changsheng Xu
2020 Proceedings of the 2020 International Conference on Multimedia Retrieval  
In recent years, more and more models have used deep neural networks to learn feature representations from multiple aspects.  ...  Graph Construction of Multimodal Content An undirected graph is created in our model for each post to model its multimodal content information.  ... 
doi:10.1145/3372278.3390713 dblp:conf/mir/WangQHFX20 fatcat:bdtdwo3pwbhm5bbipy7w3x6idi

Multimodal Meme Dataset (MultiOFF) for Identifying Offensive Content in Image and Text

Shardul Suryawanshi, Bharathi Raja Chakravarthi, Mihael Arcan, Paul Buitelaar
2020 Zenodo  
We use an early fusion technique to combine the image and text modality and compare it with a text- and an image-only baseline to investigate its effectiveness.  ...  Since there was no publicly available dataset for multimodal offensive meme content detection, we leveraged the memes related to the 2016 U.S. presidential election and created the MultiOFF multimodal  ...  Acknowledgements This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 P2, co-funded by the European  ... 
doi:10.5281/zenodo.3899867 fatcat:b23qzaqo5ba7lfzpo47hlew4fe
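The early fusion technique named in this snippet simply concatenates per-modality feature vectors before a single classifier sees them. A minimal sketch (the feature dimensions and values are hypothetical, not taken from the MultiOFF paper):

```python
import numpy as np

def early_fuse(text_feat, image_feat):
    """Early fusion: concatenate text and image feature vectors into
    one joint vector, which a downstream classifier then consumes."""
    return np.concatenate([text_feat, image_feat])

# Hypothetical 4-d text features and 3-d image features for one meme.
text_feat = np.array([0.1, 0.2, 0.3, 0.4])
image_feat = np.array([0.9, 0.8, 0.7])

fused = early_fuse(text_feat, image_feat)  # shape (7,)
```

By contrast, a late-fusion setup would train separate text-only and image-only classifiers and combine their output scores, which is essentially what the text- and image-only baselines in the snippet measure individually.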

ETMA: Efficient Transformer Based Multilevel Attention framework for Multimodal Fake News Detection [article]

Ashima Yadav, Shivani Gaba, Ishan Budhiraja, Neeraj Kumar
2022 arXiv   pre-print
Each component utilizes a different form of attention mechanism and uniquely deals with multimodal data to detect fraudulent content.  ...  In recent times, fake news content on social media has become one of the major challenges for society.  ...  Thus, the model classifies them as fake samples. On the other hand, in (c), words like supporters, protest, and rally receive more focus from the model as they are directly linked with the image content.  ... 
arXiv:2206.07331v1 fatcat:fuv4ofqwvzez7bhzqmnlkkvz6a

Multimodal Hate Speech Detection from Bengali Memes and Texts [article]

Md. Rezaul Karim and Sumon Kanti Dey and Tanhim Islam and Bharathi Raja Chakravarthi
2022 arXiv   pre-print
Like English, Bengali social media content also includes images along with text (e.g., multimodal content is posted by embedding short texts into images on Facebook), so the textual data alone is not enough  ...  Numerous works have been proposed to employ machine learning (ML) and deep learning (DL) techniques to utilize textual data from social media for anti-social behavior analysis such as cyberbullying, fake  ...  Further, like English, Bengali social media content also includes images along with text (e.g., on Twitter, multimodal tweets are formed by images with short texts embedded into them); the textual data alone  ... 
arXiv:2204.10196v1 fatcat:ryboqcmyqfd6dcv3q5qjpiz23q

Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes [article]

Sunil Gundapu, Radhika Mamidi
2022 arXiv   pre-print
To detect propaganda in Internet memes, we propose a multimodal deep learning fusion system that fuses the text and image feature representations and outperforms individual models based solely on either  ...  Internet memes are one of the most popular contents used on social media, and they can be in the form of images with a witty, catchy, or satirical text description.  ...  For text data, we have developed various Machine Learning (ML) and Deep Learning (DL) models with different word embeddings.  ... 
arXiv:2205.02937v1 fatcat:la3cealfpnfedfxog75beofb2y
Showing results 1 — 15 out of 6,408 results