1,238 Hits in 2.3 sec

Multimodal Entity Linking for Tweets [chapter]

Omar Adjali, Romaric Besançon, Olivier Ferret, Hervé Le Borgne, Brigitte Grau
2020 Lecture Notes in Computer Science  
In this paper, we address the task of multimodal entity linking (MEL), an emerging research field in which textual and visual information is used to map an ambiguous mention to an entity in a knowledge  ...  First, we propose a method for building a fully annotated Twitter dataset for MEL, where entities are defined in a Twitter KB.  ...  Conclusion We explore a novel approach that makes use of text and image information for the entity linking task applied to tweets.  ... 
doi:10.1007/978-3-030-45439-5_31 fatcat:fkiblwkcb5dtzorakuszz3vj6i
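The core idea in this entry (scoring an ambiguous mention against KB entities using both textual and visual information) can be illustrated as a weighted fusion of embedding similarities. The sketch below is a generic illustration, not the paper's actual model; the toy KB, the 3-d embeddings, and the equal weights are all hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def link_mention(text_vec, image_vec, kb, w_text=0.5, w_image=0.5):
    """Return the KB entity whose weighted text+image similarity
    to the mention embeddings is highest (a generic MEL scoring
    sketch, not the method from the paper above)."""
    def score(name):
        entry = kb[name]
        return (w_text * cosine(text_vec, entry["text"])
                + w_image * cosine(image_vec, entry["image"]))
    return max(kb, key=score)

# Toy KB with illustrative 3-d embeddings (hypothetical values).
kb = {
    "Jordan_(basketball_player)": {"text": [0.9, 0.1, 0.0], "image": [0.8, 0.2, 0.1]},
    "Jordan_(country)":           {"text": [0.1, 0.9, 0.0], "image": [0.0, 0.9, 0.3]},
}
print(link_mention([0.8, 0.2, 0.1], [0.7, 0.1, 0.2], kb))  # → Jordan_(basketball_player)
```

In a real system the embeddings would come from trained text and image encoders over the tweet and the KB entries; the fusion weights would be learned rather than fixed.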

QuTI! Quantifying Text-Image Consistency in Multimodal Documents [article]

Matthias Springstein and Eric Müller-Budack and Ralph Ewerth
2021 arXiv   pre-print
For example, the system can help users to explore multimodal articles more efficiently, or can assist human assessors and fact-checking efforts in the verification of the credibility of news stories, tweets, or other multimodal documents.  ...  For each named entity recognized by spaCy [9] we select the linked entity candidate with the highest PageRank according to Wikifier [4] for the corresponding text span.  ... 
arXiv:2104.13748v1 fatcat:2zoemtp5qzeivco2avabnznzeq
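The disambiguation step quoted in this entry — for each spaCy-recognized mention, keep the Wikifier candidate with the highest PageRank — reduces to an argmax over a candidate list. The sketch below assumes the candidates have already been fetched; the entity names and PageRank values are illustrative, not real Wikifier output, and calling spaCy or the Wikifier API is outside its scope.

```python
def pick_entity(candidates):
    """Given (entity, pagerank) candidate pairs for one text span,
    return the entity with the highest PageRank (None if no
    candidates were found for the span)."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])[0]

# Hypothetical candidates for the span "Paris" (illustrative scores).
candidates = [
    ("Paris", 0.0042),
    ("Paris,_Texas", 0.0003),
    ("Paris_(mythology)", 0.0001),
]
print(pick_entity(candidates))  # → Paris
```

In the pipeline described by the snippet, the spans would come from spaCy's NER (`doc.ents`) and each span's candidate list from a Wikifier query on the surrounding text.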

Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples [article]

Amit Sheth, Sujan Perera, Sanjaya Wijeratne
2019 arXiv   pre-print
In this paper, we focus on discussing the indispensable role of knowledge for deeper understanding of complex text and multimodal data in situations where (i) large amounts of training data (labeled/unlabeled  ...  Using the early results in several diverse situations - both in data types and applications - we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal  ...  The implicit entity linking algorithms are designed to carefully use the knowledge encoded in these models to identify implicit entities in the text.  ... 
arXiv:1610.07708v2 fatcat:pjiibdsxabfdhchx6xnpuaej4m

News rover

Hongzhi Li, Brendan Jou, Joseph G. Ellis, Daniel Morozoff, Shih-Fu Chang
2013 Proceedings of the 21st ACM international conference on Multimedia - MM '13  
The novelty of our work includes the linking of multi-source, multimodal news content to extracted entities and topical structures for contextual understanding, and visualized in intuitive active and passive  ...  The system utilizes these many multimodal sources to link and organize content by topics, events, persons and time.  ...  We are also currently developing a multimodal algorithm for extracting quotes associated with each of these named persons (i.e., extracting "who said what") to provide additional linking modes and measure  ... 
doi:10.1145/2502081.2502263 dblp:conf/mm/LiJEMC13 fatcat:easvgscftnbtxdehu5kkhnva7m

Knowledge will propel machine understanding of content

Amit Sheth, Sujan Perera, Sanjaya Wijeratne, Krishnaprasad Thirunarayan
2017 Proceedings of the International Conference on Web Intelligence - WI '17  
Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques  ...  , (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media.  ...  kHealth: Semantic Multisensory Mobile Approach to Personalized Asthma Care" and the National Science Foundation (NSF) award: EAR 1520870: "Hazards SEES: Social and Physical Sensing Enabled Decision Support for  ... 
doi:10.1145/3106426.3109448 pmid:29962511 pmcid:PMC6021355 dblp:conf/webi/ShethPWT17 fatcat:rirrbhctsreilbuyy32uzmuyhy

Structured exploration of who, what, when, and where in heterogeneous multimedia news sources

Brendan Jou, Hongzhi Li, Joseph G. Ellis, Daniel Morozoff-Abegauz, Shih-Fu Chang
2013 Proceedings of the 21st ACM international conference on Multimedia - MM '13  
We visualize these peaks in trending news topics using automatically extracted keywords and iconic images, and introduce a novel multimodal algorithm for naming speakers in the news.  ...  We also present several intuitive navigation interfaces for interacting with these complex topic structures over different news sources.  ...  Given our gathered topics and their associated articles and tweets, we then need to link in the video news stories.  ... 
doi:10.1145/2502081.2508118 dblp:conf/mm/JouLEMC13 fatcat:r6nivcak2nbzzmm6rbwgnh3cqe

Event detection using Twitter and structured semantic query expansion

Heather S. Packer, Sina Samangooei, Jonathon S. Hare, Nicholas Gibbins, Paul H. Lewis
2012 Proceedings of the 1st international workshop on Multimodal crowd sensing - CrowdSens '12  
Structured data related to entities can provide additional context to tweets.  ...  Twitter is a popular tool for publishing potentially interesting information about people's opinions, experiences and news. Mobile devices allow people to publish tweets during real-time events.  ...  The first author is grateful to EPSRC for funding through a doctoral award.  ... 
doi:10.1145/2390034.2390039 fatcat:vqq66tse2ndn5hm5dzo2nplkni

MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis

Paul Buitelaar, Ian D. Wood, Sapna Negi, Mihael Arcan, John P. McCrae, Andrejs Abele, Cecile Robin, Vladimir Andryushechkin, Housam Ziad, Hesam Sagha, Maximilian Schmitt, Bjorn W. Schuller (+14 others)
2018 IEEE transactions on multimedia  
The MixedEmotions Toolbox leverages the need for such functionalities by providing tools for text, audio, video, and linked data processing within an easily integrable plug-and-play platform.  ...  tracking, emotion recognition, facial landmark localization, head pose estimation, face alignment, and body pose estimation, and (iv) for linked data: knowledge graph integration.  ...  It also includes entity linking and knowledge graph technologies for semantic-level emotion information aggregation and integration.  ... 
doi:10.1109/tmm.2018.2798287 fatcat:4wdqaqqvpfcuhf4letl5z7tm2a

Can images help recognize entities? A study of the role of images for Multimodal NER [article]

Shuguang Chen, Gustavo Aguilar, Leonardo Neves, Thamar Solorio
2021 arXiv   pre-print
Multimodal named entity recognition (MNER) requires bridging the gap between language understanding and visual context.  ...  We also study the use of captions as a way to enrich the context for MNER.  ...  We would like to thank the members of the RiTUAL lab at the University of Houston for their invaluable feedback. We also thank the anonymous W-NUT reviewers for their valuable suggestions.  ... 
arXiv:2010.12712v2 fatcat:2lyjfpgaurchdihzeudl5l6vyq

Automatic Rumor Detection on Microblogs: A Survey [article]

Juan Cao, Junbo Guo, Xirong Li, Zhiwei Jin, Han Guo, Jintao Li
2018 arXiv   pre-print
We give our suggestions for future rumor detection on microblogs as a conclusion.  ...  We also give an introduction to existing datasets for rumor detection, which would benefit subsequent research in this area.  ...  for each entity.  ... 
arXiv:1807.03505v1 fatcat:kvwukm7kofhyfd3yjlajagoxce

Automatic Entity Recognition and Typing in Massive Text Corpora

Xiang Ren, Ahmed El-Kishky, Chi Wang, Jiawei Han
2016 Proceedings of the 25th International Conference Companion on World Wide Web - WWW '16 Companion  
We demonstrate on real datasets including news articles and yelp reviews how these typed entities aid in knowledge discovery and management.  ...  These methods can automatically identify token spans as entity mentions in text and label their types (e.g., people, product, food) in a scalable way.  ...  Multimodal Information Access and Synthesis at UIUC.  ... 
doi:10.1145/2872518.2891065 dblp:conf/www/RenEWH16 fatcat:nhwxdfwwpbdgpm2mhuitgnmtkm

MM-Claims: A Dataset for Multimodal Claim Detection in Social Media [article]

Gullal S. Cheema, Sherzod Hakimov, Abdul Sittar, Eric Müller-Budack, Christian Otto, Ralph Ewerth
2022 arXiv   pre-print
The dataset contains roughly 86,000 tweets, out of which 3,400 are labeled manually by multiple annotators for the training and evaluation of multimodal models.  ...  For this purpose, we introduce a novel dataset, MM-Claims, which consists of tweets and corresponding images over three topics: COVID-19, Climate Change and broadly Technology.  ...  Although we did not provide external links to reliable sources for the content in the tweet, we highlighted named entities that pop up with the text and image description.  ... 
arXiv:2205.01989v1 fatcat:douwradlv5e2hedrp5obm7v3si

Interactive Search and Exploration in Online Discussion Forums Using Multimodal Embeddings [article]

Iva Gornishka, Stevan Rudinac, Marcel Worring
2019 arXiv   pre-print
We show these representations to be useful not only for categorizing users, but also for automatically generating user and community profiles.  ...  In this paper we present a novel interactive multimodal learning system, which facilitates search and exploration in large networks of social multimedia users.  ...  Entity linking.  ... 
arXiv:1905.02430v1 fatcat:4sy7tdnknjct5ksuqag5ygdoxq

The Value of Using Big Data Technologies in Computational Social Science [article]

Eugene Ch'ng
2014 arXiv   pre-print
This article presents topical, multimodal, and longitudinal social media datasets from the integration of various scalable open source technologies.  ...  The article demonstrated the feasibility and value of using scalable open source technologies for acquiring massive, connected datasets for research in the social sciences.  ...  Relationality of Data Entities Mapping multimodal activities within Twitter is much more valuable than the common follower-followee network, which has low activity, as most users are inactive.  ... 
arXiv:1408.3170v1 fatcat:xxjjav65yjelzmre62zjlxywfq

Ontology-Enabled Emotional Sentiment Analysis on COVID-19 Pandemic-Related Twitter Streams

Senthil Kumar Narayanasamy, Kathiravan Srinivasan, Saeed Mian Qaisar, Chuan-Yu Chang
2021 Frontiers in Public Health  
Second, the potential entities present in the tweet can be analyzed for semantic associativity.  ...  In this regard, our proposed Ontology-Based Sentiment Analysis provides two novel approaches: First, the emotion extraction on tweets related to COVID-19 is carried out by a well-formed taxonomy that comprises  ...  [table excerpt] Huang et al. (48), 2019: Deep Multimodal Attentive Fusion, a visual-and-semantic attention mechanism capturing the fine-granularity relation between image and text, applied to multimodal sentiment analysis of 10 million tweets (reported score: 0.769).  ... 
doi:10.3389/fpubh.2021.798905 pmid:34938715 pmcid:PMC8685242 fatcat:guyh2xpuvvhaflftlousjkcax4
Showing results 1 — 15 out of 1,238 results