5,283 Hits in 1.5 sec

Titelei/Inhaltsverzeichnis (front matter / table of contents) [chapter]

Jens Lehmann, Frank Lüttig
2017 Die letzten NS-Verfahren  
Rommel · Verfahrensbeteiligte (parties to the proceedings) 111, 113 · Jens Lehmann: Nebenkläger im Prozess (joint plaintiffs in the trial)  ...  Nachkriegsdemokratie und die Aufarbeitung der NS-Verbrechen (post-war democracy and the reckoning with Nazi crimes) 69 · Bernd Busemann: Der Beitrag der Zentralen Stelle der Landesjustizverwaltungen zur Aufklärung nationalsozialistischer Verbrechen (the contribution of the Central Office of the State Justice Administrations to the investigation of National Socialist crimes) 81 · Jens  ... 
doi:10.5771/9783845288086-1 fatcat:zk5phdkwfbawzofhlcyi6c5cmu

Linked Data Reasoning [chapter]

Jens Lehmann, Lorenz Bühmann
2014 Linked Enterprise Data  
Abstract. In this chapter we describe the fundamentals of reasoning over RDF/OWL knowledge bases. We discuss the different kinds of reasoning, give an overview of commonly used techniques, and describe their application in the Linked Data Web.
doi:10.1007/978-3-642-30274-9_9 fatcat:nnhct4gqwbawxgcae5jsofm5gy

Survey on English Entity Linking on Wikidata [article]

Cedric Möller, Jens Lehmann, Ricardo Usbeck
2021 arXiv   pre-print
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for Entity Linking, which is evident by the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata Entity Linking datasets exist, how widely used are they and how are they constructed? (2) Do the characteristics of Wikidata matter for the design of Entity Linking datasets and if so, how? (3) How do current Entity Linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics are unexploited by existing Entity Linking approaches? This survey reveals that current Wikidata-specific Entity Linking datasets do not differ in their annotation scheme from schemes for other knowledge graphs like DBpedia. Thus, the potential for multilingual and time-dependent datasets, naturally suited for Wikidata, is not lifted. Furthermore, we show that most Entity Linking approaches use Wikidata in the same way as any other knowledge graph, missing the chance to leverage Wikidata-specific characteristics to increase quality. Almost all approaches employ specific properties like labels and sometimes descriptions but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example, by including hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combinable with Wikidata and provides valuable textual information, which Wikidata lacks.
arXiv:2112.01989v1 fatcat:mq4w5jhwmrdvxdxduppab2pfwa

Message Passing for Hyper-Relational Knowledge Graphs [article]

Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, Jens Lehmann
2020 arXiv   pre-print
Hyper-relational knowledge graphs (KGs) (e.g., Wikidata) enable associating additional key-value pairs along with the main triple to disambiguate, or restrict the validity of, a fact. In this work, we propose a message passing based graph encoder - StarE - capable of modeling such hyper-relational KGs. Unlike existing approaches, StarE can encode an arbitrary number of additional pieces of information (qualifiers) along with the main triple while keeping the semantic roles of qualifiers and triples intact. We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws and thus develop a new Wikidata-based dataset - WD50K. Our experiments demonstrate that a StarE-based LP model outperforms existing approaches across multiple benchmarks. We also confirm that leveraging qualifiers is vital for link prediction, with gains of up to 25 MRR points compared to triple-based representations.
arXiv:2009.10847v1 fatcat:cuicoygdzvc6pc2sytjdbeo7z4
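For readers unfamiliar with the term, a hyper-relational fact is a main (subject, relation, object) triple extended by qualifier key-value pairs, as in Wikidata statements. The minimal Python sketch below only illustrates this data shape with hypothetical entity and property names; it is not the StarE encoder itself.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# A hyper-relational statement: a main (subject, relation, object) triple
# plus an arbitrary number of qualifier (relation, entity) pairs that
# restrict or disambiguate the fact, as in Wikidata statements.
@dataclass
class HyperRelationalFact:
    subject: str
    relation: str
    obj: str
    qualifiers: List[Tuple[str, str]] = field(default_factory=list)

# Illustrative example; names are hypothetical, not actual Wikidata IDs.
fact = HyperRelationalFact(
    subject="AlbertEinstein",
    relation="educatedAt",
    obj="ETH_Zurich",
    qualifiers=[("academicDegree", "BSc"), ("endTime", "1900")],
)

# A plain triple-based model only sees (subject, relation, obj); a
# qualifier-aware encoder such as StarE additionally consumes `qualifiers`.
print(fact)
```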

The Query Translation Landscape: a Survey [article]

Mohamed Nadjib Mami, Damien Graux, Harsh Thakkar, Simon Scerri, Sören Auer, Jens Lehmann
2019 arXiv   pre-print
Whereas the availability of data has seen a manifold increase in past years, its value can only be shown if data variety, one of the prominent Big Data challenges, is effectively tackled. The lack of data interoperability limits the potential of its collective use for novel applications. Achieving interoperability through the full transformation and integration of diverse data structures remains an ideal that is hard, if not impossible, to achieve. Instead, methods that can simultaneously interpret different types of data available in different data structures and formats have been explored. On the other hand, many query languages have been designed to enable users to interact with the data, from relational, to object-oriented, to hierarchical, to the multitude of emerging NoSQL languages. Therefore, the interoperability issue could be solved not by enforcing physical data transformation, but by looking at techniques that are able to query heterogeneous sources using one uniform language. Both industry and research communities have been keen to develop such techniques, which require the translation of a chosen 'universal' query language to the various data-model-specific query languages that make the underlying data accessible. In this article, we survey more than forty query translation methods and tools for popular query languages, and classify them according to eight criteria. In particular, we study which query language is the most suitable candidate for that 'universal' query language. Further, the results enable us to discover the weakly addressed and unexplored translation paths, to discover gaps and to learn lessons that can benefit future research in the area.
arXiv:1910.03118v1 fatcat:wsww5vhlwjhqjlouibetb6mo3a
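As a toy illustration of the translation problem the survey studies, the sketch below rewrites a single SPARQL basic graph pattern into SQL over a hypothetical relational table triples(s, p, o); real translators covered in the survey handle joins, filters, aggregation and much more.

```python
# Toy illustration of query translation: rewrite one SPARQL triple pattern
# into SQL over a hypothetical relational table triples(s, p, o).
def triple_pattern_to_sql(subject: str, predicate: str, obj: str) -> str:
    conditions = []
    projections = []
    for column, term in (("s", subject), ("p", predicate), ("o", obj)):
        if term.startswith("?"):           # SPARQL variable -> projected column
            projections.append(f"{column} AS {term[1:]}")
        else:                              # constant -> WHERE condition
            conditions.append(f"{column} = '{term}'")
    select = ", ".join(projections) or "*"
    where = " AND ".join(conditions) or "TRUE"
    return f"SELECT {select} FROM triples WHERE {where};"

# SPARQL:  SELECT ?city WHERE { ?city <locatedIn> <Germany> }
print(triple_pattern_to_sql("?city", "locatedIn", "Germany"))
# -> SELECT s AS city FROM triples WHERE p = 'locatedIn' AND o = 'Germany';
```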

Soft Marginal TransE for Scholarly Knowledge Graph Completion [article]

Mojtaba Nayyeri, Sahar Vahdati, Jens Lehmann, Hamed Shariat Yazdi
2019 arXiv   pre-print
Knowledge graphs (KGs), i.e. representations of information as a semantic graph, provide a significant test bed for many tasks including question answering, recommendation, and link prediction. Various amounts of scholarly metadata have been made available as knowledge graphs by a diversity of data providers and agents. However, these large quantities of data remain far from quality criteria in terms of completeness while growing at a rapid pace. Most attempts at completing such KGs follow traditional data digitization, harvesting and collaborative curation approaches. In contrast, advanced AI-related approaches such as embedding models - specifically designed for such tasks - are usually evaluated on standard benchmarks such as Freebase and Wordnet. The tailored nature of such datasets prevents those approaches from shedding light on more accurate discoveries. Applying such models to domain-specific KGs takes advantage of enriched metadata and provides accurate results where the underlying domain can benefit enormously. In this work, the TransE embedding model is adapted for a specific link prediction task on scholarly metadata. The results show a significant shift in the accuracy and performance evaluation of the model on a dataset with scholarly metadata. The newly proposed version of TransE obtains 99.9% on the link prediction task, while the original TransE obtains 95%. In terms of accuracy and Hit@10, TransE outperforms other embedding models such as ComplEx, TransH and TransR when evaluated over scholarly knowledge graphs.
arXiv:1904.12211v1 fatcat:ucspgggcu5azhkxea2phy3cccu
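For context, TransE scores a triple (h, r, t) by the distance ||h + r - t||, and training typically uses a margin-based ranking loss over a true triple and a corrupted one. The sketch below contrasts the hard hinge with a schematically softened margin via a softplus; the exact soft-marginal formulation and its hyperparameters are defined in the paper, so treat this purely as an assumption-laden illustration.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # Standard TransE plausibility: smaller ||h + r - t|| means more plausible.
    return float(np.linalg.norm(h + r - t, ord=2))

def hard_margin_loss(pos: float, neg: float, margin: float = 1.0) -> float:
    # Classic margin ranking loss commonly used with TransE.
    return max(0.0, margin + pos - neg)

def soft_margin_loss(pos: float, neg: float, margin: float = 1.0) -> float:
    # Schematic "softened" margin via a smooth softplus instead of the hard
    # hinge; the paper's soft-marginal objective is defined there, not here.
    return float(np.log1p(np.exp(margin + pos - neg)))

rng = np.random.default_rng(0)
h, r, t, t_corrupt = (rng.normal(size=50) for _ in range(4))
pos, neg = transe_score(h, r, t), transe_score(h, r, t_corrupt)
print(hard_margin_loss(pos, neg), soft_margin_loss(pos, neg))
```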

Distantly Supervised Question Parsing [article]

Hamid Zafar, Maryam Tavakol, Jens Lehmann
2020 arXiv   pre-print
The emergence of structured databases for Question Answering (QA) systems has led to developing methods, in which the problem of learning the correct answer efficiently is based on a linking task between the constituents of the question and the corresponding entries in the database. As a result, parsing the questions in order to determine their main elements, which are required for answer retrieval, becomes crucial. However, most datasets for QA systems lack gold annotations for parsing, i.e., labels are only available in the form of (question, formal-query, answer). In this paper, we propose a distantly supervised learning framework based on reinforcement learning to learn the mentions of entities and relations in questions. We leverage the provided formal queries to characterize delayed rewards for optimizing a policy gradient objective for the parsing model. An empirical evaluation of our approach shows a significant improvement in the performance of entity and relation linking compared to the state of the art. We also demonstrate that a more accurate parsing component enhances the overall performance of QA systems.
arXiv:1909.12566v2 fatcat:yu3tzekjvzd5plww3vzfqugowu
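The toy sketch below illustrates the distant-supervision idea: token labels are sampled from a policy, a delayed reward is computed from the formal query rather than from gold token annotations, and a REINFORCE-style weight (reward minus baseline) would drive the policy-gradient update. All function and variable names are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def label_tokens(question_tokens, policy_probs):
    # Sample a label (e.g. 0=NONE, 1=ENTITY, 2=RELATION) per token.
    return [int(rng.choice(3, p=p)) for p in policy_probs]

def delayed_reward(labels, formal_query):
    # Distant supervision: no gold token labels; score the whole labelling
    # by overlap with mentions implied by the formal query (placeholder).
    predicted = {i for i, label in enumerate(labels) if label != 0}
    gold = formal_query["mention_positions"]
    return len(predicted & gold) / max(1, len(gold))

def reinforce_weight(reward, baseline):
    # Policy-gradient weight for the log-probability terms: (R - b).
    return reward - baseline

probs = np.full((5, 3), 1 / 3)           # uniform toy policy over 5 tokens
labels = label_tokens(["which", "river", "flows", "through", "Bonn"], probs)
reward = delayed_reward(labels, {"mention_positions": {1, 4}})
print(labels, reward, reinforce_weight(reward, baseline=0.2))
```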

DistLODStats: Distributed Computation of RDF Dataset Statistics

Gezim Sejdiu, Ivan Ermilov, Jens Lehmann, Mohamed Nadjib Mami
2018 Zenodo  
Over the last years, the Semantic Web has been growing steadily. Today, we count more than 10,000 datasets made available online following Semantic Web standards. Nevertheless, many applications, such as data integration, search, and interlinking, may not take full advantage of the data without having a priori statistical information about its internal structure and coverage. There are already a number of tools which offer such statistics, providing basic information about RDF datasets and vocabularies. However, those tools usually show severe deficiencies in terms of performance once the dataset size grows beyond the capabilities of a single machine. In this paper, we introduce a software component for statistical calculations of large RDF datasets, which scales out to clusters of machines. More specifically, we describe the first distributed in-memory approach for computing 32 different statistical criteria for RDF datasets using Apache Spark. The preliminary results show that our distributed approach improves upon a previous centralized approach we compare against and provides approximately linear horizontal scale-up. The criteria are extensible beyond the default criteria, are integrated into the larger SANSA framework, and are employed in four major usage scenarios beyond the SANSA community.
doi:10.5281/zenodo.3567965 fatcat:24tntp6einggrjawhjwo5c5aj4
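A minimal PySpark sketch of the kind of distributed computation described above, assuming a hypothetical HDFS path and a naively parsed N-Triples file; it computes only two simple criteria (distinct subjects and property usage counts), whereas the actual SANSA component implements 32.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdf-stats-sketch").getOrCreate()

def parse_ntriple(line: str):
    # Very naive N-Triples split: "<s> <p> <o> ." -> (s, p, o).
    s, p, o = line.rstrip(" .\n").split(" ", 2)
    return s, p, o

triples = (spark.sparkContext
           .textFile("hdfs:///data/dataset.nt")   # hypothetical input path
           .filter(lambda l: l.strip() and not l.startswith("#"))
           .map(parse_ntriple)
           .cache())

distinct_subjects = triples.map(lambda t: t[0]).distinct().count()
property_usage = triples.map(lambda t: t[1]).countByValue()

print("distinct subjects:", distinct_subjects)
print("top properties:",
      sorted(property_usage.items(), key=lambda kv: -kv[1])[:5])
```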

DBpedia Live Extraction [chapter]

Sebastian Hellmann, Claus Stadler, Jens Lehmann, Sören Auer
2009 Lecture Notes in Computer Science  
The DBpedia project extracts information from Wikipedia, interlinks it with other knowledge bases, and makes this data available as RDF. So far the DBpedia project has succeeded in creating one of the largest knowledge bases on the Data Web, which is used in many applications and research prototypes. However, the heavy-weight extraction process has been a drawback. It requires manual effort to produce a new release and the extracted information is not up-to-date. We extended DBpedia with a live extraction framework, which is capable of processing tens of thousands of changes per day in order to consume the constant stream of Wikipedia updates. This allows direct modifications of the knowledge base and closer interaction of users with DBpedia. We also show how the Wikipedia community itself is now able to take part in the DBpedia ontology engineering process and that an interactive roundtrip engineering between Wikipedia and DBpedia is made possible.
doi:10.1007/978-3-642-05151-7_33 fatcat:pi5rcp7v7zhwzpvn6eu3h7wf2a

Pattern Based Knowledge Base Enrichment [chapter]

Lorenz Bühmann, Jens Lehmann
2013 Lecture Notes in Computer Science  
Although an increasing number of RDF knowledge bases are published, many of those consist primarily of instance data and lack sophisticated schemata. Having such schemata allows more powerful querying, consistency checking and debugging as well as improved inference. One of the reasons why schemata are still rare is the effort required to create them. In this article, we propose a semi-automatic schema construction approach addressing this problem: First, the frequency of axiom patterns in existing knowledge bases is discovered. Afterwards, those patterns are converted to SPARQL-based pattern detection algorithms, which allow enriching knowledge base schemata. We argue that we present the first scalable knowledge base enrichment approach based on real schema usage patterns. The approach is evaluated on a large set of knowledge bases with a quantitative and qualitative result analysis.
doi:10.1007/978-3-642-41335-3_3 fatcat:6ssupffhmfa3tcai2szsinw3c4
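The core idea of turning a frequent axiom pattern into a SPARQL-based detection query can be illustrated as follows; the query template, endpoint, and interpretation of the counts are assumptions made for illustration and are not the concrete templates used in the paper.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hedged illustration: counting how often instances of a class use each
# property yields usage statistics that can suggest candidate schema axioms
# (e.g. domain/range or existential restrictions) when the ratio is high.
ENDPOINT = "https://dbpedia.org/sparql"     # example public endpoint
CANDIDATE_QUERY = """
SELECT ?p (COUNT(DISTINCT ?x) AS ?usage) WHERE {
  ?x a <http://dbpedia.org/ontology/City> ;
     ?p ?y .
} GROUP BY ?p ORDER BY DESC(?usage) LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(CANDIDATE_QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], row["usage"]["value"])
```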

SPIRIT: A Semantic Transparency and Compliance Stack

Patrick Westphal, Javier D. Fernández, Sabrina Kirrane, Jens Lehmann
2019 Zenodo  
The European General Data Protection Regulation (GDPR) sets new precedents for the processing of personal data. In this paper, we propose an architecture that provides an automated means to enable transparency with respect to personal data processing and sharing transactions and compliance checking with respect to data subject usage policies and GDPR legislative obligations.
doi:10.5281/zenodo.3567866 fatcat:twa4rpplhnesrlffl5coja2x4u

Improving the Long-Range Performance of Gated Graph Neural Networks [article]

Denis Lukovnikov, Jens Lehmann, Asja Fischer
2020 arXiv   pre-print
Many popular variants of graph neural networks (GNNs) that are capable of handling multi-relational graphs may suffer from vanishing gradients. In this work, we propose a novel GNN architecture based on the Gated Graph Neural Network with an improved ability to handle long-range dependencies in multi-relational graphs. An experimental analysis on different synthetic tasks demonstrates that the proposed architecture outperforms several popular GNN models.
arXiv:2007.09668v1 fatcat:sl6imgti3zcgba3tto5vwkev74
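For background, a Gated Graph Neural Network updates node states with a GRU-style gate over aggregated, relation-specific messages. The numpy sketch below shows one such update step under assumed shapes and randomly initialised parameters; it is not the improved architecture proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim, num_relations = 4, 8, 2

# Node states, relation-specific message matrices, and shared GRU parameters.
h = rng.normal(size=(num_nodes, dim))
W_rel = rng.normal(size=(num_relations, dim, dim)) / np.sqrt(dim)
W_z, U_z = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
W_r, U_r = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
W_h, U_h = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, edges):
    # edges: list of (source, relation, target); messages flow source -> target.
    m = np.zeros_like(h)
    for s, rel, t in edges:
        m[t] += h[s] @ W_rel[rel]          # relation-specific message
    z = sigmoid(m @ W_z + h @ U_z)         # update gate
    r = sigmoid(m @ W_r + h @ U_r)         # reset gate
    h_tilde = np.tanh(m @ W_h + (r * h) @ U_h)
    return (1 - z) * h + z * h_tilde       # gated state update

edges = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
for _ in range(3):                          # more steps propagate further
    h = ggnn_step(h, edges)
print(h.shape)
```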

A Scalable Framework for Quality Assessment of RDF Datasets [article]

Gezim Sejdiu, Anisa Rula, Jens Lehmann, Hajira Jabeen
2020 arXiv   pre-print
Over the last years, Linked Data has grown continuously. Today, we count more than 10,000 datasets being available online following Linked Data standards. These standards allow data to be machine readable and interoperable. Nevertheless, many applications, such as data integration, search, and interlinking, cannot take full advantage of Linked Data if it is of low quality. There exist a few approaches for the quality assessment of Linked Data, but their performance degrades with the increase in data size and quickly grows beyond the capabilities of a single machine. In this paper, we present DistQualityAssessment -- an open source implementation of quality assessment of large RDF datasets that can scale out to a cluster of machines. This is the first distributed, in-memory approach for computing different quality metrics for large RDF datasets using Apache Spark. We also provide a quality assessment pattern that can be used to generate new scalable metrics that can be applied to big data. The work presented here is integrated with the SANSA framework and has been applied to at least three use cases beyond the SANSA community. The results show that our approach is more generic, efficient, and scalable as compared to previously proposed approaches.
arXiv:2001.11100v1 fatcat:azwjqvmwu5bzvlgcqrjaoaik54
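As with the statistics example further above, a single quality metric can be sketched in PySpark; here a hypothetical "typed-subject ratio" (the fraction of distinct subjects that carry an rdf:type statement) serves as a stand-in for the many metrics the framework actually implements.

```python
from pyspark.sql import SparkSession

RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

spark = SparkSession.builder.appName("rdf-quality-sketch").getOrCreate()
lines = spark.sparkContext.textFile("hdfs:///data/dataset.nt")  # hypothetical path

# Naive N-Triples parsing: "<s> <p> <o> ." -> (s, p, o).
triples = (lines.filter(lambda l: l.strip() and not l.startswith("#"))
                .map(lambda l: l.rstrip(" .\n").split(" ", 2)))

subjects = triples.map(lambda t: t[0]).distinct()
typed_subjects = (triples.filter(lambda t: t[1] == RDF_TYPE)
                         .map(lambda t: t[0]).distinct())

# One toy quality metric: how many subjects have at least one rdf:type.
ratio = typed_subjects.count() / max(1, subjects.count())
print("typed-subject ratio:", ratio)
```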

Wikidata through the Eyes of DBpedia [article]

Ali Ismayilov, Dimitris Kontokostas, Sören Auer, Jens Lehmann, Sebastian Hellmann
2015 arXiv   pre-print
DBpedia is one of the first and most prominent nodes of the Linked Open Data cloud. It provides structured data for more than 100 Wikipedia language editions as well as Wikimedia Commons, has a mature ontology and a stable and thorough Linked Data publishing lifecycle. Wikidata, on the other hand, has recently emerged as a user-curated source for structured information which is included in Wikipedia. In this paper, we present how Wikidata is incorporated into the DBpedia ecosystem. Enriching DBpedia with structured information from Wikidata provides added value for a number of usage scenarios. We outline those scenarios and describe the structure and conversion process of the DBpediaWikidata dataset.
arXiv:1507.04180v1 fatcat:7ey3jxzeqrdltbiwegcuhetyxi

Training Multimodal Systems for Classification with Multiple Objectives [article]

Jason Armitage, Shramana Thakur, Rishi Tripathi, Jens Lehmann, Maria Maleshkova
2020 arXiv   pre-print
We learn about the world from a diverse range of sensory information. Automated systems lack this ability, as investigation has centred on processing information presented in a single form. Adapting architectures to learn from multiple modalities creates the potential to learn rich representations of the world - but current multimodal systems only deliver marginal improvements over unimodal approaches. Neural networks learn sampling noise during training, with the result that performance on unseen data is degraded. This research introduces a second objective over the multimodal fusion process, learned with variational inference. Regularisation methods are implemented in the inner training loop to control variance, and the modular structure stabilises performance as additional neurons are added to layers. This framework is evaluated on a multilabel classification task with textual and visual inputs to demonstrate the potential for multiple objectives and probabilistic methods to lower variance and improve generalisation.
arXiv:2008.11450v1 fatcat:j24tenlmzbddfms6sza3dchex4
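A schematic of what a second, variational objective over multimodal fusion can look like: fused text and image features parameterise a Gaussian latent code, and a KL term is added to the multilabel classification loss. The weighting, dimensions, and fusion-by-concatenation are assumptions; the paper's concrete regularisation scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, W_mu, W_logvar):
    # Map fused multimodal features to a Gaussian over a latent code.
    return features @ W_mu, features @ W_logvar

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)) -- the regulariser added as a second objective.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def binary_cross_entropy(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy "fusion": concatenate text and image feature vectors.
text_feat, image_feat = rng.normal(size=16), rng.normal(size=16)
fused = np.concatenate([text_feat, image_feat])

W_mu, W_logvar = rng.normal(size=(32, 8)) * 0.1, rng.normal(size=(32, 8)) * 0.1
W_out = rng.normal(size=(8, 3)) * 0.1                # 3-label multilabel head

mu, logvar = encode(fused, W_mu, W_logvar)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterisation
probs = 1.0 / (1.0 + np.exp(-(z @ W_out)))

y = np.array([1.0, 0.0, 1.0])
beta = 0.01                                          # assumed KL weight
loss = binary_cross_entropy(probs, y) + beta * kl_to_standard_normal(mu, logvar)
print(round(float(loss), 4))
```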
Showing results 1-15 of 5,283 results