Learning of OWL Class Descriptions on Very Large Knowledge Bases
2009
International Journal on Semantic Web and Information Systems (IJSWIS)
Large knowledge bases such as DBpedia, OpenCyc, GovTrack, and others are emerging and are freely available as Linked Data and SPARQL endpoints. ...
We describe how we leverage existing techniques to achieve scalability on large knowledge bases available as SPARQL endpoints or Linked Data. ...
Although advancements have been made in approximate reasoning for OWL, it is not feasible to load very large knowledge bases like DBpedia, OpenCyc, and others into a reasoner. ...
doi:10.4018/jswis.2009040102
fatcat:zy3ujppp5vcapjqhc6ua7b5poy
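The abstract above centres on querying SPARQL endpoints instead of loading the full knowledge base into a reasoner. As an illustration of that access pattern only (the endpoint URL, the SPARQLWrapper library, and the example resources are assumptions, not details from the paper), a learner can pull just the type fragment around its example individuals:

```python
# Minimal sketch of the SPARQL access pattern: fetch only the type
# fragment around example individuals instead of loading the whole KB
# into a reasoner. Endpoint URL and example URIs are illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # assumed public endpoint

def types_of(individual_uri: str) -> set[str]:
    """Return the rdf:type URIs of one individual."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"SELECT ?t WHERE {{ <{individual_uri}> a ?t }}")
    rows = sparql.query().convert()["results"]["bindings"]
    return {row["t"]["value"] for row in rows}

# A class learner would build candidate descriptions from types shared
# by positive examples but absent from negative ones.
positives = ["http://dbpedia.org/resource/Berlin",
             "http://dbpedia.org/resource/Paris"]
shared = set.intersection(*(types_of(uri) for uri in positives))
print(sorted(shared)[:5])
```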
Subjective Knowledge Acquisition and Enrichment Powered By Crowdsourcing
[article]
2017
arXiv
pre-print
conflict between large-scale knowledge facts and limited crowdsourcing resources. ...
To address this challenge, in this work, we define knowledge inference rules and then select the seed knowledge judiciously for crowdsourcing to maximize the inference power under the resource constraint ...
real large-scale knowledge base and crowdsourcing platform and verify the effectiveness of the CoSKA system. ...
arXiv:1705.05720v1
fatcat:r4es3mqs5jdtjh2j32tawzlwc4
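To illustrate the seed-selection idea in this abstract (choosing seed knowledge so that inference power is maximized under a crowdsourcing budget), here is a minimal greedy sketch; the inference map and the greedy rule are illustrative assumptions, not the CoSKA algorithm itself:

```python
# Greedy budgeted seed selection: repeatedly pick the fact whose
# verification lets us infer the most not-yet-covered facts.
# `inferable[f]` maps a fact to the facts derivable once f is verified
# (a hypothetical stand-in for the paper's inference rules).
def select_seeds(inferable: dict[str, set[str]], budget: int) -> list[str]:
    covered: set[str] = set()
    seeds: list[str] = []
    for _ in range(budget):
        # marginal gain of a seed = facts it newly makes inferable
        best = max(inferable, key=lambda f: len(inferable[f] - covered),
                   default=None)
        if best is None or not (inferable[best] - covered):
            break
        seeds.append(best)
        covered |= inferable[best] | {best}
        del inferable[best]
    return seeds

rules = {"f1": {"f2", "f3"}, "f2": {"f3"}, "f4": {"f5", "f6", "f7"}}
print(select_seeds(rules, budget=2))  # ['f4', 'f1']
```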
CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models
2021
AI Open
Besides the performance improvements, the dynamically selected knowledge in Coke can describe the semantics of text-related knowledge in a more interpretable form than the conventional PLMs. ...
In this paper, we propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant ...
Improvements on downstream tasks come from the better knowledge selection component and appear across various PLMs like BERT-base, RoBERTa-base, and RoBERTa-large. ...
doi:10.1016/j.aiopen.2021.06.004
fatcat:ajnkkebuhrcntkkbvfjuqgpx6i
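As a toy illustration of the context-conditioned knowledge selection described above: score candidate triple embeddings against the text embedding and keep the top-k. Coke's real selector is a learned, semantics-driven component; the dot-product scorer below is a stand-in:

```python
# Toy knowledge selection: rank candidate triple embeddings by their
# dot product with the sentence embedding and keep the top-k. The
# random vectors are placeholders for learned embeddings.
import numpy as np

def select_knowledge(text_emb: np.ndarray,
                     triple_embs: np.ndarray, k: int) -> np.ndarray:
    scores = triple_embs @ text_emb       # one relevance score per triple
    return np.argsort(scores)[::-1][:k]   # indices of the top-k triples

rng = np.random.default_rng(0)
text = rng.normal(size=64)
triples = rng.normal(size=(100, 64))
print(select_knowledge(text, triples, k=5))
```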
Section 5: Decision Support, Knowledge Representation and Management: A broad methodological spectrum
2006
IMIA Yearbook of Medical Informatics
Summary: To summarize current excellent research in the field of decision support, knowledge management, and representation. Synopsis of the articles selected for the IMIA Yearbook 2006. Decision Support, Knowledge ...
on natural language processing, the evaluation of large semantic networks, and a comprehensive ontology for a randomised controlled trial database to support evidence-based practice. The best paper selection ...
Acknowledgement: We gratefully acknowledge the support of Martina Hutter and of the reviewers in the selection process of the IMIA Yearbook. ...
doi:10.1055/s-0038-1638472
fatcat:skqrj5qrmvf35gdieasuokfeku
Can I Be of Further Assistance? Using Unstructured Knowledge Access to Improve Task-oriented Conversational Modeling
[article]
2021
arXiv
pre-print
Our approach works in a pipelined manner with knowledge-seeking turn detection, knowledge selection, and response generation in sequence. ...
We introduce novel data augmentation methods for the first two steps and demonstrate that the use of information extracted from dialogue context improves the knowledge selection and end-to-end performances ...
Knowledge Selection: For knowledge selection, the baseline system predicts the relevance between a given dialogue context and every candidate in the whole knowledge base, which is very time-consuming, especially ...
arXiv:2106.09174v1
fatcat:jesri2izlbcxvedyllqta7nqwe
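The pipeline named in this abstract (knowledge-seeking turn detection, then knowledge selection, then response generation) can be sketched as a three-stage skeleton; the component functions below are placeholders, not the authors' models:

```python
# Skeleton of the three-stage pipeline: detection -> selection ->
# generation. Each stage is passed in as a callable placeholder.
from typing import Callable, Optional

def pipeline(dialogue: list[str],
             detect: Callable[[list[str]], bool],
             select: Callable[[list[str]], str],
             generate: Callable[[list[str], Optional[str]], str]) -> str:
    # Stage 1: does the last user turn seek external knowledge?
    if not detect(dialogue):
        return generate(dialogue, None)   # ordinary task-oriented reply
    # Stage 2: pick the most relevant knowledge snippet.
    snippet = select(dialogue)
    # Stage 3: generate a knowledge-grounded response.
    return generate(dialogue, snippet)

reply = pipeline(
    ["Book a table for two.", "Do they have parking?"],
    detect=lambda d: "?" in d[-1],        # toy detector
    select=lambda d: "The restaurant has a free parking lot.",
    generate=lambda d, k: k or "Done! Anything else?")
print(reply)
```

Scoring every knowledge candidate against the context, as the baseline in the snippet does, is what makes selection expensive; the paper's contribution targets exactly that stage.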
WinoWhy: A Deep Diagnosis of Essential Commonsense Knowledge for Answering Winograd Schema Challenge
[article]
2020
arXiv
pre-print
In this paper, we present the first comprehensive categorization of essential commonsense knowledge for answering the Winograd Schema Challenge (WSC). ...
(i.e., what kind of knowledge cannot be effectively represented or inferred with existing methods) and shed some light on the commonsense knowledge that we need to acquire in the future for better commonsense ...
Names, definitions, and examples of selected knowledge types are shown in Table 1. ...
arXiv:2005.05763v1
fatcat:jqngr4agbbh6dpq73tkrimvtuu
Prediction, Selection, and Generation: Exploration of Knowledge-Driven Conversation System
[article]
2021
arXiv
pre-print
In this paper, we combine the knowledge bases and pre-training model to propose a knowledge-driven conversation system. ...
In open-domain conversational systems, it is important but challenging to leverage background knowledge. ...
In the response generation stage, the Topic Prediction Model and the Knowledge Matching Model use the large model and the base model, respectively. ...
arXiv:2104.11454v3
fatcat:7xz7besymfgiteq2ytexntuina
Selection Strategies for Commonsense Knowledge
[article]
2022
arXiv
pre-print
Selection strategies are broadly used in first-order logic theorem proving to select those parts of a large knowledge base that are necessary to prove a theorem at hand. ...
In knowledge bases with commonsense knowledge, symbol names are usually chosen to have a meaning and this meaning provides valuable information for selection strategies. ...
The task of selection strategies is: For a large knowledge base and a problem, determine a (preferably small) subset of the knowledge base for which a proof for the problem can be found. ...
arXiv:2202.09163v2
fatcat:jvy6shgegjcj5ibrijdyxfybci
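The selection task defined above admits a compact sketch: starting from the problem's symbols, repeatedly add axioms that share symbols with what has been reached so far. Real strategies such as SInE use refined relevance triggers; the sketch below shows only the unrefined core idea, with a hypothetical toy knowledge base:

```python
# Symbol-based axiom selection: grow the set of reachable symbols
# outward from the problem for a fixed depth, keeping every axiom
# that shares a symbol with the reachable set.
def select_axioms(problem_symbols: set[str],
                  axioms: dict[str, set[str]],
                  depth: int = 2) -> set[str]:
    reachable = set(problem_symbols)
    chosen: set[str] = set()
    for _ in range(depth):
        for name, syms in axioms.items():
            if name not in chosen and syms & reachable:
                chosen.add(name)
                reachable |= syms
    return chosen

kb = {"ax1": {"bird", "fly"}, "ax2": {"penguin", "bird"},
      "ax3": {"car", "wheel"}}
print(select_axioms({"penguin"}, kb))  # {'ax2', 'ax1'}
```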
Page 328 of American Society of Civil Engineers. Collected Journals Vol. 118, Issue CO2
[page]
1992
American Society of Civil Engineers. Collected Journals
[Figure: a main knowledge base partitioned into Small, Medium, Large, and Extra-Large knowledge bases]
The corresponding knowledge bases are labeled small, medium, large, and extra-large, respectively. ...
Careful Selection of Knowledge to Solve Open Book Question Answering
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
In this paper we address QA with respect to the OpenBookQA dataset and combine state-of-the-art language models with abductive information retrieval (IR), information-gain-based re-ranking, passage selection ...
Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts, and common knowledge about a topic ...
In Question Answering, our system uses P_ij to answer the questions using a BERT-large-based MCQ model, similar to its use in solving SWAG (Zellers et al., 2018). ...
doi:10.18653/v1/p19-1615
dblp:conf/acl/BanerjeePMB19
fatcat:knzbjyynrjhvrgp7ggwfltqcee
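Assuming P_ij denotes the model's score for answer choice j read against retrieved passage i (an interpretation of the snippet, not a detail confirmed by it), a simple decision rule over such scores looks like this:

```python
# Hedged sketch of multiple-choice answering from per-passage,
# per-choice scores: take, for each choice, its best score over all
# passages, then answer with the highest-scoring choice.
def answer(scores: list[list[float]]) -> int:
    n_choices = len(scores[0])
    best = [max(row[j] for row in scores) for j in range(n_choices)]
    return max(range(n_choices), key=best.__getitem__)

print(answer([[0.1, 0.7, 0.2, 0.0],
              [0.3, 0.4, 0.9, 0.1]]))  # -> choice 2
```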
Incorporating Connections Beyond Knowledge Embeddings: A Plug-and-Play Module to Enhance Commonsense Reasoning in Machine Reading Comprehension
[article]
2021
arXiv
pre-print
However, they make limited use of a large number of connections between nodes in Knowledge Graphs (KG), which could be pivotal cues to build the commonsense reasoning chains. ...
Experimental results on ReCoRD, a large-scale public MRC dataset requiring commonsense reasoning, show that PIECER introduces stable performance improvements for four representative base MRC models, especially ...
For the external commonsense knowledge, we select a large-scale commonsense knowledge graph, ConceptNet (Speer et al., 2017). ...
arXiv:2103.14443v1
fatcat:cnmfvwdfe5e4pb5hgwldjrn3dq
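As a sketch of one way to use connections between KG nodes mentioned in a passage (the selection step only; PIECER's plug-in module then injects such cues into the MRC model, and the example edges below are hypothetical):

```python
# Keep only ConceptNet-style edges whose head and tail both occur
# among the entities mentioned in the passage; these connections are
# the candidate commonsense reasoning cues.
def connecting_edges(entities: set[str],
                     kg_edges: list[tuple[str, str, str]]):
    return [(h, r, t) for h, r, t in kg_edges
            if h in entities and t in entities]

edges = [("doctor", "RelatedTo", "hospital"),
         ("doctor", "CapableOf", "heal"),
         ("hospital", "UsedFor", "treatment")]
print(connecting_edges({"doctor", "hospital"}, edges))
```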
Combining RDR-Based Machine Learning Approach and Human Expert Knowledge for Phishing Prediction
[chapter]
2016
Lecture Notes in Computer Science
The results show improvements in the prediction accuracy of the knowledge acquired by machine learning. ...
Three models were included in the comparison: RDR with machine learning and human knowledge, RDR with machine learning only, and J48 machine learning only. ...
A large number of knowledge-based systems are built for acquiring and maintaining the knowledge for detecting and predicting the phishing website. ...
doi:10.1007/978-3-319-42911-3_7
fatcat:36eqlmo4mvgsfmmokrc2gyzm6y
ExBERT: An External Knowledge Enhanced BERT for Natural Language Inference
[article]
2021
arXiv
pre-print
Neural language representation models such as BERT, pre-trained on large-scale unstructured corpora, lack explicit grounding to real-world commonsense knowledge and are often unable to remember facts required ...
Extensive experiments on the challenging SciTail and SNLI benchmarks demonstrate the effectiveness of ExBERT: in comparison to the previous state-of-the-art, we obtain an accuracy of 95.9% on SciTail and ...
The selection process retrieves a large number of KG triples, not all of which are relevant to the context of the premise. We filter the selected triples in the ranking step. ...
arXiv:2108.01589v1
fatcat:ccllx3bznvgfjhlop2mcimcgny
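The retrieve-then-rank pattern in this snippet can be sketched with a stand-in relevance score; the Jaccard scorer below is an assumption, not ExBERT's actual ranker:

```python
# Retrieve-then-rank triple filtering: score each retrieved triple by
# word overlap (Jaccard) with the premise context and keep the top-k.
def rank_triples(context_tokens: set[str],
                 triples: list[tuple[str, str, str]], top_k: int):
    def score(triple: tuple[str, str, str]) -> float:
        words = set(" ".join(triple).lower().split())
        return len(words & context_tokens) / len(words | context_tokens)
    return sorted(triples, key=score, reverse=True)[:top_k]

ctx = set("a metal spoon conducts heat quickly".split())
cands = [("spoon", "MadeOf", "metal"),
         ("metal", "HasProperty", "conducts heat"),
         ("spoon", "UsedFor", "eating soup")]
print(rank_triples(ctx, cands, top_k=2))  # keeps the two metal/heat triples
```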
CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models
[article]
2020
arXiv
pre-print
Besides the performance improvements, the dynamically selected knowledge in Coke can describe the semantics of text-related knowledge in a more interpretable form than the conventional PLMs. ...
In this paper, we propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant ...
Besides, we choose BERT-base (Devlin et al., 2019), RoBERTa-base, and RoBERTa-large as our base models. ...
arXiv:2009.13964v4
fatcat:c6bdnapvyrda3mbdl2e5gifr5q
Page 186 of American Society of Civil Engineers. Collected Journals Vol. 118, Issue CO1
[page]
1992
American Society of Civil Engineers. Collected Journals
At least two interviews were conducted for approximately 70% of the formwork systems contained in the knowledge base. ...
Inaccessibility of Cost Data
Cost is a major factor in selecting a particular forming system for buildings. ...