Low-resource Learning with Knowledge Graphs: A Comprehensive Survey
[article]
2022
arXiv
pre-print
appeared in training, and few-shot learning (FSL), where new classes for prediction have only a small number of labeled samples available. ...
Among all the low-resource learning studies, many prefer to utilize auxiliary information in the form of a Knowledge Graph (KG), which is becoming increasingly popular for knowledge representation, ...
There have been quite a few benchmarks that can be used for evaluating KG-aware zero-shot and few-shot knowledge extraction methods. ...
arXiv:2112.10006v5
fatcat:vxl5hnqe5jaafgwuznbz556fmm
Language models show human-like content effects on reasoning
[article]
2022
arXiv
pre-print
For example, humans reason much more reliably about logical rules that are grounded in everyday situations than arbitrary rules about abstract attributes. ...
We explored this hypothesis across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task (Wason, 1968). ...
We did not find substantial differences if we used other entity types in the few-shot prompts (Appx. B.8). ...
arXiv:2207.07051v1
fatcat:yolokzfko5fvfnh2e2qgfqvusm
Meta-Learning with Adaptive Hyperparameters
[article]
2020
arXiv
pre-print
The experimental results validate that Adaptive Learning of hyperparameters for Fast Adaptation (ALFA) is an equally important ingredient that was often neglected in recent few-shot learning approaches ...
Consequently, we propose a new weight update rule that greatly enhances the fast adaptation process. ...
Few-shot regression We study the generalizability of the proposed weight-update rule through experiments on few-shot regression. ...
arXiv:2011.00209v2
fatcat:xxyot3nwejgjnco22lcacefwfu
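The adaptive weight-update rule the entry above describes can be sketched in a few lines. This is a minimal NumPy sketch, not the paper's implementation: in ALFA the per-parameter learning rate `alpha` and weight decay `beta` are generated at every inner-loop step by a small meta-network from gradient and weight statistics; here fixed values stand in for those outputs.

```python
import numpy as np

def alfa_inner_update(theta, grad, alpha, beta):
    """One ALFA-style inner-loop step: per-parameter learned
    learning rate (alpha) and weight decay (beta) replace the
    single scalar step size of plain MAML:
        theta <- beta * theta - alpha * grad
    """
    return beta * theta - alpha * grad

# toy quadratic loss L(theta) = 0.5 * ||theta - target||^2
target = np.array([1.0, -2.0])
theta = np.zeros(2)

# alpha/beta would come from the meta-network; fixed stand-ins here
alpha = np.array([0.5, 0.5])
beta = np.array([1.0, 1.0])

for _ in range(5):
    grad = theta - target        # gradient of the toy loss
    theta = alfa_inner_update(theta, grad, alpha, beta)
```

With `beta = 1` the rule reduces to gradient descent; learning `beta < 1` adds an adaptive weight-decay effect during fast adaptation.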
Simple Questions Generate Named Entity Recognition Datasets
[article]
2022
arXiv
pre-print
In few-shot NER, we outperform the previous best model by 5.2 F1 score on three benchmarks and achieve new state-of-the-art performance. ...
Recent named entity recognition (NER) models often rely on human-annotated datasets, which require extensive professional knowledge of the target domain and entities. ...
Results. Tables 3 and 4 show the performance of few-shot NER models and GeNER. Note that all baselines use few-shot examples during training. ...
arXiv:2112.08808v2
fatcat:j6puqsiyvrhqjfbhdj7kw7v4be
Few-Shot Learning with Intra-Class Knowledge Transfer
[article]
2020
arXiv
pre-print
First, a regressor trained only on the many-shot classes is used to evaluate the few-shot class means from only a few samples. ...
Second, superclasses are clustered, and the statistical mean and feature variance of each superclass are used as transferable knowledge inherited by the children few-shot classes. ...
It then uses transferred knowledge together with the few-shot samples to augment the few-shot data. ...
arXiv:2008.09892v1
fatcat:tmmlddvj5vfsxcvbtihdxwmf4i
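The two-step transfer described in this entry (estimate the few-shot class mean from a handful of samples, then inherit the feature variance of the parent superclass to augment the data) can be sketched as follows. This is a simplified NumPy sketch under the assumption of Gaussian class-conditional features; the dimensions and sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# many-shot classes pooled into a superclass: reliable statistics
many_shot_feats = rng.normal(loc=3.0, scale=2.0, size=(500, 8))
super_mean = many_shot_feats.mean(axis=0)
super_var = many_shot_feats.var(axis=0)

# few-shot child class: 5 samples give a usable mean estimate,
# but its own variance estimate would be unreliable
few_shot_feats = rng.normal(loc=3.5, scale=2.0, size=(5, 8))
few_mean = few_shot_feats.mean(axis=0)

# inherit the superclass variance and sample synthetic features
# to augment the 5 real few-shot samples
synthetic = rng.normal(loc=few_mean, scale=np.sqrt(super_var), size=(100, 8))
augmented = np.vstack([few_shot_feats, synthetic])
```

A downstream classifier is then trained on `augmented` rather than on the 5 raw samples alone.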
Subgraph-aware Few-Shot Inductive Link Prediction via Meta-Learning
[article]
2021
arXiv
pre-print
In this way, we find the model can quickly adapt to few-shot relationships using only a handful of known facts with inductive settings. ...
Moreover, we introduce a large-shot relation update procedure to traditional meta-learning to ensure that our model can generalize well both on few-shot and large-shot relations. ...
a competitive few-shot inductive KG embedding model, Meta-iKG, that fits the few-shot nature of knowledge graphs and can naturally generalize to unseen entities. ...
arXiv:2108.00954v1
fatcat:2hqxng3djzcdxdcd4hrdu5ybda
From Examples to Rules: Neural Guided Rule Synthesis for Information Extraction
[article]
2022
arXiv
pre-print
Further, we show that without training the synthesis algorithm on the specific domain, our synthesized rules achieve state-of-the-art performance on the 1-shot scenario of a task that focuses on few-shot ...
We use a transformer-based architecture to guide an enumerative search, and show that this reduces the number of steps that need to be explored before a rule is found. ...
from the train partition of the Few-Shot TACRED. ...
arXiv:2202.00475v1
fatcat:doeidlwo5rhe3nd4fpu75renxq
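The neural-guided enumerative search in the entry above can be illustrated with a toy sketch: candidate rules are enumerated over a small grammar of token-feature atoms, a scoring function orders the candidates, and the first rule consistent with all examples is returned. Everything here is hypothetical scaffolding, and the hand-written `score` function stands in for the transformer guide the paper trains.

```python
import itertools

# hypothetical rule atoms over token features (stand-in grammar)
ATOMS = ["word=born", "tag=PROPN", "lemma=in", "tag=NUM"]

def score(rule):
    """Stand-in for the neural guide: the paper scores partial
    rules with a transformer; here we simply prefer short rules
    that contain the atom 'word=born'."""
    s = -len(rule)
    if "word=born" in rule:
        s += 10
    return s

def matches(rule, example):
    """A rule matches an example if every atom is present."""
    return all(atom in example for atom in rule)

def synthesize(examples, max_len=3):
    """Enumerate candidate rules in guide-score order and return
    the first one that matches every positive example."""
    candidates = []
    for n in range(1, max_len + 1):
        candidates.extend(itertools.combinations(ATOMS, n))
    candidates.sort(key=score, reverse=True)
    for rule in candidates:
        if all(matches(rule, ex) for ex in examples):
            return rule
    return None

examples = [
    {"word=born", "tag=PROPN", "lemma=in"},
    {"word=born", "lemma=in", "tag=NUM"},
]
rule = synthesize(examples)
```

The point of the guide is exactly what the snippet claims: a good ordering means far fewer candidates are checked before a consistent rule is found.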
Cross-Domain Few-Shot Learning by Representation Fusion
[article]
2021
arXiv
pre-print
In order to quickly adapt to new data, few-shot learning aims at learning from few examples, often by using already acquired knowledge. ...
Ablation studies show that representation fusion is a decisive factor to boost cross-domain few-shot learning. ...
For few-shot learning, we perform a grid search to determine the best hyperparameter setting for each of the datasets and each of the 1-shot and 5-shot settings, using the loss on the vertical validation ...
arXiv:2010.06498v2
fatcat:g6u7prbqtbaoljduv62jnz4wz4
Optimization as a Model for Few-Shot Learning
2017
International Conference on Learning Representations
Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. ...
We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning. ...
FEW-SHOT LEARNING The best performing methods for few-shot learning have been mainly metric learning methods. ...
dblp:conf/iclr/RaviL17
fatcat:dq3izbjd7rgjvhil2kf2m5hrkm
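The key observation behind this meta-learner is that the gradient-descent update resembles an LSTM cell update: with the learner's parameters as cell state, `c_t = f_t * c_{t-1} + i_t * (-grad)` recovers SGD when the forget gate is 1 and the input gate equals the learning rate. The sketch below is a heavily simplified single step; the gating features and the two weight vectors `W_f`, `W_i` are illustrative stand-ins for the paper's LSTM meta-learner.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def meta_lstm_step(theta, grad, loss, W_f, W_i):
    """One meta-learned update step: the learner's parameters play
    the role of the LSTM cell state,
        theta_t = f_t * theta_{t-1} + i_t * (-grad),
    with forget gate f_t and input gate i_t computed from the
    current gradient and loss (simplified meta-features here)."""
    feat = np.array([grad.mean(), loss])
    f_t = sigmoid(W_f @ feat)   # learned "how much to keep" gate
    i_t = sigmoid(W_i @ feat)   # learned per-step learning rate
    return f_t * theta + i_t * (-grad)

# with f_t ~ 1 and i_t ~ alpha this reduces to plain SGD
theta = np.ones(3)
grad = 0.1 * np.ones(3)
W_f = np.array([100.0, 100.0])  # saturates f_t near 1
W_i = np.zeros(2)               # i_t = sigmoid(0) = 0.5
new_theta = meta_lstm_step(theta, grad, 0.5, W_f, W_i)
```

Because the gates are learned, the meta-learner can modulate both the effective learning rate and a forgetting term per step, rather than following a fixed schedule.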
Learning to Model the Tail
2017
Neural Information Processing Systems
This knowledge is encoded with a meta-network that operates on the space of model parameters, that is trained to predict many-shot model parameters from few-shot model parameters. ...
That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. ...
This certainly would not be useful to apply on few-shot models. Similarly, a low threshold may not be useful when regressing from many-shot models. ...
dblp:conf/nips/WangRH17
fatcat:62h7orilzzektez4cmxuftztay
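The meta-network described in this entry operates on the space of model parameters: for head classes, where both few-shot and many-shot classifier weights are available, one can fit a map from the former to the latter and then apply it to tail classes that only have a few-shot estimate. The sketch below uses a plain linear least-squares map on synthetic weights; the paper uses a deeper meta-network, so this is an assumption-laden toy.

```python
import numpy as np

rng = np.random.default_rng(1)

# head classes: both many-shot weights and noisy few-shot fits exist,
# so we can train F: w_fewshot -> w_manyshot by regression
d, n_head = 4, 200
w_many = rng.normal(size=(n_head, d))
w_few = w_many + rng.normal(scale=0.3, size=(n_head, d))

# the "meta-network" here is just a linear map fit by least squares
F, *_ = np.linalg.lstsq(w_few, w_many, rcond=None)

# tail class: only a few-shot weight estimate is available;
# F predicts ("hallucinates") its many-shot parameters
w_tail_few = rng.normal(size=(1, d))
w_tail_pred = w_tail_few @ F
```

The learned map shrinks the noisy few-shot weights toward the many-shot regime, which is the gradual knowledge transfer the snippet refers to.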
LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training
[article]
2021
arXiv
pre-print
To this end, this paper proposes a unified framework for logical knowledge-conditioned text generation in the few-shot setting. ...
Experimental results demonstrate that our approach can obtain better few-shot performance than baselines. ...
Our work relates to few-shot NLG. [4] propose TableGPT, which focuses on generating high-fidelity text for table-to-text generation using limited training pairs. [40] propose a few-shot NLG approach ...
arXiv:2112.01404v1
fatcat:x7rdl3dahnhq7fyuni5jkks6li
Meta-Learning with Variational Semantic Memory for Word Sense Disambiguation
[article]
2021
arXiv
pre-print
This inspired recent research on few-shot WSD using meta-learning. ...
We show our model advances the state of the art in few-shot WSD, supports effective learning in extremely data scarce (e.g. one-shot) scenarios and produces meaning prototypes that capture similar senses ...
For Monte Carlo sampling, we set different L Z and L M for each embedding function and |S|, which are chosen using the validation set. ...
arXiv:2106.02960v1
fatcat:r7dwmqgczbghlcknke2zgc7kv4
Few-Shot Sequence Labeling with Label Dependency Transfer and Pair-wise Embedding
[article]
2019
arXiv
pre-print
When applying CRF in the few-shot scenarios, the discrepancy of label sets among different domains makes it hard to use the label dependency learned in prior domains. ...
While few-shot classification has been widely explored with similarity based methods, few-shot sequence labeling poses a unique challenge as it also calls for modeling the label dependencies. ...
Few-shot Learning Few-shot learning is built upon the assumption that prior knowledge is potentially helpful. Traditional methods depend highly on hand-crafted features (Fink 2005). ...
arXiv:1906.08711v3
fatcat:s3ts6jlm4bakxguhxpjvuhrila
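The label-dependency problem in the entry above (label sets differ across domains, so learned CRF transitions cannot be reused directly) is commonly addressed by storing transitions over abstract tags and expanding them to the target label set. The sketch below is a simplified illustration of that idea; the abstract transition scores are made-up numbers, and it ignores the finer same- vs different-entity transition distinctions a full treatment would model.

```python
# transition scores learned in a source domain, collapsed onto
# abstract BIO tags so they are domain-independent (made-up values)
abstract_trans = {
    ("O", "B"): 0.8, ("B", "I"): 1.2, ("I", "I"): 0.9,
    ("B", "O"): 0.4, ("I", "O"): 0.5, ("O", "O"): 1.0,
    ("B", "B"): -0.7, ("I", "B"): -0.6, ("O", "I"): -2.0,
}

def abstract(label):
    """Map a concrete label like 'B-drug' to its abstract tag 'B'."""
    return label.split("-")[0]

def expand_transitions(target_labels):
    """Build a target-domain CRF transition table by expanding the
    abstract scores onto the new domain's concrete label set."""
    return {
        (a, b): abstract_trans[(abstract(a), abstract(b))]
        for a in target_labels
        for b in target_labels
    }

# a target domain with labels never seen in the source domain
trans = expand_transitions(["O", "B-drug", "I-drug"])
```

The expanded table then serves as the transition component of a CRF decoder in the new domain, with emission scores coming from the similarity-based few-shot classifier.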
Inductive Relation Prediction by BERT
[article]
2021
arXiv
pre-print
Meanwhile, it demonstrates strong generalization capability in few-shot learning and is explainable. ...
Relation prediction in knowledge graphs is dominated by embedding based methods which mainly focus on the transductive setting. ...
We use a list of test triples whose size is 10% of the train-graph. In the few-shot setting, we reuse the few-shot train-graph used in the inductive setting and test on the aforementioned test links. ...
arXiv:2103.07102v1
fatcat:a4ntvdq7bvedtbwfzqmss7yxcy
An Empirical Evaluation of Argumentation in Explaining Inconsistency-Tolerant Query Answering
2017
International Workshop on Description Logics
In this paper we answer empirically the following research question: "Are dialectical explanation methods more effective than one-shot explanation methods for Intersection of Closed Repairs inconsistency-tolerant semantics in existential rules knowledge bases?" ...
Introduction We place ourselves in a logic-based setting where we consider inconsistent knowledge bases expressed using existential rules [9]. ...
dblp:conf/dlog/HechamASC17
fatcat:eccztgdivfh33igyp5wgfjwczi
Showing results 1 — 15 out of 69,357 results