Logic-Guided Data Augmentation and Regularization for Consistent Question Answering
[article]
2020
arXiv
pre-print
Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model. ...
This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models. ...
We thank Antoine Bosselut, Tim Dettmers, Rik Koncel-Kedziorski, Sewon Min, Keisuke Sakaguchi, David Wadden, Yizhong Wang, the members of UW NLP group and AI2, and the anonymous reviewers for their insightful ...
arXiv:2004.10157v2
fatcat:5hbhcyawirdbbhqxic7qlqb6ou
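To make the consistency-based regularizer above concrete, here is a minimal sketch in PyTorch, assuming a binary (no/yes) comparison QA model; this is my own simplification of a symmetry-consistency term, not the authors' released code.

```python
# A minimal sketch of a symmetry-consistency regularizer for comparison
# questions: if q' logically reverses q, then P(yes|q) + P(yes|q') should be 1.
import torch
import torch.nn.functional as F

def symmetry_consistency_loss(logits_q, logits_q_rev):
    """logits_q, logits_q_rev: [batch, 2] logits over (no, yes) for a
    comparison question and its logically reversed augmentation."""
    p_yes = F.softmax(logits_q, dim=-1)[:, 1]
    p_yes_rev = F.softmax(logits_q_rev, dim=-1)[:, 1]
    return (p_yes + p_yes_rev - 1.0).abs().mean()

# Combined objective: cross-entropy on original and augmented data plus the
# weighted consistency term (lambda_cons is a tunable hyperparameter).
# loss = F.cross_entropy(logits_q, y) + F.cross_entropy(logits_q_rev, y_rev) \
#        + lambda_cons * symmetry_consistency_loss(logits_q, logits_q_rev)
```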
Logic-Guided Data Augmentation and Regularization for Consistent Question Answering
2020
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
unpublished
Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model. ...
This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models. ...
We thank Antoine Bosselut, Tim Dettmers, Rik Koncel-Kedziorski, Sewon Min, Keisuke Sakaguchi, David Wadden, Yizhong Wang, the members of UW NLP group and AI2, and the anonymous reviewers for their insightful ...
doi:10.18653/v1/2020.acl-main.499
fatcat:6zdjzhxuk5e6hlpnpzdhy63u74
Logically Consistent Loss for Visual Question Answering
[article]
2020
arXiv
pre-print
Given an image, background knowledge, and a set of questions about an object, human learners answer the questions very consistently regardless of question forms and semantic tasks. ...
To demonstrate the usefulness of this proposal, we train and evaluate MAC-net-based VQA machines with and without the proposed logically consistent loss and the proposed data organization. ...
[17] use a data-augmentation approach to enforce consistency between pairs of questions and answers, Selvaraju et al. ...
arXiv:2011.10094v1
fatcat:mjiflqckvbd33plsq6ldpv4qty
Augmenting Neural Networks with First-order Logic
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
In this paper, we present a novel framework for introducing declarative knowledge to neural network architectures in order to guide training and prediction. ...
Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes. ...
Acknowledgements We thank members of the NLP group at the University of Utah for their valuable insights and suggestions; and reviewers for pointers to related works, corrections, and helpful comments. ...
doi:10.18653/v1/p19-1028
dblp:conf/acl/LiS19
fatcat:fd267yehgnggphq7jtznfzi2la
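One way to read the framework: a declarative rule is compiled into a differentiable penalty. The sketch below uses the Lukasiewicz relaxation of implication, a common choice; the lexical-match example and tensors are illustrative, and the paper itself attaches such constraints to named neurons rather than to a loss term.

```python
# A sketch, assuming rules of the form A(x) -> B(x) with truth values in
# [0, 1]; the Lukasiewicz relaxation scores the violation as max(0, a - b).
import torch

def implication_violation(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Soft violation of a -> b: zero when b >= a, growing as b falls below a."""
    return torch.relu(a - b)

# Example: where a lexical-match indicator fires (a close to 1), the aligned
# attention weight b should be high; the mean violation joins the loss.
a = torch.tensor([1.0, 0.0, 1.0])   # antecedent: word pair found in lexicon
b = torch.tensor([0.9, 0.2, 0.3])   # consequent: model's attention weight
penalty = implication_violation(a, b).mean()
```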
GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing
[article]
2021
arXiv
pre-print
We pre-train our model on the synthetic data using a novel text-schema linking objective that predicts the syntactic role of a table field in the SQL for each question-SQL pair. ...
We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data. ...
However, data augmentation becomes more complex and less beneficial if we want to apply it to generate data for a random domain. ...
arXiv:2009.13845v2
fatcat:dzy5bu2cbna2lg7m6zun4lavhm
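The flavor of the synthetic pre-training data can be seen in the toy generator below; the templates and the 'concerts' schema are hypothetical stand-ins, since GraPPa induces its grammar from an existing annotated text-to-SQL corpus.

```python
# A toy sketch: sample question-SQL pairs from synchronous templates over a
# table schema. Templates and schema are invented for illustration only.
import random

TEMPLATES = [
    ("how many rows are there in {table} ?",
     "SELECT COUNT(*) FROM {table}"),
    ("what is the maximum {col} in {table} ?",
     "SELECT MAX({col}) FROM {table}"),
]

def sample_pair(schema):
    """schema: mapping of table name -> list of column names."""
    table = random.choice(list(schema))
    col = random.choice(schema[table])
    question, sql = random.choice(TEMPLATES)
    return question.format(col=col, table=table), sql.format(col=col, table=table)

print(sample_pair({"concerts": ["year", "venue", "attendance"]}))
```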
Augmenting Neural Networks with First-order Logic
[article]
2020
arXiv
pre-print
In this paper, we present a novel framework for introducing declarative knowledge to neural network architectures in order to guide training and prediction. ...
Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes. ...
Acknowledgements We thank members of the NLP group at the University of Utah for their valuable insights and suggestions; and reviewers for pointers to related works, corrections, and helpful comments. ...
arXiv:1906.06298v3
fatcat:o4i7fbgvmrc5hlmi5zhoawfg7q
Learning from Lexical Perturbations for Consistent Visual Question Answering
[article]
2020
arXiv
pre-print
... regularization tool for VQA models. ...
Existing Visual Question Answering (VQA) models are often fragile and sensitive to input variations. ...
To summarize, our main contributions are: • A novel VQA consistency regularization method that augments questions and enforces similar answers and reasoning steps for the original and augmented questions ...
arXiv:2011.13406v2
fatcat:poyfejkn4nbx3h3t6pkzdlbzly
Iterative Search for Weakly Supervised Semantic Parsing
2019
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. ...
... that coincidentally evaluate to the correct answer. ...
Acknowledgments We would like to thank Jonathan Berant and Noah Smith for comments on earlier drafts and Chen Liang for helping us with implementation details of MAPO. ...
doi:10.18653/v1/n19-1273
dblp:conf/naacl/Dasigi0MZH19
fatcat:ow7ygq7sqbdwjdmcw2bbwgbdqa
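The alternation can be shown with a runnable toy, where aggregations over a number list stand in for the paper's logical forms and a count-based "model" stands in for marginal-likelihood training; none of this is the paper's actual system.

```python
# A toy sketch of the iterative scheme: a search step retrieves logical forms
# consistent with the gold denotation, and a learning step re-fits the model
# on everything retrieved so far.
from collections import Counter

FORMS = {"max": max, "min": min, "sum": sum}  # toy logical-form inventory

def train(examples, iterations=3):
    weights = Counter({form: 1 for form in FORMS})
    retrieved = {i: set() for i in range(len(examples))}
    for _ in range(iterations):
        # Search: keep forms whose execution matches the answer,
        # visiting them in order of the current model's preference.
        for i, (nums, answer) in enumerate(examples):
            ranked = sorted(FORMS, key=lambda f: weights[f], reverse=True)
            retrieved[i] |= {f for f in ranked if FORMS[f](nums) == answer}
        # Learning: re-estimate weights from the retrieved consistent forms.
        weights = Counter(f for forms in retrieved.values() for f in forms)
    return weights

print(train([([3, 1, 2], 3), ([4, 9], 13)]))  # Counter({'max': 1, 'sum': 1})
```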
Compositional Semantic Parsing on Semi-Structured Tables
2015
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. ...
We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines. ...
Additionally, code, data, and experiments for this paper are available on the CodaLab platform at https://www.codalab.org/worksheets/0xf26cd79d4d734287868923ad1067cf4c/. ...
doi:10.3115/v1/p15-1142
dblp:conf/acl/PasupatL15
fatcat:d5xqyxrcmrfshoxka7uaed4yku
Compositional Semantic Parsing on Semi-Structured Tables
[article]
2015
arXiv
pre-print
Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. ...
We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines. ...
Additionally, code, data, and experiments for this paper are available on the CodaLab platform at https://www.codalab.org/worksheets/0xf26cd79d4d734287868923ad1067cf4c/. ...
arXiv:1508.00305v1
fatcat:iyqdr3ikjrfmli5wqzxomqznqm
A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
[article]
2021
arXiv
pre-print
... Visual Content Manipulation; and (iv) Answer Distribution Shift. ...
Differing from previous studies focused on one specific type of robustness, MANGO is task-agnostic, and enables universal performance lift for pre-trained models over diverse tasks designed to evaluate ...
It consists of two datasets: VQA-LOL Compose (logical combinations of multiple closed binary questions about the same image in VQA v2) and VQA-LOL Supplement (logical combinations of additional questions ...
arXiv:2012.08673v2
fatcat:orl3dt3r3fg3xjac2rt4xwqxxu
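For reference, the VQA-LOL construction mentioned above can be illustrated with a small helper; the phrasing rule and the example are my own, not the dataset's generation code.

```python
# A toy illustration of composing closed binary VQA questions and deriving
# the gold answer from the component answers; string handling is simplistic.
def compose(q1: str, a1: bool, q2: str, a2: bool, op: str = "and"):
    question = f"{q1.rstrip('?')} {op} {q2[0].lower()}{q2[1:]}"
    answer = (a1 and a2) if op == "and" else (a1 or a2)
    return question, answer

# ('Is the dog sleeping and is the light on?', False)
print(compose("Is the dog sleeping?", True, "Is the light on?", False))
```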
Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection
[article]
2020
arXiv
pre-print
The former refers to the logical consistency between coarse- and fine-grained predictions, which is used to regularize the training process with propositional Boolean expressions. ...
Specifically, we leverage the declarative knowledge expressed in both first-order logic and natural language. ...
Acknowledgments This work is partially supported by National Natural Science Foundation of China (No. 71991471), Science and Technology Commission of Shanghai Municipality Grant (No.20dz1200600, No.18DZ1201000 ...
arXiv:2004.14201v2
fatcat:vfscs2s2vzb35o3gwlkqq6kwfu
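The coarse-to-fine consistency idea admits a compact relaxation; the formulation below is my own sketch of one such rule, not the paper's exact compilation of propositional Boolean expressions.

```python
# A sketch of one coarse/fine consistency rule: if any fine-grained
# propaganda technique is predicted in a sentence, the coarse sentence-level
# probability should be at least as high. OR is relaxed to max, the
# implication to relu; shapes and example values are illustrative.
import torch

def coarse_fine_violation(p_fine: torch.Tensor, p_coarse: torch.Tensor):
    """Soft violation of (OR_i fine_i) -> coarse."""
    return torch.relu(p_fine.max() - p_coarse)

p_fine = torch.tensor([0.8, 0.1, 0.05])  # per-technique probabilities
p_coarse = torch.tensor(0.3)             # sentence-level probability
penalty = coarse_fine_violation(p_fine, p_coarse)  # 0.5
```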
Neuro-Symbolic Entropy Regularization
[article]
2022
arXiv
pre-print
Such a large output space makes learning hard and requires vast amounts of labeled data. Different approaches leverage alternate sources of supervision. ...
... and more likely to be valid. ...
Neuro-symbolic entropy regularization guides the network toward valid and confident predictions. ... unlabeled points, thereby supplementing scarce labeled data with abundant unlabeled data. ...
arXiv:2201.11250v1
fatcat:6li4zcdrknaxzc3whcnlkp27qm
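The central quantity is the entropy of the model's distribution restricted to constraint-satisfying outputs. The brute-force sketch below computes it by enumeration for an invented 'exactly-one' constraint; the paper computes it tractably with logical circuits.

```python
# A brute-force toy: entropy of a factorized bit distribution restricted to
# assignments satisfying a symbolic constraint (here: exactly one bit set).
# Enumeration is exponential and purely illustrative.
import itertools
import math

def constrained_entropy(p_bits, satisfies):
    """p_bits: independent P(bit_i = 1); satisfies: predicate over tuples."""
    probs = []
    for bits in itertools.product([0, 1], repeat=len(p_bits)):
        if satisfies(bits):
            probs.append(math.prod(p if b else 1 - p
                                   for p, b in zip(p_bits, bits)))
    mass = sum(probs)
    return -sum((q / mass) * math.log(q / mass) for q in probs if q > 0)

print(constrained_entropy([0.9, 0.2, 0.1], lambda bits: sum(bits) == 1))
```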
Neural Programmer: Inducing Latent Programs with Gradient Descent
[article]
2016
arXiv
pre-print
However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. ...
In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. ...
Acknowledgements We sincerely thank Greg Corrado, Andrew Dai, Jeff Dean, Shixiang Gu, Andrew McCallum, and Luke Vilnis for their suggestions and the Google Brain team for the support. ...
arXiv:1511.04834v3
fatcat:h4rbj7uhvrc5bi2z7qyrgv5yfm
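The differentiability trick at its heart is a soft choice among operations; a minimal sketch follows, with a toy op inventory rather than the paper's set.

```python
# A minimal sketch of Neural Programmer's soft operation selection: rather
# than discretely picking one op, mix all op outputs with softmax weights so
# the controller stays end-to-end differentiable.
import torch

OPS = [torch.sum, torch.max, torch.min]  # toy arithmetic op inventory

def soft_select(op_logits: torch.Tensor, column: torch.Tensor) -> torch.Tensor:
    """Differentiable 'choice' of an op applied to a table column."""
    weights = torch.softmax(op_logits, dim=-1)
    outputs = torch.stack([op(column) for op in OPS])
    return (weights * outputs).sum()

column = torch.tensor([3.0, 7.0, 5.0])
print(soft_select(torch.tensor([4.0, 0.1, 0.1]), column))  # close to sum = 15
```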
On Incorporating Semantic Prior Knowledge in Deep Learning Through Embedding-Space Constraints
[article]
2019
arXiv
pre-print
We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions. ...
Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing ...
They used it for data augmentation while ensuring that all generated versions lead to the same answer, i.e. enforcing cycle consistency. ...
arXiv:1909.13471v2
fatcat:ccmpn7grkzakfc3nb2tdkzksma
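The "strict inclusion" contrasts with auxiliary losses; one way to read it is as a hard tying of embeddings within an equivalence class, sketched below (my simplification of the idea, not the paper's exact construction).

```python
# A sketch of a hard embedding-space constraint for an equivalence relation:
# every question in a known equivalence class is mapped to one shared
# embedding (the class mean) before the answer head, so equivalence holds by
# construction instead of being merely encouraged by an auxiliary loss.
import torch

def tie_equivalent(embeddings: torch.Tensor, group_ids: torch.Tensor):
    """embeddings: [n, d]; group_ids: [n] equivalence-class id per question."""
    tied = embeddings.clone()
    for g in group_ids.unique():
        mask = group_ids == g
        tied[mask] = embeddings[mask].mean(dim=0)
    return tied

emb = torch.randn(4, 8)
print(tie_equivalent(emb, torch.tensor([0, 0, 1, 1])).shape)  # [4, 8]
```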
Showing results 1 — 15 out of 29,167 results