1,761 Hits in 4.9 sec

Learning Explainable Linguistic Expressions with Neural Inductive Logic Programming for Sentence Classification

Prithviraj Sen, Marina Danilevsky, Yunyao Li, Siddhartha Brahma, Matthias Boehm, Laura Chiticariu, Rajasekar Krishnamurthy
2020 Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
We present RuleNN, a neural network architecture for learning transparent models for sentence classification.  ...  The models are in the form of rules expressed in first-order logic, a dialect with well-defined, human-understandable semantics.  ...  Lasecki, Eser Kandogan (for help building the UI which made the user study possible), Diman Ghazi, Poornima Chozhiyath Raman, Ramiya Venkatachalam, Vinitha Yaski and Sneha Srinivasan (for help with the  ... 
doi:10.18653/v1/2020.emnlp-main.345 fatcat:44sq35plbbd43g7ducts4owf7q

HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop

Prithviraj Sen, Yunyao Li, Eser Kandogan, Yiwei Yang, Walter Lasecki
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations  
We demonstrate HEIDL, a prototype HITL-ML system that exposes the machine-learned model through high-level, explainable linguistic expressions formed of predicates representing the semantic structure of text  ...  In HEIDL, the human's role is elevated from simply evaluating model predictions to interpreting and even updating the model logic directly, by enabling interaction with the rule predicates themselves.  ...  Two powerful and extremely general formulations that can learn virtually any kind of logic program from noisy labeled data, including linguistic expressions for classification, are TensorLog and ∂ILP  ... 
doi:10.18653/v1/p19-3023 dblp:conf/acl/SenLKYL19 fatcat:dyk3ntokeffxpkwiytu6np4bha

Three levels of inductive inference [chapter]

Peter Gärdenfors
1995 Studies in Logic and the Foundations of Mathematics  
My work with this article has been supported by the Swedish Council for Research in the Humanities and Social Sciences.  ...  Churchland (1986) and Gärdenfors (1992) for a discussion of the conflict between the two approaches. Peter Williams, and the Cognitive Science group in Lund for helpful discussions.  ...  They then define a sentence to be an inductive conclusion if and only if (1) the hypothesis is consistent with the data and (2) the hypothesis explains the data in the sense that  ... 
doi:10.1016/s0049-237x(06)80055-8 fatcat:t5fyqgr4lfa2xgdsvzpom2m63u

Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually) [article]

Alex Warstadt, Yian Zhang, Haau-Sing Li, Haokun Liu, Samuel R. Bowman
2020 arXiv   pre-print
We find that models can learn to represent linguistic features with little pretraining data, but require far more data to learn to prefer linguistic generalizations over surface ones.  ...  We conclude that while self-supervised pretraining is an effective way to learn helpful inductive biases, there is likely room to improve the rate at which models learn which features matter.  ...  Deep Learning using Latent Structure), by Intuit, Inc., and in-kind support by the NYU High-Performance Computing Center and by NVIDIA Corporation (with the donation of a Titan V GPU).  ... 
arXiv:2010.05358v1 fatcat:ijwvxsrhq5elnfkwys2qgz5ooe

Implementation and Evaluation of Evolutionary Connectionist Approaches to Automated Text Summarization

2010 Journal of Computer Science  
These three approaches to text summarization are based on semantic nets, fuzzy logic and evolutionary programming, respectively. All three are approaches to achieving connectionism.  ...  Conclusion: Our first approach used WordNet, a lexical database for English.  ...  Figure 1 explains a typical connectionist learning environment.  ... 
doi:10.3844/jcssp.2010.1366.1376 fatcat:o4wdtgmhozfwvc7s4f4ncaxx4e

Machine learning of syntactic parse trees for search and classification of text

Boris Galitsky
2013 Engineering applications of artificial intelligence  
We build an open-source toolkit which implements deterministic learning to support search and text classification tasks.  ...  Nearest neighbor machine learning is then applied to relate a sentence to a semantic class.  ...  Acknowledgements We are grateful to our colleagues SO Kuznetsov, B Kovalerchuk and others for valuable discussions, to the anonymous reviewers for their suggestions.  ... 
doi:10.1016/j.engappai.2012.09.017 fatcat:2o7p5nhixbb2zl363ppqx4jqgq

A Survey on Neural-symbolic Systems [article]

Dongran Yu, Bo Yang, Dayou Liu, Hui Wang
2021 arXiv   pre-print
Combining the fast computation ability of neural systems and the powerful expression ability of symbolic systems, neural-symbolic systems can perform effective learning and reasoning in multi-domain tasks  ...  In this case, an ideal intelligent system, a neural-symbolic system with high perceptual and cognitive intelligence through powerful learning and reasoning capabilities, has gained growing interest in the  ...  [38] proposed neural logic inductive learning (NLIL), which can reason over complex logic rules (such as tree and conjunctive rules), explaining the patterns hidden in data through learned FOL, and  ... 
arXiv:2111.08164v1 fatcat:bc33afiitnb73bmjtrfbdgkwpy

Sentiment Analysis [chapter]

2017 Encyclopedia of Machine Learning and Data Mining  
Inductive logic programming is a research field at the intersection of machine learning and logic programming.  ...  Association for Computational Linguistics, New York, pp 439-446. Zelle JM, Mooney RJ (1996) Learning to parse database queries using inductive logic programming.  ...  It might then be possible to learn, for example, that taking action action234 in state state42 is worth 6.2 and leads to state state654321.  ... 
doi:10.1007/978-1-4899-7687-1_100512 fatcat:ce4yyqo2czftzcx2kbauglh3fu

Spike-Timing-Dependent Plasticity [chapter]

2017 Encyclopedia of Machine Learning and Data Mining  
doi:10.1007/978-1-4899-7687-1_774 fatcat:2jprihjaxfbtpb3ttwuuz3u34y

DeepProbLog: Neural Probabilistic Logic Programming [article]

Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, Luc De Raedt
2018 arXiv   pre-print
We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates.  ...  Our experiments demonstrate that DeepProbLog supports both symbolic and subsymbolic representations and inference: 1) program induction, 2) probabilistic (logic) programming, and 3) (deep) learning from  ...  ; (ii) program induction; and (iii) both probabilistic logic programming and deep learning.  ... 
arXiv:1805.10872v2 fatcat:vfybzoabxfd4vazuyizhfxnqfy
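The core idea in the abstract, a neural predicate whose output probabilities are combined by a logic rule, can be sketched in plain Python. This is not the DeepProbLog API: the stub network, the rule, and all numbers are invented for illustration, loosely following the MNIST-addition example from the paper.

```python
# Toy illustration of a "neural predicate" (not the DeepProbLog API).
# A stub network assigns probabilities to ground atoms digit(Image, D);
# a logic rule combines them: conjunction multiplies probabilities,
# and mutually exclusive proofs of the same query are summed.

def digit_prob(image, d):
    """Stub for a neural classifier returning P(digit(image) = d)."""
    dist = {0: 0.1, 1: 0.7, 2: 0.2}  # invented distribution over 3 digits
    return dist.get(d, 0.0)

def addition_prob(img_a, img_b, total):
    """P(addition(img_a, img_b, total)) under the hypothetical rule
       addition(A, B, S) :- digit(A, DA), digit(B, DB), S is DA + DB."""
    p = 0.0
    for da in range(3):
        for db in range(3):
            if da + db == total:      # one disjoint proof per (da, db) pair
                p += digit_prob(img_a, da) * digit_prob(img_b, db)
    return p

print(round(addition_prob("imgA", "imgB", 2), 3))  # → 0.53
```

In DeepProbLog proper, the gradient of such a query probability flows back through the logic into the network weights, which is what lets the symbolic and subsymbolic parts train jointly.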

Some Remarks on the Nature of Linguistic Theory [and Discussion]

L. J. Cohen, R. B. L. Page
1981 Philosophical Transactions of the Royal Society of London. Biological Sciences  
Instead it concerns the variety and specificity of the learning programs with which one needs to suppose that the neonate is equipped; four different modes of investigating this problem are currently distinguishable  ...  Nevertheless native speakers' intuitions of grammaticalness have provided an indispensable source of data for linguistic description, and recent criticisms of this source have failed to construe correctly  ...  -in which children mature with experience, that a general program for inductive learning can achieve no worthwhile degree of simplification compared with an assembly of more specific programs.  ... 
doi:10.1098/rstb.1981.0136 fatcat:lkwrsnzzmrabdiaycg4zzwc6wi

The redundancy of recursion and infinity for natural language

Erkki Luuk, Hendrik Luuk
2010 Cognitive Processing  
First, we question the need for recursion in human cognitive processing by arguing that a generally simpler and less resource-demanding process, iteration, is sufficient to account for human natural language  ...  We argue that the only motivation for recursion, the infinity in natural language and arithmetic competence, is equally approachable by iteration and recursion.  ...  Erkki Luuk was supported by the target-financed theme No. 0180078s08, the National Programme for Estonian Language Technology project ''Semantic analysis of simple sentences 2'', and the European Regional  ... 
doi:10.1007/s10339-010-0368-6 pmid:20652723 fatcat:wxgfyb7f4nbkxl66ghzpqxnipa
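The equivalence the authors appeal to is easy to see in miniature: the center-embedded pattern a^n b^n, a textbook motivation for recursion, can be generated just as well by a loop. Both functions below are illustrative only.

```python
# The paper's claim in miniature: iteration can stand in for recursion
# when generating the center-embedded pattern a^n b^n.

def nested_recursive(n):
    """Recursion: embed a new a...b pair around the smaller result."""
    if n == 0:
        return ""
    return "a" + nested_recursive(n - 1) + "b"

def nested_iterative(n):
    """Iteration: wrap the string n times in a loop, no call stack needed."""
    s = ""
    for _ in range(n):
        s = "a" + s + "b"
    return s

print(nested_recursive(3), nested_iterative(3))  # both give "aaabbb"
```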

A Framework for Employee Appraisals Based on Inductive Logic Programming and Data Mining Methods [chapter]

Darah Aqel, Sunil Vadera
2013 Lecture Notes in Computer Science  
There are many machine learning methods, such as neural networks (McCulloch and Pitts, 1943), decision tree induction (Quinlan, 1993), and inductive logic programming (Muggleton and De Raedt, 1994).  ...  Inductive Logic Programming: Inductive logic programming (ILP) (Muggleton and De Raedt, 1994; Muggleton, 1999; De Raedt, 2008)  ...  negative examples subset N.  ...  Appendix A, A1: The Grammar Rules Learned by ALEPH from the First Corpus. The following presents the grammar rules for SMART objectives learned by ALEPH from the corpus of objectives related to the sales  ... 
doi:10.1007/978-3-642-38824-8_49 fatcat:3bcsstk5tnhobijhl2gmpdi2jy

Symbolic, Conceptual and Subconceptual Representations [chapter]

Peter Gärdenfors
1997 Human and Machine Perception  
Again, conceptual representations should not be seen as competing with symbolic or connectionistic representations. Rather, the three kinds can be seen as three levels of  ...  The other goal is constructive: By building artifacts like chess-playing programs, robots, animats, etc, one attempts to construct systems that can solve various cognitive tasks.  ...  This kind of level problem is ubiquitous in applications of neural networks for learning purposes.  ... 
doi:10.1007/978-1-4615-5965-8_18 fatcat:rwvmbjizijgmxmhfwhmt4oxbiy

Inductive Logic Programming: Issues, results and the challenge of Learning Language in Logic

Stephen Muggleton
1999 Artificial Intelligence  
Inductive Logic Programming (ILP) is the area of AI which deals with the induction of hypothesised predicate definitions from examples and background knowledge.  ...  Logic programs are used as a single representation for examples, background knowledge and hypotheses.  ...  This work was supported partly by the Esprit Long Term Research Action ILP II (project 20237), EPSRC grant GR/K57985 on Experiments with Distribution-based Machine Learning and an EPSRC Advanced Research  ... 
doi:10.1016/s0004-3702(99)00067-3 fatcat:6fbihina7zba3neogm7knq45qe
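The definition in the abstract, inducing a predicate definition from examples and background knowledge, can be illustrated with a deliberately naive generate-and-test learner. This is a toy, not Muggleton's Progol or ALEPH; all predicates, argument patterns, and examples are invented for the sketch.

```python
from itertools import combinations

# Deliberately naive generate-and-test ILP (a toy, not Progol/ALEPH):
# search for a conjunctive body over variables X, Y that covers every
# positive example of daughter(X, Y) and no negative one.

background = {
    "parent": {("ann", "mary"), ("tom", "eve"), ("ann", "bob")},
    "female": {("mary",), ("eve",), ("ann",)},
    "male":   {("tom",), ("bob",)},
}
positives = {("mary", "ann"), ("eve", "tom")}   # daughter(X, Y) holds
negatives = {("bob", "ann"), ("ann", "mary")}   # daughter(X, Y) fails

# Candidate body literals: a predicate plus an argument pattern.
candidates = [
    ("parent", ("Y", "X")), ("parent", ("X", "Y")),
    ("female", ("X",)), ("female", ("Y",)), ("male", ("X",)),
]

def holds(literal, x, y):
    pred, args = literal
    ground = tuple(x if a == "X" else y for a in args)
    return ground in background[pred]

def covers(body, example):
    return all(holds(lit, *example) for lit in body)

def learn():
    """Return the shortest body consistent with all examples."""
    for n in range(1, len(candidates) + 1):
        for body in combinations(candidates, n):
            if all(covers(body, e) for e in positives) and \
               not any(covers(body, e) for e in negatives):
                return body
    return None

body = learn()
print("daughter(X,Y) :-",
      ", ".join(f"{p}({','.join(a)})" for p, a in body))
# → daughter(X,Y) :- parent(Y,X), female(X)
```

Real ILP systems replace this exhaustive search with guided refinement (e.g. inverse entailment over a bottom clause), but the specification, consistency with positives and exclusion of negatives relative to background knowledge, is exactly the one the abstract states.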