
Artificial grammar recognition using spiking neural networks

Philip Cavaco, Baran Çürüklü, Karl Petersson
2009 BMC Neuroscience  
Building on work in [1], the model is designed to categorize symbol strings as belonging to a Reber grammar [2].  ...  Discussion: The results from two strings are presented here, one belonging to the grammar ('#MTVT#') and one not belonging to the grammar ('#MTRT#').  ... 
doi:10.1186/1471-2202-10-s1-p352 fatcat:p2flyrpixzgbbgvl2pjfjw7rl4
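The snippet's example strings make the task concrete: deciding membership of a symbol string in a regular (Reber-style) grammar amounts to running the string through a finite automaton. A minimal Python sketch of such a membership check follows; the transition table is invented for illustration (it is not the grammar used by Cavaco et al.), but it is built so that '#MTVT#' is accepted and '#MTRT#' is rejected, matching the two example strings above.

```python
# Minimal sketch (not the exact grammar from the paper above): checking whether
# a symbol string belongs to a regular grammar by running it through a DFA.
# The transition table is a made-up illustrative grammar.

DFA = {
    # state: {symbol: next_state}
    "q0": {"#": "q1"},
    "q1": {"M": "q2"},
    "q2": {"T": "q3", "V": "q3"},
    "q3": {"V": "q4", "T": "q4"},
    "q4": {"T": "q5"},
    "q5": {"#": "q_acc"},
}
ACCEPTING = {"q_acc"}

def in_grammar(string: str) -> bool:
    """Return True if the string is accepted by the DFA above."""
    state = "q0"
    for symbol in string:
        state = DFA.get(state, {}).get(symbol)
        if state is None:          # no transition: reject immediately
            return False
    return state in ACCEPTING

if __name__ == "__main__":
    print(in_grammar("#MTVT#"))   # True for this toy DFA
    print(in_grammar("#MTRT#"))   # False: 'R' has no transition here
```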

Connecting First and Second Order Recurrent Networks with Deterministic Finite Automata [article]

Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles
2019 arXiv   pre-print
Our evaluation shows that the unified recurrent network has improved performance in learning grammars, and demonstrates comparable performance on a real-world dataset with more complicated models.  ...  We introduce an entropy value that categorizes all regular grammars into three classes with different levels of complexity, and show that several existing recurrent networks match grammars from either  ...  On each grammar, we varied the sizes of the hidden layer of SRN in {10, 30, 100}.  ... 
arXiv:1911.04644v1 fatcat:4e6dgwflrvevpnxs4tbv3qahfy

Page 2064 of Mathematical Reviews Vol. , Issue 99c [page]

1991 Mathematical Reviews  
Summary: “In a parallel communicating grammar system, several grammars work together, synchronously, on their own sentential forms, and communicate on request.  ...  Moreover, we allow the communicated string to be a part (any one, a prefix, a maximal or a minimal one, etc.) of the string of the communicating component.  ... 
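As a rough illustration of the communication step the summary describes, the sketch below shows one request being served in a toy parallel communicating grammar system, with a prefix of the source component's sentential form handed to the requester. The dictionary representation and the query-symbol naming convention are invented for this sketch.

```python
# Illustrative sketch only (terminology from the review above, details invented):
# each component keeps its own sentential form; on a communication request, a
# component receives part of another component's string. Here the communicated
# part is a prefix, one of the variants mentioned in the summary.

def communicate_prefix(forms: dict, requester: str, source: str, length: int) -> dict:
    """Replace the requester's query symbol with a prefix of the source's form."""
    prefix = forms[source][:length]
    updated = dict(forms)
    # 'Q_<source>' stands for the query symbol asking for the source's string;
    # the symbol name is a made-up convention for this sketch.
    updated[requester] = forms[requester].replace(f"Q_{source}", prefix)
    return updated

forms = {"G1": "aQ_G2b", "G2": "ccdd"}
print(communicate_prefix(forms, requester="G1", source="G2", length=2))
# {'G1': 'accb', 'G2': 'ccdd'}
```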

A Comparative Study of Rule Extraction for Recurrent Neural Networks [article]

Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
2018 arXiv   pre-print
On grammars of lower complexity, most recurrent networks obtain desirable extraction performance.  ...  Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars.  ...  These models were selected based on whether they were frequently adopted either in previous work on DFA extraction or in recent work on processing sequential data.  ... 
arXiv:1801.05420v2 fatcat:bg7tmfevxzff3f4dnp6ff7tw74

An Entropy Metric for Regular Grammar Classification and Learning with Recurrent Neural Networks

Kaixuan Zhang, Qinglong Wang, C. Lee Giles
2021 Entropy  
To obtain a better understanding of the internal structures of regular grammars and their corresponding complexity, we focus on categorizing regular grammars by using both theoretical analysis and empirical  ...  Based on the entropy metric, we categorized regular grammars into three disjoint subclasses: the polynomial, exponential and proportional classes.  ...  Section 2 provides preliminary and background material on recurrent neural networks and regular grammars. Section 3 surveys relevant work on complexity and classification of regular grammars.  ... 
doi:10.3390/e23010127 pmid:33478020 fatcat:b3uctqmskzegvdqlk2f4uu7i2q
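The abstract's three complexity classes are defined via an entropy metric; without reproducing the paper's exact definition, one simple way to get a feel for the polynomial-versus-exponential distinction is to count, by dynamic programming over a DFA, how many strings of each length a grammar accepts. The sketch below does that for two toy grammars; it illustrates the general idea, not the authors' metric.

```python
# Hedged illustration (not necessarily the exact entropy definition from the
# paper above): count how many strings of each length a grammar's DFA accepts;
# the count can grow polynomially or exponentially in the length.

from collections import defaultdict

def count_accepted(dfa, start, accepting, alphabet, max_len):
    """Number of accepted strings for every length 0..max_len, by dynamic programming."""
    counts = []
    dist = {start: 1}                       # how many strings of length L reach each state
    for length in range(max_len + 1):
        counts.append(sum(n for q, n in dist.items() if q in accepting))
        nxt = defaultdict(int)
        for state, n in dist.items():
            for sym in alphabet:
                target = dfa.get(state, {}).get(sym)
                if target is not None:
                    nxt[target] += n
        dist = nxt
    return counts

# Toy DFA for Tomita grammar 1 (all-1 strings): exactly one accepted string per length.
tomita1 = {"A": {"1": "A"}}
print(count_accepted(tomita1, "A", {"A"}, "01", 5))     # [1, 1, 1, 1, 1, 1]

# Toy DFA for "strings not containing 00": the count grows like Fibonacci (exponential).
no_00 = {"S": {"1": "S", "0": "Z"}, "Z": {"1": "S"}}
print(count_accepted(no_00, "S", {"S", "Z"}, "01", 5))  # [1, 2, 3, 5, 8, 13]
```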

Generalized LR Parsing for Grammars with Contexts [chapter]

Mikhail Barash, Alexander Okhotin
2015 Lecture Notes in Computer Science  
The Generalized LR parsing algorithm for context-free grammars is notable for having a decent worst-case running time (cubic in the length of the input string), as well as much better performance on "good  ...  Okhotin, "An extension of context-free grammars with one-sided context specifications", Inform.  ...  On an LR(k) grammar, a GLR parser always works in linear time; it may work slower on other grammars, though, when carefully implemented, its running time is at most cubic in the length of the input [4  ... 
doi:10.1007/978-3-319-20297-6_5 fatcat:rmpromqx5varhjhqkl7g2jayd4

Artificial grammar recognition using two spiking neural networks

Philip Cavaco, Baran Çürüklü, Karl Magnus Petersson
2009 Frontiers in Neuroinformatics  
Future work to improve the performance of the networks is discussed.  ...  In this paper we explore the feasibility of artificial (formal) grammar recognition (AGR) using spiking neural networks.  ...  ACKNOWLEDGEMENTS We thank Julia Uddén for helpful comments on earlier drafts of this manuscript and Christian Forkstam for support throughout the project.  ... 
doi:10.3389/conf.neuro.11.2009.08.096 fatcat:ue6kzuxd7nbtdbjheeuy6el4sa

Grammar Filtering For Syntax-Guided Synthesis [article]

Kairo Morton, William Hallahan, Elven Shum, Ruzica Piskac, Mark Santolucito
2020 arXiv   pre-print
On the other hand, the machine learning approaches utilize the fact that when working with program code, it is possible to generate arbitrarily large training datasets.  ...  At its core, the automated reasoning approach relies on highly domain specific knowledge of programming languages.  ...  Acknowledgments This work was supported in part by NSF grants CCF-1302327, CCF-1715387, and CCF-1553168.  ... 
arXiv:2002.02884v1 fatcat:6wsufsxfknccdppr4ps56fy4mi

Page 647 of Mathematical Reviews Vol. , Issue 2000a [page]

2000 Mathematical Reviews  
working on layered strings.  ...  This operation is investigated as an abstract operation on strings, then it is used in building a variant of grammar systems with the component grammars working on the layers of an array of strings  ... 

Grammar Filtering for Syntax-Guided Synthesis

Kairo Morton, William Hallahan, Elven Shum, Ruzica Piskac, Mark Santolucito
2020 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference  
On the other hand, the machine learning approaches utilize the fact that when working with program code, it is possible to generate arbitrarily large training datasets.  ...  At its core, the automated reasoning approach relies on highly domain specific knowledge of programming languages.  ...  We evaluated GRT by running it on the SyGuS Competition Benchmarks from 2019 in the PBE Strings track.  ... 
doi:10.1609/aaai.v34i02.5522 fatcat:dimp6tb27bhkzm3baymjh3e4aa
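The core idea named in the title, filtering a SyGuS grammar before synthesis, can be sketched independently of GRT's actual implementation: a learned model scores each production of the problem grammar, low-scoring productions are dropped, and the reduced grammar is handed to an ordinary SyGuS solver. The grammar encoding and the stand-in predictor below are hypothetical, chosen only to make the sketch runnable.

```python
# Sketch of the general idea only (the representation and the predictor are
# hypothetical, not GRT's actual interface): prune grammar productions that a
# learned model predicts are unnecessary for a given synthesis problem, then
# pass the smaller grammar to an off-the-shelf SyGuS solver.

def filter_grammar(grammar: dict, keep_probability, threshold: float = 0.5) -> dict:
    """Keep only productions whose predicted usefulness exceeds the threshold."""
    filtered = {}
    for nonterminal, productions in grammar.items():
        kept = [p for p in productions if keep_probability(nonterminal, p) >= threshold]
        # Never leave a nonterminal without productions; fall back to the full set.
        filtered[nonterminal] = kept or productions
    return filtered

# Toy PBE-strings grammar and a stand-in for the learned predictor.
grammar = {
    "Start": ["str.++ Start Start", "str.substr Start Int Int",
              "str.replace Start Start Start", "x"],
    "Int":   ["0", "1", "str.len Start", "str.indexof Start Start Int"],
}
predictor = lambda nt, prod: 0.9 if "replace" not in prod else 0.1
print(filter_grammar(grammar, predictor))
```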

An Empirical Evaluation of Rule Extraction from Recurrent Neural Networks [article]

Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
2018 arXiv   pre-print
Here, we study the extraction of rules from second-order recurrent neural networks trained to recognize the Tomita grammars.  ...  They also appear to be a standard for much work on learning grammars.  ...  Most of this work can be viewed as roughly following one general DFA extraction process: 1. Collect the hidden activations of the RNN when processing every string at every time step.  ... 
arXiv:1709.10380v5 fatcat:3yqz4aky7fet7h2nirzdogfrty
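Step 1 of the extraction process quoted above, collecting hidden activations for every string at every time step, is usually followed by quantizing or clustering those activations into candidate DFA states. A self-contained sketch of that first stage is below; the RNN is a small random network rather than a trained model, and the binning scheme is just one common choice.

```python
# Minimal sketch of the first steps of quantization-based DFA extraction
# (the RNN here is a small random network, not a trained model from the paper):
# run the network over every string, record the hidden state at every time step,
# and quantize each hidden unit into bins so that each bin combination becomes a
# candidate DFA state.

import numpy as np

rng = np.random.default_rng(0)
HIDDEN, ALPHABET = 4, "01"
W = {sym: rng.normal(size=(HIDDEN, HIDDEN)) for sym in ALPHABET}   # per-symbol recurrence
b = rng.normal(size=HIDDEN)

def hidden_trajectory(string):
    """Hidden state after each symbol of the string (simple first-order RNN)."""
    h = np.zeros(HIDDEN)
    states = []
    for sym in string:
        h = np.tanh(W[sym] @ h + b)
        states.append(h.copy())
    return states

def quantize(h, bins=3):
    """Map a hidden vector in (-1, 1) to a tuple of bin indices (a candidate DFA state)."""
    return tuple(np.digitize(h, np.linspace(-1, 1, bins + 1)[1:-1]))

strings = ["0", "01", "0110", "111", "1010"]
observed_states = {quantize(h) for s in strings for h in hidden_trajectory(s)}
print(f"{len(observed_states)} candidate DFA states from {len(strings)} strings")
```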

Learning Subsequential Structure in Simple Recurrent Networks

David Servan-Schreiber, Axel Cleeremans, James L. McClelland
1988 Neural Information Processing Systems  
When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar.  ...  Cluster analyses of the hidden-layer patterns of activation showed that they encode prediction-relevant information about the entire path traversed through the network.  ...  The small finite-state grammar (Reber, 1967). Training: On each of 60,000 training trials, a string was generated from the grammar, starting with 'B'.  ... 
dblp:conf/nips/Servan-SchreiberCM88 fatcat:veussj7ykncrxcbd7auow5jc44
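The training procedure described in the snippet, drawing strings from the Reber grammar starting with 'B', amounts to a random walk over the grammar's transition diagram. A short generator in that spirit is sketched below, using the usual textbook form of the Reber (1967) grammar; the 60,000-trial training loop itself is omitted.

```python
# Sketch of the training-string generation the snippet describes: random walks
# through the Reber grammar's transition diagram, always starting with 'B' and
# ending with 'E'.

import random

# node -> list of (emitted symbol, next node); node 6 is the end (emit 'E').
REBER = {
    1: [("T", 2), ("P", 3)],
    2: [("S", 2), ("X", 4)],
    3: [("T", 3), ("V", 5)],
    4: [("X", 3), ("S", 6)],
    5: [("P", 4), ("V", 6)],
}

def generate_reber_string(rng: random.Random) -> str:
    symbols, node = ["B"], 1
    while node != 6:
        sym, node = rng.choice(REBER[node])
        symbols.append(sym)
    symbols.append("E")
    return "".join(symbols)

rng = random.Random(42)
# Three grammatical strings; each starts with 'B' and ends with 'E' (e.g. 'BTXSE').
print([generate_reber_string(rng) for _ in range(3)])
```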

Higher Order Recurrent Networks and Grammatical Inference

C. Lee Giles, Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee, Dong Chen
1989 Neural Information Processing Systems  
A higher order single layer recursive network easily learns to simulate a deterministic finite state machine and recognize regular grammars.  ...  a gradient descent learning rule derived from the common error function, the hybrid network learns to effectively use the stack actions to manipulate the stack memory and to learn simple context-free grammars  ...  For the parenthesis grammar, the net architecture consisted of a 2nd order fully interconnected single layer net with 3 state neurons, 3 input neurons, and 2 action neurons (one for push & one for pop  ... 
dblp:conf/nips/GilesSCLC89 fatcat:hwkmwvmo5fe67czrnuzfiji3gq
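The architecture described in the snippet is a second-order network: each next-state unit is driven by products of current state units and input units through a three-way weight tensor. The sketch below shows only that state update (with the external stack and the two push/pop action neurons left out), using the 3-state, 3-input sizes mentioned above; the weights are random rather than trained.

```python
# Sketch of the second-order state update such networks use (a simplified view;
# the external stack from the paper above is omitted): each new state unit
# depends on products of the current state units and the one-hot input symbol.

import numpy as np

rng = np.random.default_rng(0)
N_STATE, N_INPUT = 3, 3
W = rng.normal(scale=0.5, size=(N_STATE, N_STATE, N_INPUT))  # third-order weight tensor

def step(state: np.ndarray, one_hot_input: np.ndarray) -> np.ndarray:
    """s_i(t+1) = sigmoid( sum_{j,k} W[i, j, k] * s_j(t) * x_k(t) )."""
    pre = np.einsum("ijk,j,k->i", W, state, one_hot_input)
    return 1.0 / (1.0 + np.exp(-pre))

state = np.array([1.0, 0.0, 0.0])        # conventional start state
for symbol_index in [0, 1, 1, 2]:        # a toy input sequence over a 3-symbol alphabet
    x = np.eye(N_INPUT)[symbol_index]
    state = step(state, x)
print(state)
```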

First-Order Recurrent Neural Networks and Deterministic Finite State Automata

Peter Manolios, Robert Fanelli
1994 Neural Computation  
We consider the Tomita grammars over {0,1}*, since work in this area has tended to focus on these grammars.  ...  Note that layers H1 and H2 of Figure 1c have the same connections and can be combined into one hidden layer, and similarly, layers X and Y can be combined into a single state-output layer.  ... 
doi:10.1162/neco.1994.6.6.1155 fatcat:6fqclnuzivg2lkuk7xtzvvkiaa
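For reference, several of the Tomita grammars over {0,1}* that such studies use can be written directly as membership predicates. The definitions below follow the commonly cited versions of grammars 1, 2, 4 and 7; the remaining grammars are omitted here rather than risk misstating them.

```python
# A few of the (commonly cited) Tomita grammars as plain membership predicates
# over {0,1}*, for reference; grammars 3, 5 and 6 are omitted here.

import re

TOMITA = {
    1: lambda s: set(s) <= {"1"},                         # 1*
    2: lambda s: re.fullmatch(r"(10)*", s) is not None,   # (10)*
    4: lambda s: "000" not in s,                          # no run of three or more 0s
    7: lambda s: re.fullmatch(r"0*1*0*1*", s) is not None,
}

for g, accept in TOMITA.items():
    print(g, accept("10"), accept("000"))
```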

Confluent Orthogonal Drawings of Syntax Diagrams [article]

Michael J. Bannister and David A. Brown and David Eppstein
2015 arXiv   pre-print
We provide a pipeline for generating syntax diagrams (also called railroad diagrams) from context-free grammars.  ...  graphical representation of a context-free language, which we formalize abstractly as a set of mutually recursive nondeterministic finite automata and draw by combining elements from the confluent drawing, layered  ...  A context-free grammar for the language of S-expressions in LISP 1.5 [3]. a nonterminal symbol A in the current string with a string β such that A → β is a production rule in the grammar.  ... 
arXiv:1509.00818v1 fatcat:otyi7amxgrd4jofbpt4kz523xm
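The snippet's closing fragment describes a single derivation step: replacing a nonterminal A in the current sentential form with some β where A → β is a production. A small sketch of that step is below; the toy S-expression grammar is a simplification invented for the example, not the LISP 1.5 grammar the paper cites.

```python
# Small sketch of the derivation step the snippet describes: replace one
# occurrence of a nonterminal A in the current sentential form with some β
# where A → β is a production of the grammar. The toy grammar below is a
# simplified S-expression grammar, not the LISP 1.5 grammar from the paper.

GRAMMAR = {
    "S": ["A", "( L )"],
    "L": ["S", "S L"],
    "A": ["x", "y"],
}

def derive_step(sentential_form: list, grammar: dict, choice: int = 0) -> list:
    """Rewrite the leftmost nonterminal using the chosen production."""
    for i, symbol in enumerate(sentential_form):
        if symbol in grammar:                       # found the leftmost nonterminal
            beta = grammar[symbol][choice].split()
            return sentential_form[:i] + beta + sentential_form[i + 1:]
    return sentential_form                          # nothing left to rewrite

form = ["S"]
for pick in [1, 1, 0, 0, 0]:                        # S ⇒ ( L ) ⇒ ( S L ) ⇒ ...
    form = derive_step(form, GRAMMAR, pick)
    print(" ".join(form))
```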
Showing results 1–15 out of 19,909 results