
General Value Function Networks

Matthew Schlegel, Andrew Jacobsen, Zaheer Abbas, Andrew Patterson, Adam White, Martha White
2021 The Journal of Artificial Intelligence Research  
We formulate a novel RNN architecture, called a General Value Function Network (GVFN), where each internal state component corresponds to a prediction about the future represented as a value function.  ...  A general purpose strategy for state construction is to learn the state update using a Recurrent Neural Network (RNN), which updates the internal state using the current internal state and the most recent  ...  General value functions provide a rich language for encoding predictive knowledge.  ... 
doi:10.1613/jair.1.12105 fatcat:5afww5z5rzerdjwkqxsagav7xm
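
The core idea in the snippet lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration (the class name, tanh nonlinearity, and TD(0)-style update are assumptions for brevity, not the paper's exact algorithm) of a recurrent state in which each component is trained as a discounted prediction of a cumulant signal:

```python
import numpy as np

# Hypothetical sketch of the GVFN idea: each component of the recurrent
# state is a value-function prediction of a discounted cumulant, trained
# with a semi-gradient TD(0)-style update rather than plain BPTT.
class GVFNSketch:
    def __init__(self, n_obs, n_gvfs, gammas, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_gvfs, n_gvfs + n_obs))
        self.gammas = np.asarray(gammas)   # one discount per GVF
        self.lr = lr
        self.s = np.zeros(n_gvfs)          # predictions double as RNN state
        self.x = None                      # previous input, for the TD update

    def step(self, obs, cumulants):
        x_next = np.concatenate([self.s, obs])
        s_next = np.tanh(self.W @ x_next)  # each unit predicts its GVF
        if self.x is not None:
            # TD(0) error for the prediction made at the previous step
            delta = cumulants + self.gammas * s_next - self.s
            grad = (1.0 - self.s**2)[:, None] * self.x[None, :]
            self.W += self.lr * delta[:, None] * grad
        self.x, self.s = x_next, s_next
        return s_next
```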

General Value Function Networks [article]

Matthew Schlegel, Andrew Jacobsen, Muhammad Zaheer, Andrew Patterson, Adam White, Martha White
2020 arXiv   pre-print
We formulate a novel RNN architecture, called a General Value Function Network (GVFN), where each internal state component corresponds to a prediction about the future represented as a value function.  ...  A general purpose strategy for state construction is to learn the state update using a Recurrent Neural Network (RNN), which updates the internal state using the current internal state and the most recent  ...  [Figure: learning curves over steps (10^4); the (bottom) row shows learning curves for p = 4 for prediction accuracy.]  ... 
arXiv:1807.06763v3 fatcat:4sqz4sncfrc4lk6argt46uwfha

Topologies of strategically formed social networks based on a generic value function—Allocation rule model

Ramasuri Narayanam, Y. Narahari
2011 Social Networks  
We choose a specific value function and a generic allocation rule and derive several interesting topological results in the network formation context.  ...  Our model is based on a well-known model, the value function-allocation rule model.  ...  They basically consider a value function and allocation rule model, where the value function assigns a value to each network and the allocation rule distributes this value among the nodes in the network  ... 
doi:10.1016/j.socnet.2010.10.004 fatcat:qpd4llhy7jg3jnhmfbo2qjcgxm
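
As a toy illustration of the value function-allocation rule pattern described in the snippet (not the paper's specific model), the following sketch assigns a worth to a network and splits it equally among the nodes; the benefit and cost figures are made up:

```python
# Illustrative sketch only: a value function assigns a worth v(g) to each
# network g, and an allocation rule splits that worth among the nodes.
# Here v counts links (benefit minus cost accruing to both endpoints) and
# the allocation divides v(g) equally; both are toy choices.
def value(edges, benefit=1.0, cost=0.3):
    # each edge contributes (benefit - cost) to both of its endpoints
    return 2 * sum(benefit - cost for _ in edges)

def equal_allocation(nodes, edges):
    v = value(edges)
    return {n: v / len(nodes) for n in nodes}

print(equal_allocation([0, 1, 2], [(0, 1), (1, 2)]))  # each node gets ~0.93
```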

Multi-tissue Analysis of Co-expression Networks by Higher-Order Generalized Singular Value Decomposition Identifies Functionally Coherent Transcriptional Modules

Xiaolin Xiao, Aida Moreno-Moral, Maxime Rotival, Leonardo Bottolo, Enrico Petretto, Greg Gibson
2014 PLoS Genetics  
Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously  ...  Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms  ...  Generalized Singular Value Decomposition (GSVD) can be used to identify sub-network structures and for comparative analysis of genomic datasets across two conditions [11, 23] .  ... 
doi:10.1371/journal.pgen.1004006 pmid:24391511 pmcid:PMC3879165 fatcat:ctqcyzeqbnacbeundthcwcvhly
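
For readers unfamiliar with the GSVD machinery mentioned in the snippet, here is a minimal sketch of the classical two-matrix case (assumed background; the paper's contribution is the higher-order, multi-tissue extension) written with SciPy:

```python
import numpy as np
from scipy.linalg import eigh

# Sketch of the classical two-matrix GSVD relation that underlies this
# kind of comparative analysis (not the paper's HO-GSVD algorithm itself):
# for datasets A and B sharing columns, solving (A^T A) x = lam (B^T B) x
# gives a common right basis, where lam >> 1 marks directions dominant in
# A, lam << 1 directions dominant in B, and lam ~ 1 shared structure.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 8))   # condition 1: 40 samples x 8 variables
B = rng.normal(size=(50, 8))   # condition 2: 50 samples x 8 variables

lam, X = eigh(A.T @ A, B.T @ B)               # generalized eigenproblem
shared = np.argsort(np.abs(np.log(lam)))[:2]  # lam near 1 => shared
print("most shared directions (columns of X):", shared)
```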

OGGN: A Novel Generalized Oracle Guided Generative Architecture for Modelling Inverse Function of Artificial Neural Networks [article]

Mohammad Aaftab V, Mansi Sharma
2021 arXiv   pre-print
This paper presents a novel Generative Neural Network Architecture for modelling the inverse function of an Artificial Neural Network (ANN) either completely or partially.  ...  On the other hand, partially modelling the inverse function means generating the values of a subset of features and fixing the remaining feature values.  ...  The values of c_1 and c_2 are decided after checking the range of outputs from the generator network, trained without using constraint functions.  ... 
arXiv:2104.03935v1 fatcat:wcod2xxbhbb5vbvhqqysg6gc4m
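
A rough sketch of the oracle-guided setup described in the abstract might look as follows; the architectures, training loop, and the box-constraint penalty with hypothetical bounds lo/hi are placeholders rather than the paper's exact formulation:

```python
import torch
import torch.nn as nn

# Hedged sketch: freeze a trained forward model f (the "oracle") and train
# a generator G so that f(G(y)) ~ y, i.e. G approximates the inverse of f.
f = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))  # pretrained oracle
for p in f.parameters():
    p.requires_grad_(False)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 3))  # inverse model
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
lo, hi = -1.0, 1.0                     # assumed valid input range for f

for step in range(1000):
    y = torch.rand(64, 1)              # target outputs to invert
    x_hat = G(y)
    loss = nn.functional.mse_loss(f(x_hat), y)
    # penalty keeping generated inputs inside the assumed box [lo, hi]
    loss = loss + 0.1 * ((x_hat - x_hat.clamp(lo, hi)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```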

OGGN: A Novel Generalized Oracle Guided Generative Architecture for Modelling Inverse Function of Artificial Neural Networks

Mohammad Aaftab V, Mansi Sharma
2021 Zenodo  
This paper presents a novel Generative Neural Network Architecture for modelling the inverse function of an Artificial Neural Network (ANN) either completely or partially.  ...  On the other hand, partially modelling the inverse function means generating the values of a subset of features and fixing the remaining feature values.  ...  The values of c_1 and c_2 are decided after checking the range of outputs from the generator network, trained without using constraint functions.  ... 
doi:10.5281/zenodo.5645543 fatcat:ypabzqv5grfrpmhqshbrunadci

Control Double Inverted Pendulum by Reinforcement Learning with Double CMAC Network

Yu Zheng, Siwei Luo, Ziang Lv
2006 18th International Conference on Pattern Recognition (ICPR'06)  
To solve the problem of the tradeoff between generalization and accuracy in reinforcement learning, we represent the state-action value by two CMAC networks with different generalization parameters.  ...  However, function approximation reduces the accuracy of the state value and makes convergence harder.  ...  Usually, the agent adopts the Q value of the accurate CMAC network; the Q value of the generalizing CMAC network is adopted only when the accurate network's Q value still equals its initial value of zero.  ... 
doi:10.1109/icpr.2006.416 dblp:conf/icpr/ZhengLL06 fatcat:dao3kuztxzbcdfvc2a3swlnuze
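
The selection rule quoted in the snippet can be illustrated with a much-simplified stand-in for CMAC (plain 1-D discretization instead of overlapping tilings; all sizes and rates are assumptions):

```python
import numpy as np

# Simplified sketch of the double-network idea: a coarse table generalizes
# broadly, a fine table is more accurate, and the fine estimate is adopted
# only once it has moved off its zero initialization.
class TiledQ:
    def __init__(self, n_tiles, lo=0.0, hi=1.0):
        self.w = np.zeros(n_tiles)
        self.lo, self.hi, self.n = lo, hi, n_tiles

    def _idx(self, s):
        return min(int((s - self.lo) / (self.hi - self.lo) * self.n), self.n - 1)

    def q(self, s):
        return self.w[self._idx(s)]

    def update(self, s, target, lr=0.1):
        i = self._idx(s)
        self.w[i] += lr * (target - self.w[i])

coarse, fine = TiledQ(4), TiledQ(64)   # different generalization widths

def q_value(s):
    # use the accurate (fine) network unless its entry is still untrained
    return fine.q(s) if fine.q(s) != 0.0 else coarse.q(s)
```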

Reformulated radial basis neural networks trained by gradient descent

N.B. Karayiannis
1999 IEEE Transactions on Neural Networks  
The form of the RBFs is determined by a generator function.  ...  Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces  ...  Compared with reformulated RBF networks generated by linear generator functions, reformulated RBF networks generated by exponential generator functions produced a higher percentage of classification errors  ... 
doi:10.1109/72.761725 pmid:18252566 fatcat:fr7sn52odvgjvmn4wfco4dp3fy
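
A common presentation of the reformulated-RBF framework defines the radial response as phi(r) = g0(r)^(1/(1-m)) for a generator function g0 and exponent m > 1; the sketch below contrasts linear and exponential generators under that assumed form, with illustrative parameter values:

```python
import numpy as np

# Sketch under the assumed form phi(r) = g0(r)**(1/(1-m)), r = ||x - c||^2.
# A linear generator g0(r) = a*r + b yields inverse-multiquadric-type RBFs;
# an exponential generator g0(r) = exp(beta*r) yields Gaussian-type RBFs.
def rbf_from_linear_generator(r, a=1.0, b=1.0, m=3.0):
    return (a * r + b) ** (1.0 / (1.0 - m))

def rbf_from_exponential_generator(r, beta=1.0, m=3.0):
    return np.exp(beta * r) ** (1.0 / (1.0 - m))   # == exp(-beta*r/(m-1))

r = np.linspace(0.0, 4.0, 5)   # squared distances from the centre
print(rbf_from_linear_generator(r))
print(rbf_from_exponential_generator(r))
```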

Improving the Generalization Properties of Radial Basis Function Neural Networks

Chris Bishop
1991 Neural Computation  
Too small a value leads to a hidden unit response which is highly localized, making it difficult to generate smooth network functions. At too large a value, the matrix M becomes ill-conditioned.  ...  The minimum value of the test error is close to the value 0.004 obtained by comparing the test data with the original function in equation 4.1, showing that the network has good generalization properties  ... 
doi:10.1162/neco.1991.3.4.579 fatcat:4zlnc4jfaraindc5it7sab26gq
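
The width trade-off described in this snippet is easy to reproduce numerically; the following sketch (with an assumed Gaussian design matrix M standing in for the matrix discussed in the paper) shows the condition number growing as the basis width increases:

```python
import numpy as np

# Numerical illustration of the width trade-off: with Gaussian bases on
# fixed centres, the design matrix M becomes ill-conditioned as the
# width sigma grows, while tiny sigma gives highly localized responses.
centres = np.linspace(0, 1, 10)
x = np.linspace(0, 1, 50)[:, None]

for sigma in (0.02, 0.1, 0.5, 2.0):
    M = np.exp(-(x - centres) ** 2 / (2 * sigma**2))
    print(f"sigma={sigma:>4}: cond(M) = {np.linalg.cond(M):.3e}")
```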

A Generative Network Model of the Human Brain Normal Aging Process

Xiao Liu, Shuaizong Si, Bo Hu, Hai Zhao, Jian Zhu
2020 Symmetry  
We use LNBE, as well as published generative network models, to simulate the aging process of the functional brain network, to construct artificial brain networks, and to reveal the generative mechanisms  ...  We present a novel generative network model of the human functional brain network, which is a hybrid of the local naïve Bayes model and the anatomical similarity correction (LNBE).  ...  By proposing a novel generative network model, LNBE, for human functional brain networks, we simulated the aging process of functional brain networks, generated sufficiently similar artificial brain networks  ... 
doi:10.3390/sym12010091 fatcat:6ft4w5mc2zesfd2wclwvziszgq

STRIP - a strip-based neural-network growth algorithm for learning multiple-valued functions

A. Ngom, I. Stojmenovic, V. Milutinovic
2001 IEEE Transactions on Neural Networks  
We construct two neural networks based on these hidden units and show that they correctly compute the given but arbitrary multiple-valued function.  ...  We consider the problem of synthesizing multiple-valued logic functions by neural networks. A genetic algorithm (GA) which finds the longest strip is described.  ...  synthesizing multiple-valued logic functions by neural networks.  ... 
doi:10.1109/72.914519 pmid:18244379 fatcat:ohwx3vafybeaphciww5ewsm7my

blockmodeling

Miha Matjašič, Marjan Cugmas, Aleš Žiberna
2020 Metodološki zvezki. Advances in methodology and statistics  
The R package blockmodeling implements several approaches for the generalized blockmodeling of binary and valued networks.  ...  This paper presents the R package blockmodeling, which is primarily meant as an implementation of generalized blockmodeling (more broadly, blockmodeling) for valued networks, where the values of the ties  ...  This research was financially supported by the Slovenian Research Agency (http://www.arrs.si) within the research program P5-0168 and the research project J7-8279 (Blockmodeling multilevel and temporal networks  ... 
doi:10.51936/uhir1119 fatcat:3hjupyjgqvelzaoshoc764yfrq

Reinforcement learning of multiple tasks using parametric bias

Leszek Rybicki, Yuuya Sugita, Jun Tani
2009 2009 International Joint Conference on Neural Networks  
While exploring a task, the agent builds its internal model of the environment and approximates a state value function.  ...  This mapping of the task set to parametric bias space can later be used to generate novel behaviors of the agent.  ...  The world model network and value function network are not trained while generating data for the policy network.  ... 
doi:10.1109/ijcnn.2009.5178868 dblp:conf/ijcnn/RybickiST09 fatcat:hh456v66tzbk3j2fgj2nx2t3vy
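
A generic sketch of "approximating a state value function" while exploring, using tabular TD(0) on toy dynamics (none of which is specific to the parametric-bias architecture of this paper):

```python
import numpy as np

# Tabular TD(0) sketch: the agent updates its state-value estimates V
# from experienced transitions. Environment and step sizes are toy
# placeholders chosen only to make the example self-contained.
n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)
rng = np.random.default_rng(0)

s = 0
for t in range(10_000):
    s_next = min(s + rng.integers(0, 2), n_states - 1)  # drift right
    r = 1.0 if s_next == n_states - 1 else 0.0          # goal reward
    V[s] += alpha * (r + gamma * V[s_next] - V[s])      # TD(0) update
    s = 0 if s_next == n_states - 1 else s_next         # reset at goal

print(np.round(V, 2))
```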

RCC Cannot Compute Certain FSA, Even with Arbitrary Transfer Functions

Mark B. Ring
1997 Neural Information Processing Systems  
The proof given here shows that for any finite, discrete transfer function used by the units of an RCC network, there are finite-state automata (FSA) that the network cannot model, no matter how many units  ...  The proof also applies to continuous transfer functions with a finite number of fixed-points, such as sigmoid and radial-basis functions.  ...  transfer function that generates these values.  ... 
dblp:conf/nips/Ring97 fatcat:rsu7fantcjga7huoyjqvmg653q

Improving the Performance of PieceWise Linear Separation Incremental Algorithms for Practical Hardware Implementations [article]

Alejandro Chinea Manrique De Lara, Juan Manuel Moreno Arostegui, Jordi Madrenas, Joan Cabestany
2007 arXiv   pre-print
complexity and generalization capabilities) offered by the networks generated by these incremental models.  ...  This function is evaluated periodically as the network structure evolves and will permit us, as we shall show through exhaustive benchmarks, to considerably improve the performance (measured in terms of network  ...  If the function J_c presents a peak value, then calculate the current value of the function I.  ... 
arXiv:0712.3654v1 fatcat:txvwcpwplzalreoaaifqvzk2vu
Showing results 1 — 15 out of 3,513,930 results