31,818 Hits in 3.4 sec

Generalized perceptron learning rule and its implications for photorefractive neural networks

Chau-Jern Cheng, Pochi Yeh, Ken Yuh Hsu
1994 Journal of the Optical Society of America. B, Optical physics  
A mathematical proof is given that shows the conditional convergence of the learning algorithm.  ...  We consider the properties of a generalized perceptron learning network, taking into account the decay or the gain of the weight vector during the training stages.  ...  ACKNOWLEDGMENTS The research is supported by the National Science Council, Taiwan  ... 
doi:10.1364/josab.11.001619 fatcat:q4yonpdoy5detede4niqcirrya
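As a rough illustration of a perceptron rule in which the weight vector decays or grows during training, the sketch below applies a multiplicative factor before each mistake-driven correction. The decay coefficient, learning rate, and loop structure are illustrative assumptions, not the rule analyzed in the paper.

```python
import numpy as np

def generalized_perceptron_train(X, y, decay=0.98, lr=1.0, epochs=50):
    """Perceptron learning with multiplicative decay (or gain, if decay > 1)
    of the weight vector at every presentation.  Illustrative sketch only."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):            # labels y_i in {-1, +1}
            w *= decay                        # decay/gain of stored weights
            if y_i * np.dot(w, x_i) <= 0:     # misclassified example
                w += lr * y_i * x_i           # standard perceptron correction
    return w
```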

On-line Learning of Dichotomies

N. Barkai, H. Sebastian Seung, Haim Sompolinsky
1994 Neural Information Processing Systems  
The learning curve, or generalization error as a function of P, depends on the schedule at which the learning rate is lowered.  ...  The performance of on-line algorithms for learning dichotomies is studied. In on-line learning, the number of examples P is equivalent to the learning time, since each example is presented only once.  ...  HS is partially supported by the Fund for Basic Research of the Israeli Academy of Arts and Sciences.  ... 
dblp:conf/nips/BarkaiSS94 fatcat:2zmdtrdst5fx7n2opihirf2fvi
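The learning-rate schedule mentioned in this abstract is typically a decreasing sequence such as η_t = η_0/t. A minimal on-line loop with such an annealed rate, where each example is seen exactly once, might look like the following (the mistake-driven update and the specific schedule are assumptions for illustration):

```python
import numpy as np

def online_annealed_perceptron(examples, labels, eta0=1.0):
    """On-line learning with a 1/t learning-rate schedule: each example is
    presented once, and the step size shrinks as training proceeds."""
    w = np.zeros(examples.shape[1])
    for t, (x, label) in enumerate(zip(examples, labels), start=1):
        eta = eta0 / t                        # annealed learning rate
        if label * np.dot(w, x) <= 0:         # mistake-driven update
            w += eta * label * x
    return w
```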

Learning rate and attractor size of the single-layer perceptron

Martin S. Singleton, Alfred W. Hübler
2007 Physical Review E  
We study the simplest possible order-one single-layer perceptron with two inputs, using the delta rule with online learning, in order to derive closed-form expressions for the mean convergence rates.  ...  We also demonstrate that the learning rate is determined by the attractor size, and that the attractors of a single-layer perceptron with N inputs partition R^N.  ...  ACKNOWLEDGMENTS The authors acknowledge fruitful discussions with B. Reznick, G. Foster, and P. Fleck. This research was supported by National Science Foundation Grant No.  ... 
doi:10.1103/physreve.75.026704 pmid:17358448 fatcat:h3oyd5x4s5asbaglc6kfcxx26y
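The delta rule with on-line learning referred to above has the familiar per-example form sketched below; the linear output unit, learning rate, and number of passes are illustrative assumptions rather than the exact setup of the paper.

```python
import numpy as np

def online_delta_rule(samples, targets, eta=0.1, passes=20):
    """Per-example delta-rule training of a single-layer perceptron with a
    linear output: w <- w + eta * (t - w.x) * x."""
    w = np.zeros(samples.shape[1])
    for _ in range(passes):
        for x, t in zip(samples, targets):
            error = t - np.dot(w, x)          # target minus current output
            w += eta * error * x              # delta-rule correction
    return w
```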

Evidence that Incremental Delta-Bar-Delta Is an Attribute-Efficient Linear Learner [chapter]

Harlan D. Harris
2002 Lecture Notes in Computer Science  
The Winnow class of on-line linear learning algorithms [10, 11] was designed to be attribute-efficient.  ...  When learning with many irrelevant attributes, Winnow makes a number of errors that is only logarithmic in the number of total attributes, compared to the Perceptron algorithm, which makes a nearly linear number of errors.  ...  versions of this paper.  ... 
doi:10.1007/3-540-36755-1_12 fatcat:vqyv23s4mjhitfdk3ubtmtb5di
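For contrast with the additive Perceptron update, the standard Winnow rule uses multiplicative updates, which is what makes its mistake bound logarithmic in the number of attributes. A minimal sketch with the textbook threshold and promotion factor (these constants are not taken from the chapter):

```python
import numpy as np

def winnow_train(X, y, alpha=2.0, passes=10):
    """Littlestone's Winnow for boolean inputs (0/1) and labels (0/1):
    promote or demote the weights of active attributes multiplicatively."""
    n = X.shape[1]
    w = np.ones(n)
    theta = n / 2.0                            # standard Winnow threshold
    for _ in range(passes):
        for x, label in zip(X, y):
            pred = 1 if np.dot(w, x) >= theta else 0
            if pred == 0 and label == 1:       # false negative: promote
                w[x == 1] *= alpha
            elif pred == 1 and label == 0:     # false positive: demote
                w[x == 1] /= alpha
    return w
```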

On-line AdaTron learning of unlearnable rules

Jun-ichi Inoue, Hidetoshi Nishimori
1997 Physical review. E, Statistical physics, plasmas, fluids, and related interdisciplinary topics  
We study the on-line AdaTron learning of linearly non-separable rules by a simple perceptron.  ...  Optimization of the learning rate is shown to greatly improve the performance of the AdaTron algorithm, leading to the best possible generalization error for a wide range of the parameter which controls  ...  ACKNOWLEDGMENTS The authors would like to thank Dr. Yoshiyuki Kabashima for helpful suggestions and comments. One of the authors (J.I.) thanks Dr. Siegfried Bös for several useful comments.  ... 
doi:10.1103/physreve.55.4544 fatcat:lbgaefgufrcqnhm6r5sfe3wxsu
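The on-line AdaTron rule scales each correction by the magnitude of the student's local field, and the learning rate is the quantity the paper optimizes. The sketch below is a minimal version; the 1/N scaling and single-pass loop are conventional assumptions from the statistical-mechanics setting, not details taken from the article.

```python
import numpy as np

def adatron_online(student, examples, teacher_labels, eta=1.0):
    """One on-line AdaTron pass: when the student's sign disagrees with the
    teacher label, move the weights by an amount proportional to the
    magnitude of the student's local field."""
    N = student.size
    for x, sigma in zip(examples, teacher_labels):   # sigma in {-1, +1}
        field = np.dot(student, x)
        if np.sign(field) != sigma:                  # student errs
            student += (eta / N) * abs(field) * sigma * x
    return student
```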

On-line Gibbs Learning

J. W. Kim, H. Sompolinsky
1996 Physical Review Letters  
The asymptotic rate of convergence is similar to that of batch learning. Constructing a general model of on-line learning is an important challenge in the theory of learning and its application.  ...  For a sufficiently small learning rate, it converges to a local minimum of the generalization error ε_g(w), but not necessarily to the global one.  ... 
doi:10.1103/physrevlett.76.3021 pmid:10060850 fatcat:3ornu7voorgzngqmxripsfoide

On-Chip Compensation of Device-Mismatch Effects in Analog VLSI Neural Networks

Miguel E. Figueroa, Seth Bridges, Chris Diorio
2004 Neural Information Processing Systems  
Our techniques enable large-scale analog VLSI neural networks with learning performance on the order of 10 bits.  ...  We demonstrate our techniques on a 64-synapse linear perceptron learning with the Least-Mean-Squares (LMS) algorithm, and fabricated in a 0.35µm CMOS process.  ...  Acknowledgements This work was financed in part by the Chilean government through FONDECYT grant #1040617. We fabricated our chips through MOSIS.  ... 
dblp:conf/nips/FigueroaBD04 fatcat:pi2zsa2nsfdd5mpaibhwfjakvm
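The LMS rule on a linear perceptron is the delta rule applied to a squared-error cost; to connect with the roughly 10-bit learning resolution mentioned in the abstract, the sketch below rounds the weights to a fixed grid after each update. The quantization model is our own illustrative assumption, not the chip's mechanism.

```python
import numpy as np

def lms_quantized(X, d, eta=0.01, bits=10, passes=100):
    """LMS training of a linear perceptron with weights rounded to a fixed
    resolution after every update, loosely mimicking limited storage precision."""
    step = 2.0 / (2 ** bits)                   # weight spacing on [-1, 1]
    w = np.zeros(X.shape[1])
    for _ in range(passes):
        for x, target in zip(X, d):
            error = target - np.dot(w, x)      # LMS error signal
            w += eta * error * x               # gradient step on squared error
            w = np.round(w / step) * step      # quantize to 'bits' of resolution
    return w
```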

On Herding and the Perceptron Cycling Theorem

Andrew Gelfand, Yutian Chen, Laurens van der Maaten, Max Welling
2010 Neural Information Processing Systems  
It is shown that both algorithms can be viewed as an application of the perceptron cycling theorem.  ...  perceptron and the discriminative RBM.  ...  LvdM acknowledges support by the Netherlands Organisation for Scientific Research (grant no. 680.50.0908) and by EU-FP7 NoE on Social Signal Processing (SSPNet).  ... 
dblp:conf/nips/GelfandCMW10 fatcat:wnlmsamijrhtdobq3poymiu4ze
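Herding has the same mistake-driven structure as the perceptron, which is why the cycling theorem applies to it. A minimal sketch over a finite state space, with the feature map supplied as a matrix (the function names and loop length are illustrative):

```python
import numpy as np

def herding(feature_matrix, target_moments, steps=1000):
    """Herding over a finite state space: feature_matrix[s] = phi(s) and
    target_moments is the empirical mean of phi over the data.  Each step
    picks the state maximizing w.phi(s), then corrects w by the moment
    mismatch -- structurally a perceptron-style update."""
    w = np.zeros(feature_matrix.shape[1])
    samples = []
    for _ in range(steps):
        s = int(np.argmax(feature_matrix @ w))     # maximization step
        samples.append(s)
        w += target_moments - feature_matrix[s]    # perceptron-like correction
    return samples, w
```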

On-line Learning of Perceptron from Noisy Data by One and Two Teachers

Tatsuya Uezu, Yoshiko Maeda, Sachi Yamaguchi
2006 Journal of the Physical Society of Japan  
We analyze the on-line learning of a Perceptron from signals produced by a single Perceptron suffering from external noise or by two independent Perceptrons without noise.  ...  In the single-teacher case, in order to improve the learning when it does not succeed in the sense that the student vector does not converge to the teacher vector, we use two methods: a method based on  ...  On the other hand, in the Perceptron and AdaTron rules, learning fails, but using the optimal learning rate, we proved that the student-teacher overlap converges to 1 as t → ∞ in the three learning rules.  ... 
doi:10.1143/jpsj.75.114007 fatcat:owuo4bnlf5aofkcotmoxu2xsvm

Learning curves of the clipped Hebb rule for networks with binary weights

M Golea, M Marchand
1993 Journal of Physics A: Mathematical and General  
In particular, the generalization rates converge extremely rapidly, often exponentially, to perfect generalization.  ...  These results are very encouraging given the simplicity of the learning rule. The analytic expressions of the learning curves are in excellent agreement with the numerical simulations.  ...  We thank the anonymous referees for their helpful comments. MG would like to thank Sara Solla for helpful suggestions.  ... 
doi:10.1088/0305-4470/26/21/015 fatcat:tlrlbrghancbtfagbwnygpsfje
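The clipped Hebb rule is about as simple as a learning rule gets: accumulate the Hebbian sums over the training set and keep only their signs as binary weights. A minimal sketch under the usual ±1 input and label convention:

```python
import numpy as np

def clipped_hebb(X, y):
    """Clipped Hebb rule for binary weights: sum y_mu * x_mu over the
    training set, then clip each component to its sign (zeros, i.e. ties,
    would be broken arbitrarily in practice)."""
    hebbian_sum = (y[:, None] * X).sum(axis=0)     # plain Hebbian accumulation
    return np.sign(hebbian_sum)                    # binary weights
```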

Alternate Learning Algorithm on Multilayer Perceptrons [chapter]

Bumghi Choi, Ju-Hong Lee, Tae-Su Park
2006 Lecture Notes in Computer Science  
Multilayer perceptrons have been applied successfully to solve some difficult and diverse problems with the backpropagation learning algorithm.  ...  However, the algorithm is known to suffer from slow and false convergence arising from flat surfaces and local minima of the cost function.  ...  This research was supported by the Ministry of Information and Communication, Korea, under the Information Technology Research Center support program supervised by the Institute of Information Technology  ... 
doi:10.1007/11758501_13 fatcat:mkygfugjzvb65olzuk22gavg7a

Towards Easier and Faster Sequence Labeling for Natural Language Processing: A Search-based Probabilistic Online Learning Framework (SAPO) [article]

Xu Sun, Shuming Ma, Yi Zhang, Xuancheng Ren
2018 arXiv   pre-print
The other is search-based learning methods such as the structured perceptron and the margin infused relaxed algorithm (MIRA), which have fast training but also drawbacks: low accuracy, no probabilistic information  ...  One is probabilistic gradient-based methods such as conditional random fields (CRF) and neural networks (e.g., RNN), which have high accuracy but drawbacks: slow training, and no support of search-based  ...  Acknowledgements We thank the anonymous reviewers for their thoughtful comments. This work was supported in part by National Natural Science Foundation of China (No. 61673028).  ... 
arXiv:1503.08381v4 fatcat:c22t6qkfdza3bjn3czm2f4mi4e
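For reference, the structured perceptron baseline named in the abstract rewards the features of the gold tag sequence and penalizes those of the current best-scoring one. The sketch below is generic, with the decoder and feature extractor passed in as assumed helper functions; it is not the SAPO algorithm itself.

```python
def structured_perceptron_step(w, sentence, gold_tags, decode, features):
    """One structured-perceptron update for sequence labeling.
    decode(w, sentence) returns the highest-scoring tag sequence under w
    (e.g. via Viterbi search) and features(sentence, tags) returns a dict
    of feature counts; both are assumed helpers, not library calls."""
    predicted = decode(w, sentence)
    if predicted != gold_tags:
        for feat, count in features(sentence, gold_tags).items():
            w[feat] = w.get(feat, 0.0) + count     # reward gold features
        for feat, count in features(sentence, predicted).items():
            w[feat] = w.get(feat, 0.0) - count     # penalize predicted features
    return w
```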

Matrix updates for perceptron training of continuous density hidden Markov models

Chih-Chieh Cheng, Fei Sha, Lawrence K. Saul
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
We experiment with several forms of updates, systematically comparing the effects of different matrix factorizations, initializations, and averaging schemes on phone accuracies and convergence rates.  ...  Our results show that certain types of perceptron training yield consistently significant and rapid reductions in phone error rates.  ...  Fei Sha is partially supported by the Charles Lee Powell Foundation. We thank the reviewers for many useful comments.  ... 
doi:10.1145/1553374.1553394 dblp:conf/icml/ChengSS09 fatcat:fy66c5mkcngw3lnfbwxmd4hkhq
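One of the design choices compared in the paper is the averaging scheme. In the simpler vector (non-matrix) setting, parameter averaging reduces to keeping a running mean of the weight vector over all presentations, as in the generic averaged perceptron sketched below (not the paper's matrix update):

```python
import numpy as np

def averaged_perceptron(X, y, passes=5):
    """Perceptron training that returns the running average of the weight
    vector over all presentations -- the usual averaging scheme that
    stabilizes mistake-driven training."""
    w = np.zeros(X.shape[1])
    w_sum = np.zeros_like(w)
    count = 0
    for _ in range(passes):
        for x, label in zip(X, y):                 # labels in {-1, +1}
            if label * np.dot(w, x) <= 0:
                w += label * x                     # mistake-driven update
            w_sum += w                             # accumulate after every example
            count += 1
    return w_sum / count                           # averaged weights
```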

Average case analysis of the clipped Hebb rule for nonoverlapping perceptron networks

Mostefa Golea, Mario Marchand
1993 Proceedings of the sixth annual conference on Computational learning theory - COLT '93  
We find that the learning curves converge exponentially rapidly to perfect generalization. These results are very encouraging given the simplicity of the learning rule.  ...  Using the central limit theorem and very simple counting arguments, we calculate exactly its learning curves (i.e. the generalization rates as a function of the number of training examples) in the limit  ...  Specifically, the generalization rates converge exponentially to perfect generalization as a function of the number of training examples.  ... 
doi:10.1145/168304.168323 dblp:conf/colt/GoleaM93 fatcat:epd6qrwbxzabvljfhkueoboldy

Second-order asymmetric BAM design with a maximal basin of attraction

Jyh-Yeong Chang, Chien-Wen Cho
2003 IEEE transactions on systems, man and cybernetics. Part A. Systems and humans  
He was a Research and Design Engineer and Manager in the CNC field for five years with Victor Machinery Co. in Taiwan.  ...  A comparison of recall rates of the SOABAM design by the adaptive local rule and by adaptive perceptron learning is summarized in Table VI.  ...  constant step size learning one, for example, perceptron learning [9].  ... 
doi:10.1109/tsmca.2003.811505 fatcat:wlugi2i5lvgyvock6c4xoqqleq
Showing results 1 — 15 out of 31,818 results