
The perceptron strikes back

R. Beigel, N. Reingold, D. Spielman
[1991] Proceedings of the Sixth Annual Structure in Complexity Theory Conference  
In particular, every language recognized by a depth-d AC⁰ circuit is decidable by a probabilistic perceptron of size 2^O(log^d n) and order O(log^d n) that uses O(log³ n) probabilistic bits.  ...  We show that circuits composed of a symmetric gate at the root with AND-OR subcircuits of constant depth can be simulated by probabilistic depth-2 circuits with essentially the same symmetric gate at the  ...  Thus we vindicate the reputation of the much maligned perceptron. Highly contrasting results are known for deterministic perceptrons.  ... 
doi:10.1109/sct.1991.160270 dblp:conf/coco/BeigelRS91 fatcat:s4colxrncbhdjnwzn4ffyalrpi

Adaptive Neural Net Preprocessing for Signal Detection in Non-Gaussian Noise

Richard Lippmann, Paul Beckman
1988 Neural Information Processing Systems  
Experiments were performed to determine whether the correct clipping nonlinearity could be provided by a single-input, single-output multi-layer perceptron trained with back propagation.  ...  It was found that a multi-layer perceptron with one input and output node, 20 nodes in the first hidden layer, and 5 nodes in the second hidden layer could be trained to provide a clipping nonlinearity  ...  Initial experiments were performed to determine the difficulty of learning complex mappings using multi-layer perceptrons trained using back-propagation.  ... 
dblp:conf/nips/LippmannB88 fatcat:c6qwcrxalrd2pgt4aqsej55kt4

Alternative Neural Network Approach for Option Pricing and Hedging

Andrew P. Carverhill, Terry H. F. Cheuk
2003 Social Science Research Network  
Instead, it puts up a formula with a set of unknown parameters and lets the optimization routine search for the parameters best fitted to the desired results.  ...  In order to use Black-Scholes to price any option, one needs to know the implied volatility surface. The existence of such a surface is evidence of misspecification of the model.  ...  X is the strike level. It is added here for scaling. As the model uses futures/strike as an input, the resulting vega needs to be re-scaled back to the strike level.  ... 
doi:10.2139/ssrn.480562 fatcat:mz463mwuebcdjk2pjvail372oa

A Cost Function for Internal Representations

Anders Krogh, C. I. Thorbergsson, John A. Hertz
1989 Neural Information Processing Systems  
The learning problem can then be formulated as two simple perceptrons and a search for internal representations. Back-propagation is recovered as a limit.  ...  The frequency of successful solutions is better for this algorithm than for back-propagation when weights and hidden units are updated on the same timescale, i.e. once every learning step.  ...  The learning problem for a two-layer perceptron is reduced to learning in two simple perceptrons and the search for internal representations.  ... 
dblp:conf/nips/KroghTH89 fatcat:eia6qjn4w5bf5lbrqspb76ezaq

Learning algorithms and probability distributions in feed-forward and feed-back networks

J. J. Hopfield
1987 Proceedings of the National Academy of Sciences of the United States of America  
Learning algorithms have been used both on feed-forward deterministic networks and on feed-back statistical networks to capture input-output relations and do pattern classification.  ...  In simple but nontrivial networks the two learning rules are closely related. Under some circumstances the learning problem for the statistical networks can be solved without Monte Carlo procedures.  ...  The disagreement between the network and the perfect expert is a more striking measure of how well the problem has been solved because it is first-order in the difference between a particular network and  ... 
doi:10.1073/pnas.84.23.8429 pmid:16593901 pmcid:PMC299557 fatcat:5nk27pi35rejph7rxzafqo7i2a

Effect of Synthetic Emotions on Agents' Learning Speed and Their Survivability [chapter]

Šarūnas Raudys
2005 Lecture Notes in Computer Science  
A difference between the targets of the perceptron corresponding to objects of the first and second categories is associated with stimulation strength.  ...  The paper considers a supervised learning algorithm for a nonlinear perceptron with dynamic target adjustment, which assists in faster learning and cognition.  ...  Acknowledgments The author thanks Prof. Viktoras Justickis and Vanda Dovlias for useful and challenging discussions.  ... 
doi:10.1007/11553090_1 fatcat:v5g44gnns5efhgvojj4yirejom

Separability is a Learner's Best Friend [chapter]

Chris Thornton
1998 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997  
Geometric separability is a generalisation of linear separability, familiar to many from Minsky and Papert's analysis of the Perceptron learning method.  ...  They assume, in effect, that the function is 'smooth' [Rendell and Seshu, 1990] and that input points with the same target output will therefore tend to cluster together in the same region of the input  ...  This led many researchers to turn their backs on the perceptron method and on neural network methods in general.  ... 
doi:10.1007/978-1-4471-1546-5_4 dblp:conf/ncpw/Thornton97 fatcat:tzvf4njysvfddcdlmxd654ogga

Neural networks in petroleum geology as interpretation tools

Tomislav Malvić, Josipa Velić, Janina Horváth, Marko Cvetković
2010 Central European Geology  
In all three neural models some of the mentioned inputs were used for analyzing data collected from three different oil fields in the Croatian part of the Pannonian Basin.  ...  The results of these studies indicate that this method is capable of providing a better understanding of some clastic Neogene reservoirs in the Croatian part of the Pannonian Basin.  ...  The first results of the analysis were presented at the 12th Congress of Hungarian Geomathematics and the 1st Congress of Croatian and Hungarian Geomathematics in Mórahalom, Hungary (29-31 May  ... 
doi:10.1556/ceugeol.53.2010.1.6 fatcat:m5n5goigfrazvhufxwwlmfolme

Optoelectronic multilayer network

Alan A. Yamamura, Seiji Kobayashi, Mark A. Neifeld, Demetri Psaltis, Raymond Arrathoon
1990 Digital Optical Computing II  
ACKNOWLEDGEMENTS This research is supported by a grant from the Army Research Office and in part by a grant from the Defense Advanced Research Projects Agency.  ...  Thanks to Robert Snapp for his help in working on the binary multilayer network training algorithm. Alan Yamamura is supported by a fellowship from the Fannie and John Hertz Foundation.  ...  the neuron outputs back to the synaptic inputs.  ... 
doi:10.1117/12.18100 fatcat:5al5uo6funea5pw6cepocgkamq

Foot Plantar Pressure Estimation Using Artificial Neural Networks [chapter]

Elias Xidias, Zoi Koutkalaki, Panagiotis Papagiannis, Paraskevas Papanikos, Philip Azariadis
2016 IFIP Advances in Information and Communication Technology  
This research has been co-financed by the European Union (European Social Fund -ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic  ...  the back propagation algorithm and making the proposed method faster than the traditional one [27] .  ...  Overview A Multi-Layer Perceptron (MLP) [21] has been adopted to estimate the maximum plantar pressure on the foot surface.  ... 
doi:10.1007/978-3-319-33111-9_3 fatcat:7ymvcc3v5jhzna576glwjlg3zu

Estimating Risk Pressure Factor (RPF) with Artificial Neural Network (ANN) to Locate Search and Rescue (SAR) Team Station

2019 Turkish Journal of Forecasting  
As a result of the study, the missing parameter of the proposed mathematical model will be found by estimation.  ...  One of these criteria is the Risk Pressure Factor (RPF) used in determining the priorities of the risk areas.  ...  Striking on timeline: After the disaster, the point t0 starts the critical hours to recover the survivors.  ... 
doi:10.34110/forecasting.484765 fatcat:u3a2hmomzvb2jj4i76wmt7digu

On Arabic Character Recognition Employing Hybrid Neural Network

Al-Amin Bhuiyan, Fawaz Waselallah
2017 International Journal of Advanced Computer Science and Applications  
with a back propagation learning algorithm.  ...  The method is based on local image sampling of each character to a selected feature matrix and feeding these matrices into a Bidirectional Associative Memory followed by a Multilayer Perceptron (BAMMLP)  ...  ACKNOWLEDGMENT The authors would like to express their gratitude to the Deanship of Scientific Research, King Faisal University, Saudi Arabia, for the financial support of the project No 150169.  ... 
doi:10.14569/ijacsa.2017.080612 fatcat:nz7toy7uq5e5zjgytxqvswt3ba

Two-Stage Approach to Image Classification by Deep Neural Networks

Gennady Ososkov, Pavel Goncharov, Gh. Adam, J. Buša, M. Hnatič, D. Podgainy
2018 EPJ Web of Conferences  
The paper demonstrates the advantages of deep learning networks over ordinary neural networks in their comparative application to image classification.  ...  Results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders.  ...  To construct all filters of the convolutional layers our CNN must be trained on a labeled sample with the back-prop method [7] .  ... 
doi:10.1051/epjconf/201817301009 fatcat:esfxfjstmnac5jrv7m3hrcbw24


Utpal Srivastav, Vikas Thada, Amit Kumar, Maulik Garach, Adit Paliwal
2020 International Journal of Innovative Research in Computer Science & Technology  
For the situation where G and D are characterized by multilayer perceptrons, the whole framework can be trained with back propagation.  ...  Experiments illustrate the capability of the system through qualitative and quantitative assessment of the generated samples.  ...  a multilayer perceptron.  ... 
doi:10.21276/ijircst.2020.8.3.24 fatcat:kvru5b4q2vezjpkuj44b75trle

Quantum-Driven Energy-Efficiency Optimization for Next-Generation Communications Systems

Su Fong Chien, Heng Siong Lim, Michail Alexandros Kourtis, Qiang Ni, Alessio Zappone, Charilaos C. Zarakovitis
2021 Energies  
The computed results show that our QNN algorithm is indeed trainable and that it can lead to solution convergence during the training phase.  ...  The advent of deep-learning technology promises major leaps forward in addressing the ever-enduring problems of wireless resource control and optimization, and improving key network performances, such  ...  Acknowledgments: The authors would like to thank Robert Salzmann at the Center for Quantum Information and Foundations, University of Cambridge, for his technical assistance.  ... 
doi:10.3390/en14144090 fatcat:vpd5ln6qrnc3pamrh4eol5vuie