3,334 Hits in 2.0 sec

Learning Binary Perceptrons Perfectly Efficiently

Shao C. Fang, Santosh S. Venkatesh
1996 Journal of Computer and System Sciences (Print)  
The majority rule algorithm for learning binary weights for a perceptron is analysed under the uniform distribution on inputs.  ...  Particular consequences are that the algorithm PAC-learns majority functions in linear time from small samples and that, while the general variant of binary integer programming embodied here is NP-complete  ...  Pitt and Valiant [12] proved the NP-completeness of learning Boolean threshold functions (i.e., perceptrons with 0–1 weights and integer  ... 
doi:10.1006/jcss.1996.0028 fatcat:gtrw3o7aofah7a5mji3673vxri
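
The majority-vote idea behind this algorithm is simple enough to sketch. Below is a minimal Python illustration, assuming inputs and labels coded as ±1; the function name and toy setup are illustrative, not taken from the paper:

    import numpy as np

    def majority_rule(X, y):
        # Set each binary weight to the majority vote of its agreement
        # with the label across the sample (ties default to +1).
        # X: (m, n) array with entries in {-1, +1}; y: (m,) labels in {-1, +1}.
        corr = X.T @ y
        return np.where(corr >= 0, 1, -1)

    # Toy check: recover a random binary teacher from uniform inputs.
    rng = np.random.default_rng(0)
    n, m = 21, 500
    teacher = rng.choice([-1, 1], size=n)
    X = rng.choice([-1, 1], size=(m, n))
    y = np.sign(X @ teacher).astype(int)
    print((majority_rule(X, y) == teacher).mean())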

Quantum Neuron with Separable-State Encoding [article]

London A. Cavaletto, Luca Candelori, Alex Matos-Abiague
2022 arXiv   pre-print
We develop a hybrid (quantum-classical) training procedure for simulating the learning process of the QP and test its efficiency.  ...  The proposed QP uses an N-ary encoding of the binary input data characterizing the patterns.  ...  The fidelity constitutes a measure of the ability of the perceptron to adjust its weight to the target weight value. When f = 1, the perceptron can perfectly recognize the given pattern.  ... 
arXiv:2202.08306v1 fatcat:rslhvvdhubh7lkfqym4vdm6hqu

Implementing perceptron models with qubits [article]

R.C. Wiersema, H.J. Kappen
2019 arXiv   pre-print
We propose a method for learning a quantum probabilistic model of a perceptron.  ...  We show that this allows us to better capture noisiness in data compared to a classical perceptron. By considering entangled qubits we can learn nonlinear separation boundaries, such as XOR.  ...  We show that the problem can be learned perfectly with two qubits.  ... 
arXiv:1905.06728v1 fatcat:gstubszcnvcxhphneybncncn7e

Implementing perceptron models with qubits

R. C. Wiersema, H. J. Kappen
2019 Physical Review A  
We propose a method for learning a quantum probabilistic model of a perceptron.  ...  We show that this allows us to better capture noisiness in data compared to a classical perceptron. By considering entangled qubits we can learn nonlinear separation boundaries, such as XOR.  ...  If the problem is linearly separable, the classical perceptron converges to a solution where the two classes are perfectly separated.  ... 
doi:10.1103/physreva.100.020301 fatcat:foc2ytirmzbhrclqo53wopnspa
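
For reference, the classical mistake-driven rule that the abstract contrasts against can be sketched as follows; this is the generic textbook perceptron, not the paper's quantum model:

    import numpy as np

    def perceptron_train(X, y, max_epochs=100):
        # Classical perceptron: on each mistake, add y_t * x_t to w.
        # Converges to a perfect separator whenever the data are
        # linearly separable (Novikoff's bound). X: (m, n); y in {-1, +1}.
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(max_epochs):
            mistakes = 0
            for x_t, y_t in zip(X, y):
                if y_t * (w @ x_t + b) <= 0:
                    w += y_t * x_t
                    b += y_t
                    mistakes += 1
            if mistakes == 0:
                break
        return w, b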

A Capacity Scaling Law for Artificial Neural Networks [article]

Gerald Friedland, Mario Krell
2018 arXiv   pre-print
We derive the calculation of two critical numbers predicting the behavior of perceptron networks. First, we derive the calculation of what we call the lossless memory (LM) dimension.  ...  The LM dimension is a generalization of the Vapnik–Chervonenkis (VC) dimension that avoids structured data and therefore provides an upper bound for perfectly fitting almost any training data.  ...  The output of the encoder is the weight vector of a perceptron. The decoder receives the (perfectly learned) weights over a lossless channel.  ... 
arXiv:1708.06019v3 fatcat:7cbzywnqmfazld6nrl37wj2qge

A Practical Approach to Sizing Neural Networks [article]

Gerald Friedland, Alfredo Metere, Mario Krell
2018 arXiv   pre-print
This allows the comparison of the efficiency of different network architectures independently of a task.  ...  Based on MacKay's information theoretic model of supervised machine learning, this article discusses how to practically estimate the maximum size of a neural network given a training data set.  ...  The output of the encoder is the weight vector of a perceptron. The decoder receives the (perfectly learned) weights over a lossless channel.  ... 
arXiv:1810.02328v1 fatcat:f4zh2nkitrhoff6oun7woe6ua4
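
The sizing idea lends itself to back-of-the-envelope arithmetic. A sketch assuming MacKay's estimate of roughly two bits of storage capacity per weight; the constant and the helper name are illustrative, not the paper's exact procedure:

    def min_params_to_memorize(num_binary_labels, bits_per_param=2.0):
        # Rough lower bound on the parameters needed to memorize the
        # labels, assuming ~2 bits of capacity per parameter.
        return int(num_binary_labels / bits_per_param)

    # Memorizing 10,000 binary labels needs on the order of 5,000 weights.
    print(min_params_to_memorize(10_000))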

Applying Machine Learning Techniques to the Audit of Antimicrobial Prophylaxis

Zhi-Yuan Shi, Jau-Shin Hon, Chen-Yang Cheng, Hsiu-Tzy Chiang, Hui-Mei Huang
2022 Applied Sciences  
The purpose of this study is to develop accurate and efficient machine learning models for auditing appropriate surgical antimicrobial prophylaxis.  ...  The efficient models developed by machine learning can be used to assist the antimicrobial stewardship team in the audit of surgical antimicrobial prophylaxis.  ...  Therefore, it is helpful to develop efficient models by machine learning to analyze the big medical data associated with antimicrobial prophylaxis.  ... 
doi:10.3390/app12052586 fatcat:l7yoi4pnfbc53oiqe4bj53oyrm

Polyhedrons and Perceptrons Are Functionally Equivalent [article]

Daniel Crespin
2013 arXiv   pre-print
The various constructions and results are among several steps required for algorithms that replace incremental and statistical learning with more efficient, direct and exact geometric methods for calculation  ...  of perceptron architecture and weights.  ...  It is a fact, however, that direct and efficient calculation of architecture and weights of DNF perceptron networks that perfectly recognize given data, and maintain margins preset at will up to the largest  ... 
arXiv:1311.1090v1 fatcat:tmpbamvfl5gkhb4cqd32s3gywa

Page 5143 of Mathematical Reviews Vol. , Issue 97H [page]

1997 Mathematical Reviews  
binary perceptrons perfectly efficiently.  ...  Summary: “The majority rule algorithm for learning binary weights for a perceptron is analysed under the uniform distribution on inputs.  ... 

Optimal colored perceptrons

D. Bollé, P. Kozłowski
2001 Physical Review E, Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics  
Ashkin-Teller type perceptron models are introduced. Their maximal capacity per number of couplings is calculated within a first-step replica-symmetry-breaking Gardner approach.  ...  This would parallel the situation for Hebb learning [13]. For model II we have for κ = 0 that 2.26 ≤ α_c ≤ 2.28, which is larger than the maximal capacity of the standard binary perceptron.  ...  In the light of these results an interesting question is whether such a coloured perceptron can still be more efficient than the standard perceptron.  ... 
doi:10.1103/physreve.64.011915 pmid:11461296 fatcat:qv7wgm7w6rfzjkc63u5f2kzhsi

Entropy-Guided Feature Generation for Large Margin Structured Learning

Eraldo Luis Rezende Fernandes, Ruy Luiz Milidiu
2018 Monografias em Ciência da Computação  
Structured learning consists in learning a mapping from inputs to structured outputs by means of a sample of correct input-output pairs. Many important problems fit in this setting.  ...  Feature generation is an important subtask of structured learning modeling.  ...  Structured Perceptron The structured perceptron algorithm [3] is analogous to its binary counterpart.  ... 
doi:10.17771/pucrio.dimcc.24327 fatcat:dtlkcg3m7bcbfigsrzvotok7tq
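
The analogy to the binary perceptron is the Collins-style update: decode the best-scoring structure under the current weights and, on a mistake, move the weights toward the gold structure's features. A generic sketch, with phi and decode assumed to be supplied by the task:

    import numpy as np

    def structured_perceptron(examples, phi, decode, n_features, epochs=10):
        # examples: list of (x, y_gold) pairs with structured outputs.
        # phi(x, y): joint feature vector of length n_features.
        # decode(x, w): argmax over candidate structures of w . phi(x, y).
        w = np.zeros(n_features)
        for _ in range(epochs):
            for x, y_gold in examples:
                y_pred = decode(x, w)
                if y_pred != y_gold:
                    # Mirror of the binary rule: reward gold, penalize prediction.
                    w += phi(x, y_gold) - phi(x, y_pred)
        return w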

Modeling commonality among related classes in relation extraction

Zhou GuoDong, Su Jian, Zhang Min
2006 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL - ACL '06  
For each class in the hierarchy, either manually predefined or automatically clustered, a linear discriminative function is determined in a top-down way using a perceptron algorithm with the lower-level  ...  This paper proposes a novel hierarchical learning strategy to deal with the data sparseness problem in relation extraction by modeling the commonality among related classes.  ...  Multi-Class Classification: Basically, the perceptron algorithm is only for binary classification.  ... 
doi:10.3115/1220175.1220191 dblp:conf/acl/ZhouSZ06 fatcat:osjjbcmdyzb5xf3dxomelek2v4
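
One standard way to lift the binary perceptron to many classes, as a hierarchical strategy like this requires, is to keep one weight vector per class, predict by argmax, and on a mistake reinforce the gold class while penalizing the prediction. A generic sketch, not the paper's exact top-down procedure:

    import numpy as np

    def multiclass_perceptron(X, y, n_classes, epochs=10):
        # X: (m, n) feature matrix; y: (m,) integer class labels.
        W = np.zeros((n_classes, X.shape[1]))
        for _ in range(epochs):
            for x_t, y_t in zip(X, y):
                pred = int(np.argmax(W @ x_t))
                if pred != y_t:
                    W[y_t] += x_t   # reinforce the correct class
                    W[pred] -= x_t  # penalize the wrong prediction
        return W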

Compressive neural representation of sparse, high-dimensional probabilities [article]

Xaq Pitkow
2012 arXiv   pre-print
Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons.  ...  If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a high-dimensional  ...  Here I have proposed an alternative mechanism by which the brain could efficiently represent probabilities: random perceptrons.  ... 
arXiv:1206.1800v1 fatcat:rwj4ybondbethcgojxubopfz4q
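
The measurement side of this proposal is easy to sketch: each "random perceptron" computes a random weighted sum of a sparse, high-dimensional probability vector, and a small number of such sums suffices for recovery. A toy illustration of the measurement step only (sparse recovery, e.g. by L1 minimization, is omitted); all sizes and names are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)
    d, k = 1024, 64                     # ambient dimension, number of neurons

    # A sparse probability vector over d states.
    p = np.zeros(d)
    support = rng.choice(d, size=5, replace=False)
    p[support] = rng.random(5)
    p /= p.sum()

    # Each row of W acts like one random perceptron's weights.
    W = rng.choice([-1.0, 1.0], size=(k, d))
    measurements = W @ p                # k << d numbers summarizing p
    print(measurements[:5])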

Social interaction as a heuristic for combinatorial optimization problems

José F. Fontanari
2010 Physical Review E  
input patterns of size F by a Boolean Binary Perceptron.  ...  Perceptron, given a fixed probability of success.  ...  THE BOOLEAN BINARY PERCEPTRON: The Boolean Binary Perceptron is a single-layer neural network whose weights are constrained to take on binary values only.  ... 
doi:10.1103/physreve.82.056118 pmid:21230556 fatcat:q3qyjnhfybgpbhuncuotrs3pr4
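
Finding binary weights that classify all patterns is the combinatorial optimization problem such heuristics attack, and the cost they minimize is typically the number of misclassified patterns. A minimal sketch of that cost under a ±1 coding; the setup is generic, not the paper's exact formulation:

    import numpy as np

    def num_errors(w, X, y):
        # Misclassification count for a Boolean binary perceptron with
        # weights w in {-1, +1}^F; X: (m, F) patterns, y: (m,) labels in {-1, +1}.
        return int((np.sign(X @ w) != y).sum())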

An Automatic System for Heart Disease Prediction using Perceptron Model and Gradient Descent Algorithm

2019 International Journal of Engineering and Advanced Technology  
The learning algorithm of the Perceptron model, which calculates the result and performs binary classification, is ∑_{i=0}^{n} w_i x_i ≥ b.  ...  Many deep learning neural network-based models have been proposed for the prediction of heart diseases, but there is no model accessing all 13 features directly from the dataset & feeding them to the 'Perceptron  ...  It is an algorithm for supervised learning of binary classifiers. This Perceptron algorithm allows the neuron to learn and process elements in the training set one after the other.  ... 
doi:10.35940/ijeat.a1278.109119 fatcat:slsxfdkfqbedjjv2ndma2ry2za
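
Since the title pairs the perceptron decision rule with gradient descent, one common combination is to minimize the perceptron criterion (the summed margins of misclassified points) by gradient steps. A generic sketch under a ±1 label coding; the paper's exact model and 13-feature setup may differ:

    import numpy as np

    def gd_perceptron(X, y, lr=0.1, epochs=100):
        # Gradient descent on sum(max(0, -y * (X @ w - b))).
        # Decision rule: predict +1 iff X @ w >= b. X: (m, n); y in {-1, +1}.
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            mask = y * (X @ w - b) <= 0              # misclassified points
            w += lr * (y[mask, None] * X[mask]).sum(axis=0)
            b -= lr * y[mask].sum()
        return w, b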
Showing results 1–15 out of 3,334 results