In Sec. 4, we introduce the ε-Insensitive Hebbian Learning Rule which was derived from gradient ascent on a pdf. ... The use of the ε-insensitive Hebbian learning rule allows the identification of all the independent causes (Fig. 4 ). ...doi:10.1142/s0218001403002915 fatcat:vxzbw5gckvck3bvbkaeyloei3e
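The snippet above refers to an ε-insensitive Hebbian learning rule derived from gradient ascent on a pdf. A minimal sketch of one common negative-feedback formulation is given below; the function name, the residual definition, and the exact update form are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def eps_insensitive_hebbian_step(W, x, eta=0.01, eps=0.1):
    """One update of an epsilon-insensitive Hebbian rule (sketch).

    Negative-feedback network: activation y = W @ x, residual e = x - W.T @ y.
    Weights change only where the residual magnitude exceeds eps, giving a
    "dead zone" that ignores small (noisy) residuals.
    """
    y = W @ x                                        # feedforward activation
    e = x - W.T @ y                                  # feedback residual
    g = np.where(np.abs(e) > eps, np.sign(e), 0.0)   # eps-insensitive gating
    W += eta * np.outer(y, g)                        # Hebbian outer product
    return W
```

The dead zone is what makes the rule insensitive: residuals smaller than ε contribute nothing to the weight change.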
The Journal of Neuroscience
endorphin-specific epsilon receptor. ... Nock B, Giordano AL, Cicero TJ, O’Connor LH (1990) Affinity of drugs and peptides for U-69,593-sensitive and -insensitive kappa opiate binding sites: the U-69,593-insensitive site appears to be the beta ...
This work is explained as a self-organizing network, which expands incrementally by placing suitable nodes in a cluster via a Hebbian learning rule. ... However, Manhattan distance is insensitive to noise and can handle correlations between the data points. ...doi:10.22266/ijies2018.0630.13 fatcat:inzbmbejo5eirnlfy2l37kyk2e
noise; absolute error; Epsilon insensitive, 34779 histamine; learning; memory; L-histidine; R-alpha-methylhistamine; spatial reference memory; rats, 13741 human contingency learning; Pavlovian conditioning ... pain; pain coping; absenteeism; social insurance personnel; disability pension; health care consumption, 8860 Hebbian learning; data set; principal component analysis; artificial neural networks; Gaussian ...
Abstract Prototype-based machine learning methods such as learning vector quantization (LVQ) offer flexible classification tools, which represent a classification in terms of typical prototypes. ... Standard LVQ (referred to as LVQ1) relies on the heuristics of Hebbian learning: given a training point (x i , y i ), the winner, i.e. closest prototype w J(x i ) is determined and adapted by the rule ... Leakage due to insensitivity of GLVQ output The fact that the vanilla GLVQ algorithm leaks information is not necessarily surprising since a useful algorithm always exhibits some degree of sensitivity ...doi:10.1016/j.neucom.2018.11.095 fatcat:vdu4yg3zincbfpbvac6snkqksq
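The LVQ1 heuristic described in the snippet above can be sketched briefly: the winning (closest) prototype is attracted toward the training point when the labels match and repelled otherwise. Function and variable names here are illustrative, not from the cited paper:

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, eta=0.05):
    """One LVQ1 update: find winner J(x) and adapt it by the Hebbian heuristic.

    prototypes: (k, d) array; labels: length-k class labels;
    x: training point; y: its class label.
    """
    d = np.linalg.norm(prototypes - x, axis=1)   # distances to all prototypes
    j = int(np.argmin(d))                        # winner index J(x)
    sign = 1.0 if labels[j] == y else -1.0       # attract on match, else repel
    prototypes[j] += sign * eta * (x - prototypes[j])
    return prototypes, j
```

Iterating this step over a labeled training set yields the standard LVQ1 classifier; GLVQ replaces the heuristic sign with a differentiable cost function.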
Standard graph-theoretic procedures and recent algorithms from manifold learning theory are subsequently applied to this graph. ... Recently, there has been a renewed interest in dimensionality reduction techniques that evolved into a distinct branch of data analysis, namely the manifold learning theory. ... With the third example, we meant to demonstrate the insensitivity to noise. ...doi:10.1016/j.patcog.2008.02.005 fatcat:gr6nvqhp6bahvgb5662jb22kwe
Encyclopedia of Artificial Intelligence
INTRODUCTION Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. ... Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? ... for robustness to outliers and faster convergence, such as different L p -norm induced error measures (Sayed, 2005) , the epsilon-insensitive error measure (Scholkopf & Smola, 2001 ), Huber's robust ...doi:10.4018/978-1-59904-849-9.ch133 fatcat:jocgphp2nfeo5azw733sparwlu
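The epsilon-insensitive error measure mentioned in the snippet above (Scholkopf & Smola, 2001) is the loss used in support vector regression: residuals inside an ε tube cost nothing, and cost grows linearly outside it. A minimal sketch:

```python
def eps_insensitive_loss(residual, eps=0.1):
    """Epsilon-insensitive error measure: zero inside the eps tube,
    linear (L1-like) growth outside it, which gives robustness to outliers
    relative to a squared-error cost."""
    return max(0.0, abs(residual) - eps)
```

Compared with the squared error, the flat region ignores small noise and the linear tails weight outliers less heavily.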
Machine Learning (ML) has been enjoying an unprecedented surge in applications that solve problems and enable automation in diverse domains. ... In this way, readers will benefit from a comprehensive discussion on the different learning paradigms and ML techniques applied to fundamental problems in networking, including traffic prediction, routing ... Like the neuron unit, Hebbian learning greatly influenced the progress of NN. ...doi:10.1186/s13174-018-0087-2 fatcat:jvwpewceevev3n4keoswqlcacu
The PTP epsilon dependent increase in Kv1.2 current was not observed when a substrate-trapping mutant of PTP epsilon was used. ... is dominated by an identical Hebbian STDP rule with axo-somatic Na spikes. ... As previously shown, rats require 6-8 consecutive training days to learn to distinguish between a pair of odours, but to learn a second pair of odors only requires 1-2 training days (rule learning). ...doi:10.1155/2007/73079 fatcat:6ttewkzqh5cmzb433ffyv3t5xi
Hebbian learning) are not considered. ... Connections between LGN and simple cells are learned based on Hebbian and anti-Hebbian plasticity, similar to that in our previous work. ... Classically, plasticity is thought to be Hebbian, i.e., a local phenomenon in which changes in synaptic weights depend solely on pre- and post-synaptic quantities. ...doi:10.1186/s12868-019-0538-0 fatcat:3pt5qvsh45awzbpwhqwbzrg4su
Thus, whereas conventional STDP is useful as a form of competitive Hebbian learning, anti-STDP may be a useful homeostatic mechanism for eliminating the location-dependence of synapses. ... Formally, the rule that we use to update weights is equivalent to the Contrastive Hebbian Learning (CHL) rule, except the sign of the rule depends on the phase of the inhibitory oscillation. ... It is during the application of the transformation that learning also occurs. ...doi:10.1002/j.1552-146x.1988.tb03932.x fatcat:bkotcyah2ngbfdk4kbx3xnf5v4
will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn ... This new class of extremely low-power and low-latency artificial intelligence systems could ... In a world where power-hungry deep learning techniques are becoming a commodity, and at the same time, environmental ... To achieve unsupervised learning, there have been multiple efforts implementing biologically plausible Spike Timing Dependent Plasticity (STDP)-variants and Hebbian learning using neuromorphic processors ...arXiv:2105.05956v3 fatcat:pqir5infojfpvdzdwgmwdhsdi4
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science—the concept of integrating powerful ML methods into the real-time experimental data processing ... STDP is a time-dependent specialization of Hebbian learning. ... Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. arXiv preprint arXiv:2102.00554. Holdom, B. (1986). Two U(1)'s and epsilon charge shifts. ...doi:10.3389/fdata.2022.787421 pmid:35496379 pmcid:PMC9041419 fatcat:5w2exf7vvrfvnhln7nj5uppjga
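The snippet above notes that STDP is a time-dependent specialization of Hebbian learning. A common pair-based exponential kernel (a textbook form, not from any one of the cited papers; parameter values are illustrative) can be sketched as:

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for spike-time difference dt (ms).

    dt = t_post - t_pre. Pre-before-post (dt > 0) potentiates; post-before-pre
    (dt < 0) depresses; the effect decays exponentially with |dt| / tau.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

The asymmetry in sign is what makes STDP causal: synapses that help drive the postsynaptic spike strengthen, while those arriving late weaken.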
The sharpening process proposed to result from competitive self-organization (arising from Hebbian learning and inhibitory competition) was arguably enhanced in the intra- relative to the inter-item condition ... The Greenhouse-Geisser correction for nonsphericity was used whenever appropriate and epsilon-corrected p values are reported together with uncorrected degrees of freedom. ...doi:10.1016/j.neuron.2006.09.013 pmid:17088218 fatcat:hsga4hkxljbplbgseu56vut3wa
Showing results 1 – 15 out of 24 results