24 Hits in 3.3 sec

CONNECTIONIST TECHNIQUES FOR THE IDENTIFICATION AND SUPPRESSION OF INTERFERING UNDERLYING FACTORS

EMILIO CORCHADO, COLIN FYFE
2003 International journal of pattern recognition and artificial intelligence  
In Sec. 4, we introduce the ε-Insensitive Hebbian Learning Rule, which was derived from gradient ascent on a pdf. ... The use of the ε-insensitive Hebbian learning rule allows the identification of all the independent causes (Fig. 4). ...
doi:10.1142/s0218001403002915 fatcat:vxzbw5gckvck3bvbkaeyloei3e
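The snippet above describes the ε-insensitive Hebbian rule only in words. A minimal illustrative sketch of one common formulation (a single linear unit with negative feedback, where the Hebbian update is gated by an ε dead zone on the reconstruction residual; the learning rate `eta`, threshold `eps`, and toy values are assumptions, not taken from the paper):

```python
# epsilon-insensitive Hebbian learning: the weight update is suppressed
# whenever the residual falls inside [-eps, eps], corresponding to
# gradient ascent on an epsilon-insensitive pdf.
def sign(v):
    return (v > 0) - (v < 0)

def eps_insensitive_hebbian_step(w, x, eta=0.1, eps=0.05):
    """One update of a single linear unit w on input vector x.

    y : unit activation (Hebbian forward pass)
    e : residuals after feeding the output back (negative feedback)
    The update uses sign(e_i) only when |e_i| > eps (dead zone).
    """
    y = sum(wi * xi for wi, xi in zip(w, x))        # activation
    e = [xi - wi * y for wi, xi in zip(w, x)]       # residuals
    return [wi + eta * y * (sign(ei) if abs(ei) > eps else 0.0)
            for wi, ei in zip(w, e)]

w = eps_insensitive_hebbian_step([0.5, 0.1], [1.0, 0.0])
```

Residuals inside the dead zone contribute nothing to the update, which is what makes the rule insensitive to small noise on individual inputs.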

Page 819 of The Journal of Neuroscience Vol. 14, Issue 2 [page]

1994 The Journal of Neuroscience  
endorphin-specific epsilon receptor. ... Nock B, Giordano AL, Cicero TJ, O'Connor LH (1990) Affinity of drugs and peptides for U-69,593-sensitive and -insensitive kappa opiate binding sites: the U-69,593-insensitive site appears to be the beta ...

A Novel Density Based Clustering Algorithm by Incorporating Mahalanobis Distance

Margaret Sangeetha, Velumani Padikkaramu, Rajakumar Chellan
2018 International Journal of Intelligent Engineering and Systems  
This work is explained as a self-organizing network, which expands incrementally by placing suitable nodes in a cluster, achieved by a Hebbian learning rule. ... However, Manhattan distance is insensitive to noise and can handle correlations between the data points. ...
doi:10.22266/ijies2018.0630.13 fatcat:inzbmbejo5eirnlfy2l37kyk2e
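The entry above incorporates Mahalanobis distance into density-based clustering. A minimal illustrative sketch of the distance itself, with an explicit inverse covariance and toy values of my own choosing (not taken from the paper):

```python
# Mahalanobis distance: d(x, mu) = sqrt((x - mu)^T S^{-1} (x - mu)).
# Unlike Euclidean distance it accounts for correlations between
# features via the inverse covariance matrix S^{-1}.
import math

def mahalanobis_2d(x, mu, s_inv):
    d = [x[0] - mu[0], x[1] - mu[1]]
    # quadratic form d^T S^{-1} d
    q = (d[0] * (s_inv[0][0] * d[0] + s_inv[0][1] * d[1]) +
         d[1] * (s_inv[1][0] * d[0] + s_inv[1][1] * d[1]))
    return math.sqrt(q)

# with an identity covariance it reduces to plain Euclidean distance
print(mahalanobis_2d([3.0, 4.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))  # 5.0
```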

Page 718 of Psychological Abstracts Vol. 89, Issue Subject Index [page]

Psychological Abstracts  
noise; absolute error; Epsilon insensitive, 34779 histamine; learning; memory; L-histidine; R-alpha-methylhistamine; spatial reference memory; rats, 13741 human contingency learning; Pavlovian conditioning ... pain; pain coping; absenteeism; social insurance personnel; disability pension; health care consumption, 8860 Hebbian learning; data set; principal component analysis; artificial neural networks; Gaussian ...

Differential privacy for learning vector quantization

Johannes Brinkrolf, Christina Göpfert, Barbara Hammer
2019 Neurocomputing  
Abstract Prototype-based machine learning methods such as learning vector quantization (LVQ) offer flexible classification tools, which represent a classification in terms of typical prototypes. ... Standard LVQ (referred to as LVQ1) relies on the heuristics of Hebbian learning: given a training point (x_i, y_i), the winner, i.e. the closest prototype w_J(x_i), is determined and adapted by the rule ... Leakage due to insensitivity of GLVQ output The fact that the vanilla GLVQ algorithm leaks information is not necessarily surprising, since a useful algorithm always exhibits some degree of sensitivity ...
doi:10.1016/j.neucom.2018.11.095 fatcat:vdu4yg3zincbfpbvac6snkqksq
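The snippet above states the LVQ1 heuristic: find the winner prototype for a training point and adapt it. A minimal sketch of that rule (the learning rate `eta` and the toy prototypes are illustrative assumptions, not values from the paper):

```python
# LVQ1 winner-take-all update: find the closest prototype to x,
# attract it toward x if its label matches y, repel it otherwise.
def lvq1_step(prototypes, labels, x, y, eta=0.2):
    # squared Euclidean distance to each prototype
    d2 = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in prototypes]
    j = d2.index(min(d2))                    # winner index J(x)
    s = 1.0 if labels[j] == y else -1.0      # attract / repel
    prototypes[j] = [pi + s * eta * (xi - pi)
                     for pi, xi in zip(prototypes[j], x)]
    return j

protos = [[0.0, 0.0], [1.0, 1.0]]
labs = ["a", "b"]
j = lvq1_step(protos, labs, x=[0.2, 0.0], y="a")
```

The same Hebbian "move the winner" heuristic underlies the GLVQ variant discussed in the snippet, where the update instead follows a cost-function gradient.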

Beyond FCM: Graph-theoretic post-processing algorithms for learning and representing the data structure

Nikolaos A. Laskaris, Stefanos P. Zafeiriou
2008 Pattern Recognition  
Standard graph-theoretic procedures and recent algorithms from manifold learning theory are subsequently applied to this graph. ... Recently, there has been a renewed interest in dimensionality reduction techniques that evolved into a distinct branch of data analysis, namely the manifold learning theory [2][3][4][5]. ... With the third example, we meant to demonstrate the insensitivity to noise. ...
doi:10.1016/j.patcog.2008.02.005 fatcat:gr6nvqhp6bahvgb5662jb22kwe

Information Theoretic Learning [chapter]

Deniz Erdogmus, Jose C. Principe
Encyclopedia of Artificial Intelligence  
INTRODUCTION Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. ... Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? ... for robustness to outliers and faster convergence, such as different Lp-norm induced error measures (Sayed, 2005), the epsilon-insensitive error measure (Scholkopf & Smola, 2001), Huber's robust ...
doi:10.4018/978-1-59904-849-9.ch133 fatcat:jocgphp2nfeo5azw733sparwlu
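The epsilon-insensitive error measure cited in the snippet above (Scholkopf & Smola) assigns zero cost to errors smaller than ε and linear cost beyond it, which is what makes SVR-style costs robust to small noise. A one-line illustrative sketch (the ε value is an assumption):

```python
# epsilon-insensitive loss: L(e) = max(0, |e| - eps).
# Errors inside the "tube" of width eps cost nothing; larger errors
# grow only linearly, limiting the influence of outliers.
def eps_insensitive_loss(e, eps=0.1):
    return max(0.0, abs(e) - eps)

print(eps_insensitive_loss(0.05))   # inside the tube -> 0.0
print(eps_insensitive_loss(-0.3))   # |e| - eps, approximately 0.2
```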

A comprehensive survey on machine learning for networking: evolution, applications and research opportunities

Raouf Boutaba, Mohammad A. Salahuddin, Noura Limam, Sara Ayoubi, Nashid Shahriar, Felipe Estrada-Solano, Oscar M. Caicedo
2018 Journal of Internet Services and Applications  
Machine Learning (ML) has been enjoying an unprecedented surge in applications that solve problems and enable automation in diverse domains.  ...  In this way, readers will benefit from a comprehensive discussion on the different learning paradigms and ML techniques applied to fundamental problems in networking, including traffic prediction, routing  ...  Like the neuron unit, Hebbian learning greatly influenced the progress of NN.  ... 
doi:10.1186/s13174-018-0087-2 fatcat:jvwpewceevev3n4keoswqlcacu

Abstracts of the 15th Annual Meeting of the Israel Society for Neuroscience (Eilat, Israel, December 3–5, 2006)

2007 Neural Plasticity  
The PTP epsilon dependent increase in Kv1.2 current was not observed when a substrate-trapping mutant of PTP epsilon was used. ... is dominated by an identical Hebbian STDP rule with axo-somatic Na spikes. ... As previously shown, rats require 6-8 consecutive training days to learn to distinguish between a pair of odors, but learning a second pair of odors requires only 1-2 training days (rule learning). ...
doi:10.1155/2007/73079 fatcat:6ttewkzqh5cmzb433ffyv3t5xi

28th Annual Computational Neuroscience Meeting: CNS*2019

2019 BMC Neuroscience  
Hebbian learning) are not considered. ... Connections between LGN and simple cells are learned based on Hebbian and anti-Hebbian plasticity, similar to that in our previous work [3]. ... Classically, plasticity is thought to be Hebbian, i.e., a local phenomenon in which changes in synaptic weights depend solely on pre- and post-synaptic quantities [2]. ...
doi:10.1186/s12868-019-0538-0 fatcat:3pt5qvsh45awzbpwhqwbzrg4su
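The locality described in the abstract above is the defining property of a Hebbian rule: the weight change at a synapse uses only the pre-synaptic activity x and post-synaptic activity y at that synapse. A minimal sketch, with the sign flip giving the anti-Hebbian variant also mentioned (the rate `eta` is an illustrative assumption):

```python
# Hebbian plasticity: dw = eta * x * y, a purely local update that
# depends only on pre- (x) and post- (y) synaptic quantities.
# anti=True flips the sign, giving the anti-Hebbian variant.
def hebbian_dw(x, y, eta=0.01, anti=False):
    dw = eta * x * y
    return -dw if anti else dw
```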

CORPORATE CONTRIBUTORS

1988 The Hastings center report  
Thus, whereas conventional STDP is useful as a form of competitive Hebbian learning, anti-STDP may be a useful homeostatic mechanism for eliminating the location-dependence of synapses.  ...  Formally, the rule that we use to update weights is equivalent to the Contrastive Hebbian Learning (CHL) rule, except the sign of the rule depends on the phase of the inhibitory oscillation.  ...  It is during the application of the transformation that learning also occurs.  ... 
doi:10.1002/j.1552-146x.1988.tb03932.x fatcat:bkotcyah2ngbfdk4kbx3xnf5v4

2022 Roadmap on Neuromorphic Computing and Engineering [article]

Dennis V. Christensen, Regina Dittmann, Bernabé Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano (+47 others)
2022 arXiv   pre-print
will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn ... This new class of extremely low-power and low-latency artificial intelligence systems could ... In a world where power-hungry deep learning techniques are becoming a commodity and, at the same time, environmental ... To achieve unsupervised learning, there have been multiple efforts implementing biologically plausible Spike Timing Dependent Plasticity (STDP) variants and Hebbian learning using neuromorphic processors ...
arXiv:2105.05956v3 fatcat:pqir5infojfpvdzdwgmwdhsdi4

Applications and Techniques for Fast Machine Learning in Science

Allison McCarn Deiana, Nhan Tran, Joshua Agar, Michaela Blott, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Scott Hauck, Mia Liu, Mark S. Neubauer, Jennifer Ngadiuba, Seda Ogrenci-Memik (+35 others)
2022 Frontiers in Big Data  
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science—the concept of integrating powerful ML methods into the real-time experimental data processing  ...  STDP is a time-dependent specialization of Hebbian learning.  ...  Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. arXiv preprint arXiv:2102.00554. Holdom, B. (1986). Two U(1)'s and epsilon charge shifts.  ... 
doi:10.3389/fdata.2022.787421 pmid:35496379 pmcid:PMC9041419 fatcat:5w2exf7vvrfvnhln7nj5uppjga
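The snippet above calls STDP a time-dependent specialization of Hebbian learning: the sign and size of the weight change depend on the relative spike timing dt = t_post - t_pre. A minimal sketch of the classical exponential STDP window (the amplitudes `a_plus`, `a_minus` and time constant `tau` are illustrative assumptions):

```python
# STDP weight change as a function of spike timing (ms):
# pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
# depresses, each decaying exponentially with |dt|.
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau)     # LTP branch
    if dt < 0:
        return -a_minus * math.exp(dt / tau)    # LTD branch
    return 0.0

dw = stdp_dw(10.0)   # pre fires 10 ms before post -> positive change
```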

Intra- and Inter-Item Associations Doubly Dissociate the Electrophysiological Correlates of Familiarity and Recollection

Theodor Jäger, Axel Mecklinger, Kerstin H. Kipp
2006 Neuron  
The sharpening process proposed to result from competitive self-organization (arising from Hebbian learning and inhibitory competition) was arguably enhanced in the intra- relative to the inter-item condition ... The Greenhouse-Geisser correction for nonsphericity was used whenever appropriate, and epsilon-corrected p values are reported together with uncorrected degrees of freedom. ...
doi:10.1016/j.neuron.2006.09.013 pmid:17088218 fatcat:hsga4hkxljbplbgseu56vut3wa
Showing results 1-15 out of 24 results