358,975 Hits in 6.3 sec

Machine learning, waveform preprocessing and feature extraction methods for classification of acoustic startle waveforms

Timothy J. Fawcett, Chad S. Cooper, Ryan J. Longenecker, Joseph P. Walton
2020 MethodsX  
Machine learning models utilizing methods from different families of algorithms were individually trained and then ensembled together, resulting in an extremely robust model.  ...  • ASR waveforms were normalized using the mean and standard deviation computed before the startle elicitor was presented • 9 machine learning algorithms from 4 different families of algorithms were individually  ...  The authors would like to acknowledge the use of the services provided by Research Computing at the University of South Florida.  ... 
doi:10.1016/j.mex.2020.101166 pmid:33354518 pmcid:PMC7744771 fatcat:nyco4zhidbhktp557mnzvhqqsy
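The normalization step described in this entry — z-scoring each acoustic startle reflex (ASR) waveform against the segment recorded before the startle elicitor — can be sketched as follows. This is a minimal illustration in Python, not the authors' code; the baseline window length and the toy trace are assumptions.

```python
# Minimal sketch (assumed details, not the authors' code): normalize an ASR waveform
# using the mean and standard deviation of the pre-elicitor baseline segment.
import numpy as np

def normalize_asr(waveform: np.ndarray, baseline_samples: int) -> np.ndarray:
    baseline = waveform[:baseline_samples]        # samples recorded before the elicitor
    mu, sigma = baseline.mean(), baseline.std()
    return (waveform - mu) / sigma                # zero-mean, unit-variance w.r.t. baseline

# Example: a fake 1 kHz recording with a simulated startle response after 200 samples
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 1000)
trace[200:260] += 5.0
print(normalize_asr(trace, baseline_samples=200)[:5])
```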

Uniform Learning in a Deep Neural Network via "Oddball" Stochastic Gradient Descent [article]

Andrew J.R. Simpson
2015 arXiv   pre-print
Or, to restate, it is assumed that the training error will be uniformly distributed across the training examples. Based on these assumptions, each training example is used an equal number of times.  ...  However, this assumption may not be valid in many cases.  ...  In order to robustly enforce uniformity of learning via oddball SGD [1] , we raise the error magnitudes (across the training set) to a large power prior to normalised application as selection probability  ... 
arXiv:1510.02442v1 fatcat:6szvh2jpirbldpnsxew4ybcuda
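A minimal sketch of the selection rule the snippet describes: error magnitudes across the training set are raised to a large power and normalised into sampling probabilities, so high-error ("oddball") examples are drawn more often. The exponent and the toy error values are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of oddball-style example selection: error magnitudes raised to a
# large power, then normalised into sampling probabilities. Exponent k is assumed.
import numpy as np

def oddball_probs(errors: np.ndarray, k: float = 8.0) -> np.ndarray:
    weighted = np.abs(errors) ** k
    return weighted / weighted.sum()

errors = np.array([0.10, 0.12, 0.11, 0.90])    # one "oddball" example with large error
p = oddball_probs(errors)
rng = np.random.default_rng(0)
batch_idx = rng.choice(len(errors), size=4, p=p, replace=True)
print(p.round(4), batch_idx)                   # the oddball dominates the sampled batch
```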

Simplified learning in complex situations: Knowledge partitioning in function learning

Stephan Lewandowsky, Michael Kalish, S. K. Ngang
2002 Journal of Experimental Psychology: General  
In 4 experiments, using a function learning paradigm, a binary context variable was paired with the continuous stimulus variable of a to-be-learned function.  ...  Because context did not predict function values, it is suggested that people use context to gate separate learning of simpler partial functions.  ...  Empirically, we showed that (a) in a function learning task, people create partitioned knowledge whenever possible, unless the to-be-learned function is of the simplest possible form.  ... 
doi:10.1037//0096-3445.131.2.163 pmid:12049238 fatcat:6akdaxj6svcgnddbp6xp2g4pqm

Towards Geo-Distributed Machine Learning [article]

Ignacio Cano, Markus Weimer, Dhruv Mahajan, Carlo Curino, Giovanni Matteo Fumarola
2016 arXiv   pre-print
On the other hand, many machine learning applications require a global view of such data in order to achieve the best results.  ...  These types of applications form a new class of learning problems, which we call Geo-Distributed Machine Learning (GDML).  ...  In Figure 4b , the efficient distributed approach (distributed-fadl ) performs at least 1 order of magnitude better than centralized in every scenario, achieving the biggest difference (2 orders of magnitude  ... 
arXiv:1603.09035v1 fatcat:skmpf6odwzemdick6oncj6wnoe

Graph Frequency Analysis of Brain Signals

Weiyu Huang, Leah Goldsberry, Nicholas F. Wymbs, Scott T. Grafton, Danielle S. Bassett, Alejandro Ribeiro
2016 IEEE Journal on Selected Topics in Signal Processing  
brain graph frequencies associated with different levels of spatial smoothness across the brain regions.  ...  We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning.  ...  the learning rates across subjects.  ... 
doi:10.1109/jstsp.2016.2600859 pmid:28439325 pmcid:PMC5400112 fatcat:beibmmqjlfbutkflfpe5bjcb74
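A minimal sketch of the graph frequency decomposition the snippet refers to: the eigenvectors of the graph Laplacian, ordered by eigenvalue, give components of increasing graph frequency (decreasing spatial smoothness), and projecting a brain signal onto them is the graph Fourier transform. The adjacency matrix and signal below are random stand-ins, not data from the paper.

```python
# Minimal sketch of graph frequency analysis on a toy "brain graph" (assumed data).
import numpy as np

rng = np.random.default_rng(0)
n = 6                                            # number of brain regions (toy size)
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A                   # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)             # ascending graph frequencies

x = rng.normal(size=n)                           # signal defined on the graph nodes
x_hat = eigvecs.T @ x                            # graph Fourier transform
low_pass = eigvecs[:, :2] @ x_hat[:2]            # keep the two smoothest components
print(eigvals.round(2), low_pass.round(2))
```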

Bootstrapped Adaptive Threshold Selection for Statistical Model Selection and Estimation [article]

Kristofer E. Bouchard
2015 arXiv   pre-print
However, by imposing priors, structured regularizers can make it difficult to interpret learned model parameters.  ...  A central goal of neuroscience is to understand how activity in the nervous system is related to features of the external world, or to features of the nervous system itself.  ...  For optimizing regularization parameters for the structured regularizers, first a broad sweep across several orders of magnitude was performed.  ... 
arXiv:1505.03511v1 fatcat:zrapa2hzszeh3izl4b4dqzkkoy
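The "broad sweep across several orders of magnitude" mentioned in the snippet is the standard logarithmic grid search over regularization strengths. A minimal sketch using a Lasso regularizer and synthetic sparse data, both assumptions rather than the authors' setup:

```python
# Minimal sketch: cross-validated sweep of a regularization parameter over a log grid.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]       # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=100)

alphas = np.logspace(-4, 2, 13)                             # 6 orders of magnitude
scores = [cross_val_score(Lasso(alpha=a, max_iter=10000), X, y, cv=5).mean()
          for a in alphas]
print(f"best alpha: {alphas[int(np.argmax(scores))]:.4g}")
```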

How fine can fine-tuning be? Learning efficient language models [article]

Evani Radiya-Dixit, Xin Wang
2020 arXiv   pre-print
Given a language model pre-trained on massive unlabeled text corpora, only very light supervised fine-tuning is needed to learn a task: the number of fine-tuning steps is typically five orders of magnitude  ...  Further, we find that there are surprisingly many good solutions in the set of sparsified versions of the pre-trained model.  ...  Appendix D Correlation of parameter distance with fine-tuning steps In order to understand how distance in parameter space increases as a function of fine-tuning steps, we study this relationship across  ... 
arXiv:2004.14129v1 fatcat:qdp5iu4nbraghgpiu5yclbcici
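A minimal sketch of the measurement discussed in the appendix excerpt: track the L2 distance between the current parameters and the pre-trained starting point as a function of fine-tuning steps. The tiny parameter vector and random updates below are stand-ins for an actual pre-trained language model and task gradient.

```python
# Minimal sketch (assumed toy setup): parameter-space distance vs. fine-tuning steps.
import numpy as np

rng = np.random.default_rng(0)
theta0 = rng.normal(size=1000)                  # "pre-trained" parameters (toy)
theta = theta0.copy()
lr = 1e-3
for step in range(1, 501):
    theta -= lr * rng.normal(size=theta.size)   # stand-in for a fine-tuning update
    if step % 100 == 0:
        print(step, np.linalg.norm(theta - theta0))
```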

On the Adequacy of Untuned Warmup for Adaptive Optimization

Jerry Ma, Denis Yarats
2021 Proceedings of the AAAI Conference on Artificial Intelligence  
In this work, we refute this analysis and provide an alternative explanation for the necessity of warmup based on the magnitude of the update term, which is of greater relevance to training stability.  ...  Adaptive optimization algorithms such as Adam (Kingma and Ba, 2014) are widely used in deep learning. The stability of such algorithms is often improved with a warmup schedule for the learning rate.  ...  Linear warmup of Adam's learning rate over 2·(1−β₂)⁻¹ iterations is functionally equivalent to RAdam across a wide range of settings.  ... 
doi:10.1609/aaai.v35i10.17069 fatcat:uztrbydudbhzbmxvltsudvpcce
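The warmup rule quoted in the snippet can be written directly as a schedule: ramp the learning rate linearly to its target value over 2·(1−β₂)⁻¹ iterations. The base rate and β₂ below are common defaults, assumed for illustration.

```python
# Minimal sketch of the "untuned" linear warmup rule: lr ramps linearly over
# 2 * (1 - beta2)^-1 steps (= 2000 for beta2 = 0.999). Base lr and beta2 are assumed.
def untuned_linear_warmup(step: int, base_lr: float = 1e-3, beta2: float = 0.999) -> float:
    warmup_steps = 2.0 / (1.0 - beta2)
    return base_lr * min(1.0, (step + 1) / warmup_steps)

for step in (0, 499, 1999, 5000):
    print(step, untuned_linear_warmup(step))
```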

Population of Linear Experts: Knowledge Partitioning and Function Learning

Michael L. Kalish, Stephan Lewandowsky, John K. Kruschke
2004 Psychological review  
This article presents a theory of function learning (the population of linear experts model-POLE) that assumes people partition their knowledge whenever they are presented with a complex task.  ...  The authors show that POLE is a general model of function learning that accommodates both benchmark results and recent data on knowledge partitioning.  ...  order or numeric magnitude, responses in function learning are explicitly ordered along a magnitude axis.  ... 
doi:10.1037/0033-295x.111.4.1072 pmid:15482074 fatcat:tehdeyasgfeblazcpmjo66h344

Direct design of biquad filter cascades with deep learning by sampling random polynomials [article]

Joseph T. Colonel, Christian J. Steinmetz, Marcus Michelen, Joshua D. Reiss
2022 arXiv   pre-print
In this work, we address some of these limitations by learning a direct mapping from the target magnitude response to the filter coefficient space with a neural network trained on millions of random filters  ...  transfer functions and guitar cabinets as case studies.  ...  Since IIRNet is trained to estimate filters of a fixed order, we evaluated how performance changed as a function of the estimation order.  ... 
arXiv:2110.03691v2 fatcat:uzvhsyhc45gjzplstyqbi7fneq
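A minimal sketch of the target representation named in this entry: the magnitude response of a cascade of biquad (second-order IIR) sections, the quantity a network such as IIRNet would map back to filter coefficients. The two-section coefficients below are arbitrary illustrations, not filters from the paper; scipy is used only to evaluate the response.

```python
# Minimal sketch: evaluate the magnitude response of a biquad (second-order section)
# cascade. The coefficients are an arbitrary stable two-section example.
import numpy as np
from scipy.signal import sosfreqz

sos = np.array([
    [1.0,  0.2, 0.1, 1.0, -0.5, 0.25],   # b0 b1 b2 a0 a1 a2 (section 1)
    [1.0, -0.3, 0.2, 1.0,  0.1, 0.05],   # section 2
])
w, h = sosfreqz(sos, worN=512)            # cascade response = product of section responses
mag_db = 20 * np.log10(np.abs(h) + 1e-12) # target magnitude response in dB
print(mag_db[:5].round(2))
```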

The origins and structure of quantitative concepts

Cory D. Bonn, Jessica F. Cantlon
2012 Cognitive Neuropsychology  
among dimensions as a function of learning.  ...  If many magnitude representations emerged from modification of the functional substrates that code for space, we might expect functional parallels between the spatial module and its evolutionary descendants  ... 
doi:10.1080/02643294.2012.707122 pmid:22966853 pmcid:PMC3894054 fatcat:gllswof67va47kfbsd4yxuf5nq

How Data Drive Early Word Learning: A Cross-Linguistic Waiting Time Analysis

Francis Mollica, Steven T. Piantadosi
2017 Open Mind  
With high statistical certainty, words require on the order of ∼ 10 learning instances, which occur on average once every two months.  ...  information across multiple situations.  ...  ACKNOWLEDGMENTS The authors thank Dick Aslin, Elika Bergelson, Celeste Kidd, and anonymous reviewers for comments on early drafts of this article.  ... 
doi:10.1162/opmi_a_00006 fatcat:4muqqke2nrcu5fosn3adeyutoi

The Human Motor System Supports Sequence-Specific Representations over Multiple Training-Dependent Timescales

Nicholas F. Wymbs, Scott T. Grafton
2014 Cerebral Cortex  
Importantly, many motor areas show changes involving more than 1 of these 3 timescales, underscoring the capacity of the motor system to flexibly represent a sequence based on the amount of prior experience  ...  Motor sequence learning is associated with increasing and decreasing motor system activity.  ...  differences of learning rates.  ... 
doi:10.1093/cercor/bhu144 pmid:24969473 pmcid:PMC4747644 fatcat:unhv45dz3rh4hevykvbjtqjnja

A Hierarchical Bayesian Model for Learning Nonlinear Statistical Regularities in Nonstationary Natural Signals

Yan Karklin, Michael S. Lewicki
2005 Neural Computation  
The model is a generalization of ICA in which the basis function coefficients are no longer assumed to be independent; instead, the dependencies in their magnitudes are captured by a set of density components  ...  This work was supported by a Dept. of Energy Computational Science Graduate Fellowship to YK and National Science Foundation grant no. 0238351 to MSL.  ... 
doi:10.1162/0899766053011474 pmid:15720773 fatcat:5ki4q6wr7zecnfzrxqyc22bgju
Showing results 1 — 15 out of 358,975 results