9,742 Hits in 6.0 sec

Paradox in Deep Neural Networks: Similar yet Different while Different yet Similar [article]

Arash Akbarinia, Karl R. Gegenfurtner
2019 arXiv   pre-print
Pairs of networks whose kernels' weights are over 99.9% correlated can exhibit significantly different performances, yet other pairs with no correlation can reach quite comparable levels of performance  ...  Machine learning is advancing towards a data-science approach, implying the need for a line of investigation to divulge the knowledge learnt by deep neural networks.  ... 
arXiv:1903.04772v1 fatcat:or3sqdv4fvafhjbyz5zb3jpp3i
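A minimal sketch (not from the paper) of the kind of kernel-weight comparison the entry above describes: flatten the convolutional kernels of two trained networks of the same architecture and compute the Pearson correlation between them. The two torchvision models here are placeholders for whichever pair of independently trained networks is being compared.

```python
# Hypothetical sketch: Pearson correlation between the flattened conv-kernel
# weights of two same-architecture networks (the paper's exact networks and
# training setup are not reproduced here).
import torch
import torchvision.models as models

net_a = models.resnet18(weights=None)  # stand-ins for two independently trained nets
net_b = models.resnet18(weights=None)

def flat_conv_weights(net):
    """Concatenate all convolutional kernel weights into one flat vector."""
    chunks = [m.weight.detach().flatten()
              for m in net.modules() if isinstance(m, torch.nn.Conv2d)]
    return torch.cat(chunks)

wa, wb = flat_conv_weights(net_a), flat_conv_weights(net_b)
corr = torch.corrcoef(torch.stack([wa, wb]))[0, 1]
print(f"kernel-weight correlation: {corr:.4f}")
```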

Dropout in Neural Networks Simulates the Paradoxical Effects of Deep Brain Stimulation on Memory [article]

Shawn Zheng Kai Tan, Richard Du, Jose Angelo Udal Perucho, Shauhrat S Chopra, Varut Vardhanabhuti, Lee Wei Lim
2020 bioRxiv   pre-print
We used a convolutional neural network to classify handwritten digits and letters, applying dropout at different stages to simulate DBS effects on engrams.  ...  We further showed that transfer learning of neural networks with dropout had increased accuracy and rate of learning.  ...  In this paper, we propose that DBS causes dropout in neural nodes that "forces" the activation of new pathways and creates more robust networks, similar to how dropout enhanced the neural networks.  ... 
doi:10.1101/2020.05.01.073486 fatcat:pskcr2ceoveebiqlkjcofwlzza
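A hedged sketch of the general setup described in the entry above: a small convolutional classifier for handwritten digits and letters with dropout applied at two different stages. The layer sizes, dropout rates, and 36-class output (10 digits plus 26 letters) are illustrative assumptions, not the authors' configuration or their DBS-simulation protocol.

```python
# Illustrative CNN with dropout at two different stages (assumed sizes/rates).
import torch
import torch.nn as nn

class DropoutCNN(nn.Module):
    def __init__(self, n_classes=36, p_early=0.25, p_late=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(p_early),              # "early" dropout stage
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Dropout(p_late),                 # "late" dropout stage
            nn.Linear(128, n_classes),          # digits + letters
        )

    def forward(self, x):                       # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x))

logits = DropoutCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 36])
```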

Dropout in Neural Networks Simulates the Paradoxical Effects of Deep Brain Stimulation on Memory

Shawn Zheng Kai Tan, Richard Du, Jose Angelo Udal Perucho, Shauhrat S. Chopra, Varut Vardhanabhuti, Lee Wei Lim
2020 Frontiers in Aging Neuroscience  
We used a convolutional neural network (CNN) to classify handwritten digits and letters and applied dropout at different stages to simulate DBS effects on engrams.  ...  We further showed that transfer learning of neural networks with dropout had increased the accuracy and rate of learning.  ...  In this article, we propose that DBS causes dropout in neural nodes that ''forces'' the activation of new pathways and creates more robust networks, similar to how dropout enhanced the neural networks.  ... 
doi:10.3389/fnagi.2020.00273 pmid:33093830 pmcid:PMC7521073 fatcat:iumyssczzndxdozr64gs566liy

Artificial Neural Network [chapter]

2017 Encyclopedia of GIS  
The problem is that there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer.  ...  [13] "The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe": Nobody understands why deep neural networks are so good at solving complex problems.  ... 
doi:10.1007/978-3-319-17885-1_100075 fatcat:eiv3jpa6kvdfjp3sto3t7mw2gy

Doing the impossible: Why neural networks can be trained at all [article]

Nathan O. Hodas, Panos Stinis
2018 arXiv   pre-print
Similar questions arise in protein folding, spin glasses and biological neural networks.  ...  As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them.  ...  The work of NOH was supported by PNNL's LDRD Analysis in Motion Initiative and Deep Learning for Scientific Discovery Initiative.  ... 
arXiv:1805.04928v2 fatcat:6uyd3t7khfalfpjk6yfzy5ek2m

EikoNet: Solving the Eikonal equation with Deep Neural Networks [article]

Jonathan D. Smith, Kamyar Azizzadenesheli, Zachary E. Ross
2020 arXiv   pre-print
In doing so, the method exploits the differentiability of neural networks to calculate the spatial gradients analytically, meaning the network can be trained on its own without ever needing solutions from a finite difference algorithm.  ...  We would like to thank Jack Muir for interesting discussions about finite-difference methods and limitations.  ... 
arXiv:2004.00361v3 fatcat:v5zzexg4nncodogluvbxqsee4u
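The entry above hinges on using automatic differentiation to get spatial gradients of the network's predicted travel time. A minimal sketch of that idea follows, with an assumed toy MLP and a constant velocity field; it is not the EikoNet architecture or training scheme itself.

```python
# Sketch: use autograd to get spatial gradients of a predicted travel time T(x)
# and penalize deviation from the eikonal equation |grad T| = 1 / v(x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))           # T(x): R^3 -> travel time

x = torch.rand(128, 3, requires_grad=True)      # random collocation points
velocity = torch.full((128, 1), 2.0)            # assumed constant v(x) = 2 km/s

T = net(x)
grad_T, = torch.autograd.grad(T.sum(), x, create_graph=True)  # dT/dx analytically
eikonal_residual = grad_T.norm(dim=1, keepdim=True) - 1.0 / velocity
loss = (eikonal_residual ** 2).mean()           # drive |grad T| toward 1/v
loss.backward()                                  # gradients flow to net weights
print(float(loss))
```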

When Neural Networks Using Different Sensors Create Similar Features [article]

Hugues Moreau
2021 arXiv   pre-print
We draw from the well-developed analysis of similarity to provide an example of a problem where neural networks are trained from different sensors, and where the features extracted from these sensors still carry similar information.  ...  Similarity of different neural networks: The similarity between two neural networks is a well-studied subject. The closest publications to our work are the ones from Roeder et al. [13].  ... 
arXiv:2111.02732v1 fatcat:7d6svd5ombesbcbgmby4cijaae
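A common way to quantify the kind of feature similarity the entry above refers to is linear centered kernel alignment (CKA) between two networks' activation matrices. A small sketch is below; the random matrices stand in for activations from two sensor-specific networks, and the paper's own sensors and models are not reproduced.

```python
# Linear CKA between two feature matrices (rows: same examples, cols: units).
import numpy as np

def linear_cka(X, Y):
    """Centered kernel alignment with linear kernels (Kornblith et al., 2019)."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
feats_sensor_a = rng.normal(size=(500, 128))                  # e.g. one sensor branch
feats_sensor_b = feats_sensor_a @ rng.normal(size=(128, 64))  # correlated features
print(f"CKA: {linear_cka(feats_sensor_a, feats_sensor_b):.3f}")
```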

Inhibition stabilization is a widespread property of cortical networks

Alessandro Sanzeni, Bradley Akitake, Hannah C Goldbach, Caitlin E Leedy, Nicolas Brunel, Mark H Histed
2020 eLife  
Yet it has been experimentally unclear whether inhibition-stabilized network (ISN) models describe cortical function well across areas and states.  ...  Here, we test several ISN predictions, including the counterintuitive (paradoxical) suppression of inhibitory firing in response to optogenetic inhibitory stimulation.  ...  Thus, the deep-layer data shows no evidence for a different pattern of responses than seen in the upper layers – deep-layer inhibitory cells show paradoxical suppression to excitatory stimulation.  ... 
doi:10.7554/elife.54875 pmid:32598278 fatcat:qvrj66git5h5ninhsdwen4ydhy
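The "paradoxical" effect tested in the entry above can be illustrated with a textbook two-population linear rate model: when recurrent excitation is strong enough (the inhibition-stabilized regime), adding drive to the inhibitory population lowers its steady-state rate. A minimal numerical sketch with assumed coupling weights, not parameters fitted to the paper's data:

```python
# Two-population (E, I) linear rate model: r = W r + h at steady state,
# i.e. (I - W) r = h. With strong E->E coupling (W_EE > 1), extra input to the
# inhibitory population *reduces* its steady-state rate (paradoxical suppression).
import numpy as np

W = np.array([[2.0, -1.0],    # E receives +2.0*E, -1.0*I
              [2.0, -0.5]])   # I receives +2.0*E, -0.5*I

def steady_state(h):
    return np.linalg.solve(np.eye(2) - W, h)

baseline = steady_state(np.array([1.0, 1.0]))
stim_inh = steady_state(np.array([1.0, 1.2]))   # extra drive to inhibition
print("baseline E, I rates:", baseline)          # ~[1.0, 2.0]
print("I-stim   E, I rates:", stim_inh)          # I rate drops to ~1.6
```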

Dream Formulations and Deep Neural Networks: Humanistic Themes in the Iconology of the Machine-Learned Image [article]

Emily L. Spratt
2018 arXiv   pre-print
This paper addresses the interpretability of deep learning-enabled image recognition processes in computer vision science in relation to theories in art history and cognitive psychology on the vision-related  ...  analysis and psychologist Eleanor Rosch's theory of graded categorization according to prototypes, finds that there are surprising similarities between the two that suggest that researchers in the arts  ...  Gatys and others reach an astonishingly similar conclusion in their research on deep neural networks and the ability of algorithms to replicate established artistic styles.  ... 
arXiv:1802.01274v1 fatcat:ndfkatx74zeqnbfzklwzy2k4my

Deep Convolutional Neural Networks in the Face of Caricature: Identity and Image Revealed [article]

Matthew Q. Hill, Y. Ivette Colon, Alice J. O'Toole (University of Maryland)
2018 arXiv   pre-print
Deep convolutional neural networks (DCNNs) also create generalizable face representations, but with cascades of simulated neurons.  ...  Deep networks produce face representations that solve long-standing computational problems in generalized face recognition.  ...  generalize recognition even further.  ... 
arXiv:1812.10902v1 fatcat:eopdln3wufdpnnraw7oye3fwya

Deep Neural Network for Musical Instrument Recognition using MFCCs [article]

Saranga Kingkor Mahanta, Abdullah Faiz Ur Rahman Khilji, Partha Pakray
2021 arXiv   pre-print
In this paper, we use an artificial neural network (ANN) model that was trained to perform classification on twenty different classes of musical instruments.  ...  The task of efficient automatic music classification is of vital importance and forms the basis for various advanced applications of AI in the musical domain.  ...  This paper proposes a deep artificial neural network model that efficiently distinguishes and recognizes 20 different classes of musical instruments, even across instruments belonging to the same family  ... 
arXiv:2105.00933v2 fatcat:yerdreqabzhlppo4xazaz2ir7q
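A hedged sketch of the pipeline the entry above describes: extract MFCCs from an audio clip and feed a summary of them to a small dense network with 20 outputs. The file name, MFCC count, pooling choice, and layer sizes are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: MFCC features -> small dense classifier for 20 instrument classes.
# Assumes librosa is installed; "clip.wav" is a hypothetical input file.
import librosa
import torch
import torch.nn as nn

y, sr = librosa.load("clip.wav", sr=22050)               # load mono audio
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)       # shape (40, n_frames)
features = torch.tensor(mfcc.mean(axis=1), dtype=torch.float32)  # mean over time

model = nn.Sequential(
    nn.Linear(40, 256), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 20),                                   # 20 instrument classes
)
logits = model(features.unsqueeze(0))
print("predicted class index:", int(logits.argmax(dim=1)))
```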

Doing the Impossible: Why Neural Networks Can Be Trained at All

Nathan O. Hodas, Panos Stinis
2018 Frontiers in Psychology  
Similar questions arise in protein folding, spin glasses and biological neural networks.  ...  As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them.  ...  While the existence of a lot of (mostly) equivalent local minima explains the common behavior of deep neural network training observed by different researchers, we want to study in more detail the approach  ... 
doi:10.3389/fpsyg.2018.01185 pmid:30050485 pmcid:PMC6052125 fatcat:wd7pyioa2jf5vcivh6cp4opnzy

Any-Width Networks

Thanh Vu, Marc Eder, True Price, Jan-Michael Frahm
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Despite remarkable improvements in speed and accuracy, convolutional neural networks (CNNs) still typically operate as monolithic entities at inference time.  ...  We empirically demonstrate that our proposed AWNs compare favorably to existing methods while providing maximally granular control during inference.  ...  The datasets differ in that the former has 10 classes, while the latter has 100.  ... 
doi:10.1109/cvprw50498.2020.00360 dblp:conf/cvpr/VuEPF20 fatcat:2ndegu5q2vb2lppigjpeqlrcj4
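A rough sketch of the adjustable-width idea behind the entry above: a convolution whose effective number of output channels can be chosen at inference time by slicing its weight tensor. This is a generic illustration of width-adjustable convolutions, not the AWN authors' specific construction.

```python
# Generic adjustable-width convolution: at inference, use only the first
# `width` output channels by slicing the full weight tensor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceableConv2d(nn.Conv2d):
    def forward(self, x, width=None):
        out_ch = width or self.out_channels
        weight = self.weight[:out_ch]                       # (out_ch, in, k, k)
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)

conv = SliceableConv2d(3, 64, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)            # full width:    (1, 64, 32, 32)
print(conv(x, width=16).shape)  # reduced width: (1, 16, 32, 32)
```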

A Closer Look at Memorization in Deep Networks [article]

Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
2017 arXiv   pre-print
In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data.  ...  While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first.  ...  Yet deep neural networks (DNNs) often achieve excellent generalization performance with massively over-parameterized models. This phenomenon is not well-understood.  ... 
arXiv:1706.05394v2 fatcat:ltnngvtq2renfcdk76yradnfha
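A small sketch of the kind of comparison described in the entry above: train the same model once on structured labels and once on randomly shuffled ("noise") labels, and compare how readily the training loss falls. The synthetic dataset and tiny MLP are stand-ins for the paper's experiments.

```python
# Compare fitting on real vs. randomly shuffled ("noise") labels.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1024, 20)
y_real = (X[:, :10].sum(dim=1) > 0).long()       # labels with real structure
y_noise = y_real[torch.randperm(len(y_real))]    # same labels, shuffled

def train(labels, epochs=50):
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(X), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

print("final loss, real labels :", train(y_real))   # structured data fits readily
print("final loss, noise labels:", train(y_noise))  # memorizing noise is harder
```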

Any-Width Networks [article]

Thanh Vu, Marc Eder, True Price, Jan-Michael Frahm
2020 arXiv   pre-print
Despite remarkable improvements in speed and accuracy, convolutional neural networks (CNNs) still typically operate as monolithic entities at inference time.  ...  We empirically demonstrate that our proposed AWNs compare favorably to existing methods while providing maximally granular control during inference.  ...  The datasets differ in that the former has 10 classes, while the latter has 100.  ... 
arXiv:2012.03153v1 fatcat:olonxqhbzjgzhf6nbsbt7bgrwy
Showing results 1 — 15 out of 9,742 results