
Deep Learning in Random Neural Fields: Numerical Experiments via Neural Tangent Kernel [article]

Kaito Watanabe, Kotaro Sakamoto, Ryo Karakida, Sho Sonoda, Shun-ichi Amari
2022 arXiv   pre-print
The behavior of a randomly connected network is investigated on the basis of the key idea of the neural tangent kernel regime, a recent development in the machine learning theory of over-parameterized  ...  We numerically show that this claim also holds for our neural fields.  ...  Random neural fields in NTK regime: in this section, we show by numerical experiments that neural fields follow the NTK regime (Fig. 2).  ...
arXiv:2202.05254v1 fatcat:asmt4qtl4ncblnht4ud5anmkmy
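
Note: several entries on this page invoke the neural tangent kernel (NTK) regime. As a minimal orientation sketch (the standard definition from Jacot et al., 2018, not a formula taken from any single entry here), the NTK of a network f_\theta is

    \Theta(x, x') \;=\; \nabla_\theta f_\theta(x)^\top \, \nabla_\theta f_\theta(x'),

evaluated at initialization; in the infinite-width limit \Theta remains essentially constant during training, so gradient-descent dynamics reduce to kernel regression with \Theta.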

Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes [article]

Noam Aigerman, Kunal Gupta, Vladimir G. Kim, Siddhartha Chaudhuri, Jun Saito, Thibault Groueix
2022 arXiv   pre-print
The field of matrices is then projected onto the tangent bundle of the given mesh, and used as candidate Jacobians for the predicted map.  ...  This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network, enabling training and evaluation over heterogeneous collections of meshes  ...  In order to achieve accurate, high-quality results, it stands to reason to harness deep neural networks, which have proven immensely effective for complex regression tasks, and to learn mappings of 3D  ...
arXiv:2205.02904v1 fatcat:igrv2g723vfn5f3zcbdjnskgnm

Learning Curves for Deep Neural Networks: A Gaussian Field Theory Perspective [article]

Omry Cohen, Or Malka, Zohar Ringel
2020 arXiv   pre-print
In the past decade, deep neural networks (DNNs) came to the fore as the leading machine learning algorithms for a variety of tasks.  ...  Leveraging these ideas and adopting a more physics-like approach, here we construct a versatile field-theory formalism for supervised deep learning, involving renormalization group, Feynman diagrams and  ...  predictions as a noiseless GPR with a different kernel, the neural tangent kernel (NTK), along with an additional initialization dependent term.  ... 
arXiv:1906.05301v4 fatcat:u7h2qfz6g5h2lphiunthnjlg3q
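
The GPR correspondence mentioned in the snippet above has a standard closed form. As a sketch under the usual infinite-width NTK assumptions (squared loss, gradient flow trained to convergence; this is the textbook result, e.g. Lee et al., 2019, not a formula quoted from this paper), the trained network's prediction on a test point x_* given training inputs X and targets Y is

    f(x_*) \;=\; f_0(x_*) \;+\; \Theta(x_*, X)\,\Theta(X, X)^{-1}\,\bigl(Y - f_0(X)\bigr),

where f_0 is the network at initialization; the f_0 terms are precisely the "additional initialization dependent term" alongside the noiseless GPR mean \Theta(x_*, X)\,\Theta(X, X)^{-1} Y.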

Deep advantage learning for optimal dynamic treatment regime

Shuhan Liang, Wenbin Lu, Rui Song
2018 Statistical Theory and Related Fields  
Deep neural networks outperform many existing popular methods in the field of reinforcement learning. They can also identify important covariates automatically.  ...  However, little research has been done on deep advantage learning (A-learning). In this paper, we present a deep A-learning approach to estimate the optimal dynamic treatment regime.  ...  Deep learning has been widely used in many fields such as game playing (Mnih et al. [2013]), finance (Ding et al. [2015]), robotics (Lenz et al. [2015]), control and operations research (Mnih et  ...
doi:10.1080/24754269.2018.1466096 pmid:30420972 pmcid:PMC6226036 fatcat:mmfasu47qbefhacee7o7sbqfau

Universal mean field upper bound for the generalisation gap of deep neural networks [article]

S. Ariosto, R. Pacelli, F. Ginelli, M. Gherardi, P. Rotondo
2022 arXiv   pre-print
Here we employ results from replica mean field theory to compute the generalisation gap of machine learning models with quenched features, in the teacher-student scenario and for regression problems with  ...  We test our predictions on a broad range of architectures, from toy fully-connected neural networks with a few hidden layers to state-of-the-art deep convolutional neural networks.  ...  The statistical physics of kernel learning (originally started in [21]) has also undergone a revival in the last few years [22], mainly due to the discovery of the Neural Tangent Kernel (NTK) limit  ...
arXiv:2201.11022v2 fatcat:bkjoty6oqram7nhs7k62unliwy

Finite Versus Infinite Neural Networks: an Empirical Study [article]

Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein
2020 arXiv   pre-print
outperform neural tangent (NT) kernels; centered and ensembled finite networks have reduced posterior variance and behave more similarly to infinite networks; weight decay and the use of a large learning  ...  Our experiments additionally motivate an improved layer-wise scaling for weight decay which improves generalization in finite-width networks.  ...  Tangents [15], Apache Beam [68], Tensorflow datasets [134] and Google Colaboratory [135].  ...
arXiv:2007.15801v2 fatcat:6ervrlzxybgeteh4cpdytu3w2q

Geodesy of irregular small bodies via neural density fields: geodesyNets [article]

Dario Izzo, Pablo Gómez
2021 arXiv   pre-print
GeodesyNets learn a three-dimensional, differentiable function representing the body density, which we call a neural density field.  ...  When the body shape information is available, geodesyNets can seamlessly exploit it and be trained to represent a high-fidelity neural density field able to give insights into the internal structure of  ...  Dawa Derksen for the interesting discussions and exchanges on neural scene representations.  ...
arXiv:2105.13031v1 fatcat:m243bsyvefbmlac33bxnbyy4d4
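
As a rough illustration of what a "neural density field" of the kind described above could look like, the sketch below assumes a plain coordinate MLP that maps a 3-D point to a non-negative scalar density; the actual geodesyNets architecture, and the training loop that integrates this field against gravity measurements, are not reproduced here.

    # Minimal sketch of a neural density field: a coordinate MLP mapping
    # a 3-D point to a non-negative scalar density. Illustrative only;
    # not the geodesyNets architecture.
    import torch
    import torch.nn as nn

    class DensityField(nn.Module):
        def __init__(self, hidden=64, depth=4):
            super().__init__()
            layers, width = [], 3
            for _ in range(depth):
                layers += [nn.Linear(width, hidden), nn.Tanh()]
                width = hidden
            layers += [nn.Linear(width, 1), nn.Softplus()]  # keeps density >= 0
            self.net = nn.Sequential(*layers)

        def forward(self, xyz):            # xyz: (N, 3) points in a normalised cube
            return self.net(xyz)           # (N, 1) predicted density

    # hypothetical usage: rho = DensityField()(2 * torch.rand(1024, 3) - 1)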

A guide to constraining effective field theories with machine learning

Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez
2018 Physical Review D  
We develop, discuss, and compare several inference techniques to constrain theory parameters in collider experiments.  ...  This augmented data can be used to train neural networks that precisely estimate the likelihood ratio.  ...  [table excerpt: results for Baseline (α = 2, 10, 20, 50; deep) and Random θ (α = 5) configurations]  ...
doi:10.1103/physrevd.98.052004 fatcat:jwbnqyv2orgjtl47ll37jrd5s4

Mean-Field and Kinetic Descriptions of Neural Differential Equations [article]

M. Herty, T. Trimborn, G. Visconti
2021 arXiv   pre-print
Nowadays, neural networks are widely used in many applications as artificial intelligence models for learning tasks.  ...  The mean-field description is then obtained in the limit of infinitely many input data.  ...  Numerical Experiments: in this section we present two classical applications for machine learning algorithms, namely classification and regression problems.  ...
arXiv:2001.04294v4 fatcat:kkd4zfrrwndspa4wkvzwiyxi5i

On machine learning force fields for metallic nanoparticles

Claudio Zeni, Kevin Rossi, Aldo Glielmo, Francesca Baletto
2019 Advances in Physics: X  
In this review, we first formally introduce the most commonly used machine learning algorithms for force field generation, briefly outlining their structure and properties.  ...  Machine learning algorithms have recently emerged as a tool to generate force fields that display accuracies approaching those of the ab initio calculations they are trained on, but are much faster  ...  Together with big-data techniques that characterize nanoparticles' geometries more easily and more accurately during simulations [34][35][36] and experiments [37][38][39], machine learning force fields (ML-FFs  ...
doi:10.1080/23746149.2019.1654919 fatcat:5j44qgkxrbdk7ijzv5dqwktrkq

Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory [article]

Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
2020 arXiv   pre-print
not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization  ...  In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are  ...  We focus on the Neural Tangent Kernel (NTK), developed in Jacot et al. (2018).  ...
arXiv:1910.00359v3 fatcat:oas2iunoyfantiepiklcz5pude

Infinite Neural Network Quantum States [article]

Di Luo, James Halverson
2021 arXiv   pre-print
Numerical experiments on finite and infinite NNQS in the transverse field Ising model and Fermi Hubbard model demonstrate excellent agreement with theory.  ...  A general framework is developed for studying the gradient descent dynamics of neural network quantum states (NNQS), using a quantum state neural tangent kernel (QS-NTK).  ...  Numerical Experiments.  ... 
arXiv:2112.00723v1 fatcat:yhydlginr5hchicdicaljz6asq

Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes [article]

Andrea Agazzi, Jianfeng Lu
2021 arXiv   pre-print
We finally give examples of our convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.  ...  In this regime, the parameters of the model vary only slightly during the learning process, a feature that has recently been observed in the training of neural networks, where the scaling we study arises  ...  The linear stability analysis is also considered in the recent work of Achiam et al. (2019), based on the neural tangent kernel (Jacot et al., 2018), for off-policy deep Q-learning.  ...
arXiv:1905.10917v5 fatcat:cyvmc32unza2zljqm6ag3fncfm
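
For context on the "lazy" scaling referred to above: a minimal sketch of the lazy-training picture (in the style of Chizat and Bach, not the specific TD-learning scaling analysed in this paper) rescales the model output as f_\alpha(\theta) = \alpha f(\theta). For large \alpha the parameters move only O(1/\alpha) away from their initialization \theta_0, so the model is well approximated by its linearization

    f(\theta) \;\approx\; f(\theta_0) \;+\; \nabla_\theta f(\theta_0)^\top (\theta - \theta_0),

which is the same linearization that underlies the neural tangent kernel description mentioned in the snippet.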

Reverse Engineering the Neural Tangent Kernel [article]

James B. Simon, Sajant Anand, Michael R. DeWeese
2022 arXiv   pre-print
We verify our construction numerically and demonstrate its utility as a design tool for finite fully-connected networks in several experiments.  ...  The development of methods to guide the design of neural networks is an important open challenge for deep learning theory.  ...  This research was supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract W911NF-20-1-0151.  ... 
arXiv:2106.03186v4 fatcat:gqrnfhurszenphmib7oggbv2ue

Unified field theoretical approach to deep and recurrent neuronal networks [article]

Kai Segadlo, Bastian Epping, Alexander van Meegen, David Dahmen, Michael Krämer, Moritz Helias
2022 arXiv   pre-print
Numerically, we find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks, and the convergence speed depends non-trivially on the parameters of  ...  Bayesian inference on Gaussian processes has proven to be a viable approach for studying recurrent and deep networks in the limit of infinite layer width, n→∞.  ...  The dynamics of the neural-tangent kernel for deep networks with finite width has been studied in ref. [20].  ...
arXiv:2112.05589v3 fatcat:viagzpl22veznl6q6rphsxbj5e
Showing results 1 — 15 of 2,085