
Taylor-Lagrange Neural Ordinary Differential Equations: Toward Fast Training and Evaluation of Neural ODEs [article]

Franck Djeumou, Cyrus Neary, Eric Goubault, Sylvie Putot, Ufuk Topcu
2022
Neural ordinary differential equations (NODEs) -- parametrizations of differential equations using neural networks -- have shown tremendous promise in learning models of unknown continuous-time dynamical  ...  By contrast, we accelerate the evaluation and the training of NODEs by proposing a data-driven approach to their numerical integration.  ...  We train the neural network representing the midpoint for 1000 epochs with a mini-batch of size 512 at each iteration. Hypersolver Parameterization and Training.  ... 
doi:10.48550/arxiv.2201.05715 fatcat:wy2xpudixfhozfammseu2t3nem
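
The snippet above describes hypersolver-style integration: a small network learns a correction term (here, the midpoint/remainder) so that large solver steps stay accurate. As a rough illustration of the general idea, not the paper's actual parameterization, here is a minimal NumPy sketch; the dynamics f, the tiny network g_w, and all weights are placeholders:

    import numpy as np

    def f(y):
        # illustrative dynamics: a damped linear oscillator
        A = np.array([[0.0, 1.0], [-1.0, -0.1]])
        return A @ y

    def g_w(y, h, W1, W2):
        # tiny MLP standing in for the learned midpoint/residual network
        hidden = np.tanh(W1 @ np.concatenate([y, [h]]))
        return W2 @ hidden

    def hypersolver_step(y, h, W1, W2):
        # Euler step plus a learned O(h^2) correction, trained so that
        # large steps h track a fine-grained reference solution
        return y + h * f(y) + h**2 * g_w(y, h, W1, W2)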

On Neural Differential Equations [article]

Patrick Kidger
2022 arXiv   pre-print
Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular  ...  In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin.  ...  Chapter 2 Neural Ordinary Differential Equations Introduction By far the most common neural differential equation is a neural ODE [Che+18b]: $y(0) = y_0$, $\frac{dy}{dt}(t) = f_\theta(t, y(t))$ (2.1), where $y_0 \in$  ... 
arXiv:2202.02435v1 fatcat:vglknmvlgfeddoe2cxohubauxm
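
Equation (2.1) above is the standard neural ODE: an initial value $y_0$ evolved under a learned vector field $f_\theta$. As a rough, self-contained sketch (a fixed-step Euler solve with a placeholder tanh field; real implementations use adaptive solvers and trained weights):

    import numpy as np

    def f_theta(t, y, W):
        # placeholder for the learned vector field f_theta in Eq. (2.1)
        return np.tanh(W @ y)

    def solve_neural_ode(y0, t_grid, W):
        # integrate dy/dt = f_theta(t, y), y(0) = y0, with forward Euler
        ys = [y0]
        for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
            ys.append(ys[-1] + (t1 - t0) * f_theta(t0, ys[-1], W))
        return np.stack(ys)

    W = 0.1 * np.ones((2, 2))  # illustrative weights
    trajectory = solve_neural_ode(np.array([1.0, 0.0]), np.linspace(0, 1, 101), W)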

Mean-Field and Kinetic Descriptions of Neural Differential Equations [article]

M. Herty, T. Trimborn, G. Visconti
2021 arXiv   pre-print
This assumption allows one to interpret the residual neural network as a time-discretized ordinary differential equation, in analogy with neural differential equations.  ...  This leads to a Vlasov-type partial differential equation which describes the evolution of the distribution of the input data.  ...  Herty and T. Trimborn acknowledge the support by the ERS Prep Fund - Simulation and Data Science. The work was partially funded by the Excellence Initiative of the German federal and state governments.  ... 
arXiv:2001.04294v4 fatcat:kkd4zfrrwndspa4wkvzwiyxi5i
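
The residual-network-as-discretized-ODE reading in the snippet is precisely forward Euler: the skip connection $x + h f(x)$ is one explicit step of $dx/dt = f(x, \theta)$. A minimal sketch (the tanh blocks are placeholders for the residual branches):

    import numpy as np

    def resnet_forward(x, weights, h=0.1):
        # each residual block x <- x + h * f(x, W) is one forward-Euler step
        # of the continuous-time dynamics dx/dt = f(x, theta(t))
        for W in weights:
            x = x + h * np.tanh(W @ x)   # tanh layer is an illustrative f
        return x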

Differential equation and probability inspired graph neural networks for latent variable learning [article]

Zhuangwei Shi
2022 arXiv   pre-print
Probabilistic theory and differential equations are powerful tools for the interpretability and guidance of the design of machine learning models, especially for illuminating the mathematical motivation  ...  Inspired by probabilistic theory and differential equations, this paper proposes graph neural networks to solve state estimation and subspace learning problems.  ...  This idea can derive a series of ordinary differential equations (ODEs) to describe the Laplacian-participated linear dynamic systems.  ... 
arXiv:2202.13800v1 fatcat:gv3r3sf5nrbtfgxkxcturxr37e
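
The "Laplacian-participated linear dynamic systems" mentioned in the snippet are, in their simplest form, graph diffusion: $dx/dt = -Lx$ with $L$ the graph Laplacian. A minimal Euler-discretized sketch (the adjacency matrix, step size, and step count are illustrative):

    import numpy as np

    def graph_laplacian(A):
        # combinatorial Laplacian L = D - A of an undirected graph
        return np.diag(A.sum(axis=1)) - A

    def diffuse(x0, A, h=0.01, steps=100):
        # Euler discretization of the linear dynamics dx/dt = -L x
        L = graph_laplacian(A)
        x = x0.copy()
        for _ in range(steps):
            x = x - h * (L @ x)
        return x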

The Multivariate Theory of Functional Connections: An n-Dimensional Constraint Embedding Technique Applied to Partial Differential Equations [article]

Carl Leake
2021 arXiv   pre-print
In addition, comparisons with other state-of-the-art algorithms that estimate differential equations' solutions are included to showcase the advantages and disadvantages of the TFC approach.  ...  Lastly, the aforementioned concepts are leveraged to estimate solutions of differential equations from the field of flexible body dynamics.  ...  Although ordinary differential equations (ODEs) will be discussed, this section's primary focus will be on partial differential equations (PDEs).  ... 
arXiv:2105.07070v2 fatcat:ajvw37ooxfaxtfesfp3hquv3qm
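
The TFC "constraint embedding" named in the title builds an expression that satisfies the constraints exactly for any choice of free function, which is then tuned to satisfy the differential equation. A one-dimensional sketch for Dirichlet conditions $y(0) = a$, $y(1) = b$ (the particular free function below is arbitrary):

    import numpy as np

    def constrained_expression(g, a, b):
        # TFC-style constrained expression: y(0) = a and y(1) = b hold exactly
        # for ANY free function g; g is later optimized to satisfy the ODE/PDE
        def y(x):
            return g(x) + (1.0 - x) * (a - g(0.0)) + x * (b - g(1.0))
        return y

    y = constrained_expression(np.sin, a=2.0, b=-1.0)
    assert np.isclose(y(0.0), 2.0) and np.isclose(y(1.0), -1.0)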

Composed Physics- and Data-driven System Identification for Non-autonomous Systems in Control Engineering [article]

Ricarda-Samantha Götte, Julia Timmermann
2021 arXiv   pre-print
In this contribution we extend existing methods towards the identification of non-autonomous systems and propose a combined approach PGNN-L, which uses a PGNN and a physics-inspired loss term (-L) to successfully  ...  Recently Physics-Guided Neural Networks (PGNN) and physics-inspired loss functions separately have shown promising results to conquer these drawbacks.  ...  dynamics by ordinary differential equations (ODEs).  ... 
arXiv:2112.08148v1 fatcat:itdlmqchjrdq3nt4u6f2xkfmd4
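
The PGNN-L idea in the snippet combines a data-fit term with a physics-inspired penalty. A minimal sketch of such a composite loss (the weighting lam and the residual computation are assumptions for illustration, not the paper's exact formulation):

    import numpy as np

    def pgnn_l_loss(y_pred, y_true, physics_residual, lam=0.1):
        # data mismatch plus a physics-inspired penalty; `physics_residual`
        # would hold the governing ODE's residual evaluated on the predictions
        data_loss = np.mean((y_pred - y_true) ** 2)
        physics_loss = np.mean(physics_residual ** 2)
        return data_loss + lam * physics_loss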

Neural networks for variational problems in engineering

R. Lopez, E. Balsa-Canto, E. Oñate
2008 International Journal for Numerical Methods in Engineering  
Eugenio Oñate, for his belief and support in this project. Many thanks to my Co-Director, Dr. Eva Balsa, and my Tutor, Dr. Lluís Belanche, for their fruitful comments and suggestions.  ...  I also express my gratitude to all the people who have collaborated on any part of this PhD Thesis. They include Dr. Carlos Agelet, Begoña Carmona, Dr. Michele Chiumenti, Dr.  ...  On the other hand, its evaluation might require the integration of functions, ordinary differential equations or partial differential equations.  ... 
doi:10.1002/nme.2304 fatcat:22lvukm4vncf5hhj5mr5yqrtuq

AN APPLICATION OF HAMILTONIAN NEURODYNAMICS USING PONTRYAGIN'S MAXIMUM (MINIMUM) PRINCIPLE

TAKAMASA KOSHIZEN, JOHN FULCHER
1995 International Journal of Neural Systems  
networks are here employed on differential equations which have characteristics such as admitting neurons and time-dependent weight vectors.  ...  John Fulcher, for his technical suggestions and help with this research generally, as well as assistance with preparation of this thesis.  ...  In addition, the continuous-time version of (3.3.10) is: $\frac{dw_j(t)}{dt} = -\lambda \nabla E$. The above system of ordinary differential equations (ODEs) is usually stiff, and the stability of numerical methods for solving  ... 
doi:10.1142/s0129065795000287 fatcat:srnmrkdnqjefjfprpmrdvutcrm
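
The stiffness remark in the last snippet has a concrete consequence for solver choice: explicit methods need very small steps, while implicit methods stay stable. A scalar illustration on the gradient-flow model $dw/dt = -\lambda w$ (the rate, step size, and step count below are chosen only to make the contrast visible):

    import numpy as np

    lam, h, w0, n = 50.0, 0.1, 1.0, 20   # stiff rate, step size, start, steps

    # explicit Euler: w <- (1 - h*lam) * w diverges since |1 - h*lam| = 4 > 1
    w_explicit = w0 * (1.0 - h * lam) ** n
    # implicit Euler: w <- w / (1 + h*lam) is stable for any h > 0
    w_implicit = w0 / (1.0 + h * lam) ** n
    print(w_explicit, w_implicit)   # ~1.1e12 versus ~2.7e-16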

Accelerating Natural Gradient with Higher-Order Invariance [article]

Yang Song, Jiaming Song, Stefano Ermon
2018 arXiv   pre-print
In this paper, we study invariance properties from a combined perspective of Riemannian geometry and numerical differential equation solving.  ...  Experimentally, we demonstrate that invariance leads to faster optimization and our techniques improve on traditional natural gradient in deep neural network training and natural policy gradient for reinforcement  ...  This work was supported by NSF grants #1651565, #1522054, #1733686, Toyota Research Institute, Future of Life Institute, and Intel.  ... 
arXiv:1803.01273v2 fatcat:hilds6cma5fx5iz33xxylag6ii
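
The natural gradient studied above preconditions the ordinary gradient by the inverse Fisher information matrix, which is what makes the update approximately invariant to reparameterization. A minimal sketch of one step (the Fisher matrix is assumed given; in practice it is estimated and damped):

    import numpy as np

    def natural_gradient_step(theta, grad, fisher, lr=0.1):
        # theta <- theta - lr * F^{-1} grad; solve rather than invert explicitly
        return theta - lr * np.linalg.solve(fisher, grad)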

Chaos as an interpretable benchmark for forecasting and data-driven modelling [article]

William Gilpin
2021 arXiv   pre-print
model training, and benchmarking symbolic regression algorithms.  ...  Chaotic systems inherently challenge forecasting models, and across extensive benchmarks we correlate forecasting performance with the degree of chaos present.  ...  Acknowledgments and Disclosure of Funding We thank Gautam Reddy, Samantha Petti, Brian Matejek, and Yasa Baig for helpful discussions and comments on the manuscript. W.  ... 
arXiv:2110.05266v1 fatcat:762cndr4a5bzzbfkablmkvbk7u

Physics informed machine learning with Smoothed particle hydrodynamics: Hierarchy of reduced Lagrangian models of turbulence [article]

Michael Woodward, Yifeng Tian, Criston Hyett, Chris Fryer, Daniel Livescu, Mikhail Stepanov, Michael Chertkov
2022 arXiv   pre-print
SPH is a mesh-free Lagrangian methodology for approximating equations of fluid mechanics.  ...  Starting from Neural Network (NN) based parameterization of a Lagrangian acceleration operator, this hierarchy gradually incorporates a weakly compressible and parameterized SPH framework which enforces  ...  Using the SPH formalism the partial differential equations (PDEs) of fluid dynamics can be approximated by a system of ordinary differential equations (ODEs) for each particle (indexed by i), ∀i ∈ {1,  ... 
arXiv:2110.13311v4 fatcat:fxx6klqgevetlfmnqntudgvywm
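
The snippet describes SPH's core move: replacing the fluid PDEs with per-particle ODEs built from kernel-weighted sums over neighbors. A minimal one-dimensional sketch of the density summation $\rho_i = \sum_j m_j W(|x_i - x_j|, h)$ with a Gaussian kernel (a common choice, though not the only one used in practice):

    import numpy as np

    def gaussian_kernel(r, h):
        # 1-D Gaussian smoothing kernel W(r, h), normalized to integrate to 1
        return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

    def sph_density(x, m, h):
        # mesh-free density estimate rho_i = sum_j m_j * W(|x_i - x_j|, h)
        r = np.abs(x[:, None] - x[None, :])   # pairwise particle distances
        return (m[None, :] * gaussian_kernel(r, h)).sum(axis=1)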

Practical Perspectives on Symplectic Accelerated Optimization [article]

Valentin Duruisseaux, Melvin Leok
2022 arXiv   pre-print
Finally, we compare the efficiency and robustness of different geometric integration techniques, and study the effects of the different parameters in the algorithms to inform and simplify tuning in practice  ...  In particular, we investigate how momentum restarting schemes ameliorate computational efficiency and robustness by reducing the undesirable effect of oscillations, and ease the tuning process by making  ...  Acknowledgments The authors were supported in part by NSF under grants DMS-1411792, DMS-1345013, DMS-1813635, CCF-2112665, by AFOSR under grant FA9550-18-1-0288, and by the DoD under grant HQ00342010023  ... 
arXiv:2207.11460v1 fatcat:vfa45nf2srhuxipkf6byj7axam
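
One standard momentum restarting scheme of the kind investigated above is gradient-based restarting: zero the momentum whenever it stops being a descent direction, which damps the characteristic oscillations. A minimal heavy-ball sketch (this is a generic restart heuristic, not the paper's specific symplectic integrator):

    import numpy as np

    def momentum_descent_with_restart(grad, x, lr=0.01, beta=0.9, steps=1000):
        # heavy-ball iteration with a gradient-based restart
        v = np.zeros_like(x)
        for _ in range(steps):
            g = grad(x)
            if np.dot(g, v) > 0:       # momentum opposes descent: restart
                v = np.zeros_like(x)
            v = beta * v - lr * g
            x = x + v
        return x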

The Dynamics of Legged Locomotion: Models, Analyses, and Challenges

Philip Holmes, Robert J. Full, Dan Koditschek, John Guckenheimer
2006 SIAM Review  
Central pattern generators and proprioceptive sensing require models of spiking neurons and simplified phase oscillator descriptions of ensembles of them.  ...  Evolution has shaped the breathtaking abilities of animals, leaving us the challenge of reconstructing their targets of control and mechanisms of dexterity.  ...  some of us first met; we thank the IMA for inviting us.  ... 
doi:10.1137/s0036144504445133 fatcat:izz7o5ch2zadpa4gkyqzopspda
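
The "simplified phase oscillator descriptions" mentioned in the abstract reduce each neuron or ensemble to a single phase variable with pairwise sinusoidal coupling, as in Kuramoto-type central pattern generator models. A minimal Euler-step sketch (frequencies and coupling matrix are illustrative):

    import numpy as np

    def cpg_step(theta, omega, K, h=0.01):
        # one Euler step of dtheta_i/dt = omega_i + sum_j K_ij sin(theta_j - theta_i)
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        return theta + h * (omega + coupling)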

Few-Shot Learning by Dimensionality Reduction in Gradient Space [article]

Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter (+1 others)
2022 arXiv   pre-print
A suitable subspace fulfills three criteria across the given tasks: it (a) allows reducing the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic  ...  and performance.  ...  The goal is to learn a neural network $f_\theta$ such that it approximates a dynamical system (e.g., given by Equation (8)), which yields the ordinary differential equation $\dot{x} = f_\theta(x, u)$ (9) with initial  ... 
arXiv:2206.03483v1 fatcat:dlmthl6fvvgdnjr7c2llzi2eia
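
The dimensionality reduction in the title operates in gradient space: gradients are projected onto a low-dimensional subspace and only the subspace coefficients are updated. A minimal sketch of one such step, where E is assumed to be an orthonormal basis of a subspace satisfying the criteria (a)-(c) above:

    import numpy as np

    def subspace_sgd_step(theta, grad, E, lr=0.1):
        # E: (p, k) orthonormal basis with k << p; only k coefficients move
        c = E.T @ grad               # project the gradient into the subspace
        return theta - lr * (E @ c)  # update restricted to span(E)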

Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective [article]

Guan-Horng Liu, Evangelos A. Theodorou
2019 arXiv   pre-print
Our framework fits nicely with supervised learning and can be extended to other learning problems, such as Bayesian learning, adversarial training, and specific forms of meta learning, without effort.  ...  When optimization algorithms are further recast as controllers, the ultimate goal of training processes can be formulated as an optimal control problem.  ...  The framework forms the basis of most recent understandings of deep learning, by recasting a DNN as an ordinary differential equation and SGD as a stochastic differential equation.  ... 
arXiv:1908.10920v2 fatcat:rimioom5ofenvdazcx2lke5gu4
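
The snippet's "SGD as a stochastic differential equation" view models mini-batch gradient noise as a diffusion term around the gradient-flow drift. A minimal Euler-Maruyama sketch of one such step (isotropic noise of fixed scale is a simplifying assumption; real gradient noise is anisotropic):

    import numpy as np

    def sgd_as_sde_step(theta, grad, lr=0.01, noise_scale=0.1, rng=np.random):
        # drift: gradient descent; diffusion: surrogate for mini-batch noise
        xi = rng.standard_normal(theta.shape)
        return theta - lr * grad + np.sqrt(lr) * noise_scale * xi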
Showing results 1-15 of 131