
Uncovering the Folding Landscape of RNA Secondary Structure with Deep Graph Embeddings [article]

Egbert Castro, Andrew Benz, Alexander Tong, Guy Wolf, Smita Krishnaswamy
2020 arXiv   pre-print
Our approach is based on the intuition that geometric scattering generates multi-resolution features with in-built invariance to deformations, but as they are unsupervised, these features may not be tuned  ...  Like proteins, RNA molecules can fold to create low energy functional structures such as hairpins, but the landscape of possible folds and fold sequences is not well visualized by existing methods.  ...  Once this loss has converged, we then refine the generator by training on the overall reconstruction of S. We show these final MSE losses for the RNA datasets in Table 5.  ... 
arXiv:2006.06885v2 fatcat:wbckjqyl6bewbajxmwc4quph5u
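
A minimal sketch of the refinement step the snippet describes: training a generator to minimize the MSE of the overall reconstruction of S. This assumes PyTorch; `generator`, `z`, and `S` below are illustrative stand-ins, not the authors' code.

    import torch

    # Hedged sketch (not the authors' code): refine a generator by minimizing
    # the MSE between its output and a target feature matrix S.
    generator = torch.nn.Linear(16, 64)   # placeholder generator network
    z = torch.randn(128, 16)              # latent codes (assumed given)
    S = torch.randn(128, 64)              # target features to reconstruct

    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for step in range(1000):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), S)
        loss.backward()
        opt.step()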

Regularized linear autoencoders recover the principal components, eventually [article]

Xuchan Bao, James Lucas, Sushant Sachdeva, Roger Grosse
2021 arXiv   pre-print
simple case of linear autoencoders (LAEs).  ...  Our understanding of learning input-output relationships with neural nets has improved rapidly in recent years, but little is known about the convergence of the underlying representations, even in the  ...  Part of this research was conducted when SS and RG were visitors at the Special year on Optimization, Statistics, and Theoretical Machine Learning at the School of Mathematics, Institute for Advanced Study  ... 
arXiv:2007.06731v2 fatcat:vhhl3ulhzfg77kqijsx2jl4nn4
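
The title's claim lends itself to a tiny check: with L2 regularization, a linear autoencoder's decoder should align with individual principal directions rather than an arbitrary rotation of them. A hedged sketch in PyTorch on synthetic data (hyperparameters are illustrative, not from the paper):

    import torch

    # Sketch: L2-regularized linear autoencoder on anisotropic data; after
    # training, the decoder's column space should match the top-k PCs.
    torch.manual_seed(0)
    X = torch.randn(500, 10) @ torch.diag(torch.linspace(3.0, 0.1, 10))

    k = 2
    enc = torch.nn.Linear(10, k, bias=False)
    dec = torch.nn.Linear(k, 10, bias=False)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

    for step in range(3000):
        opt.zero_grad()
        recon = dec(enc(X))
        l2 = sum((p ** 2).sum() for p in list(enc.parameters()) + list(dec.parameters()))
        loss = ((recon - X) ** 2).mean() + 1e-3 * l2
        loss.backward()
        opt.step()

    # Compare the decoder subspace with PCA directions from the SVD of X.
    U, S, Vt = torch.linalg.svd(X, full_matrices=False)
    pc = Vt[:k]                                   # top-k principal directions
    Q, _ = torch.linalg.qr(dec.weight.detach())   # basis of decoder columns
    print(pc @ Q)   # entries near a signed permutation indicate alignment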

Chasing Collective Variables using Autoencoders and biased trajectories [article]

Zineb Belkacemi, Paraskevi Gkeka, Tony Lelièvre, Gabriel Stoltz
2021 arXiv   pre-print
However, most of these methods rely on the prior knowledge of low-dimensional slow degrees of freedom, i.e. Collective Variables (CV).  ...  Our method includes a reweighting scheme to ensure that the learning model optimizes the same loss at each iteration, and achieves CV convergence.  ...  A notable example is the elimination of rotational and translational invariances through centering and structural alignment of the configurations to a reference structure, or by using internal coordinates  ... 
arXiv:2104.11061v2 fatcat:yxtzjgapsvc4tcl4m3ke5hidem
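
A hedged sketch of the reweighting idea the snippet mentions: samples drawn from biased trajectories carry importance weights, so the autoencoder optimizes an estimate of the same (unbiased) loss at each iteration. The names and the weight form are illustrative, not the paper's scheme.

    import torch

    # Illustrative reweighted reconstruction loss: log_w are per-sample log
    # importance weights (e.g. from the biasing potential); self-normalizing
    # them keeps the loss on a fixed scale across iterations.
    def reweighted_mse(model, x, log_w):
        w = torch.softmax(log_w, dim=0)
        per_sample = ((model(x) - x) ** 2).mean(dim=1)
        return (w * per_sample).sum()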

Multi-Layer Neural Network Auto Encoders Learning Method, using Regularization for Invariant Image Recognition

Skribtsov Pavel Vyacheslavovich, Kazantsev Pavel Aleksandrovich
2016 Indian Journal of Science and Technology  
properties of the encoder including the degree of invariance of the feature extraction to input signal transformations (perturbations) greatly depend on the particular form of the regularization applied  ...  Experiments carried out on the synthetic and real pattern datasets show promising results and encourage further investigation of the proposed approach.  ...  identifier RFMEFI57614X0051) to perform the applied research on the topic: "Development of intelligent algorithms of traffic situations detection and identification for the on-board systems of the unmanned  ... 
doi:10.17485/ijst/2016/v9i27/97704 fatcat:uzgyd4tnfjanhch72vwyolnvqa
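
One standard way to make a regularizer control invariance, sketched below: penalize the encoder's sensitivity to small input perturbations alongside the reconstruction loss. The paper's particular regularization may take a different form; this is only an illustration.

    import torch

    # Perturbation-based invariance penalty (illustrative): features should
    # move little when the input is jittered by small noise.
    def invariance_penalty(encoder, x, eps=1e-2):
        noise = eps * torch.randn_like(x)
        return ((encoder(x + noise) - encoder(x)) ** 2).mean()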

The Physics of Machine Learning: An Intuitive Introduction for the Physical Scientist [article]

Stephon Alexander, Sarah Bawabe, Batia Friedman-Shaw, Michael W. Toomey
2021 arXiv   pre-print
This serves as a foundation to understand the phenomenon of learning more generally.  ...  We begin with a review of two energy-based machine learning algorithms, Hopfield networks and Boltzmann machines, and their connection to the Ising model.  ...  ACKNOWLEDGEMENTS The authors thank Sergei Gleyzer, Zach Hemler, and Stefan Stanojevic for insightful comments on an early draft of this work.  ... 
arXiv:2112.00851v1 fatcat:ydhk5owgxjapfmv4pitjc6xqei
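
The Hopfield-network review this entry mentions reduces to two textbook steps, sketched here: Hebbian storage of +/-1 patterns, and asynchronous updates that descend an Ising-like energy E = -(1/2) s^T W s.

    import numpy as np

    # Textbook Hopfield network (sketch): store patterns with a Hebbian
    # outer-product rule, then recall by asynchronous sign updates.
    def train_hopfield(patterns):          # patterns: (m, n) array of +/-1
        n = patterns.shape[1]
        W = (patterns.T @ patterns) / n
        np.fill_diagonal(W, 0.0)           # no self-coupling
        return W

    def recall(W, s, steps=200):
        s = s.copy()
        for _ in range(steps):
            i = np.random.randint(len(s))
            s[i] = 1 if W[i] @ s >= 0 else -1
        return s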

Learning Deep Binary Descriptor with Multi-Quantization

Yueqi Duan, Jiwen Lu, Ziwei Wang, Jianjiang Feng, Jie Zhou
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Moreover, we present a similarity-aware binary encoding strategy based on the earth mover's distance of autoencoders, so that elements that are quantized into similar autoencoders will have smaller Hamming  ...  from severe quantization loss.  ...  In J_1, we simultaneously minimize the reconstruction losses of the real-valued features and elements for elementwise selection of autoencoders.  ... 
doi:10.1109/tpami.2018.2858760 pmid:30040626 fatcat:rerfa36zeraodof4tb74yxyrpu
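
The Hamming distance the snippet refers to is cheap to compute on packed binary descriptors: XOR, then popcount. The earth-mover's-distance-based code assignment is the paper's contribution and is not reproduced here; this only illustrates why similar codes should differ in few bits.

    # Hamming distance between packed binary codes: XOR, then count set bits.
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    print(hamming(0b10110010, 0b10100110))  # -> 2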

Style transfer with variational autoencoders is a promising approach to RNA-Seq data harmonization and analysis [article]

Nikolai E. Russkikh, Denis V. Antonets, Dmitry N. Shtokalo, Alexander V. Makarov, Alexey M. Zakharov, Evgeny V. Terentev
2019 bioRxiv   pre-print
The proposed solution is based on a Variational Autoencoder artificial neural network. To disentangle the style components, we trained the encoder with a discriminator in an adversarial manner.  ...  Most style transfer studies are focused on image data, and, to our knowledge, this is the first attempt to adapt this procedure to the gene expression domain.  ...  Acknowledgements The authors would like to thank the Institute of Computational Technologies SB RAS for providing computational resources needed for this publication. Conflict of Interest: none declared.  ... 
doi:10.1101/791962 fatcat:pqemdrhapzfo7hxefdchmvbp3m
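
A hedged sketch of the adversarial disentanglement the snippet describes: a discriminator tries to predict the style (e.g. batch) label from the latent code, and the encoder is rewarded for defeating it, pushing style information out of the latent space. The function below is illustrative, not the authors' architecture.

    import torch

    # Illustrative adversarial term: the encoder's loss includes the negated
    # discriminator cross-entropy, so gradients push z to hide style labels.
    def encoder_adversarial_loss(disc, z, style_labels):
        logits = disc(z)
        return -torch.nn.functional.cross_entropy(logits, style_labels)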

Characterization of Gradient Dominance and Regularity Conditions for Neural Networks [article]

Yi Zhou, Yingbin Liang
2017 arXiv   pre-print
In this paper, we enrich the current understanding of the landscape of the square loss functions for three types of neural networks.  ...  Specifically, when the parameter matrices are square, we provide an explicit characterization of the global minimizers for linear networks, linear residual networks, and nonlinear networks with one hidden  ...  Other landscape properties of linear networks: The study of the landscape of the square loss function for linear neural networks dates back to the pioneering work of Baldi and Hornik (1989); Baldi (1989  ... 
arXiv:1710.06910v2 fatcat:2xsjin6eybeedoy3usliy7uf4a
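
For concreteness, the square loss whose landscape such results characterize can be written, for a one-hidden-layer linear network with data matrices X and Y, as:

    \[
      \mathcal{L}(W_1, W_2) \;=\; \tfrac{1}{2}\,\bigl\| W_2 W_1 X - Y \bigr\|_F^2 .
    \]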

Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules

Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, Alán Aspuru-Guzik
2018 ACS Central Science  
A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor.  ...  We demonstrate our method in the domain of drug-like molecules and also in the set of molecules with fewer than nine heavy atoms.  ...  The property prediction loss was annealed in at the same time as the variational loss.  ... 
doi:10.1021/acscentsci.7b00572 pmid:29532027 pmcid:PMC5833007 fatcat:eun57eul2vcpjfikuaowor3wtu
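
A hedged sketch of the joint objective implied by the snippet: reconstruction plus a variational (KL) term plus property prediction, with the latter two annealed in together. Tensor names and the annealing form are illustrative, not the paper's implementation.

    import torch

    # Illustrative joint VAE-with-predictor loss: `anneal` in [0, 1] scales
    # the KL and property terms in over training, as the snippet describes.
    def joint_loss(recon_logits, x_tokens, mu, logvar, prop_pred, prop_true, anneal):
        recon = torch.nn.functional.cross_entropy(recon_logits, x_tokens)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        prop = torch.nn.functional.mse_loss(prop_pred, prop_true)
        return recon + anneal * (kl + prop)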

A Survey of Inductive Biases for Factorial Representation-Learning [article]

Karl Ridgeway
2016 arXiv   pre-print
This survey brings together a wide variety of models that all touch on the problem of learning factorial representations and lays out a framework for comparing these models based on the strengths of the  ...  Unsupervised inductive biases exploit assumptions about the environment, such as the statistical distribution of factor coefficients, assumptions about the perturbations a factor should be invariant to  ...  Imagine a set of images of landscapes containing trees and sheep.  ... 
arXiv:1612.05299v1 fatcat:d6fgyd5yxrdybm6dz3wrnjipj4

OnsagerNet: Learning Stable and Interpretable Dynamics using a Generalized Onsager Principle [article]

Haijun Yu, Xinyuan Tian, Weinan E, Qianxiao Li
2021 arXiv   pre-print
For high dimensional problems with a low dimensional slow manifold, an autoencoder with metric preserving regularization is introduced to find the low dimensional generalized coordinates on which we learn  ...  We further apply this method to study Rayleigh-Bénard convection and learn Lorenz-like low dimensional autonomous reduced order models that capture both qualitative and quantitative properties of the underlying  ...  One can either use linear principal component analysis (PCA) or nonlinear embedding, e.g. the autoencoder, to find a set of good latent coordinates from the high dimensional data.  ... 
arXiv:2009.02327v3 fatcat:uaoe475xcreyphbgjtsocu2pii
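
The linear alternative the snippet mentions, PCA latent coordinates, fits in a few lines; the autoencoder replaces the linear projection below with a nonlinear embedding.

    import numpy as np

    # PCA via SVD (sketch): project centered data onto the top-k principal
    # directions to obtain low-dimensional latent coordinates.
    def pca_coordinates(X, k):
        Xc = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T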

Physics enhanced neural networks predict order and chaos [article]

Anshul Choudhary, John F. Lindner, Elliott G. Holliday, Scott T. Miller, Sudeshna Sinha, William L. Ditto
2019 arXiv   pre-print
The power of the technique and the ubiquity of chaos suggest widespread utility.  ...  We demonstrate Hamiltonian neural networks on the canonical Hénon-Heiles system, which models diverse dynamics from astrophysics to chemistry.  ...  For HNN, the loss function drops precipitously for 4 (or more) bottleneck neurons, which appear to encode a linear combination of the 4 phase space coordinates, thereby capturing the dimensionality of  ... 
arXiv:1912.01958v1 fatcat:7fhlnwkbrzgcpdquxfomrwdlxa
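
The Hamiltonian-neural-network construction behind this entry is, in its standard form, a scalar network H(q, p) whose symplectic gradient gives the dynamics; a hedged autograd sketch (the paper's exact setup may differ):

    import torch

    # Standard HNN idea (sketch): learn scalar H(q, p) and recover dynamics
    # from Hamilton's equations, q_dot = dH/dp, p_dot = -dH/dq.
    def hamiltonian_field(net, q, p):
        q = q.requires_grad_(True)
        p = p.requires_grad_(True)
        H = net(torch.cat([q, p], dim=-1)).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq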

Multi-level Convolutional Autoencoder Networks for Parametric Prediction of Spatio-temporal Dynamics [article]

Jiayang Xu, Karthik Duraisamy
2020 arXiv   pre-print
Perspectives are provided on the present approach and its place in the landscape of model reduction.  ...  A data-driven framework is proposed towards the end of predictive modeling of complex spatio-temporal dynamics, leveraging nested non-linear manifolds.  ...  Perspectives on the present approach, and its place in the larger landscape of model reduction, are presented in Sec. 5. A summary is given in Sec. 6.  ... 
arXiv:1912.11114v2 fatcat:d3ftgc3mp5davlwkcn6eyfklye

Critical Points of Neural Networks: Analytical Forms and Landscape Properties [article]

Yi Zhou, Yingbin Liang
2017 arXiv   pre-print
One particular conclusion is that the loss function of linear networks has no spurious local minimum, while the loss function of one-hidden-layer nonlinear networks with a ReLU activation function does  ...  Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of these neural networks.  ...  Thus, the landscape of the loss function of nonlinear networks is very different from that of the loss function of linear networks.  ... 
arXiv:1710.11205v1 fatcat:2523ommbwbeyxpm4ox4lzgu3cq
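
The contrast the snippet draws is between the linear square loss given above and the one-hidden-layer ReLU loss, which can admit spurious local minima:

    \[
      \mathcal{L}(W_1, W_2) \;=\; \tfrac{1}{2}\,\bigl\| W_2\,\sigma(W_1 X) - Y \bigr\|_F^2,
      \qquad \sigma(u) = \max(u, 0).
    \]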

Discovery of Visual Semantics by Unsupervised and Self-Supervised Representation Learning [article]

Gustav Larsson
2017 arXiv   pre-print
To address this concern, with the long-term goal of leveraging the abundance of cheap unlabeled data, we explore methods of unsupervised "pre-training."  ...  The success of deep learning in computer vision is rooted in the ability of deep networks to scale up model complexity as demanded by challenging visual tasks.  ...  Division by the number of features, C, makes the scale of the loss invariant to C and the loss easily interpretable as the average loss per feature.  ... 
arXiv:1708.05812v1 fatcat:w77w3q3ms5c5fnyzl65mkj4ozy
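
The normalization the snippet describes is simply

    \[
      \mathcal{L} \;=\; \frac{1}{C} \sum_{c=1}^{C} \ell_c ,
    \]

so the scale is independent of the feature count C and the value reads as the average loss per feature.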
Showing results 1 — 15 out of 214 results