
Seismic Inversion by Newtonian Machine Learning [article]

Yuqing Chen, Gerard T. Schuster
2019 arXiv   pre-print
The skeletonized representation of the seismic traces consists of the low-rank latent-space variables predicted by a well-trained autoencoder neural network.  ...  The skeletal data can be the latent-space variables of an autoencoder or a variational autoencoder, a feature map from a convolutional neural network (CNN), or principal component analysis (PCA) features  ...  In this paper, we first use the observed seismic traces as the training set to train the autoencoder neural network.  ...
arXiv:1904.10936v1 fatcat:ymdh7yhhbfdgzknq7baqlxyt6i
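
A minimal sketch of the skeletonization idea in this entry, assuming a PyTorch implementation: an autoencoder is trained on the observed traces themselves, and the low-dimensional encoder output serves as the skeletal datum for each trace. All layer sizes and the one-dimensional latent are illustrative guesses, not values from the paper.

```python
import torch
import torch.nn as nn

class TraceAutoencoder(nn.Module):
    """Compresses a seismic trace to a low-rank latent code (the 'skeleton')."""
    def __init__(self, n_samples=1024, n_latent=1):   # sizes are illustrative
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 256), nn.ReLU(),
            nn.Linear(256, 32), nn.ReLU(),
            nn.Linear(32, n_latent),                  # low-rank skeletal feature
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, 256), nn.ReLU(),
            nn.Linear(256, n_samples),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Train on the observed traces (pure reconstruction), then feed encoder
# outputs as skeletal data to the wave-equation inversion.
model = TraceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traces = torch.randn(64, 1024)    # stand-in for observed seismic traces
for _ in range(100):
    recon, _ = model(traces)
    loss = nn.functional.mse_loss(recon, traces)
    opt.zero_grad(); loss.backward(); opt.step()
```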

Simplified Learning of CAD Features Leveraging a Deep Residual Autoencoder [article]

Raoul Schönhof and Jannes Elstner and Radu Manea and Steffen Tauber and Ramez Awad and Marco F. Huber
2022 arXiv   pre-print
One key problem underlying the training of deep neural networks is the immanent lack of a sufficient amount of training data.  ...  In the domain of computer vision, deep residual neural networks like EfficientNet have set new standards in terms of robustness and accuracy.  ...  With the rise of artificial intelligence and especially deep neural networks, it would be beneficial to assist these experts by creating a neural network based classifier, as proposed in our previous work  ... 
arXiv:2202.10099v1 fatcat:bwxfq5fgj5efnkbvp73d67r6oy

Recursive Autoencoders for ITG-Based Translation

Peng Li, Yang Liu, Maosong Sun
2013 Conference on Empirical Methods in Natural Language Processing  
The recursive autoencoders are capable of generating vector space representations for variable-sized phrases, which enable predicting orders to exploit syntactic and semantic information from a neural  ...  (i.e., straight and inverted) dependent on actual blocks being merged remains a challenge.  ...  Similarly, the same reconstruction neural network can be applied to each node in an ITG parse. These neural networks are called recursive autoencoders (Socher et al., 2011c).  ...
dblp:conf/emnlp/LiLS13 fatcat:3jsg5yzr5zcbplxqqipwlr6tgm
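
A one-node sketch of the recursive autoencoder described in this entry, in the spirit of Socher et al. (2011): two child phrase vectors are composed into a parent vector, scored by how well the parent reconstructs its children; applying the same node bottom-up over an ITG parse yields fixed-size vectors for variable-sized phrases. The dimensionality and class name are illustrative.

```python
import torch
import torch.nn as nn

class RAENode(nn.Module):
    """One recursive-autoencoder step: compose two child phrase vectors
    into a parent vector and score it by reconstruction of the children."""
    def __init__(self, dim=50):                      # dim is illustrative
        super().__init__()
        self.compose = nn.Linear(2 * dim, dim)       # children -> parent
        self.reconstruct = nn.Linear(dim, 2 * dim)   # parent -> children

    def forward(self, c1, c2):
        parent = torch.tanh(self.compose(torch.cat([c1, c2], dim=-1)))
        r1, r2 = self.reconstruct(parent).chunk(2, dim=-1)
        recon_error = ((r1 - c1) ** 2 + (r2 - c2) ** 2).sum(dim=-1)
        return parent, recon_error

# The same node is reused at every merge point of an ITG parse.
node = RAENode()
left, right = torch.randn(50), torch.randn(50)
parent, err = node(left, right)
```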

Seismic Inversion by Newtonian Machine Learning

Yuqing Chen, Gerard T. Schuster
2020 Geophysics  
The skeletonized representation of the seismic traces consists of the low-rank latent-space variables predicted by a well-trained autoencoder neural network.  ...  The most significant contribution of this paper is that it provides a general framework for using wave-equation inversion to invert skeletal data generated by any type of neural networks.  ...  In this paper, we first use the observed seismic traces as the training set to train the autoencoder neural network.  ... 
doi:10.1190/geo2019-0434.1 fatcat:6f3xwxhawjeizgkkxau3jamkfm

Benchmarking Invertible Architectures on Inverse Problems [article]

Jakob Kruse, Lynton Ardizzone, Carsten Rother, Ullrich Köthe
2021 arXiv   pre-print
Recent work demonstrated that flow-based invertible neural networks are promising tools for solving ambiguous inverse problems.  ...  autoencoders.  ...  (Ardizzone et al., 2019) has shown that flow-based invertible neural networks such as RealNVP (Dinh et al., 2016) can be trained with data from the forward process, and then used in inverse mode to sample from p(x | y)  ...
arXiv:2101.10763v3 fatcat:xeq6dmfvubcmpo7d5y5gfrravm
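
A minimal sketch of the building block behind such flow-based networks: a RealNVP-style affine coupling layer, which is invertible by construction. A real model stacks many such blocks (with permutations between them) and trains on a likelihood or forward-process loss; the sizes and the small conditioner net here are illustrative.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: transform half the variables with a
    scale-and-shift computed from the other half; exactly invertible."""
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.net = nn.Sequential(nn.Linear(half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * half))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

block = AffineCoupling(dim=8)
x = torch.randn(4, 8)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)  # exact round trip
```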

Training Generative Reversible Networks [article]

Robin Tibor Schirrmeister, Patryk Chrabąszcz, Frank Hutter, Tonio Ball
2018 arXiv   pre-print
To overcome this problem, by-design reversible neural networks (RevNets) have previously been used as generative models, either directly optimizing the likelihood of the data under the model or using an  ...  Generative models with an encoding component such as autoencoders currently receive great interest.  ...  Recently, invertible-by-design neural networks called reversible neural networks were proposed.  ...
arXiv:1806.01610v4 fatcat:nrzlotkggfgutbpdbij4tpyimy
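
The "invertible by design" property mentioned above can be illustrated with an additive coupling block in the RevNet style: the inputs are exactly recoverable from the outputs, so intermediate activations need not be stored. This is a generic sketch, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """RevNet-style additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1).
    inverse() recovers (x1, x2) exactly from (y1, y2)."""
    def __init__(self, dim):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.G = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.G(y1)      # undo the second residual update
        x1 = y1 - self.F(x2)      # then undo the first
        return x1, x2

block = ReversibleBlock(dim=16)
a, b = torch.randn(2, 16), torch.randn(2, 16)
x1, x2 = block.inverse(*block(a, b))
assert torch.allclose(x1, a, atol=1e-6) and torch.allclose(x2, b, atol=1e-6)
```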

A Disentangling Invertible Interpretation Network for Explaining Latent Representations [article]

Patrick Esser, Robin Rombach, Björn Ommer
2020 arXiv   pre-print
Neural networks have greatly boosted performance in computer vision by learning powerful representations of input data.  ...  The invertible interpretation network disentangles the hidden representation into separate, semantically meaningful concepts.  ...  We then invert the modified representation back to the latent space of the autoencoder and visualize the resulting representation in a two-dimensional embedding of the latent space (bottom left).  ... 
arXiv:2004.13166v1 fatcat:quaab4pocrea7ot2abtkog62ae

A Disentangling Invertible Interpretation Network for Explaining Latent Representations

Patrick Esser, Robin Rombach, Bjorn Ommer
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Neural networks have greatly boosted performance in computer vision by learning powerful representations of input data.  ...  The invertible interpretation network disentangles the hidden representation into separate, semantically meaningful concepts.  ...  Invertible neural networks [5, 6, 19, 22] have been used to get a better understanding of adversarial attacks [18].  ...
doi:10.1109/cvpr42600.2020.00924 dblp:conf/cvpr/EsserRO20 fatcat:lvm3bc4javhmrkd4cfecfllbki

Disentangled Inference for GANs with Latently Invertible Autoencoder [article]

Jiapeng Zhu, Deli Zhao, Bo Zhang, Bolei Zhou
2022 arXiv   pre-print
The decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from a disentangled autoencoder by detaching the invertible network from LIA, thus  ...  To address the entanglement issue and enable inference in GAN we propose a novel algorithm named Latently Invertible Autoencoder (LIA).  ...  The core idea of LIA is to symmetrically embed an invertible network in an autoencoder. Then the neural architecture is trained with adversarial learning as two decomposed modules.  ... 
arXiv:1906.08090v4 fatcat:7oqsuzk3pvc5fc4mr7r3rva6mu

A Deep Learning Approach to Data-driven Parameterizations for Statistical Parametric Speech Synthesis [article]

Prasanna Kumar Muthukumar, Alan W. Black
2014 arXiv   pre-print
We create an invertible, low-dimensional, noise-robust encoding of the Mel Log Spectrum by training a tapered Stacked Denoising Autoencoder (SDA).  ...  This SDA is then unwrapped and used as the initialization for a Multi-Layer Perceptron (MLP). The MLP is fine-tuned by training it to reconstruct the input at the output layer.  ...  As the name suggests, the Stacked Denoising Autoencoder is constructed by stacking several Denoising Autoencoders together to form a deep neural network.  ... 
arXiv:1409.8558v1 fatcat:5htuwxqhnngq3aoc6a72mimwcm
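
A compact sketch of the pipeline this entry describes: greedily pretrain denoising-autoencoder layers, then "unwrap" the stacked encoders and decoders into one MLP that is fine-tuned to reconstruct its input, with the tapered bottleneck serving as the low-dimensional encoding. The layer sizes, noise level, and the helper name pretrain_dae are illustrative stand-ins, not the paper's values.

```python
import torch
import torch.nn as nn

def pretrain_dae(n_in, n_out, data, noise=0.1, steps=200):
    """Greedy pretraining of one denoising-autoencoder layer (illustrative)."""
    enc, dec = nn.Linear(n_in, n_out), nn.Linear(n_out, n_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(steps):
        noisy = data + noise * torch.randn_like(data)   # corrupt the input
        recon = dec(torch.sigmoid(enc(noisy)))          # reconstruct the clean input
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad(); loss.backward(); opt.step()
    return enc, dec

# Tapered stack, e.g. 128 -> 64 -> 32 (stand-in sizes).
data = torch.randn(256, 128)
enc1, dec1 = pretrain_dae(128, 64, data)
h1 = torch.sigmoid(enc1(data)).detach()
enc2, dec2 = pretrain_dae(64, 32, h1)

# Unwrap into a single MLP (encoders then decoders) and fine-tune it
# end-to-end to reconstruct its input; the 32-dim layer is the encoding.
mlp = nn.Sequential(enc1, nn.Sigmoid(), enc2, nn.Sigmoid(),
                    dec2, nn.Sigmoid(), dec1)
```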

End-to-end Sinkhorn Autoencoder with Noise Generator

Kamil Deja, Jan Dubinski, Piotr Nowak, Sandro Wenzel, Przemyslaw Spurek, Tomasz Trzcinski
2020 IEEE Access  
More precisely, we extend the autoencoder architecture by adding a deterministic neural network trained to map noise from a known distribution onto the autoencoder latent space representing the data distribution.  ...  Multiple attempts have been made to reduce this burden, e.g. using generative approaches based on Generative Adversarial Networks or Variational Autoencoders.  ...  The decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from a disentangled autoencoder by detaching the invertible network from LIA.  ...
doi:10.1109/access.2020.3048622 fatcat:pm6gjwnt4jdlrnm47as5j2gwcy
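
A hedged sketch of the noise-generator idea above: a deterministic network maps samples from a known distribution onto the autoencoder's latent distribution. The paper trains this mapping with a Sinkhorn (optimal-transport) objective; the energy-distance-style loss below is only a simple stand-in, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

# Deterministic generator: known noise -> autoencoder latent space.
noise_gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))

def two_sample_loss(a, b):
    """Crude energy-distance-style discrepancy between two samples;
    a stand-in for the Sinkhorn loss used in the paper."""
    def pdist(x, y):  # epsilon keeps the sqrt gradient finite at zero
        return (x.unsqueeze(1) - y.unsqueeze(0)).pow(2).sum(-1).add(1e-8).sqrt()
    return 2 * pdist(a, b).mean() - pdist(a, a).mean() - pdist(b, b).mean()

opt = torch.optim.Adam(noise_gen.parameters(), lr=1e-3)
latents = torch.randn(128, 8)                 # stand-in for encoder outputs
for _ in range(100):
    z_fake = noise_gen(torch.randn(128, 16))  # map noise into latent space
    loss = two_sample_loss(z_fake, latents)
    opt.zero_grad(); loss.backward(); opt.step()

# Generation then amounts to decoder(noise_gen(eps)) for fresh noise eps.
```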

Autoencoding Blade Runner: Reconstructing Films with Artificial Neural Networks

Terence Broad, Mick Grierson
2017 Leonardo: Journal of the International Society for the Arts, Sciences and Technology  
'Blade Runner-Autoencoded' is a film made by training an autoencoder, a type of generative neural network, to recreate frames from the film Blade Runner.  ...  The project explores the aesthetic qualities of the disembodied gaze of the neural network.  ...  This was significant as it was the first time a convolutional neural network had been effectively inverted and used as a generative model, creating images almost indistinguishable from photographs at  ...
doi:10.1162/leon_a_01455 fatcat:zwtukmxvk5bhze5whpxwgr5cfm

Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification [article]

Yuting Zhang, Kibok Lee, Honglak Lee
2016 arXiv   pre-print
However, as high-capacity supervised neural networks trained with a large amount of labels have achieved remarkable success in many computer vision tasks, the availability of large-scale labeled images  ...  Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012 protocol as a strong baseline for image classification, our methods improve the validation-set accuracy by a noticeable margin.  ...  As the best existing method for inverting neural networks with no skip link, it used unpooling with fixed switches to upsample the intermediate activation maps.  ... 
arXiv:1606.06582v1 fatcat:ckilgys6qjb4haluhwihn77hlm
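
A short sketch of "unpooling with fixed switches" as mentioned above: max-pooling records the argmax locations (the switches), and the decoder reuses exactly those locations to place values back when upsampling. This uses the stock PyTorch ops, not the paper's code.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# Max-pool while recording argmax locations (the "switches").
pooled, switches = F.max_pool2d(x, kernel_size=2, return_indices=True)

# Unpool with the fixed switches from the encoder: every kept value goes
# back to the exact position it was pooled from; all other cells are zero.
upsampled = F.max_unpool2d(pooled, switches, kernel_size=2)
assert upsampled.shape == x.shape
```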

Network-to-Network Translation with Conditional Invertible Neural Networks [article]

Robin Rombach and Patrick Esser and Björn Ommer
2020 arXiv   pre-print
Therefore, we seek a model that can relate between different existing representations and propose to solve this task with a conditionally invertible network.  ...  diagnosis of existing representations by translating them into interpretable domains such as images.  ...  In our implementation, the conditional invertible neural network (cINN) consists of a sequence of INN-blocks as shown in Fig.  ... 
arXiv:2005.13580v2 fatcat:2wlmgachpvdsne7lhcpiio2noe
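
A minimal sketch of a conditional coupling block, the basic ingredient of the cINN this entry describes: the scale-and-shift network also receives a conditioning vector, so the mapping stays exactly invertible in x for any fixed condition. Sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling whose conditioner also sees an external condition,
    as in a conditional INN; invertible in x for any fixed cond."""
    def __init__(self, dim, cond_dim):
        super().__init__()
        half = dim // 2
        self.net = nn.Sequential(nn.Linear(half + cond_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * half))

    def forward(self, x, cond):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y, cond):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(torch.cat([y1, cond], dim=-1)).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

blk = ConditionalCoupling(dim=8, cond_dim=4)
z, c = torch.randn(2, 8), torch.randn(2, 4)
assert torch.allclose(blk.inverse(blk(z, c), c), z, atol=1e-5)
```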

Skeletonized Wave-Equation Refraction Inversion With Autoencoded Waveforms

Han Yu, Yuqing Chen, Sherif M. Hanafy, Gerard T. Schuster
2021 IEEE Transactions on Geoscience and Remote Sensing  
The benefit of this approach is that an elaborated autoencoding neural network not only refines intrinsic information hidden in the refractions but also improves the quality of inversion for a reliable  ...  In this study, first arrivals can be compressed in a low-rank sense with their skeletal features extracted by a well-trained autoencoder.  ...  Here z* is the scalar representing its skeletonized feature encoded by a well-trained autoencoder neural network.  ...
doi:10.1109/tgrs.2020.3046093 fatcat:vd6xkq5zpfaqln5cqip6svf24y
Showing results 1 — 15 out of 4,807 results