1,484 Hits in 2.3 sec

Scheduled denoising autoencoders [article]

Krzysztof J. Geras, Charles Sutton
2015 arXiv   pre-print
This motivates the scheduled denoising autoencoder, which starts with a high level of noise that lowers as training progresses.  ...  Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when  ...  Experimentally, we find on both image and text data that scheduled denoising autoencoders learn better representations than standard denoising autoencoders, as measured by the features' performance on  ... 
arXiv:1406.3269v3 fatcat:xm2qgvkjdjb3thz756pipbkavu
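
As a rough illustration of the schedule described above, a minimal PyTorch sketch (the model, the data loader yielding (input, label) pairs, the noise levels, and the linear annealing rule are all assumptions, not the paper's exact setup):

```python
import torch
import torch.nn.functional as F

def corrupt(x, sigma):
    # Additive Gaussian corruption; heavier noise pushes the network toward
    # coarse-grained features, lighter noise toward fine-grained ones.
    return x + sigma * torch.randn_like(x)

def train_scheduled_dae(model, loader, epochs=20, sigma_start=0.7, sigma_end=0.1):
    opt = torch.optim.Adam(model.parameters())
    for epoch in range(epochs):
        # Start with a high noise level and lower it as training progresses.
        sigma = sigma_start + (sigma_end - sigma_start) * epoch / max(epochs - 1, 1)
        for x, _ in loader:
            loss = F.mse_loss(model(corrupt(x, sigma)), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
```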

Composite Denoising Autoencoders [chapter]

Krzysztof J. Geras, Charles Sutton
2016 Lecture Notes in Computer Science  
We introduce an unsupervised representation learning method called a composite denoising autoencoder (CDA) to address this.  ...  We exploit the observation from previous work that in a denoising autoencoder, training with lower levels of noise results in more specific, fine-grained features.  ...

Model                                    Test error
Composite Denoising Autoencoder          35.06%
Scheduled Denoising Autoencoder [10]     35.7%
Zero-bias Autoencoder [22]               35.9%
Fastfood FFT [20]                        36.9%
Nonparametrically Guided Autoencoder [   ...
doi:10.1007/978-3-319-46128-1_43 fatcat:j5ir4z43abcr3cpftszhvtp5ji
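
A minimal sketch of the idea in the snippet: partition the hidden units into groups, each encoding the input corrupted at its own noise level (the two-level split, ReLU encoders, and equal partition are assumptions; the paper additionally anneals the noise levels during training):

```python
import torch
import torch.nn as nn

class CompositeDAE(nn.Module):
    # Hidden units are split into groups; each group encodes the input
    # corrupted at its own noise level, so coarse- and fine-grained features
    # coexist. Assumes d_hid is divisible by the number of noise levels.
    def __init__(self, d_in, d_hid, sigmas=(0.5, 0.2)):
        super().__init__()
        self.sigmas = sigmas
        self.encoders = nn.ModuleList(
            nn.Linear(d_in, d_hid // len(sigmas)) for _ in sigmas)
        self.decoder = nn.Linear(d_hid, d_in)

    def forward(self, x):
        codes = [torch.relu(enc(x + s * torch.randn_like(x)))
                 for enc, s in zip(self.encoders, self.sigmas)]
        return self.decoder(torch.cat(codes, dim=1))
```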

Semantic denoising autoencoders for retinal optical coherence tomography

Max-Heinrich Laves, Sontje Ihler, Lüder Alexander Kahrs, Tobias Ortmaier
2019 Optical Coherence Imaging Techniques and Imaging in Scattering Media III  
We propose semantic denoising autoencoders, which combine a convolutional denoising autoencoder with a previously trained ResNet image classifier as a regularizer during training.  ...  It is shown that semantically regularized autoencoders are capable of denoising retinal OCT images without blurring details of diseases.  ...  Future work therefore aims at variational autoencoders and generative adversarial networks for OCT denoising.  ... 
doi:10.1117/12.2526936 fatcat:lxspr5jr3ffjxfs5k55qxxlfhe
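
The regularization idea lends itself to a short sketch: a reconstruction loss plus a feature-matching term through the frozen, pre-trained classifier (the weight lam and the use of the classifier's output as the matched feature are assumptions):

```python
import torch
import torch.nn.functional as F

def semantic_dae_loss(autoencoder, classifier, noisy, clean, lam=0.1):
    # Reconstruction term plus a "semantic" term: the frozen, pre-trained
    # ResNet-style classifier should respond similarly to the denoised output
    # and the clean target, discouraging the blurring of disease details.
    denoised = autoencoder(noisy)
    rec = F.mse_loss(denoised, clean)
    with torch.no_grad():
        target_feats = classifier(clean)
    sem = F.mse_loss(classifier(denoised), target_feats)
    return rec + lam * sem
```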

Semantic denoising autoencoders for retinal optical coherence tomography [article]

Max-Heinrich Laves, Sontje Ihler, Lüder Alexander Kahrs, Tobias Ortmaier
2019 arXiv   pre-print
It is shown that regularized autoencoders are capable of denoising retinal OCT images without blurring details of diseases.  ...  With our approach, higher peak signal-to-noise ratios (PSNR = 31.2 dB) and higher classification accuracy (ACC = 85.0 %) can be achieved for denoised images compared to state-of-the-art denoising with  ...  This paper describes a domain-specific postprocessing method for denoising OCT images with machine learning, more specifically convolutional autoencoders (AE), while maintaining disease characteristics.  ... 
arXiv:1903.09809v1 fatcat:kvke2n7pkbbytcec4kic4hhe7a

Autoencoders, Kernels, and Multilayer Perceptrons for Electron Micrograph Restoration and Compression [article]

Jeffrey M. Ede
2018 arXiv   pre-print
Kernels and multilayer perceptrons have been trained to approximate the denoising effect of the 4× compression autoencoders.  ...  Our code, example usage and pre-trained models are available at https://github.com/Jeffrey-Ede/Denoising-Kernels-MLPs-Autoencoders  ...  This meant that STEM autoencoder learning curves did not plateau in our 60000 batch training schedule, resulting in STEM MSEs being above their convergence limit.  ... 
arXiv:1808.09916v1 fatcat:canyqapil5aonkdvedth6ia2zu
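
A hedged sketch of approximating an autoencoder's denoising effect with a cheaper per-patch model, in the spirit of the abstract (the batch size, optimizer, and flattened-patch MLP input are assumptions; see the linked repository for the authors' actual code):

```python
import torch
import torch.nn.functional as F

def distill_denoiser(mlp, autoencoder, patches, steps=1000, batch=64):
    # Train a small per-patch MLP to mimic the frozen autoencoder's
    # denoising output, trading some accuracy for cheaper inference.
    opt = torch.optim.Adam(mlp.parameters())
    autoencoder.eval()
    for _ in range(steps):
        x = patches[torch.randint(len(patches), (batch,))]
        with torch.no_grad():
            target = autoencoder(x)
        loss = F.mse_loss(mlp(x.flatten(1)), target.flatten(1))
        opt.zero_grad()
        loss.backward()
        opt.step()
```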

Adaptive Noise Level for Stacked Denoising Autoencoder

Qianjun Zhang, Lei Zhang
2017 DEStech Transactions on Engineering and Technology Research  
The stacked denoising autoencoder (SDAE) is a slight modification of the stacked autoencoder, which is trained to reconstruct a clean version of an input from its corrupted version.  ...  To address this limitation, we present an adaptive stacked denoising autoencoder based on the principle of annealing (AdaptiveSDAE), a novel method of adaptively obtaining the noise level.  ...  Geras et al. (2014) [7] proposed the scheduled denoising autoencoder, which starts with a high level of noise that lowers as training progresses.  ... 
doi:10.12783/dtetr/ismii2017/16665 fatcat:exd32wo4crhdzjuqgcnbgqhf2e
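
One plausible annealing-style rule, purely illustrative (the paper's actual criterion for adapting the noise level may well differ): lower the corruption once the reconstruction loss stops improving.

```python
def adapt_sigma(sigma, losses, patience=3, decay=0.8, sigma_min=0.05):
    # Illustrative annealing rule: if the last `patience` reconstruction
    # losses show no improvement over the earlier best, reduce the noise.
    if len(losses) > patience and min(losses[-patience:]) >= min(losses[:-patience]):
        sigma = max(sigma * decay, sigma_min)
    return sigma
```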

Denoising Induction Motor Sounds Using an Autoencoder [article]

Thanh Tran, Sebastian Bader, Jan Lundgren
2022 arXiv   pre-print
This paper describes a method for creating an autoencoder to map noisy machine sounds to clean sounds for denoising purposes.  ...  The mean square error (MSE) was used as the assessment criterion to evaluate the similarity between sounds denoised by the proposed autoencoder and the original sounds in the test set.  ...  In the manufacturing industry, machine failure detection systems are critical for automatically detecting broken components in machines for scheduled maintenance [1], [2].  ... 
arXiv:2208.04462v1 fatcat:xutt3f6nt5dz3edzrww3gotrpu
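
The assessment criterion mentioned above is just the mean squared error between the denoised signal and the clean reference, e.g.:

```python
import numpy as np

def mse(denoised, clean):
    # Mean squared error between the denoised sound and the clean
    # reference; lower values indicate more faithful denoising.
    denoised = np.asarray(denoised, dtype=np.float64)
    clean = np.asarray(clean, dtype=np.float64)
    return float(np.mean((denoised - clean) ** 2))
```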

Finetuning Pretrained Transformers into Variational Autoencoders [article]

Seongmin Park, Jihwa Lee
2021 arXiv   pre-print
Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model's decoder learns to ignore signals from the encoder.  ...  We compare different input denoising percentages, encoder pooling strategies, KL annealing schedules, and KL thresholds.  ...  Denoising text inputs by deleting random tokens motivates autoencoders (AEs) to learn better latent representations (Shen et al., 2020).  ... 
arXiv:2108.02446v3 fatcat:ac2jjb65dbdsrbbztvr3i3y4tu
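
Two of the knobs compared here, KL annealing schedules and KL thresholds, can be sketched in a few lines (the linear warm-up over half of training and the per-dimension free-bits-style floor are assumptions, not the paper's exact settings):

```python
import torch

def kl_term(mu, logvar, step, total_steps, threshold=0.1):
    # Per-dimension KL divergence of N(mu, sigma^2) from N(0, 1), with a
    # free-bits-style floor (KL threshold) and a linear annealing weight;
    # both are common remedies for posterior collapse.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    kl = torch.clamp(kl, min=threshold).sum(dim=-1).mean()
    weight = min(1.0, step / (0.5 * total_steps))
    return weight * kl
```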

k-Sparse Autoencoders [article]

Alireza Makhzani, Brendan Frey
2014 arXiv   pre-print
When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders  ...  To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is an autoencoder with a linear activation function, where in the hidden layers only the k highest activities  ...  We trained a number of architectures on the MNIST and NORB datasets, including RBMs, dropout autoencoders, and denoising autoencoders.  ... 
arXiv:1312.5663v2 fatcat:dyhkejyisjapbk4ib4hpydbmtq
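
The mechanism is simple enough to state in code: after a linear encoding, keep only the k highest hidden activities and zero out the rest (a minimal sketch; initialization details and the adjusted k used at test time are omitted):

```python
import torch
import torch.nn as nn

class KSparseAutoencoder(nn.Module):
    # Autoencoder with linear activations in which only the k highest
    # hidden activities survive; the remaining units are zeroed.
    def __init__(self, d_in, d_hid, k):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hid)
        self.dec = nn.Linear(d_hid, d_in)
        self.k = k

    def forward(self, x):
        h = self.enc(x)
        idx = torch.topk(h, self.k, dim=1).indices
        mask = torch.zeros_like(h).scatter_(1, idx, 1.0)
        return self.dec(h * mask)
```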

Are Large-scale Datasets Necessary for Self-Supervised Pre-training? [article]

Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jegou, Edouard Grave
2021 arXiv   pre-print
Our study shows that denoising autoencoders, such as BEiT or a variant that we introduce in this paper, are more robust to the type and size of the pre-training data than popular self-supervised methods  ...  This is a strong indication that denoising autoencoders are highly sample efficient unsupervised learning methods.  ...  Such a model can be thought of as a denoising autoencoder [25] where the noise corresponds to the patch masking operation.  ... 
arXiv:2112.10740v1 fatcat:fn36ircukjbqtd2ffg7ke7m3s4
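
The "noise as patch masking" view can be made concrete: corrupt images by zeroing a random fraction of non-overlapping patches (the patch size and masking ratio are illustrative, and image dimensions are assumed divisible by the patch size):

```python
import torch

def mask_patches(images, patch=16, ratio=0.4):
    # The denoising-autoencoder "noise" here is patch masking: zero out
    # a random fraction of non-overlapping patches per image.
    b, _, h, w = images.shape
    nh, nw = h // patch, w // patch
    keep = (torch.rand(b, 1, nh, nw, device=images.device) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask
```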

Agon: A Scalable Competitive Scheduler for Large Heterogeneous Systems [article]

Andreas Prodromou, Ashish Venkat, Dean M. Tullsen
2021 arXiv   pre-print
overhead for each scheduling interval.  ...  This scheduler overcomes the challenges of (1) the high computation overhead of near-optimal schedulers, and (2) the error introduced by inaccurate performance predictions.  ...  Given the success of autoencoders in denoising images, we expect it to work well in our example.  ... 
arXiv:2109.00665v1 fatcat:w3gsp7ma6beejc6gbuyu2soqi4

Lateral Connections in Denoising Autoencoders Support Supervised Learning [article]

Antti Rasmus, Harri Valpola, Tapani Raiko
2015 arXiv   pre-print
We show how a deep denoising autoencoder with lateral connections can be used as an auxiliary unsupervised learning task to support supervised learning.  ...  Denoising autoencoders (Vincent et al., 2010) use the same principle to create unsupervised models for data.  ...  Decoder for Unsupervised Auxiliary Task The unsupervised auxiliary task performs denoising similar to a traditional denoising autoencoder; that is, it tries to match the reconstruction x̂ with the original  ... 
arXiv:1504.08215v1 fatcat:2ggv55nbmvch3j7scocyh6ptke
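
The auxiliary-task arrangement can be sketched as one objective: a supervised cross-entropy term plus a denoising cost matching the reconstruction x̂ against the uncorrupted input (the single weight beta is an assumption; the paper weights denoising costs per layer):

```python
import torch.nn.functional as F

def supervised_plus_denoising_loss(logits, labels, x_hat, x_clean, beta=0.5):
    # Supervised cross-entropy plus an auxiliary denoising cost that
    # matches the reconstruction x_hat with the original, clean input.
    return F.cross_entropy(logits, labels) + beta * F.mse_loss(x_hat, x_clean)
```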

Diffusion Autoencoders: Toward a Meaningful and Decodable Representation [article]

Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, Supasorn Suwajanakorn
2022 arXiv   pre-print
We also show that this two-level encoding improves denoising efficiency and naturally facilitates various downstream tasks including few-shot conditional sampling.  ...  This paper explores the possibility of using DPMs for representation learning and seeks to extract a meaningful and decodable representation of an input image via autoencoding.  ...  (We found that the Cosine [36] scheduler underperformed during preliminary experiments for our latent DDIM.) We compared the two schedulers on the z_sem of LSUN's Horse 128 diffusion autoencoder model.  ... 
arXiv:2111.15640v3 fatcat:sahzuednxbb4dpjxswv6yilx2i
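
For reference, the two noise schedules being compared look like this (the standard linear and cosine formulations with their usual default constants, which are not necessarily the paper's):

```python
import math
import torch

def linear_betas(T=1000, beta_0=1e-4, beta_T=0.02):
    # Linear variance schedule, the common DDPM default.
    return torch.linspace(beta_0, beta_T, T)

def cosine_alpha_bar(T=1000, s=0.008):
    # Cosine schedule (Nichol & Dhariwal): cumulative alpha_bar values.
    t = torch.arange(T + 1) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    return f / f[0]
```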

A Novel Denoising Method for Retaining Data Characteristics Brought from Washing Aeroengines

Zhiqi Yan, Ming Zu, Zhiquan Cui, Shisheng Zhong
2022 Mathematics  
Second, a Gated Recurrent Unit Autoencoder (GAE) model is proposed to capture engine data features.  ...  These step changes increase the difficulty of denoising because they will be smoothed out by the denoising.  ...  Gated Recurrent Unit Autoencoder (GAE): A Proposed Denoising Autoencoder Model for Aeroengine Data The DAE is a promising method for data denoising and can be used for aeroengine data denoising.  ... 
doi:10.3390/math10091485 fatcat:yurk6kxfrffm5hukzlhd7f4pqi
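
A minimal sketch of a GRU-based sequence autoencoder in the spirit of the GAE described above (single layers and a zero-input decoder conditioned on the encoder's final state are assumptions; the paper's model likely differs):

```python
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    # Encode a window of multivariate engine data into the GRU's final
    # hidden state, then reconstruct the window from that state alone.
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.encoder = nn.GRU(d_in, d_hid, batch_first=True)
        self.decoder = nn.GRU(d_in, d_hid, batch_first=True)
        self.out = nn.Linear(d_hid, d_in)

    def forward(self, x):                  # x: (batch, time, d_in)
        _, h = self.encoder(x)             # h: (1, batch, d_hid) summary
        y, _ = self.decoder(torch.zeros_like(x), h)
        return self.out(y)
```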

Boltzmann Machines and Denoising Autoencoders for Image Denoising [article]

Kyunghyun Cho
2013 arXiv   pre-print
, better than denoising autoencoders.  ...  Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and  ...  Denoising Autoencoders A denoising autoencoder (DAE) is a special form of multi-layer perceptron network with 2L − 1 hidden layers and L − 1 sets of tied weights.  ... 
arXiv:1301.3468v6 fatcat:ubupuw5xpvdtlhfj546bhlwsta
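
The tied-weights construction mentioned in the snippet, reduced to the simplest case of one hidden layer with encoder and decoder weights tied by transposition (sigmoid units and the initialization scale are assumptions):

```python
import torch
import torch.nn as nn

class TiedDAE(nn.Module):
    # One-hidden-layer denoising autoencoder with tied weights: the
    # decoder reuses the transpose of the encoder's weight matrix.
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(d_hid, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_hid))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x_noisy):
        h = torch.sigmoid(x_noisy @ self.W.t() + self.b_enc)
        return torch.sigmoid(h @ self.W + self.b_dec)
```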
Showing results 1 — 15 out of 1,484 results