14,088 Hits in 2.0 sec

Negative Sampling in Variational Autoencoders [article]

Adrián Csiszárik, Beatrix Benkő, Dániel Varga
2022 arXiv   pre-print
training scheme utilizing negative samples.  ...  We investigate this failure mode in Variational Autoencoder models, which are also prone to this, and improve upon the out-of-distribution generalization performance of the model by employing an alternative  ...  In particular, our experiments show that with certain datasets it diminishes when a Gaussian noise model is considered instead of a Bernoulli. We propose negative sampling in Variational Autoencoders  ... 
arXiv:1910.02760v3 fatcat:prgxnqxh6jeslbjoa2utx2bu6i
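
For the Bernoulli-versus-Gaussian noise-model comparison mentioned in the snippet above, the two reconstruction terms look roughly as follows in a typical VAE implementation; this is a generic sketch under my own assumptions, not code from the paper.

    # Generic VAE reconstruction losses: Bernoulli vs. Gaussian observation models.
    import math
    import torch
    import torch.nn.functional as F

    def bernoulli_nll(x, logits):
        # Negative log-likelihood of x under a Bernoulli decoder, summed over features.
        return F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(dim=-1)

    def gaussian_nll(x, mean, log_var):
        # Negative log-likelihood of x under a diagonal Gaussian decoder.
        return 0.5 * ((x - mean) ** 2 / log_var.exp() + log_var + math.log(2 * math.pi)).sum(dim=-1)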

An Ameliorated method for Fraud Detection using Complex Generative Model: Variational Autoencoder

2019 Volume-8, Issue-10, August 2019, Regular Issue  
The proposed system, VAE-based fraud detection, uses a variational autoencoder for predicting and detecting fraud.  ...  The fraud detector uses the latent representations obtained from the variational autoencoder to classify whether transactions are fraudulent or not.  ...  The loss function of the variational autoencoder is the negative log-likelihood with a regularizer; the total loss is then summed over all N data points.  ... 
doi:10.35940/ijitee.b1005.1292s19 fatcat:seg34ssutbbqzktx6o52hcmfni
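
As a rough illustration of the detector described in this entry (not the authors' code; encode_mu is an assumed helper returning the mean of q(z|x) from an already-trained VAE encoder), a simple classifier can be fit on the latent representations:

    # Hypothetical sketch: classify transactions from VAE latent codes.
    from sklearn.linear_model import LogisticRegression

    def fraud_classifier_from_latents(encode_mu, X_train, y_train):
        Z_train = encode_mu(X_train)                        # latent means as features
        clf = LogisticRegression(class_weight="balanced")   # fraud is rare, so reweight classes
        clf.fit(Z_train, y_train)
        return clf

    # Usage: clf = fraud_classifier_from_latents(encode_mu, X_train, y_train)
    #        fraud_probability = clf.predict_proba(encode_mu(X_test))[:, 1]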

Tutorial - What is a Variational Autoencoder?

Jaan Altosaar
2016 Zenodo  
Tutorial on variational autoencoders.  ...  DOI for citations: https://doi.org/10.5281/zenodo.4458806 Post: https://jaan.io/what-is-variational-autoencoder-vae-tutorial/ Code at https://github.com/altosaar/variational-autoencoder/  ...  The loss function of the variational autoencoder is the negative log-likelihood with a regularizer.  ... 
doi:10.5281/zenodo.4458805 fatcat:jny5puuegbfnhccfto7abnxyey
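
In standard notation, the per-datapoint loss the snippet refers to is the negative expected log-likelihood plus a KL regularizer, summed over the N data points (written here from general VAE background, not quoted from the tutorial):

    \ell_i(\theta, \phi) = -\mathbb{E}_{z \sim q_\theta(z \mid x_i)}\big[\log p_\phi(x_i \mid z)\big]
                           + \mathrm{KL}\big(q_\theta(z \mid x_i) \,\|\, p(z)\big),
    \qquad \mathcal{L} = \sum_{i=1}^{N} \ell_i(\theta, \phi).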

Novel Applications for VAE-based Anomaly Detection Systems [article]

Luca Bergamin, Tommaso Carraro, Mirko Polato, Fabio Aiolli
2022 arXiv   pre-print
We propose Variational Auto-encoding Binary Classifiers (V-ABC): a novel model that repurposes and extends the Auto-encoding Binary Classifier (ABC) anomaly detector, using the Variational Auto-encoder  ...  The recent rise in deep learning technologies fueled innovation and boosted scientific research.  ...  Variational autoencoders Variational autoencoders are generative models based on variational inference, with an architecture similar to vanilla autoencoders.  ... 
arXiv:2204.12577v1 fatcat:pui4kzulrndpfp2fxtovgd3kge

Self-taught learning of a deep invariant representation for visual tracking via temporal slowness principle

Jason Kuen, Kian Ming Lim, Chin Poo Lee
2015 Pattern Recognition  
The proposed observational model retains old training samples to alleviate drift and collects negative samples that are coherent with the target's motion pattern for better discriminative tracking.  ...  In this paper, we propose to learn complex-valued invariant representations from tracked sequential image patches via a strong temporal slowness constraint and stacked convolutional autoencoders.  ...  Online negative sampling: Online negative sampling is one of the most important components in discriminative tracking.  ... 
doi:10.1016/j.patcog.2015.02.012 fatcat:26lq2q5uvnduxcdtiqttm3cgui

An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos

B. Kiran, Dilip Thomas, Ranjith Parakkal
2018 Journal of Imaging  
Videos represent the primary source of information for surveillance applications and are available in large amounts, but in most cases contain little or no annotation for supervised learning.  ...  Acknowledgments: The authors would like to thank Benjamin Crouzier for his help in proofreading the manuscript, and Y. Senthil Kumar (Valeo) for helpful suggestions.  ...  We use the latent-space representation of the variational autoencoders to fit a single multivariate Gaussian on the training dataset and evaluate the negative log-probability for the test samples.  ... 
doi:10.3390/jimaging4020036 fatcat:za52zspzjbewbakdordavpatvq
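
A minimal sketch of the latent-density scoring described above, assuming latent codes have already been extracted by a trained encoder (function and variable names are placeholders, not the survey's code):

    # Hypothetical sketch: fit one multivariate Gaussian to latent codes of normal
    # training data, then score test samples by negative log-probability.
    import numpy as np
    from scipy.stats import multivariate_normal

    def fit_latent_gaussian(Z_train):
        mu = Z_train.mean(axis=0)
        cov = np.cov(Z_train, rowvar=False) + 1e-6 * np.eye(Z_train.shape[1])  # regularize
        return mu, cov

    def anomaly_scores(Z_test, mu, cov):
        # Higher score = less likely under the training distribution = more anomalous.
        return -multivariate_normal(mean=mu, cov=cov).logpdf(Z_test)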

A New Dimension of Breast Cancer Epigenetics - Applications of Variational Autoencoders with DNA Methylation

Alexander J. Titus, Carly A. Bobak, Brock C. Christensen
2018 Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies  
In this article, we discuss the applications of unsupervised learning through the use of variational autoencoders using DNA methylation data and motivate further work with initial results using breast  ...  However, the black-box nature of non-linear models, such as those in deep learning, and a lack of accurately labeled ground truth data have limited the same rapid adoption in this space that other methods  ...  ACKNOWLEDGEMENTS Research reported in this publication was supported by the Office of the U.S.  ... 
doi:10.5220/0006636401400145 dblp:conf/biostec/TitusBC18 fatcat:t63p6wbztbglrby6bvfjmp5gfi

A Variational Autoencoder for Probabilistic Non-Negative Matrix Factorisation [article]

Steven Squires, Adam Prügel Bennett, Mahesan Niranjan
2019 arXiv   pre-print
We introduce and demonstrate the variational autoencoder (VAE) for probabilistic non-negative matrix factorisation (PAE-NMF).  ...  By restricting the weights in the final layer of the network to be non-negative and using the non-negative Weibull distribution we produce a probabilistic form of NMF which allows us to generate new data  ...  A standard variational autoencoder uses Gaussian distributions.  ... 
arXiv:1906.05912v1 fatcat:4to4yx5itfcabmafz57mnanha4
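
A loose sketch of the two ingredients named in the snippet, under my own assumptions about how the constraints are enforced (this is not the authors' implementation): a reparameterized Weibull latent sample and a decoder whose final-layer weights are kept non-negative.

    # Hypothetical sketch of PAE-NMF-style components.
    import torch
    import torch.nn.functional as F

    def sample_weibull(k, lam):
        # Reparameterized Weibull sample via the inverse CDF:
        # z = lam * (-log(1 - u))^(1/k), with u ~ Uniform(0, 1).
        u = torch.rand_like(k)
        return lam * (-torch.log1p(-u)) ** (1.0 / k)

    class NonNegativeDecoder(torch.nn.Module):
        def __init__(self, latent_dim, data_dim):
            super().__init__()
            self.W = torch.nn.Parameter(torch.randn(latent_dim, data_dim))

        def forward(self, z):
            # Softplus keeps the effective final-layer weights non-negative, so outputs
            # are non-negative combinations of the non-negative latent factors.
            return z @ F.softplus(self.W)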

Cost-sensitive detection with variational autoencoders for environmental acoustic sensing [article]

Yunpeng Li, Ivan Kiskin, Davide Zilli, Marianne Sinka, Henry Chan, Kathy Willis, Stephen Roberts
2017 arXiv   pre-print
This paper presents a cost-sensitive classification paradigm, in which the hyper-parameters of classifiers and the structure of variational autoencoders are selected in a principled Neyman-Pearson framework  ...  Most existing machine learning techniques developed for environmental acoustic sensing do not provide flexible control of the trade-off between the false positive rate and the false negative rate.  ...  The variational autoencoder The variational autoencoder (VAE) is a variational inference technique using a neural network for function approximations Kingma and Welling (2014) .  ... 
arXiv:1712.02488v1 fatcat:gngaea65ive6ld7feannq6zi4i
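
As a toy illustration of a Neyman-Pearson-style choice (my own sketch, not the paper's model-selection procedure): fix the tolerated false positive rate and set the decision threshold from detector scores on held-out negative examples.

    # Hypothetical sketch: pick a detection threshold for a target false positive rate.
    import numpy as np

    def threshold_for_fpr(scores_negative, target_fpr=0.01):
        # The threshold is the (1 - target_fpr) quantile of negative-class scores,
        # so roughly target_fpr of negatives would be flagged as positives.
        return np.quantile(scores_negative, 1.0 - target_fpr)

    # Usage: tau = threshold_for_fpr(scores_on_known_negatives, target_fpr=0.01)
    #        detections = test_scores > tau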

Weak Label Supervision for Monaural Source Separation Using Non-negative Denoising Variational Autoencoders [article]

Ertuğ Karamatlı, Ali Taylan Cemgil, Serap Kırbız
2018 arXiv   pre-print
We associate a variational autoencoder (VAE) with each class within a non-negative model.  ...  We demonstrate that deep convolutional VAEs provide a prior model to identify complex signals in a sound mixture without having access to any source signal.  ...  NON-NEGATIVE DENOISING VARIATIONAL AUTOENCODERS We start by describing the NMF [1] model due to its connections to our model.  ... 
arXiv:1810.13104v2 fatcat:7cscedj4frcb7idz43rnhtoxvu
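
For context on the NMF model the snippet says the paper builds on, here is a generic non-negative factorisation of a magnitude spectrogram (an illustrative example only, unrelated to the authors' weak-label setup):

    # Generic NMF decomposition of a non-negative matrix (e.g. a magnitude spectrogram).
    import numpy as np
    from sklearn.decomposition import NMF

    V = np.abs(np.random.randn(513, 200))            # placeholder spectrogram, freq x time
    model = NMF(n_components=8, init="nndsvd", max_iter=500)
    W = model.fit_transform(V)                       # spectral templates (513 x 8)
    H = model.components_                            # activations over time (8 x 200)
    V_hat = W @ H                                    # non-negative reconstruction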

Diffusion Autoencoders: Toward a Meaningful and Decodable Representation [article]

Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, Supasorn Suwajanakorn
2022 arXiv   pre-print
Diffusion probabilistic models (DPMs) have achieved remarkable quality in image generation that rivals GANs'.  ...  Our key idea is to use a learnable encoder for discovering the high-level semantics, and a DPM as the decoder for modeling the remaining stochastic variations.  ...  With positive and negative examples of the target attribute, one can modify the latent code by moving z sem linearly from the negative to the positive attribute in the latent space while keeping x T intact  ... 
arXiv:2111.15640v3 fatcat:sahzuednxbb4dpjxswv6yilx2i
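
A rough sketch of the linear latent manipulation mentioned above (variable names are my own assumptions): estimate an attribute direction from positive and negative examples and move z_sem along it, leaving the stochastic code x_T untouched before decoding.

    # Hypothetical sketch: linear attribute editing in a semantic latent space.
    import numpy as np

    def attribute_direction(z_pos, z_neg):
        # Direction from the negative-attribute mean to the positive-attribute mean.
        d = z_pos.mean(axis=0) - z_neg.mean(axis=0)
        return d / np.linalg.norm(d)

    def edit_semantic_code(z_sem, direction, strength=1.0):
        # The stochastic code x_T is kept fixed; only the semantic code moves.
        return z_sem + strength * direction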

OUTRIDER: A statistical method for detecting aberrantly expressed genes in RNA sequencing data [article]

Felix Brechtmann, Agne Matuseviciute, Christian Mertes, Vicente A Yepez, Ziga Avsec, Maximilian Herzog, Daniel Magnus Bader, Holger Prokisch, Julien Gagneur
2018 bioRxiv   pre-print
The algorithm uses an autoencoder to model read count expectations according to the co-variation among genes resulting from technical, environmental, or common genetic variations.  ...  OUTRIDER is open source and includes functions for filtering out genes not expressed in a data set, for identifying outlier samples with too many aberrantly expressed genes, and for the P-value-based detection  ...  This is consistent with the study of Way and Greene, who modeled co-variations in RNA-seq samples using a single-layer autoencoder.  ... 
doi:10.1101/322149 fatcat:nik4de2fzvdclbsac7zf3e3m4y

OUTRIDER: A Statistical Method for Detecting Aberrantly Expressed Genes in RNA Sequencing Data

Felix Brechtmann, Christian Mertes, Agnė Matusevičiūtė, Vicente A. Yépez, Žiga Avsec, Maximilian Herzog, Daniel M. Bader, Holger Prokisch, Julien Gagneur
2018 American Journal of Human Genetics  
The algorithm uses an autoencoder to model read-count expectations according to the gene covariation resulting from technical, environmental, or common genetic variations.  ...  OUTRIDER is open source and includes functions for filtering out genes not expressed in a dataset, for identifying outlier samples with too many aberrantly expressed genes, and for detecting aberrant gene  ...  This is consistent with the study of Way and Greene, who modeled co-variations in RNA-seq samples using a single-layer autoencoder.  ... 
doi:10.1016/j.ajhg.2018.10.025 pmid:30503520 pmcid:PMC6288422 fatcat:7dkqmv3w5jc4xf323af3wcfqdq
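
A very rough sketch of the outlier-calling step (my own simplification, not the OUTRIDER implementation): compare observed counts with the autoencoder-derived expected counts under a negative binomial model and compute two-sided p-values.

    # Hypothetical sketch: negative-binomial p-values for observed vs. expected counts.
    import numpy as np
    from scipy.stats import nbinom

    def outlier_pvalues(observed, expected, dispersion):
        # scipy's nbinom uses (n, p); converting from mean/dispersion:
        # mean = n * (1 - p) / p  =>  p = n / (n + mean), with n = dispersion.
        p = dispersion / (dispersion + expected)
        lower = nbinom.cdf(observed, dispersion, p)
        upper = nbinom.sf(observed - 1, dispersion, p)
        return np.minimum(1.0, 2.0 * np.minimum(lower, upper))  # crude two-sided p-value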

Implicit Autoencoders [article]

Alireza Makhzani
2019 arXiv   pre-print
We show the applications of implicit autoencoders in disentangling content and style information, clustering, semi-supervised classification, learning expressive variational distributions, and multimodal  ...  In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions.  ...  Conclusion In this paper, we proposed the implicit autoencoder, which is a generative autoencoder that uses implicit distributions to learn expressive variational posterior and conditional likelihood distributions  ... 
arXiv:1805.09804v2 fatcat:mwjpf6wuqnhqdny3ockp4hh2b4

Adversarial Autoencoders [article]

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey
2016 arXiv   pre-print
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference  ...  Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples.  ...  than a sample from our generative model (negative samples).  ... 
arXiv:1511.05644v2 fatcat:wzmcxtyd65f4tb6mgtzkahh774
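
As a loose sketch of the adversarial regularization this entry describes (my own simplification, not the released code): a discriminator learns to tell prior samples (positives) from encoder outputs (negative samples), while the encoder is trained to fool it so that the aggregated posterior matches the prior.

    # Hypothetical sketch of one adversarial regularization step in an AAE.
    import torch
    import torch.nn.functional as F

    def adversarial_step(encoder, discriminator, x, latent_dim):
        z_fake = encoder(x)                            # aggregated-posterior samples (negatives)
        z_real = torch.randn(x.shape[0], latent_dim)   # prior samples (positives)
        ones = torch.ones(x.shape[0], 1)
        zeros = torch.zeros(x.shape[0], 1)

        # Discriminator: prior samples -> 1, encoder outputs -> 0.
        d_loss = F.binary_cross_entropy_with_logits(discriminator(z_real), ones) \
               + F.binary_cross_entropy_with_logits(discriminator(z_fake.detach()), zeros)

        # Encoder ("generator"): fool the discriminator into labelling its outputs as prior.
        g_loss = F.binary_cross_entropy_with_logits(discriminator(z_fake), ones)
        return d_loss, g_loss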
Showing results 1 — 15 out of 14,088 results