48 Hits in 1.3 sec

PixelGAN Autoencoders [article]

Alireza Makhzani, Brendan Frey
2017 arXiv   pre-print
In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code. We show that different priors result in different decompositions of information between the latent code and the autoregressive decoder. For example, by imposing a
more » ... Gaussian distribution as the prior, we can achieve a global vs. local decomposition, or by imposing a categorical distribution as the prior, we can disentangle the style and content information of images in an unsupervised fashion. We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets.
arXiv:1706.00531v1 fatcat:u43hdjhppbfljojfigdnvd737y
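
As a concrete illustration of the generative path described above, here is a minimal PyTorch sketch (class and argument names are ours, not the paper's code) of a PixelCNN-style masked convolution conditioned on a latent code z through a learned per-channel bias:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """PixelCNN masked convolution: each pixel only sees pixels above/left.
    Mask type 'A' (first layer only) also hides the current pixel."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        self.register_buffer("mask", torch.ones_like(self.weight))
        _, _, h, w = self.weight.shape
        self.mask[:, :, h // 2, w // 2 + (mask_type == "B"):] = 0
        self.mask[:, :, h // 2 + 1:] = 0

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding)

class ConditionalPixelCNNLayer(nn.Module):
    """One autoregressive decoder layer conditioned on the latent code."""
    def __init__(self, channels, latent_dim):
        super().__init__()
        self.conv = MaskedConv2d("B", channels, channels, 3, padding=1)
        self.cond = nn.Linear(latent_dim, channels)  # z -> per-channel bias

    def forward(self, x, z):
        return F.relu(self.conv(x) + self.cond(z)[:, :, None, None])
```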

Implicit Autoencoders [article]

Alireza Makhzani
2019 arXiv   pre-print
In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood
more » ... tributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining low-level information is captured by the implicit conditional likelihood distribution. We show the applications of implicit autoencoders in disentangling content and style information, clustering, semi-supervised classification, learning expressive variational distributions, and multimodal image-to-image translation from unpaired data.
arXiv:1805.09804v2 fatcat:mwjpf6wuqnhqdny3ockp4hh2b4
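
A schematic sketch of the idea, under loose assumptions (this is not the paper's exact pairing of distributions; module shapes and names are hypothetical): both autoencoder cost terms are replaced by GAN critics, one on latent codes for the regularization cost and one on (input, reconstruction) pairs for the reconstruction cost.

```python
import torch
import torch.nn as nn

bce = nn.functional.binary_cross_entropy_with_logits
latent_critic = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 1))
recon_critic = nn.Sequential(nn.Linear(784 * 2, 256), nn.ReLU(), nn.Linear(256, 1))

def critic_losses(x, x_hat, z_posterior, z_prior):
    """GAN losses standing in for the two implicitly defined costs."""
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)
    # regularization cost: do encoder outputs look like prior draws?
    reg = bce(latent_critic(z_prior), ones) + bce(latent_critic(z_posterior), zeros)
    # reconstruction cost: does (x, x_hat) look like (x, x)?
    rec = (bce(recon_critic(torch.cat([x, x], 1)), ones) +
           bce(recon_critic(torch.cat([x, x_hat], 1)), zeros))
    return reg, rec
```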

k-Sparse Autoencoders [article]

Alireza Makhzani, Brendan Frey
2014 arXiv   pre-print
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is an autoencoder with linear activation function, where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
arXiv:1312.5663v2 fatcat:dyhkejyisjapbk4ib4hpydbmtq
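
The k-sparse activation itself is simple to prototype. A minimal NumPy sketch (sizes and the tied-weight decoder are illustrative choices, not a reproduction of the paper's setup):

```python
import numpy as np

def k_sparse(h, k):
    """Keep the k highest activities per example; zero out the rest."""
    out = np.zeros_like(h)
    idx = np.argsort(h, axis=1)[:, -k:]          # indices of the top-k units
    np.put_along_axis(out, idx, np.take_along_axis(h, idx, axis=1), axis=1)
    return out

rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=(100, 784))           # encoder weights
x = rng.normal(size=(32, 784))                   # a mini-batch of inputs
z = k_sparse(x @ W.T, k=25)                      # linear encoder + support selection
x_hat = z @ W                                    # tied-weights linear decoder
```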

Adversarial Autoencoders [article]

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey
2016 arXiv   pre-print
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial
more » ... toencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization. We performed experiments on MNIST, Street View House Numbers and Toronto Face datasets and show that adversarial autoencoders achieve competitive results in generative modeling and semi-supervised classification tasks.
arXiv:1511.05644v2 fatcat:wzmcxtyd65f4tb6mgtzkahh774
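
The adversarial regularization is easy to sketch. Below is a minimal PyTorch version for a Gaussian prior (layer sizes and names are illustrative): the discriminator learns to separate prior samples from encoder outputs, and the encoder is trained to fool it, which pushes the aggregated posterior toward the prior.

```python
import torch
import torch.nn as nn

bce = nn.functional.binary_cross_entropy_with_logits
encoder = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 8))
disc = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 1))

def regularization_losses(x):
    z_fake = encoder(x)                    # samples from the aggregated posterior
    z_real = torch.randn_like(z_fake)      # samples from the imposed Gaussian prior
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)
    d_loss = bce(disc(z_real), ones) + bce(disc(z_fake.detach()), zeros)
    g_loss = bce(disc(z_fake), ones)       # encoder trained to fool the critic
    return d_loss, g_loss                  # optimized alongside reconstruction
```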

Winner-Take-All Autoencoders [article]

Alireza Makhzani, Brendan Frey
2015 arXiv   pre-print
In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.
arXiv:1409.2752v2 fatcat:h52wgsohlng2nofn4gfwdnzrm4
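
The two sparsity constraints are simple to express. A NumPy sketch (sparsity rates and tensor shapes are placeholder assumptions):

```python
import numpy as np

def lifetime_sparsity(h, rate):
    """Fully-connected WTA: per hidden unit, keep activations only for the
    top `rate` fraction of examples in the mini-batch."""
    k = max(1, int(rate * h.shape[0]))
    thresh = np.sort(h, axis=0)[-k, :]      # k-th largest value per unit
    return np.where(h >= thresh, h, 0.0)

def spatial_sparsity(fmap):
    """Convolutional WTA: within each feature map, keep only the largest
    activation per example (ties kept, for simplicity of the sketch)."""
    b, c = fmap.shape[:2]
    flat = fmap.reshape(b, c, -1)
    winners = flat.max(axis=2, keepdims=True)
    return (flat * (flat == winners)).reshape(fmap.shape)
```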

Compressing Multisets with Large Alphabets [article]

Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani, Karen Ullrich
2021 arXiv   pre-print
Current methods that optimally compress multisets are not suitable for high-dimensional symbols, as their compute time scales linearly with alphabet size. Compressing a multiset as an ordered sequence with off-the-shelf codecs is computationally more efficient, but has a sub-optimal compression rate, as bits are wasted encoding the order between symbols. We present a method that can recover those bits, assuming symbols are i.i.d., at the cost of an additional 𝒪(|ℳ| log M) in average time complexity, where |ℳ| and M are the total and unique number of symbols in the multiset. Our method is compatible with any prefix-free code. Experiments show that, when paired with efficient coders, our method can efficiently compress high-dimensional sources such as multisets of images and collections of JSON files.
arXiv:2107.09202v1 fatcat:xpmxlyp2nfbkllsnrhcnbdvupa
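
The bits "wasted encoding the order" are exactly the log of the number of distinct orderings of the multiset, log2(|ℳ|! / ∏ᵢ mᵢ!) for symbol counts mᵢ. A toy computation of what such a method can recover:

```python
from collections import Counter
from math import lgamma, log

def order_bits(multiset):
    """Bits a sequence codec spends on symbol order for an order-free multiset."""
    counts = Counter(multiset).values()
    n = sum(counts)
    log_perms = lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)
    return log_perms / log(2)

print(order_bits("abracadabra"))   # ~16.3 bits recoverable for this toy multiset
```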

Evaluating Lossy Compression Rates of Deep Generative Models [article]

Sicong Huang, Alireza Makhzani, Yanshuai Cao, Roger Grosse
2020 arXiv   pre-print
Acknowledgements: Alireza Makhzani and Roger Grosse acknowledge support from the CIFAR Canadian AI Chairs program. ... Correspondence to: Alireza Makhzani, Roger Grosse <makhzani, rgrosse@cs.toronto.edu>. Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. ...
arXiv:2008.06653v1 fatcat:o4doah7oszatrbd4et4r75mcjq

Likelihood Ratio Exponential Families [article]

Rob Brekelmans, Frank Nielsen, Alireza Makhzani, Aram Galstyan, Greg Ver Steeg
2021 arXiv   pre-print
The exponential family is well known in machine learning and statistical physics as the maximum entropy distribution subject to a set of observed constraints, while the geometric mixture path is common in MCMC methods such as annealed importance sampling. Linking these two ideas, recent work has interpreted the geometric mixture path as an exponential family of distributions to analyze the thermodynamic variational objective (TVO). We extend these likelihood ratio exponential families to include solutions to rate-distortion (RD) optimization, the information bottleneck (IB) method, and recent rate-distortion-classification approaches which combine RD and IB. This provides a common mathematical framework for understanding these methods via the conjugate duality of exponential families and hypothesis testing. Further, we collect existing results to provide a variational representation of intermediate RD or TVO distributions as minimizing an expectation of KL divergences. This solution also corresponds to a size-power tradeoff using the likelihood ratio test and the Neyman-Pearson lemma. In thermodynamic integration bounds such as the TVO, we identify the intermediate distribution whose expected sufficient statistics match the log partition function.
arXiv:2012.15480v2 fatcat:lc4lwxcrwbaxnpzb6dzwourvwu
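
For reference, the central object here is the geometric mixture path between two distributions, rewritten as a one-parameter exponential family whose sufficient statistic is the log likelihood ratio (notation ours, following the abstract):

```latex
\pi_\beta(x) = \pi_0(x)\,
  \exp\!\Big\{ \beta \log \tfrac{\pi_1(x)}{\pi_0(x)} - \psi(\beta) \Big\},
\qquad
\psi(\beta) = \log \int \pi_0(x)^{1-\beta}\, \pi_1(x)^{\beta}\, dx .
```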

Reconstruction of a Generalized Joint Sparsity Model using Principal Component Analysis

Alireza Makhzani, Shahrokh Valaee
2011 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)  
In this paper, we define a new Joint Sparsity Model (JSM) and use Principal Component Analysis followed by Minimum Description Length and Compressive Sensing to reconstruct spatially and temporally correlated signals in a sensor network. The proposed model decomposes each sparse signal into two sparse components. The first component has a common support across all sensed signals. The second component is an innovation part that is specific to each sensor and might have a support that is different from the support of the other innovation signals. We use the fact that the common component generates a common subspace that can be found using the principal component analysis and the minimum description length. We show that with this general model, we can reconstruct the signal with fewer samples than are needed by the direct application of compressive sensing on each sensor.
doi:10.1109/camsap.2011.6136001 dblp:conf/camsap/MakhzaniV11 fatcat:vg6rf722xjgcxahqacf6jbwns4
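
A small NumPy sketch of the common/innovation split (the rank r would be chosen by MDL in the paper; it is fixed here, and all sizes are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sensors, r = 128, 20, 3
common_basis = rng.normal(size=(n, r))
Y = common_basis @ rng.normal(size=(r, sensors))   # shared component across sensors
Y += 0.05 * rng.normal(size=(n, sensors))          # per-sensor innovations (toy)

U, s, _ = np.linalg.svd(Y, full_matrices=False)
P = U[:, :r] @ U[:, :r].T                          # projector onto common subspace
common = P @ Y                                     # common part of each signal
innovation = Y - common                            # residual left for CS recovery
```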

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding [article]

Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison
2021 arXiv   pre-print
Latent variable models have been successfully applied in lossless compression with the bits-back coding algorithm. However, bits-back suffers from an increase in the bitrate equal to the KL divergence between the approximate posterior and the true posterior. In this paper, we show how to remove this gap asymptotically by deriving bits-back coding algorithms from tighter variational bounds. The key idea is to exploit extended space representations of Monte Carlo estimators of the marginal likelihood. Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space. When parallel architectures can be exploited, our coders can achieve better rates than bits-back with little additional cost. We demonstrate improved lossless compression rates in a variety of settings, especially in out-of-distribution or sequential data compression.
arXiv:2102.11086v2 fatcat:ma5ywl4phrb6zfeuqq6jotvc4q
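
The bitrate gap the abstract refers to follows from the bits-back identity: the net codelength is -log p(x|z) - log p(z) + log q(z|x), whose expectation is the negative ELBO, exceeding the optimal -log p(x) by exactly KL(q(z|x) || p(z|x)). A toy numeric check with a binary latent:

```python
import numpy as np

p_z = np.array([0.5, 0.5])                 # prior over a binary latent
p_x_given_z = np.array([0.9, 0.2])         # p(x=1 | z) for z in {0, 1}
q_z = np.array([0.7, 0.3])                 # approximate posterior q(z | x=1)

p_x = (p_z * p_x_given_z).sum()            # true marginal likelihood of x=1
rate = (q_z * (-np.log2(p_x_given_z) - np.log2(p_z) + np.log2(q_z))).sum()
gap = rate - (-np.log2(p_x))               # bits above the optimal codelength

true_post = p_z * p_x_given_z / p_x
kl = (q_z * np.log2(q_z / true_post)).sum()
assert np.isclose(gap, kl)                 # the gap is exactly KL(q || p(z|x))
```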

StarCraft II: A New Challenge for Reinforcement Learning [article]

Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney (+11 others)
2017 arXiv   pre-print
This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures.
arXiv:1708.04782v1 fatcat:z2gjz6reqbeora2sst6glrb6ja
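
A hedged sketch of the agent loop against the open-source pysc2 interface mentioned above; constructor keywords have changed across pysc2 releases, so treat the exact argument names as illustrative:

```python
from pysc2.env import sc2_env
from pysc2.lib import actions, features

env = sc2_env.SC2Env(
    map_name="MoveToBeacon",               # one of the included mini-games
    players=[sc2_env.Agent(sc2_env.Race.terran)],
    agent_interface_format=features.AgentInterfaceFormat(
        feature_dimensions=features.Dimensions(screen=84, minimap=64)),
)
timesteps = env.reset()
while not timesteps[0].last():
    no_op = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
    timesteps = env.step([no_op])          # one action per agent
env.close()
```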

Variational Model Inversion Attacks [article]

Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani
2022 arXiv   pre-print
Given the ubiquity of deep neural networks, it is important that these models do not reveal information about sensitive data that they have been trained on. In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy. In order to optimize this variational objective, we choose a variational family defined in the code space of a deep generative model, trained on a public auxiliary dataset that shares some structural similarity with the target dataset. Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images.
doi:10.48550/arxiv.2201.10787 fatcat:z7ntfgvrrzhejposkq5wxr6fpq
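
A schematic PyTorch sketch of the variational idea (the Gaussian variational family, layer sizes, and the `generator`/`classifier` stand-ins are our simplifying assumptions, not the paper's exact objective): optimize a distribution in the generator's code space so samples are confidently classified as the target class, while a KL term to the code prior keeps them diverse.

```python
import torch

mu = torch.zeros(64, requires_grad=True)       # variational mean in code space
log_sigma = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def attack_step(generator, classifier, target_class, n=16):
    eps = torch.randn(n, 64)
    z = mu + log_sigma.exp() * eps             # reparameterized samples from q(z)
    logits = classifier(generator(z))
    logp_y = torch.log_softmax(logits, dim=1)[:, target_class]
    kl = 0.5 * (mu**2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()
    loss = -logp_y.mean() + kl / n             # accuracy term + diversity prior
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```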

TzK: Flow-Based Conditional Generative Model [article]

Micha Livne, David Fleet
2019 arXiv   pre-print
Acknowledgements: We thank Ethan Fetaya, James Lucas, Alireza Makhzani, Leonid Sigal, and Kevin Swersky for helpful comments on this work. ... (..., 2014; Makhzani, 2018; Makhzani et al., 2015; Chen et al., 2016). ... For example, (Makhzani, 2018) allow unsupervised learning but assume the number of (disjoint) categories is given. ...
arXiv:1902.01893v4 fatcat:b4bs76fs2nfnhgpduwsrpob5fe

Learning deep representations by mutual information estimation and maximization [article]

R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, Yoshua Bengio
2019 arXiv   pre-print
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. ... We combine MI maximization with prior matching in a manner similar to adversarial autoencoders (AAE, Makhzani et al., 2015) to constrain representations according to desired statistical properties. ...
arXiv:1808.06670v5 fatcat:j4jhucpo2bhxbefhvssd75sara

Neural Text Clustering with Document-Level Attention Based on Dynamic Soft Labels

Zhi Chen, Wu Guo, Li-Rong Dai, Zhen-Hua Ling, Jun Du
2019 Interspeech 2019  
For example, Alireza Makhzani proposed the K-sparse autoencoder (KSAE) [6], which explicitly enforces sparsity by keeping only the K highest activities in the feedforward phase, and Yu Chen proposed the K-competitive ...
doi:10.21437/interspeech.2019-1417 dblp:conf/interspeech/ChenGDLD19 fatcat:nkv3nwhkqfdzbeupdlrqp5us6e
Showing results 1 — 15 out of 48 results