
Classification Representations Can be Reused for Downstream Generations [article]

Saisubramaniam Gopalakrishnan, Pranshu Ranjan Singh, Yasin Yazici, Chuan-Sheng Foo, Vijay Chandrasekhar, ArulMurugan Ambikapathi
2020 arXiv   pre-print
Unlike generative modeling approaches that aim to model the manifold distribution, we directly represent the given data manifold in the classification space and leverage properties of latent space representations  ...  classifier for the downstream task of sample generation.  ...  Fig. 2. Image generation using ReGene: Top blocks show classifier (for supervised latent space representations) and decoder (for image reconstruction) to be modeled.  ... 
arXiv:2004.07543v1 fatcat:jheawxxqyrbn3jwv4imdo6c2wq
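The ReGene entry above hinges on one idea: latent codes produced by a classifier's feature extractor can be fed to a separately trained decoder to generate new samples. A minimal NumPy sketch of that idea follows; all sizes and the linear encoder/decoder are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the ReGene paper.
x_dim, z_dim = 16, 4

# "Classifier" feature extractor: maps inputs into a latent classification space.
W_enc = rng.normal(size=(z_dim, x_dim))
def encode(x):
    return W_enc @ x

# Separately trained decoder mapping latents back to the input space.
W_dec = rng.normal(size=(x_dim, z_dim))
def decode(z):
    return W_dec @ z

# Two samples assumed to belong to the same class.
x_a, x_b = rng.normal(size=x_dim), rng.normal(size=x_dim)
z_a, z_b = encode(x_a), encode(x_b)

# Generate a new sample by decoding a convex combination of latents:
# the reuse of classification representations for downstream generation.
z_mix = 0.5 * z_a + 0.5 * z_b
x_new = decode(z_mix)
assert x_new.shape == (x_dim,)
```

With linear maps the mixed latent decodes to the matching mixture of reconstructions; a real classifier and decoder would be deep networks, but the classify-then-decode flow is the same.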

Generative Max-Mahalanobis Classifiers for Image Classification, Generation and More [article]

Xiulong Yang, Hui Ye, Yang Ye, Xiang Li, Shihao Ji
2021 arXiv   pre-print
However, the softmax classifier that JEM exploits is inherently discriminative and its latent feature space is not well formulated as probabilistic distributions, which may hinder its potential for image  ...  We show that our Generative MMC (GMMC) can be trained discriminatively, generatively, or jointly for image classification and generation.  ...  Therefore, in this paper we investigate an LDA classifier for image classification and generation.  ... 
arXiv:2101.00122v4 fatcat:kpahcnlfljcercpuf6hkxfcnri
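The GMMC entry above argues for an LDA-style classifier that supports both classification and generation. A toy NumPy sketch of that family of models: one Gaussian per class with a shared (here: identity) covariance, so classification is a nearest-mean rule and generation is sampling from the class Gaussian. The means below are made up for illustration, not the fixed Max-Mahalanobis means of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy class-conditional Gaussians with shared identity covariance.
means = {0: np.array([-2.0, 0.0]), 1: np.array([2.0, 0.0])}

def classify(x):
    # Predict the class whose mean is closest in Mahalanobis distance
    # (Euclidean here, since the shared covariance is the identity).
    return min(means, key=lambda c: np.sum((x - means[c]) ** 2))

def generate(c, n=1):
    # Generation falls out of the same model: sample from the class Gaussian.
    return means[c] + rng.normal(size=(n, 2))

x = np.array([1.5, 0.3])
assert classify(x) == 1
samples = generate(0, n=100)
```

The point of the entry is exactly this symmetry: a single probabilistic model is read "backwards" for classification and "forwards" for generation.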

Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations

Riccardo Guidotti, Anna Monreale, Stan Matwin, Dino Pedreschi
A decision tree is trained on a set of images represented in the latent space, and its decision rules are used to generate exemplar images showing how the original image can be modified to stay within  ...  Our explanation method exploits the latent representations learned through an adversarial autoencoder for generating a synthetic neighborhood of the image for which an explanation is required.  ...  Acknowledgments This work is supported by the EC H2020 programme under the funding schemes: G.A. 654024 SoBigData, G.A. 78835 Pro-Res, G.A. 825619 AI4EU and G.A. 761758 Humane AI.  ... 
doi:10.1609/aaai.v34i09.7116 fatcat:abys6hp43vcw7d5ouxchmmjl3a

A Multimodal Classifier Generative Adversarial Network for Carry and Place Tasks from Ambiguous Language Instructions [article]

Aly Magassouba, Komei Sugiura, Hisashi Kawai
2018 arXiv   pre-print
We develop the Multi-Modal Classifier Generative Adversarial Network (MMC-GAN) to predict the likelihood of different target areas considering the robot's physical limitation and the target clutter.  ...  This paper focuses on a multimodal language understanding method for carry-and-place tasks with domestic service robots.  ...  Using a latent space representation of these inputs, MMC-GAN can address both modalities through a unified framework. As a result, our classification method accuracy exceeds 80%.  ... 
arXiv:1806.03847v1 fatcat:jjt62cygyfdlrlfqifklbarehe

Uncertainty-Aware Deep Classifiers Using Generative Models

Murat Sensoy, Lance Kaplan, Federico Cerutti, Maryam Saleki
To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training.  ...  However, selection or creation of such an auxiliary data set is non-trivial, especially for high dimensional data such as images.  ...  The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. Also, Dr. Sensoy thanks ARL for its support  ... 
doi:10.1609/aaai.v34i04.6015 fatcat:v5lveat4e5fxte3w4pvymkpmzi

Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data [article]

Bing Yu, Ke Sun, He Wang, Zhouchen Lin, Zhanxing Zhu
2020 arXiv   pre-print
unlabeled data to improve classification and generation performance.  ...  In this paper, we address this problem by leveraging Positive-Unlabeled (PU) classification and conditional generation with extra unlabeled data simultaneously, both of which aim to make full use of agnostic  ...  The latent space dimensions of generator are 128, 128, 256 for the three datasets, respectively.  ... 
arXiv:2006.07841v1 fatcat:cmr3ufn2bzcs7ctnlvz5x756le

Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals [article]

Saloni Dash, Vineeth N Balasubramanian, Amit Sharma
2022 arXiv   pre-print
Moreover, generated counterfactuals are indistinguishable from reconstructed images in a human evaluation experiment and we subsequently use them to evaluate the fairness of a standard classifier trained  ...  Based on the generated counterfactuals, we show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer.  ...  latent space representations.  ... 
arXiv:2009.08270v4 fatcat:whtgbxkrprgn7oyx3gxayqnhae

Complementary Auxiliary Classifiers for Label-Conditional Text Generation

Yuan Li, Chunyuan Li, Yizhe Zhang, Xiujun Li, Guoqing Zheng, Lawrence Carin, Jianfeng Gao
In this paper, we present CARA to alleviate the issue, where two auxiliary classifiers work simultaneously to ensure that (1) the encoder learns disentangled features and (2) the generator produces label-related  ...  Learning to generate text with a given label is a challenging task because natural language sentences are highly variable and ambiguous.  ...  Classifier C operates in the latent space, and the encoder is trained to maximize the classification loss so that disentangled features can be learned.  ... 
doi:10.1609/aaai.v34i05.6346 fatcat:ak2sf35nsrfhfjfpd332xlggja
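The CARA entry above describes an adversarial setup: a classifier C reads the latent code while the encoder is trained to *maximize* C's classification loss, pushing label information out of the code. A one-step NumPy sketch of that objective follows; the sizes and the linear encoder/logistic classifier are assumptions for illustration, not CARA's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes; a classifier C reads the latent code, and the encoder is
# updated in the opposite direction so the code sheds class information.
x_dim, z_dim, lr = 8, 3, 0.05
W_enc = rng.normal(size=(z_dim, x_dim)) * 0.1   # encoder weights
w_cls = rng.normal(size=z_dim) * 0.1            # latent-space classifier C

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x, y = rng.normal(size=x_dim), 1.0              # one labelled sample

def cls_loss(W):
    p = sigmoid(w_cls @ (W @ x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# One adversarial step for the encoder: gradient ASCENT on the
# classification loss (classifier C would descend this same loss).
p = sigmoid(w_cls @ (W_enc @ x))
grad_W = np.outer((p - y) * w_cls, x)           # dL/dW_enc
before = cls_loss(W_enc)
W_enc = W_enc + lr * grad_W                     # ascend: make C's job harder
after = cls_loss(W_enc)
assert after > before
```

In practice this min-max game is implemented with a gradient-reversal layer or alternating updates; the sketch shows only why ascending the classifier's loss disentangles label information from the latent features.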

AutoQML: Automatic Generation and Training of Robust Quantum-Inspired Classifiers by Using Genetic Algorithms on Grayscale Images [article]

Sergio Altares-López, Juan José García-Ripoll, Angela Ribeiro
2022 arXiv   pre-print
We propose a new hybrid system for automatically generating and training quantum-inspired classifiers on grayscale images by using multiobjective genetic algorithms.  ...  the preprocessing technique used for dimensionality reduction.  ...  Her research interests include artificial perception, pattern recognition, evolutionary algorithms, spatial knowledge representation, spatial reasoning for decision support systems, distributed systems  ... 
arXiv:2208.13246v1 fatcat:wwubprxynvh2lcocccbsogcm4e

Projective Latent Interventions for Understanding and Fine-tuning Classifiers [article]

Andreas Hinterreiter and Marc Streit and Bernhard Kainz
2020 arXiv   pre-print
Especially in medical applications, model developers and domain experts desire a better understanding of how these latent representations relate to the resulting classification performance.  ...  We present Projective Latent Interventions (PLIs), a technique for retraining classifiers by back-propagating manual changes made to low-dimensional embeddings of the latent space.  ...  We argue that such interventions can be useful to mentally connect the embedded latent space with the classification properties of a classifier.  ... 
arXiv:2006.12902v2 fatcat:cfxtxadjgvf57pjxclpb5x2yai

Semi-Unsupervised Learning: Clustering and Classifying using Ultra-Sparse Labels [article]

Matthew Willetts, Stephen J Roberts, Christopher C Holmes
2021 arXiv   pre-print
In semi-supervised learning for classification, it is assumed that every ground truth class of data is present in the small labelled dataset.  ...  We then show how a combination of clustering and semi-supervised learning, using DGMs, can be brought to bear on this problem.  ...  Acknowledgements We would like to thank Raza Habib, Aiden Doherty, Rui Shu, José Miguel Hernández-Lobato, Miguel Morin and Alexander Camuto for their insights and discussion  ... 
arXiv:1901.08560v3 fatcat:g6wi6scl6beo5iond3qvggncmi

Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike [article]

Johannes Schneider, Giovanni Apruzzese
2022 arXiv   pre-print
We propose to generate adversarial samples by modifying activations of upper layers encoding semantically meaningful concepts.  ...  A human might (and possibly should) notice differences between the original and the adversarial sample.  ...  ., the arts, the latent representation used to generate new samples is often randomly chosen [31] .  ... 
arXiv:2203.10166v1 fatcat:cdrlcs77xzdt3nrbkzvk6dgcre

Modeling and Analysis of Particle Deposition Processes on PVDF Membranes Using SEM Images and Image Generation by Auxiliary Classifier Generative Adversarial Networks

Caterina Cacciatori, Takashi Hashimoto, Satoshi Takizawa
2020 Water  
The images generated with the ACGAN model successfully reconstructed the real images of particles deposited on the membranes, as verified by human validation and particle counting of the real and generated  ...  To overcome those shortcomings of the previous models, this study aimed to provide an alternative method of modeling membrane fouling in water filtration, using auxiliary classifier generative adversarial  ...  Acknowledgments: The authors acknowledge Kazuyoshi Fujimura for his technical support on experimental works. Conflicts of Interest: There is no conflict of interest.  ... 
doi:10.3390/w12082225 fatcat:allgn7p6vvbp7my4egewngifwq

IntroVAC: Introspective Variational Classifiers for Learning Interpretable Latent Subspaces [article]

Marco Maggipinto and Matteo Terzi and Gian Antonio Susto
2020 arXiv   pre-print
Learning useful representations of complex data has been the subject of extensive research for many years.  ...  However, the latent space is not easily interpretable and the generation capabilities show some limitations since images typically look blurry and lack details.  ...  representations of the input data and one that acts as a decoder, allowing data to be generated in the input space starting from their latent representations.  ... 
arXiv:2008.00760v2 fatcat:6n4xznotlnbb7lrpt2ipd6bb64

Automated Sewer Defects Detection Using Style-based Generative Adversarial Networks and Fine-tuned Well-known CNN Classifier

Zuxiang Situ, Shuai Teng, Hanlin Liu, Jinhua Luo, Qianqian Zhou
2021 IEEE Access  
The input (z∈Z) is mapped into an intermediate latent space (w∈W) by the mapping network and the style of images at each convolution layer is adjusted by giving a latent code in the latent space [21]  ...  [63] calculates the square of the maximum average difference between the real images and the feature representation of the generated images.  ... 
doi:10.1109/access.2021.3073915 fatcat:gkybbs5xercrhovvsnxmey2zou
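The snippet above describes the StyleGAN-style pipeline: a mapping network takes z ∈ Z to an intermediate latent w ∈ W, and w then adjusts the style of the feature maps at each convolution layer. A minimal NumPy sketch of both pieces follows; the sizes, two-layer MLP, and plain ReLU are toy assumptions (StyleGAN uses 512-dimensional latents, an 8-layer MLP, and leaky ReLU).

```python
import numpy as np

rng = np.random.default_rng(2)

z_dim = w_dim = 8
n_layers = 2

# Mapping network: a small MLP taking z in Z to the intermediate space W.
weights = [rng.normal(size=(w_dim, w_dim)) * 0.3 for _ in range(n_layers)]
def mapping(z):
    h = z
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # leaky ReLU in the real model
    return h

# Style modulation: a learned affine map A turns w into per-channel scales,
# which is how the style is "adjusted at each convolution layer".
def modulate(features, w, A):
    style = A @ w
    return features * style[:, None]

z = rng.normal(size=z_dim)
w = mapping(z)
A = rng.normal(size=(4, w_dim))      # 4 channels in this toy layer
feat = rng.normal(size=(4, 5))       # (channels, spatial) toy feature map
styled = modulate(feat, w, A)
```

Decoupling W from Z through the mapping network is what lets the generator use a less entangled latent space, which the entry's style-transfer-based data augmentation relies on.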
Showing results 1 — 15 out of 44,196 results