
Wasserstein discriminant analysis

Rémi Flamary, Marco Cuturi, Nicolas Courty, Alain Rakotomamonjy
2018 Machine Learning  
Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower-dimensional subspace.  ...  Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two quantities: the dispersion of projected points coming from different  ...  Conclusion: This work presents Wasserstein Discriminant Analysis, a new linear discriminant subspace estimation method.  ...
doi:10.1007/s10994-018-5717-1 fatcat:lku64gfoljfm3g2voxiopgicpu
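
A minimal numpy sketch of the ratio criterion this entry describes, assuming entropic-regularized (Sinkhorn) distances stand in for the paper's solver and that the projection matrix P is fixed; function and variable names are ours, not the authors'. WDA itself maximizes this ratio over orthonormal P, which the sketch only evaluates.

    import numpy as np

    def sinkhorn_cost(X, Y, reg=0.1, n_iter=100):
        # Entropic-regularized OT cost between two point clouds (uniform weights).
        C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # squared distances
        K = np.exp(-C / reg)            # very small reg can underflow; sketch only
        u = np.full(len(X), 1.0 / len(X))
        v = np.full(len(Y), 1.0 / len(Y))
        a, b = u.copy(), v.copy()
        for _ in range(n_iter):         # Sinkhorn fixed-point iterations
            a = u / (K @ b)
            b = v / (K.T @ a)
        T = a[:, None] * K * b[None, :]  # approximate transport plan
        return float((T * C).sum())

    def wda_ratio(P, X, y):
        # Between-class over within-class regularized Wasserstein dispersion of X @ P.
        classes = list(np.unique(y))
        proj = {c: X[y == c] @ P for c in classes}
        between = sum(sinkhorn_cost(proj[c], proj[k])
                      for i, c in enumerate(classes) for k in classes[i + 1:])
        within = sum(sinkhorn_cost(proj[c], proj[c]) for c in classes)
        return between / within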

Ratio Trace Formulation of Wasserstein Discriminant Analysis

Hexuan Liu, Yunfeng Cai, You-Lin Chen, Ping Li
2020 Neural Information Processing Systems  
We reformulate the Wasserstein Discriminant Analysis (WDA) as a ratio trace problem and present an eigensolver-based algorithm to compute the discriminative subspace of WDA.  ...  We provide a rigorous convergence analysis for the proposed algorithm under the self-consistent field framework, which is crucial but missing in the literature.  ...  Introduction: Wasserstein Discriminant Analysis (WDA) [13] is a supervised linear dimensionality reduction technique that generalizes the classical Fisher Discriminant Analysis (FDA) [16] using the  ...
dblp:conf/nips/LiuCCL20 fatcat:33arjsyegbcjlblsvfk4wpnjha
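
For reference, the inner step of a ratio-trace formulation is a generalized symmetric eigenproblem, which SciPy solves directly. A hedged sketch with our own names and LDA-style scatter matrices Sb and Sw; in the abstract's self-consistent field framework the scatter matrices themselves depend on the current subspace, so a solve like this is repeated to a fixed point.

    import numpy as np
    from scipy.linalg import eigh

    def ratio_trace_subspace(Sb, Sw, p):
        # Solve Sb v = lambda Sw v (Sw assumed positive definite);
        # eigh returns eigenvalues in ascending order.
        eigvals, eigvecs = eigh(Sb, Sw)
        return eigvecs[:, -p:]  # eigenvectors of the p largest eigenvalues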

Max-Sliced Wasserstein Distance and its use for GANs [article]

Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, Alexander Schwing
2019 arXiv preprint
We first show that the recently proposed sliced Wasserstein distance has compelling sample complexity properties when compared to the Wasserstein distance.  ...  To further improve the sliced Wasserstein distance we then analyze its 'projection complexity' and develop the max-sliced Wasserstein distance which enjoys compelling sample complexity while reducing projection  ...  and Max-Sliced Distance  ...  In this section we provide the first analysis of the sample-complexity benefits of the sliced Wasserstein distance compared to the Wasserstein distance.  ...
arXiv:1904.05877v1 fatcat:xvvod74kzbetjgygzgxgi4c6de
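
A hedged numpy sketch of the max-sliced idea: the 1D Wasserstein distance along a direction compares sorted projections in closed form, and the max-sliced variant takes the worst direction. The crude random search below stands in for the paper's optimization of the direction; names and the equal-sample-size assumption are ours.

    import numpy as np

    def w1_1d(x, y):
        # 1D W1 between equal-size samples: mean gap of sorted values.
        return np.abs(np.sort(x) - np.sort(y)).mean()

    def max_sliced_w1(X, Y, n_candidates=1000, rng=None):
        # Approximate the max over unit directions by random search.
        rng = np.random.default_rng(rng)
        theta = rng.normal(size=(n_candidates, X.shape[1]))
        theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
        return max(w1_1d(X @ t, Y @ t) for t in theta)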

Max-Sliced Wasserstein Distance and Its Use for GANs

Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, Alexander G. Schwing
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We first show that the recently proposed sliced Wasserstein distance has compelling sample complexity properties when compared to the Wasserstein distance.  ...  To further improve the sliced Wasserstein distance we then analyze its 'projection complexity' and develop the max-sliced Wasserstein distance which enjoys compelling sample complexity while reducing projection  ...  and Max-Sliced Distance  ...  In this section we provide the first analysis of the sample-complexity benefits of the sliced Wasserstein distance compared to the Wasserstein distance.  ...
doi:10.1109/cvpr.2019.01090 dblp:conf/cvpr/DeshpandeHSPSKZ19 fatcat:5pq3ivkskbacxej4zggbgitw4u

Orthogonal Wasserstein GANs [article]

Jan Müller, Reinhard Klein, Michael Weinmann
2019 arXiv preprint
However, Wasserstein-GANs require the discriminator to be Lipschitz continuous. In current state-of-the-art Wasserstein-GANs this constraint is enforced via gradient norm regularization.  ...  Finally, we provide a novel metric to evaluate the generalization capabilities of the discriminators of different Wasserstein-GANs.  ...  Analysis of mode preservation for different WGAN approaches: One of the main benefits of Wasserstein GANs over standard GANs is their capability to mitigate mode collapse.  ...
arXiv:1911.13060v2 fatcat:pa4ay575wrgkfgytypw5uztgiy
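
One standard way to constrain a linear layer's Lipschitz constant, which the title suggests is in play here, is to push its weight matrices toward orthogonality (an orthogonal matrix has all singular values equal to 1). A hedged torch sketch of a soft penalty of this kind, our illustration rather than the paper's exact regularizer:

    import torch

    def orthogonality_penalty(model):
        # Sum of ||W^T W - I||_F^2 over all 2D weight matrices in the model.
        penalty = 0.0
        for W in (p for p in model.parameters() if p.dim() == 2):
            I = torch.eye(W.shape[1], device=W.device)
            penalty = penalty + ((W.t() @ W - I) ** 2).sum()
        return penalty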

Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance) [article]

Jan Stanczuk, Christian Etmann, Lisa Maria Kreusser, Carola-Bibiane Schönlieb
2021 arXiv preprint
We provide an in-depth mathematical analysis of differences between the theoretical setup and the reality of training Wasserstein GANs.  ...  Wasserstein GANs are based on the idea of minimising the Wasserstein distance between a real and a generated distribution.  ...  Choosing the right divergence: A rigorous mathematical analysis of the vanilla GAN's optimal discriminator dynamics has been performed in prior work.  ...
arXiv:2103.01678v4 fatcat:lzbp2t545jabzk2ategxvr6qhu
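
For context, the theoretical setup in question is the Kantorovich-Rubinstein dual of the 1-Wasserstein distance, in which the critic plays the role of the 1-Lipschitz witness f:

    \[
    W_1(\mathbb{P}_r, \mathbb{P}_g)
      = \sup_{\|f\|_L \le 1}
        \mathbb{E}_{x \sim \mathbb{P}_r}[f(x)]
        - \mathbb{E}_{x \sim \mathbb{P}_g}[f(x)]
    \]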

Wasserstein Distance Learns Domain Invariant Feature Representations for Drift Compensation of E-Nose

Yang Tao, Chunyan Li, Zhifang Liang, Haocheng Yang, Juan Xu
2019 Sensors  
It regards a neural network as a domain discriminator that measures the empirical Wasserstein distance between the source domain (data without drift) and the target domain (drifted data).  ...  WDLFR minimizes the Wasserstein distance by optimizing the feature extractor in an adversarial manner. The Wasserstein distance offers good gradients and a generalization bound for domain adaptation.  ...  To verify the effectiveness of the proposed WDLFR method, the approach is compared with principal component analysis (PCA) [8], linear discriminant analysis (LDA)  ...
doi:10.3390/s19173703 fatcat:nispyyu3d5ebxhhbwblzcr3ziq
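
A minimal torch sketch of the adversarial estimate described above: the critic's mean score gap between domains approximates the empirical Wasserstein distance, the critic ascends it (under a Lipschitz constraint), and the feature extractor descends it. The critic and feature tensors are our assumptions, not WDLFR's code.

    import torch

    def critic_wasserstein_gap(critic, feat_src, feat_tgt):
        # Kantorovich-style estimate: mean critic score gap between domains.
        return critic(feat_src).mean() - critic(feat_tgt).mean()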

Linear Discriminant Generative Adversarial Networks [article]

Zhun Sun, Mete Ozay, Takayuki Okatani
2017 arXiv preprint
We develop a novel method for training GANs for unsupervised and class-conditional generation of images, called Linear Discriminant GAN (LD-GAN).  ...  between generators and discriminators in the unsupervised case, without employing normalization methods or constraints on weights.  ...  Linear Discriminant Analysis (LDA) methods are used to compute a linear combination of features that characterizes or separates two or more classes of objects.  ...
arXiv:1707.07831v1 fatcat:ffoziw4hhvgjnjdrh4itagrioi
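
For reference, the LDA projection the snippet refers to is available off the shelf; a small sketch using scikit-learn (assumed installed) on toy data:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X = np.random.randn(200, 10)            # toy features
    y = np.random.randint(0, 3, size=200)   # three classes
    lda = LinearDiscriminantAnalysis(n_components=2)
    Z = lda.fit_transform(X, y)             # class-separating linear projection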

Wasserstein Distance Guided Representation Learning for Domain Adaptation [article]

Jian Shen, Yanru Qu, Weinan Zhang, Yong Yu
2018 arXiv preprint
...  estimated Wasserstein distance in an adversarial manner.  ...  One solution to domain adaptation is to learn domain-invariant feature representations that are also discriminative in prediction.  ...  Theoretical Analysis: In this section, we give a theoretical analysis of the advantages of using the Wasserstein distance for domain adaptation.  ...
arXiv:1707.01217v4 fatcat:znq72ztuufa3fc4pgymmor23bm
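
In our notation (not necessarily the paper's exact formulation), objectives of this kind combine a supervised source loss with an adversarially estimated Wasserstein term, where g_{\theta_f} is the feature extractor, \theta_c the classifier, and f_{\theta_w} the critic:

    \[
    \min_{\theta_f,\,\theta_c}\;
      \mathcal{L}_{\mathrm{cls}}\big(\theta_c, g_{\theta_f}(x_s), y_s\big)
      + \lambda \max_{\|f_{\theta_w}\|_L \le 1}
        \Big(
          \mathbb{E}_{x_s \sim \mathcal{D}_s}\, f_{\theta_w}(g_{\theta_f}(x_s))
          - \mathbb{E}_{x_t \sim \mathcal{D}_t}\, f_{\theta_w}(g_{\theta_f}(x_t))
        \Big)
    \]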

Generative Modeling Using the Sliced Wasserstein Distance

Ishan Deshpande, Ziyu Zhang, Alexander Schwing
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
By augmenting this approach with a discriminator we improve its accuracy. We found our approach to be significantly more stable than even the improved Wasserstein GAN.  ...  While this is particularly true for early GAN formulations, there has been significant empirically motivated and theoretically founded progress to improve stability, for instance, by using the Wasserstein  ...  In theory this can be addressed by methods such as linear discriminant analysis, but they are expensive.  ...
doi:10.1109/cvpr.2018.00367 dblp:conf/cvpr/DeshpandeZS18 fatcat:2ewcg6v7mncmhotqykcih5jqbu
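
A hedged numpy sketch of the sliced Wasserstein distance itself: project both samples onto random unit directions and average the closed-form 1D distances. Names and the equal-sample-size assumption are ours.

    import numpy as np

    def sliced_w2(X, Y, n_proj=200, rng=None):
        # Assumes len(X) == len(Y); returns an estimate of SW2(X, Y).
        rng = np.random.default_rng(rng)
        theta = rng.normal(size=(n_proj, X.shape[1]))
        theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
        px = np.sort(X @ theta.T, axis=0)   # sorted 1D projections per direction
        py = np.sort(Y @ theta.T, axis=0)
        return np.sqrt(((px - py) ** 2).mean())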

Wasserstein Distance guided Adversarial Imitation Learning with Reward Shape Exploration [article]

Ming Zhang, Yawei Wang, Xiaoteng Ma, Li Xia, Jun Yang, Zhiheng Li, Xiu Li
2020 arXiv preprint
In this paper, we propose a new algorithm named Wasserstein Distance guided Adversarial Imitation Learning (WDAIL) for promoting the performance of imitation learning (IL).  ...  There are three improvements in our method: (a) introducing the Wasserstein distance to obtain a more appropriate measure in the adversarial training process, (b) using proximal policy optimization (PPO) in  ...  The L1-Wasserstein distance is more flexible, easier to bound, and has strong implications in functional analysis.  ...
arXiv:2006.03503v1 fatcat:i5atdc456vfeheaxqijf2kys7i
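
The "reward shape exploration" in the title presumably refers to transformations of the critic score into a reward for the PPO update; a hedged illustration with hypothetical critic and shape names:

    import torch

    def shaped_reward(critic, state, action, shape="linear"):
        # Turn the critic's score into a reward signal for the policy update.
        score = critic(torch.cat([state, action], dim=-1))
        if shape == "linear":
            return score               # raw (unbounded) critic score
        return torch.sigmoid(score)   # one bounded alternative shape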

Generative Modeling using the Sliced Wasserstein Distance [article]

Ishan Deshpande, Ziyu Zhang, Alexander Schwing
2018 arXiv preprint
By augmenting this approach with a discriminator we improve its accuracy. We found our approach to be significantly more stable than even the improved Wasserstein GAN.  ...  While this is particularly true for early GAN formulations, there has been significant empirically motivated and theoretically founded progress to improve stability, for instance, by using the Wasserstein  ...  For faster convergence it is, therefore, better to use projections along which the distributions are  ...  In theory this can be addressed by methods such as linear discriminant analysis, but they are expensive.  ...
arXiv:1803.11188v1 fatcat:w24rmcq3wbbhpa5o23trx2z7ae

Towards Efficient and Unbiased Implementation of Lipschitz Continuity in GANs [article]

Zhiming Zhou, Jian Shen, Yuxuan Song, Weinan Zhang, Yong Yu
2019 arXiv preprint
It was observed that the Lipschitz-regularized discriminator leads to improved training stability and sample quality.  ...  Our experiments verify our analysis and show that the proposed method is able to achieve successful training in various situations where gradient penalty and spectral normalization fail.  ...  Unfortunately, the regularized Wasserstein distance usually also alters the properties of the optimal discriminative function and blurs π* [Seguy et al., 2017], which is consistent with our analysis here.  ...
arXiv:1904.01184v1 fatcat:6e3beuwxpnfq3doqgergpn5nue
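
For reference, the gradient penalty mentioned in the snippet is usually computed as in WGAN-GP: penalize the critic's gradient norm at random interpolates of real and fake samples. A hedged torch sketch with our variable names:

    import torch

    def gradient_penalty(critic, real, fake):
        # Assumes flat (batch, dim) inputs; images need extra broadcast dims.
        eps = torch.rand(real.size(0), 1, device=real.device)
        x = (eps * real + (1 - eps) * fake).requires_grad_(True)  # interpolates
        grad = torch.autograd.grad(critic(x).sum(), x, create_graph=True)[0]
        return ((grad.norm(2, dim=1) - 1) ** 2).mean()  # one-centered penalty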

Wasserstein Adversarially Regularized Graph Autoencoder [article]

Huidong Liang, Junbin Gao
2021 arXiv preprint
...  via the Wasserstein metric.  ...  This paper introduces Wasserstein Adversarially Regularized Graph Autoencoder (WARGA), an implicit generative algorithm that directly regularizes the latent distribution of node embeddings toward a target distribution  ...  For instance, Wasserstein Generative Adversarial Networks (WGAN) [2] replace the discriminator in Generative Adversarial Nets (GAN) [4] with the Wasserstein metric to handle the problem of unstable training  ...
arXiv:2111.04981v1 fatcat:afco4smx6ffmdic4cjypkjhnry
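
One simple, hedged way to compare a latent distribution to a target prior via the Wasserstein metric is SciPy's closed-form 1D distance applied per embedding dimension; the recipe below is our illustration, not WARGA's adversarial procedure.

    import numpy as np
    from scipy.stats import wasserstein_distance

    Z = np.random.randn(500, 16)                 # stand-in node embeddings
    prior = np.random.randn(500, 16)             # target: standard Gaussian
    reg = np.mean([wasserstein_distance(Z[:, j], prior[:, j])
                   for j in range(Z.shape[1])])  # per-dimension 1D W1, averaged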

WRGAN: Improvement of RelGAN with Wasserstein Loss for Text Generation

Ziyun Jiao, Fuji Ren
2021 Electronics  
Unlike RelGAN, we modify the discriminator network structure to use 1D convolutions with multiple kernel sizes.  ...  Compared with the current loss function, the Wasserstein distance can provide more information to the generator, but in experiments RelGAN does not work well with the Wasserstein distance.  ...  WGAN converts the discriminator from a binary classifier into an approximation of the Wasserstein distance.  ...
doi:10.3390/electronics10030275 fatcat:d6amgpspkrhyxdmqlh6x5ygczi
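
A hedged torch sketch of a discriminator with parallel 1D convolutions of several kernel sizes over token embeddings, consistent with the description above (a TextCNN-style layout; all sizes and names are ours):

    import torch
    import torch.nn as nn

    class MultiKernelConv1d(nn.Module):
        def __init__(self, emb_dim=128, n_filters=64, kernel_sizes=(3, 5, 7)):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes)
            self.out = nn.Linear(n_filters * len(kernel_sizes), 1)

        def forward(self, x):                   # x: (batch, seq_len, emb_dim)
            x = x.transpose(1, 2)               # Conv1d expects (batch, emb, seq)
            feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
            return self.out(torch.cat(feats, dim=1))  # unbounded critic score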