
Multi-component Image Translation for Deep Domain Generalization [article]

Mohammad Mahfujur Rahman, Clinton Fookes, Mahsa Baktashmotlagh, Sridha Sridharan
2018 arXiv   pre-print
In our first approach, we propose a novel deep domain generalization architecture utilizing synthetic data generated by a Generative Adversarial Network (GAN).  ...  The discrepancy between the generated images and synthetic images is minimized using existing domain discrepancy metrics such as maximum mean discrepancy or correlation alignment.  ...  can handle more than two domains at a time compared to MUNIT and StarGAN, it can generate more multi-component images, which are more effective for domain generalization.  ... 
arXiv:1812.08974v1 fatcat:idoatrfjezh7rchjto336jdxhq
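
The excerpt above relies on standard domain discrepancy metrics. As a rough illustration (not the paper's code), a minimal Gaussian-kernel maximum mean discrepancy (MMD) loss between two batches of features could look as follows; the kernel bandwidth, feature shapes, and the encoder/lambda_mmd names in the usage comment are illustrative assumptions.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel between two batches of feature vectors."""
    # x: (n, d), y: (m, d) -> (n, m) kernel matrix
    dist2 = torch.cdist(x, y, p=2).pow(2)
    return torch.exp(-dist2 / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between two feature sets."""
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Hypothetical usage: penalize the gap between features of generated and synthetic images.
# loss = task_loss + lambda_mmd * mmd_loss(encoder(generated), encoder(synthetic))
```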

Deep Domain Adaptive Object Detection: a Survey [article]

Wanyi Li, Fuyu Li, Yongkang Luo, Peng Wang, Jia Sun
2020 arXiv   pre-print
This paper aims to review the state-of-the-art progress on deep domain adaptive object detection approaches. Firstly, we briefly introduce the basic concepts of deep domain adaptation.  ...  Finally, insights into future research trends are presented.  ...  The image translation model generates diverse and structure-preserved translated images across complex domains.  ... 
arXiv:2002.06797v3 fatcat:mozths3lk5djndue6dzefxuq3q

GMM-UNIT: Unsupervised Multi-Domain and Multi-Modal Image-to-Image Translation via Attribute Gaussian Mixture Modeling [article]

Yahui Liu, Marco De Nadai, Jian Yao, Nicu Sebe, Bruno Lepri, Xavier Alameda-Pineda
2020 arXiv   pre-print
First, it can be easily extended to most multi-domain and multi-modal image-to-image translation tasks.  ...  Second, the continuous domain encoding allows for interpolation between domains and for extrapolation to unseen domains and translations.  ...  A variational loss forces the latent representation to follow this GMM, where each component is associated with a domain. This is the key to providing both multi-modal and multi-domain translation.  ... 
arXiv:2003.06788v2 fatcat:hf25f3b23feddo5jfuvnllhjpy
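
As a hedged sketch of the idea in the excerpt (a variational loss that pulls each latent code toward the Gaussian mixture component assigned to its domain), the closed-form KL divergence between two diagonal Gaussians could serve as that loss; the shapes and variable names below are assumptions, not taken from GMM-UNIT's code.

```python
import torch

def kl_to_domain_component(mu, logvar, prior_mu, prior_logvar):
    """KL( N(mu, exp(logvar)) || N(prior_mu, exp(prior_logvar)) ) for diagonal Gaussians.

    mu, logvar:              encoder outputs for a batch of images, shape (B, z_dim).
    prior_mu, prior_logvar:  parameters of the GMM component of each image's domain,
                             broadcast to shape (B, z_dim) (one component per domain).
    """
    var, prior_var = logvar.exp(), prior_logvar.exp()
    kl = 0.5 * (prior_logvar - logvar + (var + (mu - prior_mu).pow(2)) / prior_var - 1.0)
    return kl.sum(dim=1).mean()

# Schematic training step: encode an image, pull its latent toward its own domain's
# component with this loss, then decode with the target domain's component to translate.
```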

MultiPathGAN: Structure Preserving Stain Normalization using Unsupervised Multi-domain Adversarial Network with Perception Loss [article]

Haseeb Nazki, Ognjen Arandjelović, InHwa Um, David Harrison
2022 arXiv   pre-print
In the present paper we introduce an unsupervised adversarial network to translate (and hence normalize) whole slide images across multiple data acquisition domains.  ...  Our key contributions are: (i) an adversarial architecture which learns across multiple domains with a single generator-discriminator network using an information flow branch which optimizes for perceptual  ...  Figure 3: Multi-domain translation results using MultiPathGAN on our WSI dataset for inter-domain normalization.  ... 
arXiv:2204.09782v2 fatcat:5fldyskvwjadjpucdkmdz5eewa
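
The perception loss mentioned above is commonly implemented as a feature-space distance under a frozen pretrained network. A minimal sketch, assuming a torchvision VGG-16 backbone and the relu3_3 feature layer (both choices are assumptions, not MultiPathGAN's actual configuration):

```python
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L1 distance between frozen VGG-16 feature maps of two images."""
    def __init__(self, layer_index=16):  # features[:16] ends at relu3_3 in torchvision's VGG-16
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features[:layer_index]
        for p in features.parameters():
            p.requires_grad_(False)
        self.features = features.eval()

    def forward(self, fake, real):
        return nn.functional.l1_loss(self.features(fake), self.features(real))

# A single generator trained across all stain domains could then add
# lambda_perc * PerceptualLoss()(translated, source) to its objective.
```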

A Survey on Adversarial Image Synthesis [article]

William Roy, Glen Kelly, Robert Leer, Frederick Ricardo
2021 arXiv   pre-print
Generative Adversarial Networks (GANs) have been extremely successful in various application domains.  ...  In this paper, we provide a taxonomy of methods used in image synthesis, review different models for text-to-image synthesis and image-to-image translation, and discuss some evaluation metrics as well  ...  It tackles the multi-domain image synthesis problem with a global shared variational autoencoder and n domain-specific component banks. Each bank consists of an encoder and a decoder for one domain.  ... 
arXiv:2106.16056v2 fatcat:mivx26q4x5ampfi566tipcwv3e

A Unified Feature Disentangler for Multi-Domain Image Translation and Manipulation [article]

Alexander H. Liu, Yen-Cheng Liu, Yu-Ying Yeh, Yu-Chiang Frank Wang
2018 arXiv   pre-print
for describing cross-domain data.  ...  Realized by adversarial training with the additional ability to exploit domain-specific information, the proposed network is able to perform continuous cross-domain image translation and manipulation, and  ...  The contributions of this paper are highlighted as follows: • We propose a Unified Feature Disentanglement Network (UFDN), which learns a deep disentangled feature representation for multi-domain image translation  ... 
arXiv:1809.01361v3 fatcat:mohw4eefg5ajrotb6dlivsr7du
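
The central idea in this excerpt, a domain-invariant latent combined with an explicit domain code at decoding time, can be illustrated with a toy encoder/decoder. This is only a schematic sketch (fully connected layers, made-up dimensions, no adversarial training), not the authors' architecture:

```python
import torch
import torch.nn as nn

class DisentangledTranslator(nn.Module):
    """Toy model: a shared content latent plus a one-hot domain code fed to the decoder."""
    def __init__(self, img_dim=3 * 64 * 64, z_dim=128, n_domains=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                                     nn.Linear(512, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim + n_domains, 512), nn.ReLU(),
                                     nn.Linear(512, img_dim))

    def translate(self, x, target_domain_onehot):
        # In UFDN-style training an adversary would remove domain cues from z;
        # here we only show the information flow.
        z = self.encoder(x)
        return self.decoder(torch.cat([z, target_domain_onehot], dim=1))
```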

Camera-Aware Image-To-Image Translation Using Similarity Preserving StarGAN for Person Re-Identification

Dahjung Chung, Edward J. Delp
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
In this paper, we present a camera-aware image-to-image translation using similarity preserving StarGAN (SP-StarGAN) as the data augmentation for person re-identification.  ...  We propose the addition of an identity mapping term and a multiscale structural similarity term as additional losses for the generator.  ...  SP-StarGAN can be used not only for ReID data augmentation but also for general multi-domain image-to-image translation.  ... 
doi:10.1109/cvprw.2019.00193 dblp:conf/cvpr/ChungD19 fatcat:ofszwvozvbc6bd26twexbpr2mi
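
The two extra generator losses mentioned in the excerpt (an identity mapping term and a multiscale structural similarity term) could be sketched as below. The loss weights, the image range, and the third-party pytorch_msssim package are assumptions; this is not the SP-StarGAN implementation.

```python
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party package, assumed installed

def generator_extra_losses(G, real, orig_domain, target_domain,
                           lambda_id=5.0, lambda_ssim=1.0):
    """Identity-mapping and structural-similarity terms for a StarGAN-style generator.

    G(image, domain) is a placeholder for the domain-conditional generator; images are
    assumed scaled to [0, 1]; the lambda weights are illustrative, not the paper's values.
    """
    # Identity mapping: translating an image to its own domain should change nothing.
    loss_id = F.l1_loss(G(real, orig_domain), real)

    # Structural similarity: preserve structure when translating to the target camera domain.
    fake = G(real, target_domain)
    loss_ssim = 1.0 - ms_ssim(fake, real, data_range=1.0)

    return lambda_id * loss_id + lambda_ssim * loss_ssim
```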

Cross-spectral Face Completion for NIR-VIS Heterogeneous Face Recognition [article]

Ran He, Jie Cao, Lingxiao Song, Zhenan Sun, Tieniu Tan
2019 arXiv   pre-print
We demonstrate that by attaching the correction component, we can simplify heterogeneous face synthesis from one-to-many unpaired image translation to one-to-one paired image translation, and minimize  ...  A warping procedure is developed to integrate the two components into an end-to-end deep network.  ...  CFC presents a deep framework for generating a frontal VIS image of a person's face given an input NIR face image.  ... 
arXiv:1902.03565v1 fatcat:3u4ihwcotvcy7bfrktqxwdt554

MISS GAN: A Multi-IlluStrator Style Generative Adversarial Network for image to illustration translation

Noa Barzilay, Tal Berkovitz Shalev, Raja Giryes
2021 Pattern Recognition Letters  
This paper proposes a Multi-IlluStrator Style Generative Adversarial Network (MISS GAN) that is a multi-style framework for unsupervised image-to-illustration translation, which can generate styled yet  ...  Existing methods require training several generators (one per illustrator) to handle the different illustrators' styles, which limits their practical usage, or require training an image-specific  ...  The MISS GAN model We now introduce our proposed Multi-IlluStrator Style Generative Adversarial Network (MISS GAN) for image-to-illustration translation.  ... 
doi:10.1016/j.patrec.2021.08.006 fatcat:oq4keyjfcbaj3jgdr77ocwtpmu

Deep Generative Adversarial Networks for Image-to-Image Translation: A Review

Aziz Alotaibi
2020 Symmetry  
Image-to-image translation with generative adversarial networks (GANs) has been intensively studied and applied to various tasks, such as multimodal image-to-image translation, super-resolution translation  ...  Many image processing, computer graphics, and computer vision problems can be treated as image-to-image translation tasks.  ...  There are two main advantages of GMM-UNIT: first, allowing for multi-modal and multi-domain translation and, second, allowing for interpolation between domains and extrapolation to unseen domains and translations  ... 
doi:10.3390/sym12101705 fatcat:rqlwjjhrvbc6fhc4mxjjvkwk6i

Shadow Transfer: Single Image Relighting For Urban Road Scenes [article]

Alexandra Carlson, Ram Vasudevan, Matthew Johnson-Roberson
2019 arXiv   pre-print
the art image to image translation methods.  ...  There have been impressive advances in the realm of image to image translation in transferring previously unseen visual effects into a dataset, specifically in day to night translation.  ...  Our results indicate that the proposed Shadow Transfer framework generates more realistic images than the state-of-the-art multi-domain to multi-domain transfer methods.  ... 
arXiv:1909.10363v2 fatcat:jee56bmrsneltj4bsokkig357y

StereoGAN: Bridging Synthetic-to-Real Domain Gap by Joint Optimization of Domain Translation and Stereo Matching

Rui Liu, Chengxi Yang, Wenxiu Sun, Xiaogang Wang, Hongsheng Li
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Although unsupervised image-to-image translation networks represented by CycleGAN show great potential in dealing with domain gap, it is non-trivial to generalize this method to stereo matching due to  ...  bidirectional multi-scale feature re-projection loss and correlation consistency loss, to help translate all synthetic stereo images into realistic ones as well as maintain epipolar constraints.  ...  This work is supported in part by SenseTime Group Limited, and in part by the General Research Fund through the Research Grants Council of Hong Kong under Grants CUHK14202217, CUHK14203118, CUHK14205615  ... 
doi:10.1109/cvpr42600.2020.01277 dblp:conf/cvpr/LiuYSWL20 fatcat:7r6mbp5y3fa2dp42gzyblssmpe

Multi-source Domain Adaptation for Visual Sentiment Classification [article]

Chuang Lin, Sicheng Zhao, Lei Meng, Tat-Seng Chua
2020 arXiv   pre-print
In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification.  ...  Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.  ...  Image Translation Pipeline As aforementioned, the unified sentiment space allows using only one generator G_ST to adapt multi-source images so that they become indistinguishable from the target domain.  ... 
arXiv:2001.03886v1 fatcat:hthefzghcbfspg3d7dzilkekgm

OT-driven Multi-Domain Unsupervised Ultrasound Image Artifact Removal using a Single CNN [article]

Jaeyoung Huh, Shujaat Khan, Jong Chul Ye
2020 arXiv   pre-print
Inspired by the recent success of multi-domain image transfer, here we propose a novel, unsupervised, deep learning approach in which a single neural network can be used to deal with different types of  ...  This poses a fundamental limitation in the practical use of deep learning for US, since a large number of models would otherwise need to be stored to deal with various US image artifacts.  ...  Multi-Domain Image Translation One of the earliest works in image translation is Pix2Pix [15], based on conditional generative adversarial networks (cGAN).  ... 
arXiv:2007.05205v1 fatcat:shnlycu2y5gbnhue3gwnrbnpjy
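
For reference, the Pix2Pix-style conditional-GAN generator objective mentioned in the excerpt combines an adversarial term on (input, output) pairs with an L1 reconstruction term. The sketch below uses placeholder G and D callables and the commonly cited lambda = 100 weight; it is a generic illustration, not this paper's method.

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(D, G, x, y, lambda_l1=100.0):
    """Conditional-GAN generator loss: fool D on the (input, output) pair, stay close to y in L1.

    x: input-domain image batch, y: paired target-domain batch,
    D(x, y_hat): logit scores from a conditional discriminator (placeholder callable).
    """
    y_hat = G(x)
    pred_fake = D(x, y_hat)
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * F.l1_loss(y_hat, y)
```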

Multi-Style Unsupervised Image Synthesis Using Generative Adversarial Nets

Guoyun Lv, Syed Muhammad Israr, Shengyong Qi
2021 IEEE Access  
Unsupervised cross-domain image-to-image translation is a very active topic in computer vision and graphics.  ...  A Multi-Style Unsupervised Feature-Wise image synthesis model using Generative Adversarial Nets (MSU-FW-GAN) based on the MSU-GAN is proposed for the shape variation tasks.  ...  CONCLUSION In this paper, a general and effective framework is proposed for multi-style unsupervised image-to-image translation.  ... 
doi:10.1109/access.2021.3087665 fatcat:6qzpaomykvc4pogw5ex53oceb4
Showing results 1 — 15 out of 87,858 results