Photo style transfer with consistency losses
[article]
2020
arXiv
pre-print
To enforce photorealism, we introduce a content preserving mechanism by combining a cycle-consistency loss with a self-consistency loss. ...
We address the problem of style transfer between two photos and propose a new way to preserve photorealism. ...
CONCLUSION We designed a new method for effective photo stylization between two images that consists of training a pair of deep convnets with cycle- and self-consistency losses. ...
arXiv:2005.04408v1
fatcat:ed6lzibl7jax7ohgstnhrck4xu
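The two losses named in the entry above can be pictured with a minimal PyTorch-style sketch. Here the self-consistency term is read as an identity-style constraint (mapping an image already in the target domain should leave it nearly unchanged); this reading is an assumption for illustration, not necessarily the exact formulation in arXiv:2005.04408.

    import torch.nn.functional as F

    def consistency_losses(G, F_net, photo_a, photo_b):
        # G maps domain A -> B, F_net maps B -> A (names are illustrative).
        # Cycle consistency: a round trip should reproduce the input photo.
        cycle = F.l1_loss(F_net(G(photo_a)), photo_a) + \
                F.l1_loss(G(F_net(photo_b)), photo_b)
        # Assumed self-consistency (identity-style) term: mapping an image
        # already in the target domain should change it as little as possible.
        self_cons = F.l1_loss(G(photo_b), photo_b) + \
                    F.l1_loss(F_net(photo_a), photo_a)
        return cycle, self_cons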
Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
... a relaxed forward cycle-consistency loss. ...
In this paper, we address the problem of automatic transfer from face photos to portrait drawings with unpaired training data. ...
Comparisons with neural style transfer methods are shown in Fig. 5. ...
doi:10.1109/cvpr42600.2020.00824
dblp:conf/cvpr/YiLLR20
fatcat:gai3tdzdvvccpccf4mukbrjfje
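One generic way to "relax" a cycle-consistency loss, as mentioned in the entry above, is to compare the round-trip reconstruction with the input only at a coarse resolution, so the drawing is not forced to encode every photographic detail. The sketch below is an illustrative relaxation under that assumption, not necessarily the asymmetric cycle mapping used by Yi et al.

    import torch.nn.functional as F

    def relaxed_cycle_loss(recon, target, scale=0.25):
        # Compare low-resolution versions so fine texture need not survive the cycle.
        recon_lr = F.interpolate(recon, scale_factor=scale, mode="bilinear",
                                 align_corners=False)
        target_lr = F.interpolate(target, scale_factor=scale, mode="bilinear",
                                  align_corners=False)
        return F.l1_loss(recon_lr, target_lr)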
MW-GAN: Multi-Warping GAN for Caricature Generation with Multi-Style Geometric Exaggeration
[article]
2020
arXiv
pre-print
It requires simultaneous style transfer and shape exaggeration with rich diversity, and meanwhile preserving the identity of the input. ...
We bridge the gap between the style and landmarks of an image with corresponding latent code spaces by a dual-way design, so as to generate caricatures with arbitrary styles and geometric exaggeration, ...
(6) constrain the style transfer from caricatures to photos. It is only possible to impose this cycle consistency in our dual-way design. ...
arXiv:2001.01870v1
fatcat:72437cchurclznoog52dql3wo4
Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data
2022
IEEE Transactions on Pattern Analysis and Machine Intelligence
Along with localized discriminators for important facial regions, our method well preserves all important facial features. ...
Our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data. ...
Comparisons with neural style transfer methods are shown in Fig. 9. ...
doi:10.1109/tpami.2022.3147570
pmid:35104210
fatcat:owiyzlthwvcyhfahi36qb4wfg4
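The "localized discriminators for important facial regions" mentioned above can be sketched as separate discriminators applied to crops of regions such as the eyes and mouth. The region names, box format, and loss form below are assumptions for illustration, not the paper's API.

    import torch
    import torch.nn.functional as F

    def local_adversarial_loss(discriminators, drawing, region_boxes):
        # discriminators: dict mapping region name -> discriminator module
        # region_boxes: dict mapping region name -> (y0, y1, x0, x1) crop box
        loss = 0.0
        for name, (y0, y1, x0, x1) in region_boxes.items():
            crop = drawing[:, :, y0:y1, x0:x1]          # crop one facial region
            logits = discriminators[name](crop)
            # Generator-side term: each local crop should look like a real drawing.
            loss = loss + F.binary_cross_entropy_with_logits(
                logits, torch.ones_like(logits))
        return loss / len(region_boxes)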
Image to Image Translation using Deep Learning Techniques
2020
International Journal of Computer Applications
Our innovation is that we train and test the cycle-consistent adversarial networks on our dataset. ...
The experiment outcomes show that our method can successfully transfer for disparate tasks while conserving the original content. ...
RELATED WORK CNNs have been proven to have a powerful aptitude for image style transformation. Our work with CNNs is based on the paper Deep Photo Style Transfer by Yanghao Li [1]. ...
doi:10.5120/ijca2020920745
fatcat:kilrichqqzetfeo2vxaemijddm
Photo-realistic Facial Texture Transfer
[article]
2017
arXiv
pre-print
Style transfer methods have achieved significant success in recent years with the use of convolutional neural networks. ...
However, many of these methods concentrate on artistic style transfer with few constraints on the output image appearance. ...
Despite the success of artistic style transfer, facial style transfer remains challenging due to the requirement of photo-realism and semantic consistency. ...
arXiv:1706.04306v1
fatcat:75ewb2dpizcddedmdtokqo75gm
3D Photo Stylization: Learning to Generate Stylized Novel Views from a Single Image
[article]
2021
arXiv
pre-print
Our key intuition is that style transfer and view synthesis have to be jointly modeled for this task. ...
Style transfer and single-image 3D photography as two representative tasks have so far evolved independently. ...
Effect of Consistency Loss. We evaluate the contribution of our consistency loss in Table 2. Despite a shared point cloud, the model trained without the consistency loss ...
arXiv:2112.00169v2
fatcat:aiwjuqodtvaglppzcci6w5ecye
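The consistency loss discussed above can be pictured as a cross-view constraint: 3D points from the shared point cloud, stylized and rendered from two viewpoints, should receive similar colors. The correspondence indices below are assumed to come from reprojection; this is an illustration under that assumption rather than the paper's implementation.

    import torch
    import torch.nn.functional as F

    def view_consistency_loss(stylized_a, stylized_b, idx_a, idx_b):
        # stylized_*: (B, C, H, W) stylized renderings of two views
        # idx_*: (B, N) flattened pixel indices of mutually visible points
        flat_a = stylized_a.flatten(2)      # (B, C, H*W)
        flat_b = stylized_b.flatten(2)
        c = flat_a.size(1)
        pts_a = torch.gather(flat_a, 2, idx_a.unsqueeze(1).expand(-1, c, -1))
        pts_b = torch.gather(flat_b, 2, idx_b.unsqueeze(1).expand(-1, c, -1))
        return F.l1_loss(pts_a, pts_b)      # same 3D point, similar stylized color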
Face Style Transfer and Removal with Generative Adversarial Network
2019
2019 16th International Conference on Machine Vision Applications (MVA)
Style transfer gives a new way for artistic creation while style removal can be beneficial for face verification, photo-realistic content editing or facial analysis. ...
While style transfer has been widely studied, recovering photo-realistic images from corresponding artistic works has not been fully investigated. ...
We unite an adversarial loss, a cycle-consistent loss, an identity-preserving loss, and a pixel-level similarity loss to transfer style and recover the face. ...
doi:10.23919/mva.2019.8757925
dblp:conf/mva/ZhuL19
fatcat:hpevcw3yjrfmtdiyifj3ojcc2q
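A minimal sketch of how the four losses listed above could be combined into one generator objective; the loss weights and the identity feature extractor id_net are assumptions for illustration, not values from the paper.

    import torch
    import torch.nn.functional as F

    def generator_objective(fake_logits, cycle_recon, source, output, id_net,
                            w_cyc=10.0, w_id=1.0, w_pix=1.0):
        adv = F.binary_cross_entropy_with_logits(          # adversarial loss
            fake_logits, torch.ones_like(fake_logits))
        cyc = F.l1_loss(cycle_recon, source)                # cycle-consistency loss
        idp = F.l1_loss(id_net(output), id_net(source))     # identity-preserving loss
        pix = F.l1_loss(output, source)                     # pixel-level similarity loss
        return adv + w_cyc * cyc + w_id * idp + w_pix * pix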
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
[article]
2020
arXiv
pre-print
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. ...
Because this mapping is highly under-constrained, we couple it with an inverse mapping F: Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). ...
Figure 15: We compare our method with neural style transfer ...
Figure 16: We compare our method with neural style transfer [13] on various applications. ...
arXiv:1703.10593v7
fatcat:xke57fshn5cn5of4icjvehrdq4
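The constraint F(G(X)) ≈ X quoted above is realized in CycleGAN as an L1 reconstruction penalty in both directions; a minimal sketch:

    import torch.nn.functional as F

    def cycle_consistency_loss(G, F_inv, x, y):
        # G: X -> Y, F_inv: Y -> X (called F in the paper)
        loss_x = F.l1_loss(F_inv(G(x)), x)   # X -> Y -> X should reproduce x
        loss_y = F.l1_loss(G(F_inv(y)), y)   # Y -> X -> Y should reproduce y
        return loss_x + loss_y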
AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation
[article]
2021
arXiv
pre-print
Our aim is to synthesize anime-faces which are style-consistent with a given reference anime-face. ...
Specifically, a new generator architecture is proposed to simultaneously transfer color/texture styles and transform local facial shapes into anime-like counterparts based on the style of a reference anime-face ...
Adversarial loss. Given a photo-face x ∈ X and a reference anime-face y ∈ Y, the generator aims to synthesize from x an output image G(x, y) with a style transferred from y. ...
arXiv:2102.12593v2
fatcat:vpylvtmgf5atrjshe3yxumpfwy
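The adversarial loss excerpted above pits the reference-conditioned generator G(x, y) against a discriminator on the anime-face domain. The sketch below uses a plain non-saturating GAN loss; AniGAN's actual discriminator design is more elaborate, so treat this as a generic illustration.

    import torch
    import torch.nn.functional as F

    def adversarial_losses(D, G, photo_x, anime_y):
        fake = G(photo_x, anime_y)          # anime-face from photo_x, styled by anime_y
        real_logits = D(anime_y)
        fake_logits = D(fake.detach())
        # Discriminator: real anime-faces vs. generated ones.
        d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
                 F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
        # Generator: try to make the synthesized face be classified as real.
        gen_logits = D(fake)
        g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
        return d_loss, g_loss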
Neural Comic Style Transfer: Case Study
[article]
2018
arXiv
pre-print
In this paper, we present a comparison of how state-of-the-art style transfer methods cope with transferring various comic styles on different images. ...
Some further works introduced various improvements regarding generalization, quality and efficiency, but each of them was mostly focused on styles such as paintings, abstract images or photo-realistic ...
This method consists of two networks: a style transfer network and a loss network. The loss network is quite similar to the network presented in [1]. ...
arXiv:1809.01726v2
fatcat:skzu6hopyzapnh2qptd5ovfr7a
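The two-network setup described above (a feed-forward style transfer network trained against a fixed "loss network") can be sketched as follows; the choice of VGG-16 layers and the content-only loss are simplifying assumptions.

    import torch.nn.functional as F
    from torchvision.models import vgg16

    # Fixed, pretrained feature extractor used only to compute losses.
    loss_net = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    for p in loss_net.parameters():
        p.requires_grad_(False)

    def perceptual_content_loss(transfer_net, content_img):
        stylized = transfer_net(content_img)
        return F.mse_loss(loss_net(stylized), loss_net(content_img))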
Automated Deep Photo Style Transfer
[article]
2019
arXiv
pre-print
Deep Photo Style Transfer is an attempt to transfer the style of a reference image to a content image while preserving its photorealism. ...
This is achieved by introducing a constraint that prevents distortions in the content image and by applying the style transfer independently for semantically different parts of the images. ...
With Deep Photo Style Transfer, it was not possible to produce any notable improvement with various weights for the image assessment loss. ...
arXiv:1901.03915v1
fatcat:6fxqztvfcbhcpfh7wzqbz74bom
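Applying "the style transfer independently for semantically different parts," as described above, can be sketched as a per-segment Gram-matrix style loss, so that, for example, sky style is only matched against sky. The mask handling below is simplified and assumed.

    import torch
    import torch.nn.functional as F

    def gram(feat):                                # feat: (B, C, H, W)
        f = feat.flatten(2)                        # (B, C, H*W)
        return torch.bmm(f, f.transpose(1, 2)) / f.size(2)

    def segmented_style_loss(feat_out, feat_style, masks_out, masks_style):
        # masks_*: (B, K, H, W) soft masks for K semantic classes,
        # resized to the feature resolution beforehand.
        loss = 0.0
        for k in range(masks_out.size(1)):
            g_out = gram(feat_out * masks_out[:, k:k + 1])
            g_sty = gram(feat_style * masks_style[:, k:k + 1])
            loss = loss + F.mse_loss(g_out, g_sty)
        return loss / masks_out.size(1)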
From Reality to Perception: Genre-Based Neural Image Style Transfer
2018
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
The target style representation is reconstructed based on the semantic correspondence between the real-world photo and the painting, which enables perception guidance in style transfer. ...
We present a novel genre style transfer framework modeled after the mechanism of actual artwork production. ...
Johnson et al. [2016] designed a transform net that is trained with custom perceptual loss functions for real-time image style transfer. ...
doi:10.24963/ijcai.2018/485
dblp:conf/ijcai/MaWGL18
fatcat:iknfotjzqvgoleoslxltyn3u4q
MOST-Net: A Memory Oriented Style Transfer Network for Face Sketch Synthesis
[article]
2022
arXiv
pre-print
To tackle this problem, we present an end-to-end Memory Oriented Style Transfer Network (MOST-Net) for face sketch synthesis which can produce high-fidelity sketches with limited data. ...
Furthermore, we design a novel Memory Refinement Loss (MR Loss) for feature alignment in the memory module, which enhances the accuracy of memory slots in an unsupervised manner. ...
Image Style Transfer Initial style transfer aims to transform photos into painting-like images, which indicates that face sketch synthesis can be regarded as a style transfer task. Gatys et al. ...
arXiv:2202.03596v1
fatcat:j6pbz4jttjawredqihlvajvgnu
PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes ...
We construct two coupled networks to implement these functions, one that transfers makeup style and a second that can remove makeup, such that the output of their successive application to an input photo ...
We compare the output from the second stage with the source to measure identity preservation and style consistency. ...
doi:10.1109/cvpr.2018.00012
dblp:conf/cvpr/ChangLYF18
fatcat:7abkkauvzbbptcb5yevubongdq
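The "successive application" check described above can be sketched as an asymmetric cycle: applying makeup from a reference and then removing it should return the source photo, which is what is compared for identity preservation. A hypothetical rendering of that idea, not the authors' code.

    import torch.nn.functional as F

    def asymmetric_cycle_loss(apply_net, remove_net, source_photo, makeup_ref):
        with_makeup = apply_net(source_photo, makeup_ref)   # forward: example-based makeup transfer
        recovered = remove_net(with_makeup)                 # backward: makeup removal
        return F.l1_loss(recovered, source_photo)           # identity / content preservation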
Showing results 1 — 15 out of 34,175 results