124 Hits in 5.3 sec

Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features

Feida Zhu, Zhicheng Yan, Jiajun Bu, Yizhou Yu
2017 IEEE Transactions on Image Processing  
Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction.  ...  To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos.  ...  Our deep network makes use of fully convolutional networks to extract global and contextual semantic features for every superpixel in the input image.  ... 
doi:10.1109/tip.2017.2703099 pmid:28500001 fatcat:awwfdcekgzfp7nvx4e2ce37anq
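The snippet above describes a two-stage design: an FCN computes semantic features, which are pooled per superpixel and passed to fully connected layers that predict color adjustments. A minimal PyTorch-style sketch of that pipeline follows; the layer sizes, the per-channel gain/bias adjustment head, and all names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SemanticFCN(nn.Module):
    """Toy fully convolutional feature extractor (stand-in for the paper's FCN)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):          # x: (B, 3, H, W)
        return self.net(x)         # (B, feat_dim, H, W)

class AdjustmentHead(nn.Module):
    """Fully connected layers mapping a pooled superpixel feature to
    per-channel affine color-adjustment parameters (gain and bias)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 6),       # 3 gains + 3 biases
        )
    def forward(self, f):
        return self.mlp(f)

def stylize(image, superpixels, fcn, head):
    """image: (3, H, W) in [0, 1]; superpixels: (H, W) integer labels."""
    feats = fcn(image.unsqueeze(0))[0]             # (C, H, W)
    out = image.clone()
    for sp in superpixels.unique():
        mask = superpixels == sp
        f = feats[:, mask].mean(dim=1)             # pooled superpixel feature
        gain, bias = head(f).chunk(2)              # (3,), (3,)
        out[:, mask] = image[:, mask] * gain.unsqueeze(1) + bias.unsqueeze(1)
    return out.clamp(0, 1)
```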

Automatic Image Stylization Using Deep Fully Convolutional Networks [article]

Feida Zhu, Yizhou Yu
2018 arXiv   pre-print
Our deep learning architecture is an end-to-end deep fully convolutional network performing semantics-aware feature extraction as well as automatic image adjustment prediction.  ...  Color and tone stylization strives to enhance unique themes with artistic color and tone adjustments.  ...  Let us now elaborate on the fully convolutional sub-network for semantic feature computation.  ... 
arXiv:1811.10872v1 fatcat:oa63qhiafvfxbnkpkg46w5dfo4

Interactive Video Stylization Using Few-Shot Patch-Based Training [article]

Ondřej Texler, David Futschik, Michal Kučera, Ondřej Jamriška, Šárka Sochorová, Menglei Chai, Sergey Tulyakov, Daniel Sýkora
2020 arXiv   pre-print
We demonstrate how to train an appearance translation network from scratch using only a few stylized exemplars while implicitly preserving temporal consistency.  ...  In this paper, we present a learning-based method for keyframe-based video stylization that allows an artist to propagate the style from a few selected keyframes to the rest of the sequence.  ...  ACKNOWLEDGMENTS We thank the reviewers for their insightful comments and suggestions. We are also grateful to Aneta Texler for her help with manuscript preparation as well as  ... 
arXiv:2004.14489v1 fatcat:h7lf55av7jbkbmuja5m2bbzcim
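The abstract's key claim is that an appearance translation network can be trained from scratch on aligned random patches cut from a few stylized keyframes. A hedged sketch of that training loop, assuming any small image-to-image network `net`; the patch size, batch size, and L1 loss are illustrative choices, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def sample_patches(frames, styles, n=16, size=32):
    """Cut aligned random patches from (frame, stylized-keyframe) pairs.
    frames, styles: lists of (3, H, W) tensors, H and W larger than size."""
    xs, ys = [], []
    for _ in range(n):
        i = torch.randint(len(frames), (1,)).item()
        f, s = frames[i], styles[i]
        _, H, W = f.shape
        y0 = torch.randint(H - size, (1,)).item()
        x0 = torch.randint(W - size, (1,)).item()
        xs.append(f[:, y0:y0 + size, x0:x0 + size])
        ys.append(s[:, y0:y0 + size, x0:x0 + size])
    return torch.stack(xs), torch.stack(ys)

def train_step(net, opt, frames, styles):
    """One optimization step on a fresh batch of patch pairs."""
    x, y = sample_patches(frames, styles)
    loss = F.l1_loss(net(x), y)   # patch-level appearance loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

At inference the trained network is applied to full frames; because it only ever saw local patches, it generalizes across the sequence while the shared weights keep the stylization stable frame to frame.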

Automatic Content-Aware Color and Tone Stylization

Joon-Young Lee, Kalyan Sunkavalli, Zhe Lin, Xiaohui Shen, In So Kweon
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We achieve this by learning a style ranking for a given input using a large photo collection and selecting a diverse subset of matching styles for the final style transfer.  ...  We also propose an improved technique that transfers the global color and tone of the chosen exemplars to the input photograph while avoiding the common visual artifacts produced by the existing style  ...  We segment the large photo collection into content-based clusters using semantic features, and learn a ranking of the style exemplars for each cluster by evaluating their style similarities to the images  ... 
doi:10.1109/cvpr.2016.271 dblp:conf/cvpr/LeeS0SK16 fatcat:5byuvnvjh5ev3fwidi74asa5ry
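The described pipeline ranks style exemplars against a semantically clustered photo collection and then selects a diverse subset for transfer. A minimal sketch of the ranking-and-diversity step; the cosine-similarity ranking and greedy diversity threshold are assumptions, not the paper's learned ranker:

```python
import torch
import torch.nn.functional as F

def rank_and_select(input_feat, exemplar_feats, k=5, div_thresh=0.9):
    """input_feat: (D,) semantic feature of the input photo;
    exemplar_feats: (N, D) features of the style exemplars.
    Rank exemplars by similarity to the input, then greedily keep
    only exemplars that are not too similar to ones already chosen."""
    sims = F.cosine_similarity(exemplar_feats, input_feat.unsqueeze(0), dim=1)
    order = sims.argsort(descending=True)
    chosen = []
    for idx in order:
        f = F.normalize(exemplar_feats[idx], dim=0)
        if all(torch.dot(f, F.normalize(exemplar_feats[j], dim=0)) < div_thresh
               for j in chosen):
            chosen.append(idx.item())
        if len(chosen) == k:
            break
    return chosen   # indices of a diverse, well-matched exemplar subset
```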

Neural Neighbor Style Transfer [article]

Nicholas Kolkin, Michal Kucera, Sylvain Paris, Daniel Sykora, Eli Shechtman, Greg Shakhnarovich
2022 arXiv   pre-print
Our approach is based on explicitly replacing neural features extracted from the content input (to be stylized) with those from a style exemplar, then synthesizing the final output based on these rearranged features.  ...  By transferring the style from an arbitrary exemplar image one can stylize a subset of frames and then run an existing keyframe-based video stylization technique of Jamriška et al.  ... 
arXiv:2203.13215v1 fatcat:crrccbvxkzbihh6w74wyv3o2tq
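The stated mechanism is to replace each content feature with its nearest neighbor among the style exemplar's features before synthesizing the output. A minimal sketch of that replacement step using cosine similarity; the feature extractor and the synthesis stage are left out, and the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def neighbor_replace(content_feats, style_feats):
    """content_feats: (C, Hc, Wc); style_feats: (C, Hs, Ws).
    Replace every content feature vector with its cosine-nearest
    style feature vector, yielding a rearranged style-feature map."""
    C, Hc, Wc = content_feats.shape
    c = F.normalize(content_feats.reshape(C, -1), dim=0)   # (C, Hc*Wc)
    s_flat = style_feats.reshape(C, -1)                    # (C, Hs*Ws)
    s = F.normalize(s_flat, dim=0)
    sim = s.t() @ c                      # (Hs*Ws, Hc*Wc) similarity matrix
    nn_idx = sim.argmax(dim=0)           # nearest style feature per content location
    return s_flat[:, nn_idx].reshape(C, Hc, Wc)
```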

Real-Time Neural Style Transfer for Videos

Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Wenhao Jiang, Xiaolong Zhu, Zhifeng Li, Wei Liu
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images.  ...  Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually  ...  For each style, an individual stylizing network is trained. Fig. 3 shows the stylized results using 6 exemplar styles, named Gothic, Candy, Dream, Mosaic, Composition, and Starry Night.  ... 
doi:10.1109/cvpr.2017.745 dblp:conf/cvpr/HuangWLMJZLL17 fatcat:sqhs3mbhyfdcrou3tweuvw2jsy
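A common way to obtain the temporal consistency the abstract mentions (a sketch of the standard formulation, not necessarily this paper's exact loss) is to penalize the difference between the current stylized frame and the previous stylized frame warped by optical flow, with occluded pixels masked out:

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp frame (B, C, H, W) with flow (B, 2, H, W) in pixels."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                       # (B, 2, H, W)
    # normalize coordinates to [-1, 1] for grid_sample
    cx = 2 * coords[:, 0] / (W - 1) - 1
    cy = 2 * coords[:, 1] / (H - 1) - 1
    return F.grid_sample(frame, torch.stack((cx, cy), dim=-1),
                         align_corners=True)

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Penalize deviation from the flow-warped previous stylized frame,
    ignoring occluded pixels (mask: 1 = trackable, 0 = occluded)."""
    warped = warp(stylized_prev, flow)
    return (occlusion_mask * (stylized_t - warped) ** 2).mean()
```

Training the per-style feed-forward network with this term added to the usual content and style losses pushes it to produce stable videos at no extra inference cost.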

StyleBlit: Fast Example-Based Stylization with Local Guidance [article]

Daniel Sýkora and Ondřej Jamriška and Jingwan Lu and Eli Shechtman
2018 arXiv   pre-print
Local guidance encourages transfer of content from the source exemplar to the target image in a semantically meaningful way.  ...  We present StyleBlit, an efficient example-based style transfer algorithm that can deliver high-quality stylized renderings in real time on a single-core CPU.  ...  A one-to-one mapping between the input image and its stylized version is used to guide the transfer by establishing correspondences between the source and target (based, e.g., on color correspondence).  ... 
arXiv:1807.03249v1 fatcat:ayda2nmibzdw7kiw2p7mwp6wku
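StyleBlit's stated ingredient is a guidance channel that pairs target pixels with semantically corresponding source-exemplar pixels. A toy brute-force sketch of guidance-driven transfer follows; the real algorithm blits coherent patches and is far faster, and the normal-map guidance and function names here are assumptions:

```python
import numpy as np

def guided_transfer(src_guide, src_style, tgt_guide):
    """src_guide, tgt_guide: (H, W, 3) guidance images (e.g., normal maps);
    src_style: (H, W, 3) stylized exemplar. Returns a stylized target by
    copying, for each target pixel, the style color of the source pixel
    with the closest guidance value."""
    Ht, Wt, _ = tgt_guide.shape
    src_g = src_guide.reshape(-1, 3).astype(np.float32)   # (Ns, 3)
    tgt_g = tgt_guide.reshape(-1, 3).astype(np.float32)   # (Nt, 3)
    # brute-force nearest neighbor in guidance space
    # (quadratic memory; only practical for tiny images)
    d = ((tgt_g[:, None, :] - src_g[None, :, :]) ** 2).sum(-1)  # (Nt, Ns)
    idx = d.argmin(axis=1)
    return src_style.reshape(-1, 3)[idx].reshape(Ht, Wt, 3)
```

Per-pixel nearest-neighbor copying like this produces noisy seams; StyleBlit's contribution is precisely to transfer larger coherent chunks of the exemplar so that strokes and texture survive intact.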

Non-Parametric Neural Style Transfer [article]

Nicholas Kolkin
2021 arXiv   pre-print
...  the stylized output's pixels.  ...  I will begin by proposing novel definitions of style and content based on optimal transport and self-similarity, and demonstrating how a style transfer algorithm based on these definitions generates outputs  ...  By transferring the style from an arbitrary exemplar image one can stylize a subset of frames and then run an existing keyframe-based video stylization technique of Jamriška et al.  ... 
arXiv:2108.12847v1 fatcat:v3mtlfi45vccjnh4r2pzuf4ntu
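The abstract defines content via self-similarity: the pattern of pairwise similarities among an image's own features, which is preserved even when appearance changes. A minimal sketch of a self-similarity content loss built on that definition (an illustration of the idea, not the thesis's exact objective):

```python
import torch
import torch.nn.functional as F

def self_similarity(feats):
    """feats: (C, N) feature vectors at N spatial locations.
    Returns the (N, N) matrix of pairwise cosine similarities,
    the image's 'self-similarity' signature."""
    f = F.normalize(feats, dim=0)
    return f.t() @ f

def content_loss(output_feats, content_feats):
    """Match self-similarity patterns rather than raw features, so the
    output may change appearance freely while keeping structure."""
    return F.l1_loss(self_similarity(output_feats),
                     self_similarity(content_feats))
```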

Neural Style Transfer: A Critical Review

Akhil Singh, Vaibhav Jaiswal, Gaurav Joshi, Adith Sanjeeve, Shilpa Gite, Ketan Kotecha
2021 IEEE Access  
[31] used a conditional GAN to achieve the result. The architecture used here was based on fully connected networks; [31] used convolutional layers in the generator.  ...  Using the learned per-feature scaling factor, the noise image is first broadcast onto all feature maps, and then the corresponding convolution layer output is applied.  ... 
doi:10.1109/access.2021.3112996 fatcat:zj3gt4hgazejtgcplyqc67dyfe

Unpaired Image Translation via Adaptive Convolution-based Normalization [article]

Wonwoong Cho, Kangyeol Kim, Eungyeup Kim, Hyunwoo J. Kim, Jaegul Choo
2019 arXiv   pre-print
In response, we propose an advanced normalization technique based on adaptive convolution (AdaCoN), in order to properly impose style information onto the content of an input image.  ...  Disentangling the content and style information of an image has played an important role in the recent success of image translation.  ...  Based on this idea, dynamic filter networks [19] take an auxiliary input image to determine convolution filter weights in a video prediction task. Furthermore, Kang et al.  ... 
arXiv:1911.13271v1 fatcat:goiotdn2mfeltmzkaoe66ncljm
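AdaCoN, as described, imposes style by adaptive convolution: filter weights predicted from the style input are applied to normalized content activations. A hedged sketch of such a block, assuming a depthwise predicted filter over an instance-norm base; the layer sizes, the depthwise choice, and all names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaCoNBlock(nn.Module):
    """Sketch: normalize content activations, then apply a depthwise
    convolution whose weights are predicted from a style code."""
    def __init__(self, channels, style_dim, k=3):
        super().__init__()
        self.channels, self.k = channels, k
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.filter_pred = nn.Linear(style_dim, channels * k * k)

    def forward(self, content, style_code):
        # content: (1, C, H, W); style_code: (style_dim,)
        w = self.filter_pred(style_code).view(self.channels, 1, self.k, self.k)
        x = self.norm(content)
        # depthwise conv: one predicted k x k filter per channel,
        # so the style locally reshapes each feature map
        return F.conv2d(x, w, padding=self.k // 2, groups=self.channels)
```

Compared with AdaIN-style approaches that inject only per-channel statistics, predicting whole convolution kernels lets the style influence local spatial structure, which is the motivation the snippet points to.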

Image Colorization: A Survey and Dataset [article]

Saeed Anwar, Muhammad Tahir, Chongyi Li, Ajmal Mian, Fahad Shahbaz Khan, Abdul Wahab Muzaffar
2022 arXiv   pre-print
Image colorization is the process of estimating RGB colors for grayscale images or video frames to improve their aesthetic and perceptual quality.  ...  Using the existing datasets and our new one, we perform an extensive experimental evaluation of existing image colorization methods.  ...  Exemplar-based Colorization Exemplar-based colorization utilizes the colors of example images provided along with input grayscale images.  ... 
arXiv:2008.10774v3 fatcat:xvtwyjouynbdpi4mnyibzp5hzu
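For the survey's exemplar-based category, a classic baseline (in the spirit of luminance-matched chroma transfer, not any specific surveyed method) assigns each grayscale pixel the average chroma of similarly bright reference pixels in Lab space; `exemplar_colorize` below is an illustrative name:

```python
import numpy as np
from skimage import color

def exemplar_colorize(gray, ref_rgb, bins=32):
    """gray: (H, W) in [0, 1]; ref_rgb: (Hr, Wr, 3) in [0, 1].
    Assign each grayscale pixel the mean chroma of reference pixels
    with similar luminance (a crude luminance-matching baseline)."""
    ref_lab = color.rgb2lab(ref_rgb)
    L_ref = ref_lab[..., 0].ravel() / 100.0          # luminance in [0, 1]
    ab_ref = ref_lab[..., 1:].reshape(-1, 2)         # chroma channels
    # mean (a, b) chroma per luminance bin of the reference
    idx = np.clip((L_ref * bins).astype(int), 0, bins - 1)
    ab_mean = np.zeros((bins, 2))
    for b in range(bins):
        sel = idx == b
        if sel.any():
            ab_mean[b] = ab_ref[sel].mean(axis=0)    # empty bins stay gray
    tgt_idx = np.clip((gray * bins).astype(int), 0, bins - 1)
    lab = np.dstack([gray * 100.0, ab_mean[tgt_idx]])
    return color.lab2rgb(lab)
```

Modern exemplar-based methods surveyed in the paper replace this global luminance matching with learned semantic correspondences, but the Lab-space decomposition (keep L, borrow a and b) is the common thread.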

A temporally coherent neural algorithm for artistic style transfer

Michael Dushkoff, Ryan McLaughlin, Raymond Ptucha
2016 2016 23rd International Conference on Pattern Recognition (ICPR)  
One such animation technique that has been widely used  ...  and consistently push the boundaries of creativity.  ...  to dramatically improve semantic understanding of action in videos.  ... 
doi:10.1109/icpr.2016.7900142 dblp:conf/icpr/DushkoffMP16 fatcat:tki3zo4l3fgado5uny4saexabe

Unsupervised Exemplar-Domain Aware Image-to-Image Translation

Yuanbin Fu, Jiayi Ma, Xiaojie Guo
2021 Entropy  
The principle behind this is that, for images from multiple domains, the content features can be obtained by an extractor, while (re-)stylization is achieved by mapping the extracted features specifically  ...  Image-to-image translation is used to convert an image of a certain style into another of the target style with the original content preserved.  ...  The second and sixth columns show the translated results obtained using different exemplars.  ... 
doi:10.3390/e23050565 pmid:34063192 fatcat:5ll3s2huzfdbldpcjxp6gv5mra

Exemplar-Based Sketch Colorization with Cross-Domain Dense Semantic Correspondence

Jinrong Cui, Haowei Zhong, Hailong Liu, Yulu Fu
2022 Mathematics  
Conventional exemplar-based colorization methods tend to transfer styles from reference images to grayscale images by employing image analogy techniques or establishing semantic correspondences.  ...  To address this, we present a framework for exemplar-based sketch colorization that synthesizes colored images from a sketch input and a reference input in a distinct domain.  ...  Our work is inspired by recent exemplar-based image colorization, but we address a more subtle problem: exemplar-based colorization of semantically sparse yet informationally complex sketches.  ... 
doi:10.3390/math10121988 fatcat:2xxs4d25ijawhcxgquewvm4l7q
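The core step named in the abstract is dense semantic correspondence between sketch and reference. A minimal sketch of soft correspondence via attention over normalized feature similarities, used to warp the reference's colors onto the sketch layout; the temperature, the equal tensor shapes, and the function name are assumptions:

```python
import torch
import torch.nn.functional as F

def warp_colors_by_correspondence(sketch_feats, ref_feats, ref_colors, tau=0.01):
    """sketch_feats: (C, H, W) features of the sketch; ref_feats: (C, H, W)
    features of the reference; ref_colors: (3, H, W) reference colors.
    Soft-attend each sketch location to semantically similar reference
    locations and average their colors accordingly."""
    C, H, W = sketch_feats.shape
    q = F.normalize(sketch_feats.reshape(C, -1), dim=0)   # (C, N) queries
    k = F.normalize(ref_feats.reshape(C, -1), dim=0)      # (C, N) keys
    attn = F.softmax(q.t() @ k / tau, dim=1)              # (N, N) correspondence
    v = ref_colors.reshape(3, -1)                         # (3, N) values
    warped = v @ attn.t()                                 # color per sketch location
    return warped.reshape(3, H, W)
```

The warped color map then serves as spatial guidance for a synthesis network; the paper's cross-domain twist is that queries and keys come from different domains (sketch vs. color image), which the shared feature extractor must bridge.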

Manipulating Attributes of Natural Scenes via Hallucination [article]

Levent Karacan, Zeynep Akata, Aykut Erdem, Erkut Erdem
2019 arXiv   pre-print
Once the scene is hallucinated with the given attributes, the corresponding look is then transferred to the input image while keeping the semantic details intact, giving a photo-realistic manipulation  ...  Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against the competing methods.  ...  We would like to thank NVIDIA Corporation for the donation of GPUs used in this research. This work has been partially funded by the DFG-EXC-Nummer 2064/1-Projektnummer 390727645.  ... 
arXiv:1808.07413v3 fatcat:74xfm7dieram5ex466qdjlzjmu
Showing results 1–15 out of 124 results