Transposer: Universal Texture Synthesis Using Feature Maps as Transposed Convolution Filter
[article]
2020
arXiv
pre-print
Specifically, we directly treat the whole encoded feature map of the input texture as transposed convolution filters and the features' self-similarity map, which captures the auto-correlation information ...
In this work, based on the discovery that the assembling/stitching operation in traditional texture synthesis is analogous to a transposed convolution operation, we propose a novel way of using transposed ...
Non-parametric Texture Synthesis. ...
arXiv:2007.07243v1
fatcat:7dydvwapizc4zlmkccezis6pkq
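The core idea in the snippet above, that stitching texture blocks together behaves like a transposed convolution whose filter is the encoded feature map, can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumed shapes (`feat`, `score_map`), not the authors' Transposer implementation.

```python
import torch
import torch.nn.functional as F

# Assumed, illustrative shapes: an encoded texture block and a coarse placement grid.
C, h, w = 64, 16, 16      # channels / spatial size of the encoded texture feature map
H, W = 32, 32             # spatial size of the placement (self-similarity) grid

feat = torch.randn(C, h, w)                                    # encoded feature map of the input texture
score_map = torch.softmax(torch.randn(1, 1, H * W), dim=-1).view(1, 1, H, W)

# conv_transpose2d expects a weight of shape (in_channels, out_channels, kH, kW).
# Using the whole feature map as that weight pastes a weighted copy of it at every
# grid location; overlapping copies are summed, which mimics assembling/stitching.
weight = feat.unsqueeze(0)                                     # (1, C, h, w)
expanded = F.conv_transpose2d(score_map, weight, stride=1)

print(expanded.shape)   # torch.Size([1, 64, 47, 47]); spatial size is (H + h - 1, W + w - 1)
```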
FakeLocator: Robust Localization of GAN-Based Face Manipulations
[article]
2021
arXiv
pre-print
Full face synthesis and partial face manipulation by virtue of the generative adversarial networks (GANs) and its variants have raised wide public concerns. ...
To the best of our knowledge, this is the very first attempt to solve the GAN-based fake localization problem with a gray-scale fakeness map that preserves more information of fake regions. ...
IcGAN uses interpolation and others use transposed convolution. Although only two methods are used, all three of these methods have been proven to induce fake texture. ...
arXiv:2001.09598v4
fatcat:rf3o3pukvrdlxavp2bqiqqmj4m
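For reference, the two upsampling routes the snippet mentions (interpolation followed by a convolution, and a strided transposed convolution) can be sketched as below. The layer sizes are illustrative assumptions, not those of any GAN analyzed in the paper; the point is only that both routes are standard upsampling modules whose traces a detector can exploit.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 128, 16, 16)   # a low-resolution feature map

# Route A: interpolation + ordinary convolution
up_interp = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(128, 64, kernel_size=3, padding=1),
)

# Route B: strided transposed convolution (kernel 4, stride 2, padding 1 exactly
# doubles the resolution; overlapping kernel windows are a known source of
# checkerboard-like "fake texture")
up_tconv = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)

print(up_interp(x).shape, up_tconv(x).shape)   # both torch.Size([1, 64, 32, 32])
```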
Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks
[article]
2018
arXiv
pre-print
In contrast to existing methods that involve the bicubic interpolation for pre-processing (which results in large feature maps), the proposed method directly extracts features from the low-resolution input ...
Convolutional neural networks have recently demonstrated high-quality reconstruction for single image super-resolution. ...
Laplacian pyramid The Laplacian pyramid has been widely used in several vision tasks, including image blending [30] , texture synthesis [31] , edge-aware filtering [32] and semantic segmentation [ ...
arXiv:1710.01992v3
fatcat:fiqqlbaz5raeph4cq27gn25ge4
Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks
2018
IEEE Transactions on Pattern Analysis and Machine Intelligence
In contrast to existing methods that involve the bicubic interpolation for pre-processing (which results in large feature maps), the proposed method directly extracts features from the low-resolution input ...
Convolutional neural networks have recently demonstrated high-quality reconstruction for single image super-resolution. ...
The size of the transposed convolutional filters is 4 × 4. We pad zeros around the boundaries before applying convolution to keep the size of all feature maps the same as the input of each level. ...
doi:10.1109/tpami.2018.2865304
pmid:30106708
fatcat:gcvrtf5klvdpvdivadoanq7ln4
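The two LapSRN records above give enough detail for a small sketch of one pyramid level: zero-padded convolutions that keep feature maps the same size as the level's input, followed by a 4 × 4 transposed convolution for the 2× upsampling step. The number of channels and layers below is an illustrative assumption, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LapLevel(nn.Module):
    """One pyramid level: size-preserving feature convs, then a 2x upsampling."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # zero padding keeps every feature map the same size as the level's input
        self.features = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # 4 x 4 transposed-convolution filters, stride 2: doubles the resolution
        self.upsample = nn.ConvTranspose2d(channels, channels,
                                           kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        return self.upsample(self.features(x))

level = LapLevel()
print(level(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 64, 64])
```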
Up and Down Residual Blocks for Convolutional Generative Adversarial Networks
2021
IEEE Access
In addition, our method shows its universality for the improvement of existing methods. INDEX TERMS Generative adversarial network, convolutional neural network, residual block, sampling. ...
With the upResBlock module for the generator of convolutional GANs, our method can further enhance the generative power of the feature extraction while synthesizing image details for the specified size ...
upResBlock: As illustrated in Figure 2, by using upBlock, data from a low-dimensional input space can be mapped to a high-dimensional feature space. ...
doi:10.1109/access.2021.3056572
fatcat:2utpm7vzvnh7xfs6qneq26jtly
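A generic upsampling residual block of the kind the snippet describes might look like the sketch below: both the main path and the shortcut are upsampled, so a low-resolution input is mapped to a higher-resolution feature space. This is an assumed, simplified layout, not the paper's upResBlock definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpResBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, 1)   # match channels on the shortcut

    def forward(self, x):
        h = F.interpolate(x, scale_factor=2, mode="nearest")   # upsample both paths
        s = self.skip(h)
        h = self.conv2(F.relu(self.conv1(F.relu(h))))
        return h + s                                           # residual sum

block = UpResBlock(256, 128)
print(block(torch.randn(1, 256, 8, 8)).shape)   # torch.Size([1, 128, 16, 16])
```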
Manifold Regularized Slow Feature Analysis for Dynamic Texture Recognition
[article]
2017
arXiv
pre-print
MR-SFA for dynamic texture recognition is proposed in the following steps: 1) learning feature extraction functions as convolution filters by MR-SFA, 2) extracting local features by convolution and pooling ...
In this paper, we present a novel approach stemmed from slow feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly varying features from fast varying signals. ...
We learn feature extraction functions from randomly extracted small video cubes, and we use them as convolution filters to generate feature maps. ...
arXiv:1706.03015v1
fatcat:el2ljsuv5vfz7eotxt34xmagde
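The three-step recipe in the snippet (learn filters from randomly extracted video cubes, apply them as convolution kernels, then pool) can be sketched as follows. To keep the example short, PCA is used here purely as a stand-in for the MR-SFA learning step, and all sizes are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F

rng = np.random.default_rng(0)
video = rng.standard_normal((30, 64, 64)).astype(np.float32)   # (frames, H, W)

# 1) randomly extract small cubes/patches and flatten them into training vectors
k = 7
cubes = np.stack([
    video[t, y:y + k, x:x + k].ravel()
    for t, y, x in zip(rng.integers(0, 30, 500),
                       rng.integers(0, 64 - k, 500),
                       rng.integers(0, 64 - k, 500))
])

# 2) learn feature extraction functions from the patches
#    (top principal components stand in for the slow features here)
cubes -= cubes.mean(axis=0)
_, _, vt = np.linalg.svd(cubes, full_matrices=False)
filters = torch.from_numpy(vt[:8].reshape(8, 1, k, k).copy())   # 8 learned filters

# 3) use the learned functions as convolution filters, then pool the feature maps
frame = torch.from_numpy(video[0])[None, None]                  # (1, 1, H, W)
feature_maps = F.relu(F.conv2d(frame, filters, padding=k // 2))
pooled = F.max_pool2d(feature_maps, 2)
print(pooled.shape)                                             # torch.Size([1, 8, 32, 32])
```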
Spot noise texture synthesis for data visualization
1991
Computer graphics
Local control of the texture is realized by variation of the spot. The spot is a useful primitive for texture design, because, in general, the relations between features of the spot and features of the ...
with a variety of other techniques, such as random faults, filtering, sparse convolution, and particle systems, are discussed. ...
ACKNOWLEDGEMENTS The discussions with Teun Burgers, Wim Rijnsburger (ECN) and Pieter-Jan Stappers (Delft University of Technology - DUT) were very helpful during the development of the work described here ...
doi:10.1145/127719.122751
fatcat:azpbydhps5fahmphbgfledthhq
Spot noise texture synthesis for data visualization
1991
Proceedings of the 18th annual conference on Computer graphics and interactive techniques - SIGGRAPH '91
Local control of the texture is realized by variation of the spot. The spot is a useful primitive for texture design, because, in general, the relations between features of the spot and features of the ...
with a variety of other techniques, such as random faults, filtering, sparse convolution, and particle systems, are discussed. ...
ACKNOWLEDGEMENTS The discussions with Teun Burgers, Wim Rijnsburger (ECN) and Pieter-Jan Stappers (Delft University of Technology - DUT) were very helpful during the development of the work described here ...
doi:10.1145/122718.122751
dblp:conf/siggraph/Wijk91
fatcat:q7slnsuf7bfzniegsxqlbn2ct4
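Both spot-noise records above describe the same construction: the texture is a sum of randomly weighted, randomly positioned copies of a small spot, so the spot's shape gives local control over the result. A minimal sketch, with an assumed anisotropic Gaussian spot and illustrative counts:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 256
texture = np.zeros((H, W))

# an anisotropic Gaussian spot; stretching it along one axis makes the
# synthesized noise streaky in that direction (the "variation of the spot")
k = 15
yy, xx = np.mgrid[-k:k + 1, -k:k + 1]
spot = np.exp(-(xx**2 / 60.0 + yy**2 / 8.0))

for _ in range(4000):
    a = rng.uniform(-1.0, 1.0)                 # random spot intensity
    y, x = rng.integers(0, H - 2 * k - 1, 2)   # random spot position
    texture[y:y + 2 * k + 1, x:x + 2 * k + 1] += a * spot

print(texture.shape, round(float(texture.std()), 3))
```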
Texture Synthesis with Spatial Generative Adversarial Networks
[article]
2017
arXiv
pre-print
Our method has the following features which make it a state of the art algorithm for texture synthesis: high image quality of the generated textures, very high scalability w.r.t. the output texture size ...
To illustrate these capabilities we present multiple experiments with different classes of texture images and use cases. ...
An adversarial approach to texture synthesis We will present a novel class of generative parametric models for texture synthesis, using a fully convolutional architecture trained employing an adversarial ...
arXiv:1611.08207v4
fatcat:z7erif37lrfonowoeudw4sibve
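The scalability claim in the snippet above follows from the generator being fully convolutional: it maps a spatial noise tensor to a texture, so a larger noise grid yields a larger output without retraining. The sketch below shows the mechanism with assumed layer widths, not the SGAN architecture itself.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(32, 256, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
)

small = generator(torch.randn(1, 32, 8, 8))     # 8x8 noise grid   -> 128x128 texture
large = generator(torch.randn(1, 32, 20, 20))   # 20x20 noise grid -> 320x320 texture
print(small.shape, large.shape)
```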
MFF-Net: Deepfake Detection Network Based on Multi-Feature Fusion
2021
Entropy
Specifically, it consists of four key components: (1) a feature extraction module to further extract textural and frequency information using the Gabor convolution and residual attention blocks; (2) a ...
Recently, many deepfake detection methods based on forged features have been proposed. Among the popular forged features, textural features are widely used. ...
Transposed convolution (also known as deconvolution) and nearest-neighbor interpolation are often used in upsampling modules. ...
doi:10.3390/e23121692
pmid:34945998
pmcid:PMC8700337
fatcat:5iiz3wt3r5hy3bajmiisbq4bk4
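The Gabor-convolution idea mentioned in the snippet amounts to convolving the input with a fixed bank of Gabor kernels at several orientations to emphasize textural information. The kernel parameters below are illustrative assumptions, not MFF-Net's.

```python
import math
import torch
import torch.nn.functional as F

def gabor_kernel(theta, ksize=11, sigma=3.0, lam=6.0, gamma=0.5):
    half = ksize // 2
    y, x = torch.meshgrid(torch.arange(-half, half + 1, dtype=torch.float32),
                          torch.arange(-half, half + 1, dtype=torch.float32),
                          indexing="ij")
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    return (torch.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * torch.cos(2 * math.pi * xr / lam))

# four orientations -> a (4, 1, 11, 11) filter bank applied to a grayscale image
bank = torch.stack([gabor_kernel(i * math.pi / 4) for i in range(4)]).unsqueeze(1)
image = torch.rand(1, 1, 64, 64)
texture_maps = F.conv2d(image, bank, padding=5)
print(texture_maps.shape)   # torch.Size([1, 4, 64, 64])
```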
Learning Hybrid Sparsity Prior for Image Restoration: Where Deep Learning Meets Sparse Coding
[article]
2018
arXiv
pre-print
To manage the computational complexity, we have developed a novel framework of implementing hybrid structured sparse coding processes by deep convolutional neural networks. ...
Specifically, a structured sparse prior is learned from extrinsic training data via a deep convolutional neural network (in a similar way to previous learning-based approaches); meantime another structured ...
... y with zero-padding followed by convolution with the transposed Gaussian filter. ...
arXiv:1807.06920v2
fatcat:ylmxawajorc75bjd3hiiwrynqu
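The fragment "y with zero-padding followed by convolution with the transposed Gaussian filter" describes the standard way to apply the adjoint of a "Gaussian blur + downsample" operator: fill the low-resolution observation back onto the high-resolution grid with zeros, then convolve with the flipped kernel. A minimal sketch with an assumed scale factor and kernel width:

```python
import torch
import torch.nn.functional as F

def gaussian1d(size=9, sigma=1.6):
    x = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

g = gaussian1d()
kernel = torch.outer(g, g)[None, None]        # separable 2-D Gaussian, shape (1, 1, 9, 9)
scale = 2

y = torch.rand(1, 1, 32, 32)                  # low-resolution observation

# 1) zero-filling up to the high-resolution grid
up = torch.zeros(1, 1, 32 * scale, 32 * scale)
up[..., ::scale, ::scale] = y

# 2) convolution with the transposed (flipped) Gaussian filter
flipped = torch.flip(kernel, dims=(-2, -1))   # symmetric here, flipped for clarity
ht_y = F.conv2d(up, flipped, padding=kernel.shape[-1] // 2)
print(ht_y.shape)                             # torch.Size([1, 1, 64, 64])
```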
Equivariant Neural Rendering
[article]
2020
arXiv
pre-print
Through experiments, we show that our model achieves compelling results on these datasets as well as on standard ShapeNet benchmarks. ...
Our formulation allows us to infer and render scenes in real time while achieving comparable results to models requiring minutes for inference. ...
Acknowledgements We thank Shuangfei Zhai, Walter Talbott and Leon Gatys for useful discussions. We also thank Lilian Liang and Leon Gatys for help with running compute jobs. ...
arXiv:2006.07630v2
fatcat:3qw3ibdttjgqfpnvmbsdgjb7kq
A Lightweight Music Texture Transfer System
[article]
2021
arXiv
pre-print
However, present methods for music feature transfer using neural networks are far from practical application. ...
In this paper, we initiate a novel system for transferring the texture of music, and release it as an open source project. ...
Byproduct: Audio Texture Synthesis The task of audio texture synthesis is to extract standalone texture feature from target audio, which is useful in sound restoration and audio classification. ...
arXiv:1810.01248v3
fatcat:4k7gy4kywjap3nrzrhbl4vxeuy
High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction
2019
IEEE Transactions on Pattern Analysis and Machine Intelligence
In contrast, we formulate light field super-resolution (LFSR) as tensor restoration and develop a learning framework based on a two-stage restoration with 4-dimensional (4D) convolution. ...
To train a feasible network, we propose a novel normalization operation based on a group of views in the feature maps, design a stage-wise loss function, and develop the multi-range training strategy to ...
We use four 4D convolution kernels to generate the LR feature map with 4 channels (denoted by 4 main colors in step 2 ). ...
doi:10.1109/tpami.2019.2945027
pmid:31581075
fatcat:jw3nlt237zbdlaoyn4udcnstja
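The snippet's "four 4D convolution kernels" operate on a light field with two angular and two spatial dimensions. PyTorch has no native 4-D convolution, but one can be assembled from 3-D convolutions by slicing the kernel along one angular axis, as in the generic sketch below (stride 1 and symmetric padding assumed; this is not the paper's implementation).

```python
import torch
import torch.nn.functional as F

def conv4d(x, weight, padding=(1, 1, 1, 1)):
    """x: (N, C_in, U, V, H, W); weight: (C_out, C_in, kU, kV, kH, kW); stride 1."""
    n, c_in, u, v, h, w = x.shape
    c_out, _, ku, kv, kh, kw = weight.shape
    pu, pv, ph, pw = padding

    # pad the first angular axis by hand; conv3d pads the remaining three axes
    x = F.pad(x, (0, 0, 0, 0, 0, 0, pu, pu))
    u_out = u + 2 * pu - ku + 1

    out = None
    for i in range(ku):
        # one slice of the 4-D kernel -> an ordinary 3-D convolution over (V, H, W)
        slab = x[:, :, i:i + u_out].permute(0, 2, 1, 3, 4, 5)
        slab = slab.reshape(n * u_out, c_in, v, h, w)
        y = F.conv3d(slab, weight[:, :, i], padding=(pv, ph, pw))
        out = y if out is None else out + y

    v_out, h_out, w_out = out.shape[2:]
    return out.reshape(n, u_out, c_out, v_out, h_out, w_out).permute(0, 2, 1, 3, 4, 5)

lf = torch.randn(1, 1, 5, 5, 32, 32)        # (N, C, U, V, H, W) light field
kernels = torch.randn(4, 1, 3, 3, 3, 3)     # four 4-D kernels -> a 4-channel feature map
print(conv4d(lf, kernels).shape)            # torch.Size([1, 4, 5, 5, 32, 32])
```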
ResViT: Residual vision transformers for multi-modal medical image synthesis
[article]
2022
arXiv
pre-print
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. ...
However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. ...
Resolution of g_j is increased to match the size of the input feature maps via an upsampling block US based on transposed convolutions: g_j ∈ R^{N_C, H, W} = US(g_j) (9), where g_j ∈ R^{N_C, H, W} are the upsampled ...
arXiv:2106.16031v3
fatcat:2tsit33c2nhbfo7ejgg7cwxbze
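Equation (9) in the snippet applies an upsampling block US(·) built from transposed convolutions to bring g_j back to the encoder's spatial size. A minimal sketch of such a block, with assumed channel counts and two 2× stages rather than ResViT's exact configuration:

```python
import torch
import torch.nn as nn

class US(nn.Module):
    """Upsample a low-resolution feature map back to the input feature-map size."""
    def __init__(self, channels: int, stages: int = 2):
        super().__init__()
        layers = []
        for _ in range(stages):   # each stage doubles H and W
            layers += [nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(channels),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, g_j):
        return self.body(g_j)

g_j = torch.randn(1, 256, 16, 16)   # low-resolution feature map
print(US(256)(g_j).shape)           # torch.Size([1, 256, 64, 64])
```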
Showing results 1 — 15 out of 540 results