EMOCA: Emotion Driven Monocular Face Capture and Animation
[article]
2022
arXiv
pre-print
Unfortunately, the best recent methods that regress parametric 3D face models from monocular images are unable to capture the full spectrum of facial expression, such as subtle or extreme emotions. ...
On the task of in-the-wild emotion recognition, our purely geometric approach is on par with the best image-based methods, highlighting the value of 3D geometry in analyzing human behavior. ...
Disclosure: MJB has received research gift funds from Adobe, Intel, Nvidia, Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. ...
arXiv:2204.11312v1
fatcat:hjveioqpkrepvh62gukxql6dn4
BANMo: Building Animatable 3D Neural Models from Many Casual Videos
[article]
2021
arXiv
pre-print
BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many monocular casual videos in a differentiable rendering framework. ...
On real and synthetic datasets, BANMo shows higher-fidelity 3D reconstructions than prior works for humans and animals, with the ability to render realistic images from novel viewpoints and poses. ...
Three-D Safari: Learning to estimate zebra pose, shape, and texture from images "in the wild". ...
arXiv:2112.12761v2
fatcat:creiz2vswzdozoghhury7g5aza
Vid2Actor: Free-viewpoint Animatable Person Synthesis from Video in the Wild
[article]
2020
arXiv
pre-print
Given an "in-the-wild" video of a person, we reconstruct an animatable model of the person in the video. ...
The output model can be rendered in any body pose to any camera view, via the learned controls, without explicit 3D mesh reconstruction. ...
We thank all of the photo owners for allowing us to use their photos; photo credits are given in each figure. ...
arXiv:2012.12884v1
fatcat:iuxduntxtnbppdobkelkqdoauy
ARCH: Animatable Reconstruction of Clothed Humans
[article]
2020
arXiv
pre-print
In contrast, ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. ...
In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. ...
Figure 1. Given an image of a subject in arbitrary pose (left), ARCH creates an accurate and animatable avatar with detailed clothing (center). ...
arXiv:2004.04572v2
fatcat:hu5szk3gh5aglnmh2set5o3lwy
HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars
[article]
2021
arXiv
pre-print
In addition, we learn 2D textural features that are fused with rendered volumetric features in image space. ...
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface. ...
MetaAvatar: Learning animatable clothed human models from few depth images. arXiv. ...
arXiv:2112.10203v1
fatcat:fypyninknbfopb73m343dmeoua
ARCH: Animatable Reconstruction of Clothed Humans
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
In contrast, ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. ...
In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. ...
Given an image of a subject in arbitrary pose (left), ARCH creates an accurate and animatable avatar with detailed clothing (center). ...
doi:10.1109/cvpr42600.2020.00316
dblp:conf/cvpr/HuangXL0T20
fatcat:pmq4ncbw7bbpnigrgkiclstqqm
Real-Time Neural Character Rendering with Pose-Guided Multiplane Images
[article]
2022
arXiv
pre-print
We propose pose-guided multiplane image (MPI) synthesis which can render an animatable character in real scenes with photorealistic quality. ...
Our method generalizes the image-to-image translation paradigm, which translates the human pose to a 3D scene representation -- MPIs that can be rendered from free viewpoints, using the multi-view captures ...
(a) The device we use to capture data for building an animatable character in a real scene. (b) The built character is controllable. ...
arXiv:2204.11820v1
fatcat:svrgg7sr2vgdfpua74rgo6cqge
3D Clothed Human Reconstruction in the Wild
[article]
2022
arXiv
pre-print
Although much progress has been made in 3D clothed human reconstruction, most of the existing methods fail to produce robust results from in-the-wild images, which contain diverse human poses and appearances ...
This is mainly due to the large domain gap between training datasets and in-the-wild datasets. The training datasets are usually synthetic ones, which contain rendered images from GT 3D scans. ...
We present ClothWild, which reconstructs robust 3D clothed humans from a single in-the-wild image. ...
arXiv:2207.10053v1
fatcat:hzbfj5uuvnfyrizy2znbswcati
Collaborative Regression of Expressive Bodies using Moderation
[article]
2021
arXiv
pre-print
To get the best of both worlds, we introduce PIXIE, which produces animatable, whole-body 3D avatars with realistic facial detail, from a single image. For this, PIXIE uses two key observations. ...
Recovering expressive humans from images is essential for understanding human behavior. Methods that estimate 3D bodies, faces, or hands have progressed significantly, yet separately. ...
Conclusion We present PIXIE, a novel expressive whole-body reconstruction method that recovers an animatable 3D avatar with a detailed face from a single RGB image. ...
arXiv:2105.05301v2
fatcat:egno6wwzp5h7hbh7raz7uvpuvi
Pose-Guided Human Animation from a Single Image in the Wild
[article]
2021
arXiv
pre-print
Each modular network is explicitly dedicated to a subtask that can be learned from the synthetic data. ...
The unified representation provides an incomplete yet strong guidance to generating the appearance in response to the pose change. ...
Acknowledgement Christian Theobalt, Vladislav Golyanik, Kripasindhu Sarkar were supported by the ERC Consolidator Grant 4DRepLy (770784). ...
arXiv:2012.03796v2
fatcat:7s2y4ylm2rff3cime4gd5bvygy
Pose-Guided Human Animation from a Single Image in the Wild
2021
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Figure 1. We present a new method to synthesize a sequence of animated human images from a single image. ...
The synthesized images are controlled by the poses as shown in the inset image. ...
a 3D animatable textured human model. ...
doi:10.1109/cvpr46437.2021.01479
fatcat:ktqgcnqbybeljjnsmcgbpqvz7i
Structure-aware Editable Morphable Model for 3D Facial Detail Animation and Manipulation
[article]
2022
arXiv
pre-print
Morphable models are essential for the statistical modeling of 3D faces. Previous works on morphable models mostly focus on large-scale facial geometry but ignore facial details. ...
This paper augments morphable models in representing facial details by learning a Structure-aware Editable Morphable Model (SEMM). ...
[25] uses an encoder-decoder to reconstruct an animatable detailed face from a single image. ...
arXiv:2207.09019v1
fatcat:ihdrmubnqvdfxdnztk7ogwtwv4
NeuMan: Neural Human Radiance Field from a Single Video
[article]
2022
arXiv
pre-print
We propose a novel framework to reconstruct the human and the scene that can be rendered with novel human poses and views from just a single in-the-wild video. ...
Our method is able to learn subject-specific details, including cloth wrinkles and accessories, from just a 10-second video clip, and to provide high-quality renderings of the human under novel poses, ...
Our framework learns an animatable human model with realistic details. ...
arXiv:2203.12575v1
fatcat:d5rri2gfnrh2rl5u675sj6voje
AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs
2021
IEEE Transactions on Pattern Analysis and Machine Intelligence
Over the last years, many face analysis tasks have accomplished astounding performance, with applications including face generation and 3D face reconstruction from a single "in-the-wild" image. ...
In this work, we introduce the first method that is able to reconstruct photorealistic render-ready 3D facial geometry and BRDF from a single "in-the-wild" image. ...
AG acknowledges funding by the EPSRC Early Career Fellowship (EP/N006259/1) and SZ from a Google Faculty Fellowship and the EPSRC Fellowship DEFORM (EP/S010203/1). ...
doi:10.1109/tpami.2021.3125598
pmid:34748477
fatcat:uczq5uykbbelfmfpxjl3e4ruia
LISA: Learning Implicit Shape and Appearance of Hands
[article]
2022
arXiv
pre-print
The model can capture accurate hand shape and appearance, generalize to arbitrary hand subjects, provide dense surface correspondences, be reconstructed from images in the wild and easily animated. ...
For a 3D point in the hand local coordinate, our model predicts the color and the signed distance with respect to each hand bone independently, and then combines the per-bone predictions using predicted ...
Acknowledgements: This work is supported in part by the Spanish government with the project MoHuCo PID2020-120049RB-I00. ...
arXiv:2204.01695v1
fatcat:53v7sunsjze6ndb5k2exeqz6x4
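The LISA snippet above describes predicting a signed distance and color per hand bone and combining the per-bone outputs with predicted weights. The following is a minimal, hypothetical Python sketch of that blending idea; the function name, shapes, and normalization are illustrative assumptions, not the authors' implementation.

import numpy as np

def blend_per_bone_predictions(sdf_per_bone, color_per_bone, weights):
    # sdf_per_bone:   (B,)   signed distance predicted independently per bone
    # color_per_bone: (B, 3) RGB color predicted independently per bone
    # weights:        (B,)   predicted blend weights for the query point
    # Returns a single blended signed distance and color.
    weights = weights / (weights.sum() + 1e-8)   # normalize, guard against zero
    sdf = float(np.dot(weights, sdf_per_bone))   # weighted sum of distances
    color = weights @ color_per_bone             # weighted sum of colors
    return sdf, color

# Toy usage with three "bones" at one query point.
sdf, color = blend_per_bone_predictions(
    sdf_per_bone=np.array([0.02, -0.01, 0.05]),
    color_per_bone=np.array([[0.8, 0.6, 0.5],
                             [0.7, 0.5, 0.4],
                             [0.9, 0.7, 0.6]]),
    weights=np.array([0.2, 0.7, 0.1]),
)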
Showing results 1 — 15 out of 83 results