Semantically-aware blendshape rigs from facial performance measurements
2016
SA '16: SIGGRAPH ASIA 2016 Technical Briefs
Abstract: We present a framework for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. ...
Figure 1: An artist-created blendshape model is adapted to 63 corresponded facial performance measurements by non-rigidly deforming 35 basis shapes. ...
Facial Capture and Correspondence Our blendshape personalization (described in Section 3) requires, as input, performance measurements of the individual towards which the blendshapes will be optimized. ...
doi:10.1145/3005358.3005378
fatcat:hcdkpugjqvg6tecsulkb5ywhou
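For context, the delta blendshape rigs being personalized here express a face as a neutral mesh plus a weighted sum of basis-shape offsets. A minimal sketch of that formula and a least-squares weight fit to a single measurement (synthetic shapes, not the paper's data; the paper additionally deforms the basis shapes themselves non-rigidly):

```python
import numpy as np

rng = np.random.default_rng(0)
V = 500                                     # vertex count (hypothetical)
neutral = rng.normal(size=(V, 3))           # neutral face mesh
basis = 0.1 * rng.normal(size=(35, V, 3))   # 35 basis-shape offsets, matching the rig above

def blend(weights):
    """Delta blendshape formula: face = neutral + sum_i w_i * b_i."""
    return neutral + np.tensordot(weights, basis, axes=1)

# Fit weights to one performance measurement by linear least squares.
measurement = blend(rng.uniform(0, 1, size=35))   # stand-in for a captured, corresponded mesh
A = basis.reshape(35, -1).T                       # (3V, 35) design matrix
b = (measurement - neutral).ravel()
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.abs(blend(w) - measurement).max())       # ~0: the rig reproduces the measurement
```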
Facial Expression Retargeting from Human to Avatar Made Easy
[article]
2020
arXiv
pre-print
Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation. ...
Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces. ...
Consequently, we train a domain translation network [45], [46] using user-annotated triplets (see Fig. 4) instead of semantic-aware blendshapes. ...
arXiv:2008.05110v1
fatcat:nt5cq5ygbfhazkqx3sylcfljmi
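The user-annotated triplets mentioned above are the natural input to a metric-learning objective; a minimal sketch with PyTorch's built-in triplet margin loss (the embedding network, dimensions, and data are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Placeholder embedding network over expression parameters (dimension assumed).
embed = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 16))
loss_fn = nn.TripletMarginLoss(margin=1.0)

# anchor: a human expression; positive/negative: avatar expressions a user
# annotated as more / less similar to the anchor.
anchor, positive, negative = (torch.randn(8, 50) for _ in range(3))
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
```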
Investigating perceptually based models to predict importance of facial blendshapes
2020
Motion, Interaction and Games
Blendshape facial rigs are used extensively in the industry for facial animation of virtual humans. ...
However, storing and manipulating large numbers of facial meshes is costly in terms of memory and computation for gaming applications, yet the relative perceptual importance of blendshapes has not yet ...
The most relevant optimisation method for this paper would be blendshape reduction, either removing blendshapes from a rig or from an animation. ...
doi:10.1145/3424636.3426904
dblp:conf/mig/CarriganZDM20
fatcat:ash5q52nnranlmkmljk36xvse4
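Blendshape reduction in its simplest form scores each shape and drops the least important ones; a toy sketch using a purely geometric score (the paper's point is precisely that perceptual importance can diverge from such heuristics, so treat this as a baseline, not their model):

```python
import numpy as np

rng = np.random.default_rng(1)
basis = rng.normal(size=(100, 500, 3)) * rng.uniform(0, 1, size=(100, 1, 1))
weights = rng.uniform(0, 1, size=(2000, 100))        # an animation: frames x shapes

# Heuristic importance: peak activation times the shape's displacement magnitude.
importance = weights.max(axis=0) * np.linalg.norm(basis, axis=(1, 2))
keep = np.argsort(importance)[::-1][:40]             # keep the 40 top-ranked shapes
reduced_basis, reduced_weights = basis[keep], weights[:, keep]

# Per-vertex error introduced by the reduction (RMS over the whole animation).
full = np.tensordot(weights, basis, axes=1)
reduced = np.tensordot(reduced_weights, reduced_basis, axes=1)
print(np.sqrt(((full - reduced) ** 2).mean()))
```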
Model for predicting perception of facial action unit activation using virtual humans
2021
Computers & graphics
Blendshape facial rigs are used extensively in the industry for facial animation of virtual humans. ...
Blendshape rigs comprise sets of semantically meaningful expressions, which govern how expressive the character will be, often based on Action Units from the Facial Action Coding System (FACS). ...
The most relevant optimisation method for this paper would be blendshape reduction, either removing blendshapes from a rig or from an animation. ...
doi:10.1016/j.cag.2021.07.022
fatcat:hhalaiob4nbfxkg7fkvncoz4gi
Realtime Dynamic 3D Facial Reconstruction for Monocular Video In-the-Wild
2017
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
It also allows users to move around and perform facial expressions freely without degrading the reconstruction quality. ...
Our method can track the deforming facial geometry and reconstruct external objects that protrude from the face such as glasses and hair. ...
The research leading to these results has received funding from the People Programme ( ...
doi:10.1109/iccvw.2017.97
dblp:conf/iccvw/LiuWYZ17
fatcat:ulhg7jjizfdzvc3zsasjjec6xy
I M Avatar: Implicit Morphable Head Avatars from Videos
[article]
2022
arXiv
pre-print
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields. ...
A key contribution is our novel analytical gradient formulation that enables end-to-end training of IMavatars from videos. ...
Here, we present a new approach that can recover a higher-fidelity facial rig than prior work, and is controlled by expression blendshapes as well as jaw, neck, and eye pose parameters. ...
arXiv:2112.07471v5
fatcat:px4mx4itbrhsvfhrkzcgfkhuue
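Concretely, combining expression blendshapes with pose parameters usually means offsetting canonical points by blendshapes and then transforming them by a weighted blend of bone transforms. A minimal linear-blend-skinning sketch (the transforms and weights below are arbitrary placeholders, not IMavatar's learned skinning fields):

```python
import numpy as np

def lbs(points, skin_weights, transforms):
    """Linear blend skinning: x' = sum_b w_b * (R_b x + t_b)."""
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    per_bone = np.einsum('bij,nj->bni', transforms, homog)               # (B, N, 3)
    return np.einsum('nb,bni->ni', skin_weights, per_bone)

rng = np.random.default_rng(2)
canonical = rng.normal(size=(100, 3))
expr_offsets = 0.05 * rng.normal(size=(100, 3))  # blendshape displacement at weight 1
bones = np.tile(np.eye(3, 4), (3, 1, 1))         # 3 bones (e.g. jaw, neck, eyes)
bones[:, :, 3] = 0.1 * rng.normal(size=(3, 3))   # small translations
skin_w = rng.dirichlet(np.ones(3), size=100)     # per-point skinning weights
deformed = lbs(canonical + expr_offsets, skin_w, bones)
```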
Audiovisual Speech Synthesis using Tacotron2
[article]
2021
arXiv
pre-print
We analyze the performance of the two systems and compare them to the ground truth videos using subjective evaluation tests. ...
The reconstructed acoustic speech signal is then used to drive the facial controls of the face model using an independently trained audio-to-facial-animation neural network. ...
In particular, we use an extension of the example-based facial rigging method of [21], where we modify a generic blendshape model to best match the talent's example facial expressions. ...
arXiv:2008.00620v2
fatcat:cmww55eotffpjp6nwkl5kgmme4
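Example-based rigging of the kind cited as [21] solves for personalized blendshapes that reproduce the talent's example expressions while staying close to the generic model. A ridge-regularized least-squares sketch of that core step (synthetic stand-ins throughout; the actual method also uses gradient-space regularization, omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, D = 30, 10, 1500               # blendshapes, example expressions, 3*V coords
B0 = rng.normal(size=(M, D))         # generic blendshape deltas
W = rng.uniform(0, 1, size=(K, M))   # activation weights assumed known per example
E = rng.normal(size=(K, D))          # example expression deltas (neutral subtracted)

lam = 0.5  # how strongly the solution is pulled toward the generic model
# Minimize ||W B - E||^2 + lam * ||B - B0||^2; closed form via normal equations.
B = np.linalg.solve(W.T @ W + lam * np.eye(M), W.T @ E + lam * B0)
```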
Touch sensing on non-parametric rear-projection surfaces: A physical-virtual head for hands-on healthcare training
2015
2015 IEEE Virtual Reality (VR)
sheet and measurements from the rig. ...
The mesh is textured using one unwrapped image and then rigged using a combination of bones and blendshapes for various animations. ...
doi:10.1109/vr.2015.7223326
dblp:conf/vr/HochreiterDNGW15
fatcat:tphlb73lyfc6xaxplyx4r4qpnm
Leveraging Deepfakes to Close the Domain Gap between Real and Synthetic Images in Facial Capture Pipelines
[article]
2022
arXiv
pre-print
The resulting model is fit with an animation rig, which is then used to track facial performances. ...
We propose an end-to-end pipeline for both building and tracking 3D facial models from personalized in-the-wild (cellphone, webcam, YouTube clips, etc.) video data. ...
GPU which was used to run experiments, and Epic Games for their help with the Metahuman rig. ...
arXiv:2204.10746v2
fatcat:4xdsb657pzaavk6tixma7jhrau
MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
2017
2017 IEEE International Conference on Computer Vision (ICCV)
Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. ...
The trained encoder predicts these parameters from a single monocular image, all at once. ...
In [14], high-quality 3D face rigs are obtained from monocular RGB video based on a multi-layer model. Even real-time facial reconstruction and reenactment have been achieved [54, 20]. ...
doi:10.1109/iccv.2017.401
dblp:conf/iccv/TewariZK0BPT17
fatcat:sxazzew3uva2jgmnvydkx5fwim
MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
2017
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. ...
The trained encoder predicts these parameters from a single monocular image, all at once. ...
In [14], high-quality 3D face rigs are obtained from monocular RGB video based on a multi-layer model. Even real-time facial reconstruction and reenactment have been achieved [52, 20]. ...
doi:10.1109/iccvw.2017.153
dblp:conf/iccvw/TewariZK0BPT17
fatcat:fduddae62zfi7l4ptey6ao5ase
MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
[article]
2017
arXiv
pre-print
Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. ...
Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. ...
In [14], high-quality 3D face rigs are obtained from monocular RGB video based on a multi-layer model. Even real-time facial reconstruction and reenactment have been achieved [54, 20]. ...
arXiv:1703.10580v2
fatcat:cneur2vcw5hkfefi3dukcpk4xm
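The shared idea across these three records, an encoder that regresses semantic model parameters and a fixed differentiable decoder that turns them back into an observation, fits in a few lines. In this sketch the decoder is a toy linear morphable model over flattened coordinates rather than MoFA's image-formation renderer (all sizes and data are hypothetical):

```python
import torch
import torch.nn as nn

D_IN, N_PARAMS = 1500, 80            # flattened observation size, semantic code size
encoder = nn.Sequential(nn.Linear(D_IN, 256), nn.ReLU(), nn.Linear(256, N_PARAMS))
basis = 0.1 * torch.randn(N_PARAMS, D_IN)   # fixed differentiable "decoder" (toy 3DMM)
mean = torch.randn(D_IN)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(100):                 # unsupervised: reconstruction loss only
    obs = mean + torch.randn(16, N_PARAMS) @ basis   # synthetic observations
    code = encoder(obs)              # semantically structured parameters
    recon = mean + code @ basis      # decode through the fixed model
    loss = ((recon - obs) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the decoder is fixed and interpretable, the encoder is forced to land in the model's parameter space, which is what makes the predicted parameters semantically meaningful.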
Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement
[article]
2021
arXiv
pre-print
This allows us to generate detailed but normalized facial assets. ...
We introduce a highly robust GAN-based framework for digitizing a normalized 3D avatar of a person from a single unconstrained photo. ...
L_LPIPS is the perceptual loss measured by LPIPS distance [74] between I_0 and I, which enables improved matching in terms of robustness and better preservation of semantically meaningful facial features ...
arXiv:2106.11423v1
fatcat:hf6m2dur3fdlvna322wmtrqb54
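The LPIPS term in this snippet is available off the shelf; a minimal usage sketch, assuming the `lpips` PyPI package (inputs are (N, 3, H, W) tensors scaled to [-1, 1]):

```python
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')            # AlexNet-backed perceptual distance
img0 = torch.rand(1, 3, 256, 256) * 2 - 1    # e.g. rendered avatar
img1 = torch.rand(1, 3, 256, 256) * 2 - 1    # e.g. target photo
d = loss_fn(img0, img1)                      # differentiable, so usable as a loss term
print(d.item())
```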
MakeItTalk: Speaker-Aware Talking-Head Animation
[article]
2020
arXiv
pre-print
Another key component of our method is the prediction of facial landmarks reflecting speaker-aware dynamics. ...
We present a method that generates expressive talking heads from a single facial image with audio as the only input. ...
This research is partially funded by NSF (EAGER-1942069) and a gift from Adobe. ...
arXiv:2004.12992v2
fatcat:ltuptu6knrh5zlyn5bkfvjl7ee
Practical Face Reconstruction via Differentiable Ray Tracing
[article]
2021
arXiv
pre-print
ray-tracing based novel face reconstruction approach where scene attributes - 3D geometry, reflectance (diffuse, specular and roughness), pose, camera parameters, and scene illumination - are estimated from ...
To estimate the face attributes consistently and with practical semantics, a two-stage optimization strategy systematically uses a subset of parametric attributes, where subsequent attribute estimations ...
self-shadows aware. • A robust optimization strategy that extracts semantically meaningful personalized face attributes from unconstrained images (Sec. 4). ...
arXiv:2101.05356v1
fatcat:3zy4i5jdwrc2plxy2rhuxv2d7e
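A two-stage strategy of this kind, where one subset of attributes is optimized while the rest stay fixed, maps directly onto toggling which parameters each optimizer sees. A schematic PyTorch sketch with placeholder attribute tensors and a placeholder loss (names illustrative, not the paper's parameterization):

```python
import torch

geometry = torch.zeros(100, requires_grad=True)
pose = torch.zeros(6, requires_grad=True)
reflectance = torch.zeros(50, requires_grad=True)
illumination = torch.zeros(27, requires_grad=True)  # e.g. 9 SH coeffs x RGB

def loss():
    # Stand-in for the differentiable ray-traced photometric loss.
    return ((geometry.sum() - 1) ** 2 + pose.pow(2).sum()
            + (reflectance.sum() - 2) ** 2 + illumination.pow(2).sum())

# Stage 1: coarse attributes; Stage 2: appearance attributes, geometry held fixed.
for params in ([geometry, pose], [reflectance, illumination]):
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(200):
        opt.zero_grad(); l = loss(); l.backward(); opt.step()
```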
Showing results 1–15 of 33.