Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control
2018
2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA)
Given the complexity of human facial anatomy, animating facial expressions and lip movements for speech is a time-consuming and tedious task. In this paper, a new text-to-animation framework for facial animation synthesis is proposed. The core idea is to improve the expressiveness of lip-sync animation by incorporating facial expressions into 3D animated characters. This idea is realized as a plug-in for Autodesk Maya, one of the most popular animation platforms in the industry, such that …
doi:10.1109/skima.2018.8631536
dblp:conf/skima/StefPSH18
fatcat:r6ie6h7ttffe3cligpjz53266q