
SceneMaker: Multimodal Visualisation of Natural Language Film Scripts [chapter]

Eva Hanser, Paul Mc Kevitt, Tom Lunney, Joan Condell, Minhua Ma
2010 Lecture Notes in Computer Science  
During the generation of the story content, SceneMaker gives particular attention to emotional aspects and their reflection in the fluency and manner of actions, body posture, facial expressions, speech, scene  ...  Our proposed software system, SceneMaker, aims to facilitate this creative process by automatically interpreting natural language film scripts and generating multimodal, animated scenes from them.  ...  Based on the analysis of the linguistics and context of dialogue scripts, appropriate Multimodal Presentation Mark-up Language (MPML) [13] annotations are automatically added to model speech synthesis, facial  ... 
doi:10.1007/978-3-642-15384-6_46 fatcat:5tfjwz664nfctoeqrdbwaapojq
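The snippet above describes automatically adding MPML-style annotations to dialogue script lines. A minimal sketch of that idea follows; the tag and attribute names here are illustrative placeholders, not the actual MPML schema.

```python
# Sketch: wrap a parsed script line in MPML-style multimodal annotations.
# Tag/attribute names ("scene", "agent", "expression", "speak") are
# hypothetical placeholders, not taken from the real MPML specification.
import xml.etree.ElementTree as ET

def annotate_line(character: str, dialogue: str, emotion: str) -> str:
    """Wrap a dialogue line with speech and facial-expression tags."""
    scene = ET.Element("scene")
    agent = ET.SubElement(scene, "agent", name=character)
    # Facial expression inferred from the detected emotion of the line.
    face = ET.SubElement(agent, "expression", type=emotion)
    speak = ET.SubElement(face, "speak")
    speak.text = dialogue
    return ET.tostring(scene, encoding="unicode")

print(annotate_line("ALICE", "I never expected this.", "surprise"))
```

A real pipeline of this kind would derive the `emotion` value from linguistic analysis of the script rather than take it as an argument.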

SceneMaker: Automatic Visualisation of Screenplays [chapter]

Eva Hanser, Paul Mc Kevitt, Tom Lunney, Joan Condell
2009 Lecture Notes in Computer Science  
Our proposed software system, SceneMaker, aims to facilitate the production of plays, films or animations by automatically interpreting natural language film scripts and generating multimodal, animated  ...  During the generation of the story content, SceneMaker will give particular attention to emotional aspects and their reflection in fluency and manner of actions, body posture, facial expressions, speech  ...  The Expression Mark-up Language (EML) [22] integrates environmental expressions like cinematography, illumination and music into the emotion synthesis of virtual humans.  ... 
doi:10.1007/978-3-642-04617-9_34 fatcat:5klaf4aq5bdermvee4yj4vjbkm

SceneMaker: Intelligent Multimodal Visualisation of Natural Language Scripts [chapter]

Eva Hanser, Paul Mc Kevitt, Tom Lunney, Joan Condell
2010 Lecture Notes in Computer Science  
...  body language, facial expressions, speech, voice pitch, scene composition, timing, lighting, music and camera.  ...  SceneMaker will also enable 3D animation editing via web-based and mobile platforms.  ...  The XML-style mark-up language provides a tagging scheme for controlling predefined motions of a character, generating scripted descriptions of animated virtual characters which can be run in a web  ... 
doi:10.1007/978-3-642-17080-5_17 fatcat:w23elmcseja3lfycqehk5awdju

SceneMaker: Creative Technology for Digital StoryTelling [chapter]

Murat Akser, Brian Bridges, Giuliano Campo, Abbas Cheddad, Kevin Curran, Lisa Fitzpatrick, Linley Hamilton, John Harding, Ted Leath, Tom Lunney, Frank Lyons, Minhua Ma (+12 others)
2017 Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering  
SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing  ...  SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays.  ...  to control the affective state and body language of the characters.  ... 
doi:10.1007/978-3-319-55834-9_4 fatcat:l6iie5vycng7nga4x2f7eeoxh4

14. Semi-autonomous avatars: A new direction for expressive user embodiment [chapter]

Marco Gillies, Daniel Ballin, Xueni Pan, Neil A. Dodgson
2008 Advances in Consciousness Research  
The authors would like to thank the members of the Cambridge University Computer Lab Rainbow research group, the Radical Multimedia Lab, UCL Virtual Environments and Computer Graphics group, Mel Slater  ...  The rest of this work was carried out at BT and at University College London, funded by the UK Engineering and Physical Sciences Research Council.  ...  Their GESTYLE language provides four levels of mark-up for specifying differences in the style of non-verbal communication between virtual characters.  ... 
doi:10.1075/aicr.74.17gil fatcat:eom2rvgqsrbjzckan3o5horm3m

Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation

Pengfei Lu, Matt Huenerfauth
2014 Computer Speech and Language  
While there is great potential for sign language animation generation software to improve the accessibility of information for deaf individuals with low written-language literacy, the understandability of current sign language animation systems is limited.  ...  and the "virtual skeleton" of the animated character being recorded could lead to "retargeting" errors; these errors manifest as body poses of the human that do not match the body poses of the virtual  ... 
doi:10.1016/j.csl.2013.10.004 fatcat:ua47b7cc3nanllc6o44dukw4du
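The snippet above mentions "retargeting" errors, where recorded human body poses fail to match the poses of the virtual skeleton. A minimal sketch of how such mismatches might be measured follows; the joint names, coordinates, and error threshold are illustrative assumptions, not taken from the paper.

```python
# Sketch: flag per-joint position mismatches between a recorded human pose
# and the corresponding retargeted virtual-skeleton pose. Joint names,
# coordinates (metres), and the threshold are illustrative assumptions.
import math

def retargeting_errors(human_pose, avatar_pose, threshold=0.05):
    """Return joints whose position error (in metres) exceeds threshold."""
    flagged = {}
    for joint, (hx, hy, hz) in human_pose.items():
        ax, ay, az = avatar_pose[joint]
        err = math.dist((hx, hy, hz), (ax, ay, az))
        if err > threshold:
            flagged[joint] = round(err, 3)
    return flagged

human = {"wrist_r": (0.42, 1.10, 0.05), "elbow_r": (0.30, 1.25, 0.02)}
avatar = {"wrist_r": (0.40, 1.02, 0.05), "elbow_r": (0.30, 1.24, 0.02)}
print(retargeting_errors(human, avatar))  # only the wrist mismatch is flagged
```

In practice such checks operate on joint angles as well as positions, since retargeting maps motion between skeletons with different bone lengths.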

Framework For Lithuanian Speech Animation

Romualdas Bausys, Ingrida Mazonaviciute
2010 Zenodo  
Publication in the conference proceedings of EUSIPCO, Aalborg, Denmark, 2010.  ...  mark-up languages for virtual characters.  ...  Cosi [7] proposed a facial animation toolkit, implemented in MATLAB, created mainly to speed up the procedure for building the LUCIA talking head through motion capture techniques translated to MPEG-4  ... 
doi:10.5281/zenodo.42014 fatcat:avpkit4zlfc6zkoyifp2bznwue

sEditor: A Prototype for a Sign Language Interfacing System

Beifang Yi, Xusheng Wang, Frederick C. Harris, Sergiu M. Dascalu
2014 IEEE Transactions on Human-Machine Systems  
We propose a sign language interfacing system, as a working platform, that can be used to create virtual human body parts, simulate virtual gestures, and construct, manage, and edit sign language linguistic  ...  The distinctive visual and spatial nature of sign languages makes it difficult to develop an interfacing system as a communication medium platform for sign language users.  ...  cameras, to select and place a character as the virtual sign from a character pool (male and female virtual characters in diverse races). c) Rendering Control: to render the virtual body in a chosen style  ... 
doi:10.1109/tsmc.2014.2316743 fatcat:qofkenge3rfldnofnxvxei77hi

Synthetic characters as multichannel interfaces

Elena Not, Koray Balci, Fabio Pianesi, Massimo Zancanaro
2005 Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05  
In this view, we propose SMIL-AGENT as a representation and scripting language for synthetic characters, which abstracts away from the specific implementation and context of use of the character.  ...  Synthetic characters are an effective modality to convey messages to the user, provide visual feedback about the system's internal understanding of the communication, and engage the user in the dialogue  ...  Facial Animation Parameters and Body Animation Parameters are used in MPEG-4 for parametrizing deformations of a face or a body with fewer parameters.  ... 
doi:10.1145/1088463.1088499 dblp:conf/icmi/NotBPZ05 fatcat:dkbwy5jxxzhzvebfhfilhyj2w4
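The snippet above notes that MPEG-4 parametrizes face and body deformations with a small number of parameters. The sketch below illustrates that general idea: a few named scalar parameters each scale a fixed displacement pattern over the mesh vertices. The parameter names and displacement data are illustrative, not taken from the MPEG-4 standard.

```python
# Sketch: low-parameter facial deformation in the spirit of MPEG-4 FAPs.
# Each named parameter scales a fixed per-vertex displacement pattern;
# names and numbers here are illustrative, not from the MPEG-4 standard.

def deform(neutral, params, patterns):
    """neutral: list of (x, y, z); patterns: {name: list of (dx, dy, dz)}."""
    out = [list(v) for v in neutral]
    for name, weight in params.items():
        for i, (dx, dy, dz) in enumerate(patterns[name]):
            out[i][0] += weight * dx
            out[i][1] += weight * dy
            out[i][2] += weight * dz
    return [tuple(v) for v in out]

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]          # two mouth-corner vertices
patterns = {"smile": [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0)]}
print(deform(neutral, {"smile": 0.5}, patterns))
# [(0.0, 0.05, 0.0), (1.0, 0.05, 0.0)]
```

The appeal of this representation, as the snippet suggests, is that an entire facial or body pose is described by a handful of scalars rather than thousands of vertex positions.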


Miran Mosmondor, Tomislav Kosutic, Igor S. Pandzic
2005 Proceedings of the 3rd international conference on Mobile systems, applications, and services - MobiSys '05  
The client has a user interface that allows the user to input a facial image and place a simple mask on it to mark the main features.  ...  After a quick manipulation on the phone, a 3D model of that face is created and can be animated simply by typing in some text.  ... 
doi:10.1145/1067170.1067173 dblp:conf/mobisys/MosmondorKP05 fatcat:aa5ckwv4znhhdeptrbzcndkr3a

Sign Language Avatars: A Question of Representation

Rosalee Wolfe, John C. McDonald, Thomas Hanke, Sarah Ebling, Davy Van Landuyt, Frankie Picron, Verena Krausneker, Eleni Efthimiou, Evita Fotinea, Annelies Braffort
2022 Information  
Because signed languages have no generally accepted written form, translating spoken to signed language requires the additional step of displaying the language visually as animation through the use of  ...  With the goal of developing a deeper understanding of the challenges posed by this question, this article gives a summary overview of the unique aspects of signed languages and briefly surveys the technology  ... 
doi:10.3390/info13040206 fatcat:vx42fiap5bekfh2gyvbc7csm7u

"Avatar to Person" (ATP) Virtual Human Social Ability Enhanced System for Disabled People

Zichun Guo, Zihao Wang, Xueguang Jin
2021 Wireless Communications and Mobile Computing  
The system has been proven effective in enhancing the sense of online social participation for people with disabilities through user tests.  ...  Nowadays, with the improvement of affective computing and big data, people have generally become accustomed to building social networks through social robots and smartphones.  ...  In this project, virtual avatars are introduced to help deaf and mute people overcome pronunciation difficulties through text-to-speech and facial expression animation.  ... 
doi:10.1155/2021/5098992 doaj:05f5c3ce757d4b899ab4199971caf7a3 fatcat:qr7d5nmm4bhuvjqtfgz4t4nt3m

The role of interoperability in virtual worlds, analysis of the specific cases of avatars

Blagica Jovanova, Marius Preda, Françoise Preteux
2009 Journal of Virtual Worlds Research  
Character Mark-up Language (CML) (Arafa, 2003) is an XML-based character attribute definition and animation scripting language designed to aid in the rapid incorporation of life-like characters/agents  ...  Virtual Human Markup Language (VHML) is designed to accommodate the various aspects of Human-Computer Interaction with regard to Facial Animation, Body Animation, Dialogue Manager interaction, Text to  ... 
doi:10.4101/jvwr.v2i3.672 fatcat:gcwamotpuvaknf3gsol4kveneq

Virtual agents for the production of linear animations

Rossana Damiano, Vincenzo Lombardo, Fabrizio Nunnari
2013 Entertainment Computing  
This paper proposes a novel approach to the automatic generation of character animations that draws inspiration from techniques for the construction of virtual agents.  ...  The pipeline for the production of animated scenes is based on a mapping between the authorial description of the characters' behaviour and the actual animation data.  ...  animate the characters, breaking down their actions and editing the animation curves through sophisticated graphical editors (which hide the math behind the 3D computation).  ... 
doi:10.1016/j.entcom.2013.06.001 fatcat:uhzde2jjezcjdmzcibxecolchi

Virtual character performance from speech

Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, Ari Shapiro
2013 Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation - SCA '13  
The character will perform semantically appropriate facial expressions and body movements that include gestures, lip synchronization to speech, head movements, saccadic eye movements, blinks and so forth  ...  We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance.  ...  for the entire body of a virtual character.  ... 
doi:10.1145/2485895.2485900 dblp:conf/sca/MarsellaXLFSS13 fatcat:72wfbwdugzft7nmbq5psem23zm