
Visualization and Animation of State Estimation Performance

A.P.S. Meliopoulos, G.J. Cokkinides, M. Ingram, S. Bell, S. Mathews
2005 Proceedings of the 38th Annual Hawaii International Conference on System Sciences
For any state estimator, it is important to develop techniques for monitoring the performance of the state estimator and identifying potential problems such as bad sensors, consistent errors, modeling  ...  The causes of this poor performance have been identified in earlier work by the authors and alternative robust state estimators have been proposed.  ...  Visualization methods provide a variety of state estimator performance quantities in 3-D or 2-D displays.  ...
doi:10.1109/hicss.2005.678 dblp:conf/hicss/MeliopoulosCIBM05 fatcat:nszwm7owazcpdkmaoditzjjh4a

Animation of generic 3D head models driven by speech

Lucas Terissi, Mauricio Cerda, Juan C. Gomez, Nancy Hitschfeld-Kahler, Bernard Girau, Renato Valenzuela
2011 IEEE International Conference on Multimedia and Expo
The resulting animation is evaluated in terms of intelligibility of visual speech through subjective tests, showing promising performance.  ...  Estimated visual speech features are used to animate a simple face model.  ...  Concerning the relative visual contribution of the animated avatar (C_V), the results in Table 1 indicate that the visual performance of the animated avatar reaches 59%-73% of the visual performance  ...
doi:10.1109/icme.2011.6011861 dblp:conf/icmcs/TerissiCGHGV11 fatcat:hhvcmkxr4ncw5kdcfq4f3oe5l4

A comprehensive system for facial animation of generic 3D head models driven by speech

Lucas D Terissi, Mauricio Cerda, Juan C Gómez, Nancy Hitschfeld-Kahler, Bernard Girau
2013 EURASIP Journal on Audio, Speech, and Music Processing  
Visual features are estimated from the speech signal based on the inversion of the AV-HMM. The estimated visual speech features are used to animate a simple face model.  ...  The resulting animation is evaluated in terms of intelligibility of visual speech through perceptual tests, showing promising performance.  ...  University of Rosario and CIFASIS-CONICET, Argentina.  ...
doi:10.1186/1687-4722-2013-5 fatcat:lm4emm2ltzhg5czm7qywwpsmjq

Speech-Driven 3D Facial Animation with Implicit Emotional Awareness: A Deep Learning Approach

Hai X. Pham, Samuel Cheung, Vladimir Pavlovic
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We introduce a long short-term memory recurrent neural network (LSTM-RNN) approach for real-time facial animation, which automatically estimates head rotation and facial action unit activations of a speaker  ...  Experiments on an evaluation dataset of different speakers across a wide range of affective states demonstrate promising results of our approach in real-time speech-driven facial animation.  ...  Poor estimation performance on "happy" sequences in general, as shown in Table 1, can be accounted for by a couple of reasons.  ...
doi:10.1109/cvprw.2017.287 dblp:conf/cvpr/PhamCP17 fatcat:2pgkt24qjfg7vl2iciqujrexte
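The speech-to-animation mapping described in the snippet above (an LSTM-RNN regressing per-frame audio features onto head rotation and action unit activations) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the layer sizes, the 13-dimensional audio features, and the 17 action units are assumed placeholder values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SpeechToFaceLSTM:
    """Toy LSTM regressor: per-frame audio features -> 3 head-rotation
    angles + facial action unit activations. Weights are random here;
    a real system would train them on audio-visual data."""

    def __init__(self, n_audio=13, n_hidden=32, n_aus=17, seed=0):
        rng = np.random.default_rng(seed)
        d = n_audio + n_hidden
        # One weight matrix and bias per LSTM gate:
        # input (i), forget (f), output (o), candidate (c).
        self.W = {g: rng.normal(0.0, 0.1, (n_hidden, d)) for g in "ifoc"}
        self.b = {g: np.zeros(n_hidden) for g in "ifoc"}
        # Linear read-out to 3 rotation angles + n_aus activations.
        self.W_out = rng.normal(0.0, 0.1, (3 + n_aus, n_hidden))
        self.n_hidden = n_hidden

    def forward(self, audio_frames):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        outputs = []
        for x in audio_frames:            # one feature vector per frame
            z = np.concatenate([x, h])
            i = sigmoid(self.W["i"] @ z + self.b["i"])
            f = sigmoid(self.W["f"] @ z + self.b["f"])
            o = sigmoid(self.W["o"] @ z + self.b["o"])
            g = np.tanh(self.W["c"] @ z + self.b["c"])
            c = f * c + i * g             # cell state carries speech context
            h = o * np.tanh(c)
            outputs.append(self.W_out @ h)
        return np.stack(outputs)          # shape: (frames, 3 + n_aus)
```

The recurrent cell state is what lets such a model exploit coarticulation context across frames, which a frame-by-frame regressor cannot.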

Audio/visual mapping with cross-modal hidden Markov models

Shengli Fu, R. Gutierrez-Osuna, A. Esposito, P.K. Kakumanu, O.N. Garcia
2005 IEEE Transactions on Multimedia
Our results show that HMMI provides the best performance, both on synthetic and experimental audio-visual data.  ...  The audio/visual mapping problem of speech-driven facial animation has intrigued researchers for years.  ...  Moore and K. Das for their earlier contributions to the speech-driven facial animation system. We would also like to thank I. Rudomin, K. Murphy, and I.  ... 
doi:10.1109/tmm.2005.843341 fatcat:rnv6erpl2neq3cpdscznalp7im

Investigation of robust visual reaction and functional connectivity in the rat brain induced by rocuronium bromide with functional MRI

Wenchang Zhou, Aoling Cai, Binbin Nie, Wen Zhang, Ting Yang, Ning Zheng, Anne Manyande, Xuxia Wang, Fuqiang Xu, Xuebi Tian, Jie Wang
2020 American Journal of Translational Research
The results of the fMRI showed that there were increased functional connectivity and well-rounded visual responses in the RB induced state.  ...  When applied to animal studies, anesthesia is commonly used to reduce the movement of the animal and also to reduce its impact on the fMRI results.  ...  Acknowledgements This work was supported by the National Natural Science Foundation of China (31771193, 81974170, 81671770) and the Youth Innovation Promotion Association of Chinese Academy of Sciences  ...
pmid:32655779 pmcid:PMC7344061 fatcat:reojpfxvcnbxhdlpbugna6xxxy

Speech-To-Video Synthesis Using MPEG-4 Compliant Visual Features

P.S. Aleksic, A.K. Katsaggelos
2004 IEEE Transactions on Circuits and Systems for Video Technology
The visual speech is represented in terms of the facial animation parameters (FAPs), supported by the MPEG-4 standard.  ...  Temporal accuracy experiments, comparison of the synthesized FAPs to the original FAPs, and audio-visual automatic speech recognition (AV-ASR) experiments utilizing the synthesized visual speech were performed  ...  Finally, the visual speech parameters are estimated for each state of the optimal state sequence. Bregler et al. [8] created an HMM-based speech-driven facial animation system called Video Rewrite.  ... 
doi:10.1109/tcsvt.2004.826760 fatcat:ay64q4kyfnffrmcjrgf7on6qdq
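The pipeline the snippet above describes (decode the optimal HMM state sequence from the audio, then emit the visual speech parameters attached to each state) can be sketched with standard Viterbi decoding. The 2-state model, transition probabilities, observation likelihoods, and per-state FAP means below are made-up toy values for illustration, not parameters from the paper.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most-likely HMM state path given initial log-probs log_pi,
    transition log-probs log_A[from, to], and per-frame observation
    log-likelihoods log_B[t, state]. Standard Viterbi recursion."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]             # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)    # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):         # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 2-state HMM with a mean FAP vector stored per state:
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.2, 0.8]])
log_B = np.log([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])  # 3 audio frames
state_faps = np.array([[0.0, 0.2], [1.0, 0.5]])       # toy FAP means

path = viterbi(log_pi, log_A, log_B)
synth_faps = state_faps[path]   # one visual parameter vector per frame
```

Indexing the per-state FAP table with the decoded path is the "estimate the visual parameters for each state of the optimal state sequence" step; a real system would follow it with temporal smoothing before driving the face model.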

Cognitive modulation of interacting corollary discharges in the visual cortex [article]

Mohammad Abdolrahmani, Dmitry R Lyamzin, Ryo Aoki, Andrea Benucci
2019 bioRxiv pre-print
These rules depended on the cognitive state of the animal: when the animal was most engaged in a visual discrimination task, cortical states had large variability accompanied by increased reliability in  ...  Modeling results suggest these states permit independent encoding of CDs and sensory signals and efficient read-out by downstream networks for improved visual perception.  ...  Acknowledgments Yuki Goya, Yuka Iwamoto, and Rie Nishiyama for their support with animal surgeries and behavioral training. Dr. Fujisawa at RIKEN-CBS for sharing the PV-cre line, Dr.  ... 
doi:10.1101/615229 fatcat:nzc4p7nxyfbitodg63idezrqte

Multimodal emotion estimation and emotional synthesize for interaction virtual agent

Minghao Yang, Jianhua Tao, Hao Li, Kaihui Mu
2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems
This agent estimates users' emotional state by combining the information from the audio and facial expression with CART and boosting.  ...  The synchronous visual information of the agent, including facial expression, head motion, gesture and body animation, is generated by multi-modal mapping from a motion capture database.  ...  The element in the i-th row and j-th column represents the percentage of samples whose real emotion state is i while the estimated emotion state is j.  ...
doi:10.1109/ccis.2012.6664394 dblp:conf/ccis/YangTLM12 fatcat:fiyjm7wdojfyhflr2lnrnuczwa

Animated Transitions in Statistical Data Graphics

Jeffrey Heer, George Robertson
2007 IEEE Transactions on Visualization and Computer Graphics  
We then propose design principles for creating effective transitions and illustrate the application of these principles in DynaVis, a visualization system featuring animated data graphics.  ...  In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots.  ...  ACKNOWLEDGEMENTS The authors wish to thank Danyel Fisher, Desney Tan, Mary Czerwinski, Steven Drucker, Roland Fernandez, Maneesh Agrawala, and Daniela Rosner for their insights and assistance.  ... 
doi:10.1109/tvcg.2007.70539 pmid:17968070 fatcat:bg7fhilizbaqfofrhx7ibejuiu

A coupled HMM approach to video-realistic speech animation

Lei Xie, Zhi-Qiang Liu
2007 Pattern Recognition  
The ph-vi-CHMM system, which adopts different state variables (phoneme states and viseme states) in the audio and visual modalities, performs the best.  ...  We have compared the animation performance of the CHMM with the HMMs, the multi-stream HMMs and the factorial HMMs both objectively and subjectively.  ...  estimated visual parameters.  ... 
doi:10.1016/j.patcog.2006.12.001 fatcat:yytzbt5xofeh5ejsvrh5ptkowa

Advanced Virtual Medicine: Techniques and Applications for Medicine Oriented Computer Graphics [article]

H. Delingette, A. Linney, N. Magnenat-Thalmann, Yin Wu, D. Bartz, M. Hauth, K. Mueller
2004 Eurographics State of the Art Reports  
The course will introduce techniques of modelling and simulating human tissue for medical applications.  ...  The course includes basic and advanced techniques of segmentation, registration, reconstruction and motion simulation in medical applications from disease detection to surgery simulation and surgery in  ...  Vallée from the Radiology department and Dr. H. Sadri from the Orthopedic department at the Geneva University Hospital for their support and collaboration.  ... 
doi:10.2312/egt.20041035 fatcat:xik36tqetfe57lrzzspmgeraca

Raptors lack lower-field myopia

Christopher J. Murphy, Monica Howland, Howard C. Howland
1995 Vision Research  
The presence of lower-field myopia (described in chickens, pigeons, quail and amphibians) allows these animals to keep the ground in focus while performing other visual tasks.  ...  These findings suggest that the presence or absence of a lower-field myopia is a function of the visual ecology of the animal.  ...  We thank Terry Schulz of the University of California--Davis Raptor Center and Joanne Paul-Murphy, Section of Zoological Medicine, School of Veterinary Medicine, University of California--Davis for help  ... 
doi:10.1016/0042-6989(94)00240-m pmid:7610576 fatcat:2ylpmqi7cfg53cc4bt3sl6wawq

Emotion Dependent Facial Animation from Affective Speech [article]

Rizwan Sadiq, Sasan AsadiAbadi, Engin Erzin
2019 arXiv   pre-print
The proposed emotion dependent facial shape model performs better in terms of the Mean Squared Error (MSE) loss and in generating the landmark animations, as compared to training a universal model regardless  ...  Objective and subjective evaluations are performed over the SAVEE dataset.  ...  in mapping HMM states to visual parameters.  ... 
arXiv:1908.03904v1 fatcat:olupfm2egreqdge66ez5jye3ay

Psychophysical measurement of contrast sensitivity in the behaving mouse

Mark H. Histed, Lauren A. Carvalho, John H. R. Maunsell
2012 Journal of Neurophysiology  
To study neural responses in the face of this complexity, we trained mice to do a task where they perform hundreds of trials daily and perceptual thresholds can be measured.  ...  We designed this task to permit neurophysiological studies of behavior in cerebral cortex, where activity is variable from trial to trial and neurons encode many types of information simultaneously.  ...  Ruff, and J. E. Dowling for useful comments on the manuscript. A. Thavikulwat, S. Sleboda, and T. Wang helped with training and contributed to development of training protocols.  ... 
doi:10.1152/jn.00609.2011 pmid:22049334 pmcid:PMC3289478 fatcat:aqhrvgazp5csjlq4andwa776ra
Showing results 1 — 15 out of 418,400 results