36,458 Hits in 6.7 sec

Agent-based recognition of facial expressions

Pablo Suau, Mar Pujol, Ramon Rizo, Simon Caton, Omer F. Rana, Bruce Batchelor, Francisco Pujol
2005 Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems - AAMAS '05  
Description of a system to detect facial expressions using an agent-based approach is presented.  ...  The system utilizes interaction between Matlab-based image filters and a JADE-based agent implementation. The system is demonstrated using a feature recognition example.  ...  To improve such systems and allow better recognition capability, the work described here considers the implementation of a facial expression recognition scheme based on intelligent agents.  ... 
doi:10.1145/1082473.1082831 dblp:conf/atal/PerezLACRBL05 fatcat:vx3tlae7mncyjekvkadukxeihi

Younger and older users' recognition of virtual agent facial expressions

Jenay M. Beer, Cory-Ann Smarr, Arthur D. Fisk, Wendy A. Rogers
2015 International Journal of Human-Computer Studies  
of negative facial expressions.  ...  If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic  ...  Both studies were submitted in partial fulfillment of the Master of Science degree at Georgia Institute of Technology (Beer, 2010; Smarr, 2011). Support.  ... 
doi:10.1016/j.ijhcs.2014.11.005 pmid:25705105 pmcid:PMC4331019 fatcat:mxv3236mwvh5pecwychytfut6m

On Constrained Local Model Feature Normalization for Facial Expression Recognition [chapter]

Zhenglin Pan, Mihai Polceanu, Christine Lisetti
2016 Lecture Notes in Computer Science  
Real-time, user-independent facial expression recognition is important for virtual agents but challenging.  ...  In this paper, we present a new approach that, instead of using the traditional base-face normalization on whole face shapes, performs normalization on the point cloud of each landmark.  ...  While tasks such as face recognition require differentiating between individuals based on facial features, facial expression recognition relies on variations of these facial features and on their dynamics  ... 
doi:10.1007/978-3-319-47665-0_35 fatcat:ieyf7pqckjgtrekxdv7kzkbjdu

E-Learning Assistant System Based on Virtual Human Interaction Technology [chapter]

Xue Weimin, Xia Wenhong
2007 Lecture Notes in Computer Science  
This paper introduces a new virtual human interaction module based on multi-agents. The affective interactive model is built according to the human cerebrum control pattern.  ...  The multimodal detection agents are able to help the tutor better understand the emotional and motivational state of the learner throughout the learning process.  ...  It can dialogue with the student face to face through the facial expression recognition system and the voice recognition system.  ... 
doi:10.1007/978-3-540-72588-6_90 fatcat:ulglkug5lvgdddznonjdjy22qu

Human–Computer Communication Using Recognition and Synthesis of Facial Expression

Yasunari Yoshitomi
2021 Journal of Robotics, Networking and Artificial Life (JRNAL)  
Narumoto, Professor of the Kyoto Prefectural University of Medicine, Dr. M. Tabuse, Professor of the Kyoto Prefectural University, and Dr. T.  ...  Asada, Associate Professor of the Kyoto Prefectural University, for their valuable cooperation during the course of this research.  ...  Sensor Fusion Sensor fusion is a promising way to improve the recognition accuracy of facial expression or emotion recognition.  ... 
doi:10.2991/jrnal.k.210521.003 fatcat:qj45xovo2raopmhszy24utdrku

Multimodal emotion estimation and emotional synthesize for interaction virtual agent

Minghao Yang, Jianhua Tao, Hao Li, Kaihui Mu
2012 2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems  
The synchronous visual information of the agent, including facial expression, head motion, gesture and body animation, is generated by multi-modal mapping from a motion capture database.  ...  In this study, we create a 3D interactive virtual character based on multi-modal emotion recognition and rule-based emotion synthesis techniques.  ...  any emotion recognition for users and without emotion output for the agent; 2) Emotion-based speech conversation with bimodal emotion recognition for users' facial expressions and audio input.  ... 
doi:10.1109/ccis.2012.6664394 dblp:conf/ccis/YangTLM12 fatcat:fiyjm7wdojfyhflr2lnrnuczwa

Representing Affective Facial Expressions for Robots and Embodied Conversational Agents by Facial Landmarks

Caixia Liu, Jaap Ham, Eric Postma, Cees Midden, Bart Joosten, Martijn Goudbeek
2013 International Journal of Social Robotics  
This study focuses on the human recognition of emotional expressions from landmark sequences.  ...  The implications of our findings for the virtual generation of facial expressions in robots and embodied conversational agents are discussed.  ...  Acknowledgements We wish to express our gratitude to Ruud Mattheij, Peter Ruijten, and the Persuasive Technology Lab Group at TU/e for the fruitful discussions about this work.  ... 
doi:10.1007/s12369-013-0208-9 fatcat:l4svlep6kngtnk7sgnt5d2ljmu

2019 Index IEEE Transactions on Affective Computing Vol. 10

2020 IEEE Transactions on Affective Computing  
-March 2019 32-47 Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition.  ...  -March 2019 18-31 Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition.  ... 
doi:10.1109/taffc.2019.2957904 fatcat:55yc25prhrgelmtih2dlf3ilsq

Active Agent Oriented Multimodal Interface System

Osamu Hasegawa, Katsunobu Itou, Takio Kurita, Satoru Hayamizu, Kazuyo Tanaka, Kazuhiko Yamamoto, Nobuyuki Otsu
1995 International Joint Conference on Artificial Intelligence  
This paper presents a prototype of an interface system with an active human-like agent. In usual human communication, non-verbal expressions play important roles.  ...  Our human-like agent, with its realistic facial expressions, identifies the user by sight and interacts actively and individually with each user in spoken language.  ...  Hiroshi Harashima (Univ. of Tokyo) and his group for granting permission to use their 3D facial model on our prototype system.  ... 
dblp:conf/ijcai/HasegawaIKHTYO95 fatcat:mmup5zfp3rb4ham26zph2etwtm

Emotion Recognition of Virtual Agents Facial Expressions: The Effects of Age and Emotion Intensity

Jenay M. Beer, Arthur D. Fisk, Wendy A. Rogers
2009 Proceedings of the Human Factors and Ergonomics Society Annual Meeting  
identification of virtual agent facial expressions has been largely unexplored.  ...  The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents.  ...  ACKNOWLEDGEMENTS This research was supported in part by a grant from the National Institutes of Health (National Institute on Aging) Grant P01 AG17211 under the auspices of the Center for Research and  ... 
doi:10.1177/154193120905300205 pmid:25552896 pmcid:PMC4278580 fatcat:7bhogegrkzdotc76d2nohy3wpq

Multi-Pose Facial Expression Recognition Using Hybrid Deep Learning Model with Improved Variant of Gravitational Search Algorithm

Yogesh Kumar, Shashi Kant Verma, Sandeep Sharma
2022 The International Arab Journal of Information Technology  
This research work addresses the problem of expression recognition from different facial poses at the yaw angle.  ...  The recognition of human facial expressions with the variation of poses is one of the challenging tasks in real-time applications such as human physiological interaction detection, intention analysis,  ...  It has been found that the recognition rate of facial expressions decreases with the movement of the face at some yaw angle.  ... 
doi:10.34028/iajit/19/2/15 fatcat:ooqugkpavrghfcssu6j3sy5j74

The Affective Tutoring System

Mohamed Ben Ammar, Mahmoud Neji, Adel. M. Alimi, Guy Gouardères
2010 Expert systems with applications  
The main goal is to analyse learner facial expressions and show how Affective Computing could contribute to this interaction, being part of the complete student tracking (traceability) to monitor student  ...  This paper presents a study of the integration of this new area into the intelligent tutoring system.  ...  The focus is to allow better recognition capability. The work described here considers the implementation of a facial expression recognition system based on intelligent agents.  ... 
doi:10.1016/j.eswa.2009.09.031 fatcat:ulbho3e5p5g7jkounn6b5bpj3u

An E-learning System based on Affective Computing

Sun Duo, Lu Xue Song
2012 Physics Procedia  
style based on his personality trait.  ...  In this paper, we construct an emotionally intelligent e-learning system based on "Affective computing".  ...  To achieve this goal, abundant and effective facial expression data are necessary, and a methodology for recognizing multiple facial expressions is to be studied.  ... 
doi:10.1016/j.phpro.2012.02.278 fatcat:34szzxkc5vgenfgxfppqshasva

Multimodal Intelligent Tutoring Systems [chapter]

Xia Mao, Zheng Li
2012 E-Learning-Organizational Infrastructure and Tools for Specific Areas  
Based on the cues of sources and characteristics of facial expression, we propose a novel model of fuzzy facial expression generation, as seen in figure 6.  ...  The ability to handle occluded facial features is most important for achieving robustness of facial expression recognition.  ... 
doi:10.5772/29041 fatcat:wvvxnq44hjhrfcbo556f37toxq

Galatea: Open-Source Software for Developing Anthropomorphic Spoken Dialog Agents [chapter]

Shin-ichi Kawamoto, Hiroshi Shimodaira, Tsuneo Nitta, Takuya Nishimoto, Satoshi Nakamura, Katsunobu Itou, Shigeo Morishima, Tatsuo Yotsukura, Atsuhiko Kai, Akinobu Lee, Yoichi Yamashita, Takao Kobayashi (+7 others)
2004 Cognitive Technologies  
Galatea employs model-based speech and facial animation [facial-image] synthesizers whose model parameters are easily adapted to those of an existing person if his/her training data is given.  ...  In order to easily integrate modules of different characteristics, including the speech recognizer, speech synthesizer, facial animation [facial-image] synthesizer and dialog controller, each  ...  We also extended the original specification of VoiceXML to add some commands, including the facial expression controls of anthropomorphic dialogue agents.  ... 
doi:10.1007/978-3-662-08373-4_9 fatcat:o4rp5vundvc23cfzsnxapjcunm
Showing results 1 — 15 out of 36,458 results