
Glances, glares, and glowering: how should a virtual human express emotion through gaze?

Brent Lance, Stacy Marsella
2009 Autonomous Agents and Multi-Agent Systems  
While the results do not provide a complete mapping between gaze and emotion, they do provide a basis for a generative model of expressive gaze.  ...  We have generated a set of animations by composing low-level gaze attributes culled from the nonverbal behavior literature. Then, subjects judged the animations displaying these attributes.  ...  Army Research, Development, and Engineering Command (RDECOM), and the content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.  ... 
doi:10.1007/s10458-009-9097-6 fatcat:spbjb4wn55cxzjmyjey74jsmra

Virtual Reflexes [chapter]

Catholijn M. Jonker, Joost Broekens, Aske Plaat
2014 Lecture Notes in Computer Science  
Virtual actors in such systems have to show appropriate social behavior including emotions, gaze, and keeping distance. The behavior must be realistic and real-time.  ...  Current approaches are usually based on heavy information processing in terms of behavior planning, scripting of behavior and use of predefined animations.  ...  , and Jaap van den Herik.  ... 
doi:10.1007/978-3-319-09767-1_28 fatcat:jbgbqszpurezndztwpypqbrmu4

Growing on the inside: Soulful characters for video games

Rafael Bidarra, Robert Schaap, Kim Goossens
2010 Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games  
The PAD model is based on the view of human emotions as an input-output system, and it proposes three independent emotional dimensions or traits: Pleasure, Arousal and Dominance.  ...  Based on observation, it is common to consider emotional behavior as mainly rooted in three major basic elements: emotion, mood and personality.  ...
doi:10.1109/itw.2010.5593335 dblp:conf/cig/BidarraSG10 fatcat:ltwir3rlavgxbarpirqzxpgfqe
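The PAD model described in the entry above can be illustrated with a minimal sketch. The class name, value ranges, and the example emotion placement below are illustrative assumptions, not code or calibrated values from the paper:

```python
from dataclasses import dataclass

@dataclass
class PADState:
    """A point in the Pleasure-Arousal-Dominance emotion space.

    Each dimension is conventionally taken as a value in [-1.0, 1.0];
    this range is an assumption for the sketch."""
    pleasure: float   # -1.0 (displeasure) .. 1.0 (pleasure)
    arousal: float    # -1.0 (calm)        .. 1.0 (excited)
    dominance: float  # -1.0 (submissive)  .. 1.0 (dominant)

    def clamp(self) -> "PADState":
        # Keep all three dimensions inside the assumed [-1, 1] range.
        c = lambda v: max(-1.0, min(1.0, v))
        return PADState(c(self.pleasure), c(self.arousal), c(self.dominance))

# Anger is often placed at negative pleasure, high arousal, positive dominance.
anger = PADState(pleasure=-0.5, arousal=0.6, dominance=0.3)
```

Because the three dimensions are independent, moods and personality traits can be represented as points or regions in the same space, which is what makes PAD attractive as a shared substrate for emotion, mood, and personality.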

An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans

Michael Nixon, Steve DiPaola, Ulysses Bernardet
2018 2018 IEEE Conference on Computational Intelligence and Games (CIG)  
based on the variation in social status.  ...  We describe the validation of the model's parameters, including the length of eye contact and gazes, movement velocity, equilibrium response, and head and body posture.  ...  Arousal and dominance dimensions are used to drive the parameters of the gaze and head movement models.  ... 
doi:10.1109/cig.2018.8490373 dblp:conf/cig/NixonDB18 fatcat:bl4ri2m43nbu5mt2bmozfkfbem
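The entry above notes that arousal and dominance drive the parameters of the gaze and head movement models. A minimal sketch of such a mapping is shown below; the specific linear mappings and constants are illustrative assumptions, not the validated values from Nixon et al.'s model:

```python
def gaze_parameters(arousal: float, dominance: float) -> dict:
    """Map arousal and dominance (each assumed in [-1, 1]) to
    gaze-control parameters. All coefficients are hypothetical."""
    base_contact = 2.0  # assumed seconds of mutual eye contact at neutral state
    return {
        # Higher dominance -> longer, steadier eye contact.
        "eye_contact_sec": base_contact * (1.0 + 0.5 * dominance),
        # Higher arousal -> faster saccade and head movement.
        "movement_velocity": 1.0 + 0.8 * arousal,
        # Submissive agents avert gaze more often.
        "gaze_aversion_prob": max(0.0, 0.4 - 0.3 * dominance),
    }
```

For example, a high-status character (`dominance = 1.0`) would hold eye contact longer and avert gaze less often than a low-status one, which matches the status-display framing of the paper.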

Virtual Humans in Serious Games

Nadia Magnenat-Thalmann, Zerrin Kasap
2009 2009 International Conference on CyberWorlds  
This paper gives an overview of serious games applications and mentions research on socially intelligent virtual characters and their use in serious games.  ...  Thus, they should be equipped with properties such as social and cognitive intelligence, personality, emotions and user awareness in order to engage users in the game and to be sensitive to the users  ...  For more expressive and natural behavior generation, we plan to use PAD-driven facial expression and head/gaze behavior generators since pleasure-arousal-dominance factors can be well-linked with the quality  ...
doi:10.1109/cw.2009.17 dblp:conf/vw/Magnenat-ThalmannK09 fatcat:r7vldxujcbgfpoozvasyfxybzy

A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception

K. Ruhland, C. E. Peters, S. Andrist, J. B. Badler, N. I. Badler, M. Gleicher, B. Mutlu, R. McDonnell
2015 Computer graphics forum (Print)  
Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation  ...  A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers'.  ...  This research was also supported by the NSF award 1208632 and ERC advanced grant EXPRESSIVE, the U.S. National Science Foundation awards 1017952 and 1149970 and the U.S.  ... 
doi:10.1111/cgf.12603 fatcat:padlj57psvdwjeubna23cetnyi

Virtual Reflexes [article]

Catholijn Jonker, Joost Broekens, Aske Plaat
2014 arXiv   pre-print
Virtual actors in such systems have to show appropriate social behavior including emotions, gaze, and keeping distance. The behavior must be realistic and real-time.  ...  Here we present a virtual reflexes architecture, explain how emotion and cognitive modulation are embedded, detail its workings, and give an example description of an aggression training application.  ...  This work has further benefited from discussions with Otto Adang, Ron Boelsma, Willem-Paul Brinkman, Koen Hindriks, and Birna van Riemsdijk.  ... 
arXiv:1404.3920v1 fatcat:ctppsndxffcblaah4atst76kxm

Indifferent or Enthusiastic? Virtual Audiences Animation and Perception in Virtual Reality

Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Aryana Collins Jackson, Cédric Buche, Marc Erich Latoschik
2021 Frontiers in Virtual Reality  
Each rule specifies a nonverbal behavior category: posture, head movement, facial expression and gaze direction as well as three parameters: type, frequency and proportion.  ...  In a second user-study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated their perceptions.  ...  Based on the nonverbal behavior rules using values of valence and arousal and the results from our first user evaluation, we designed different VAs.  ... 
doi:10.3389/frvir.2021.666232 fatcat:qf4damzbvfcxfmug3kvyivzbqe

Regulation and Entrainment in Human—Robot Interaction

Cynthia Breazeal
2002 The international journal of robotics research  
We present evidence for mutual regulation and entrainment of the interaction, and we discuss how this benefits the interaction as a whole.  ...  A critical capability of such robots is their ability to interact with humans, and in particular, untrained users.  ...  Acknowledgments Support for this research was provided by ONR and DARPA under MURI N00014-95-1-0600, by DARPA under contract DABT 63-99-1-0012, and by NTT.  ... 
doi:10.1177/0278364902021010096 fatcat:bkennwlf4rhunfb6yer7uwergq

Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

Stephen M. Fiore, Travis J. Wiltshire, Emilio J. C. Lobato, Florian G. Jentsch, Wesley H. Huang, Benjamin Axelrod
2013 Frontiers in Psychology  
Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze  ...  Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it.  ...  understand the impact of proxemic behavior, gaze, and time on the CMS.  ... 
doi:10.3389/fpsyg.2013.00859 pmid:24348434 pmcid:PMC3842160 fatcat:xkq2qcjmcngw7gjd3nn42cqsau

Social content and emotional valence modulate gaze fixations in dynamic scenes

Marius Rubo, Matthias Gamer
2018 Scientific Reports  
Using recordings of eye movements, we quantified to what degree social information, emotional valence and low-level visual features influenced gaze allocation using generalized linear mixed models.  ...  However, non-social cues like text [9, 10] and the center of the screen [11-13] can also serve as predictors for gaze behavior.  ...  One may therefore object that models based on viewing behavior might not reflect general patterns in attentional allocation, but rather reflect idiosyncrasies of the individual video clips used.  ...
doi:10.1038/s41598-018-22127-w pmid:29491440 pmcid:PMC5830578 fatcat:3yydrmb2izdhnp4va63acllj3i

Facial Communicative Signals

Christian Lang, Sven Wachsmuth, Marc Hanheide, Heiko Wersing
2012 International Journal of Social Robotics  
This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction.  ...  The first one is a static approach yielding baseline results, whereas the second considers the temporal dynamics and achieved classification rates comparable to the human performance.  ...  The authors thank the anonymous reviewers for their helpful comments on an earlier draft of this paper.  ... 
doi:10.1007/s12369-012-0145-z fatcat:mkds45ibqvb53ief7xc7rbbfki

The prototypical pride expression: Development of a nonverbal behavior coding system

Jessica L. Tracy, Richard W. Robins
2007 Emotion  
Study 1 manipulated behavioral movements relevant to pride (e.g., expanded posture and head tilt) to identify the most prototypical pride expression and determine the specific components that are necessary  ...  Rosenberg, 1997) for assessing "basic" emotions from observable nonverbal behaviors.  ...  Chance was conservatively set at 33% to ensure that pride recognition was not simply based on accurate discriminations between positively and negatively valenced emotions or between high- and low-arousal  ...
doi:10.1037/1528-3542.7.4.789 pmid:18039048 fatcat:hap4ddgoxra2hhgeiqubbupwoq

Subtleties of facial expressions in embodied agents

Catherine Pelachaud, Isabella Poggi
2002 Journal of Visualization and Computer Animation  
In this paper we are interested in assessing and managing what happens, at the meaning and signal levels of multimodal communicative behavior, when different communicative functions have to be displayed  ...  we show how this tool allows us to combine facial expressions of different communicative functions and to display complex and subtle expressions.  ...  Expressive Idiolect The values we have attributed to BN factors are based on studies on gaze [3, 24], on eyebrows [9], on head movements [13] and mouth [11].  ...
doi:10.1002/vis.299 fatcat:3kpj4nrndbhdjkcybo43kqzqni
Showing results 1-15 of 3,153.