
Automatic Interpretation of Affective Facial Expressions in the Context of Interpersonal Interaction

Emilia I. Barakova, Roman Gorbunov, Matthias Rauterberg
2015 IEEE Transactions on Human-Machine Systems  
Genetic programming was used to find the locations, types, and intensities of the emotional events as well as the way the recorded facial expressions represented reactions to them.  ...  This paper proposes a method for interpretation of the emotions detected in facial expressions in the context of the events that cause them.  ...  ACKNOWLEDGMENT The authors would like to thank the workgroup of the MECA project, particularly Mark Neerincx, for their support and collaboration.  ... 
doi:10.1109/thms.2015.2419259 fatcat:46n72ikz65c7hnchhj3otcjooq

Generalizability of Goal Recognition Models in Narrative-Centered Learning Environments [chapter]

Alok Baikadi, Jonathan Rowe, Bradford Mott, James Lester
2014 Lecture Notes in Computer Science  
We investigate the impact of discovery event representations on goal recognition accuracy and efficiency.  ...  encode relations between problem-solving goals and discovery events, domain-specific representations of user progress in narrative-centered learning environments.  ...  The authors wish to thank members of the IntelliMedia Group for their assistance, as well as Valve Software for access to the Source™ engine and SDK.  ...
doi:10.1007/978-3-319-08786-3_24 fatcat:akah4c7e7rhetpofjnufzuwmuu

Towards Animated Visualization of Actors and Actions in a Learning Environment [chapter]

Oleksandr Kolomiyets, Marie-Francine Moens
2014 Advances in Intelligent Systems and Computing  
The technique employs a natural language processing pipeline for sophisticated syntactic and semantic analysis of text, and extracts information about events, actors and their roles in events, as well as temporal ordering of the events and spatial roles.  ...  Acknowledgments The presented research was supported by the TERENCE (EU FP7-257410) and MUSE (EU FP7-296703) projects.  ...
doi:10.1007/978-3-319-07698-0_25 fatcat:hab4mzxuxvdtfmtgylk4wclsja

Multiperson Visual Focus of Attention from Head Pose and Meeting Contextual Cues

S O Ba, J Odobez
2011 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable.  ...  This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues.  ...  RELATED WORK Multiperson VFOA and conversational event recognition relate to the automatic recognition of human interactions among small groups in face-to-face meetings.  ...
doi:10.1109/tpami.2010.69 pmid:21088322 fatcat:werqgfbsyvf7fpa74iwcgb4a6u

Comparing Comprehension of a Long Text Read in Print Book and on Kindle: Where in the Text and When in the Story?

Anne Mangen, Gérard Olivier, Jean-Luc Velay
2019 Frontiers in Psychology  
Engagement, recall, and capacities to locate events in the text and reconstruct the plot of the story.  ...  Fifty participants (24 years old) read a 28-page (∼1 h reading time) mystery story on Kindle or in a print pocket book and completed several tests measuring various levels of reading comprehension.  ...  AUTHOR CONTRIBUTIONS AM and J-LV conceived and designed the experiments. GO and J-LV performed the experiments. J-LV analyzed the data. AM and J-LV wrote the manuscript.  ...
doi:10.3389/fpsyg.2019.00038 pmid:30828309 pmcid:PMC6384527 fatcat:twcqozno7vbj3ffysv67brge6i

Communicative Signals Promote Object Recognition Memory and Modulate the Right Posterior STS

Elizabeth Redcay, Ruth S. Ludlum, Kayla R. Velnoskey, Simren Kanwal
2016 Journal of Cognitive Neuroscience  
For both the behavioral and fMRI experiments, participants viewed a series of videos of an actress acting on one of two objects in front of her.  ...  Participants then completed a recognition memory task with old (target and nontarget) objects and novel objects.  ...  Acknowledgments We thank Nina Lichtenberg for assistance with stimuli creation, Brieana Viscomi for assistance with data collection, and Dustin Moraczewski for assistance with data analyses.  ... 
doi:10.1162/jocn_a_00875 pmid:26351992 fatcat:u73tllzzufgmzbyvgcgbitkv54

Learning Activity Predictors from Sensor Data: Algorithms, Evaluation, and Applications

Bryan David Minor, Janardhan Rao Doppa, Diane J. Cook
2017 IEEE Transactions on Knowledge and Data Engineering  
This approach allows us to leverage powerful regression learners that can reason about the relational structure of the problem with negligible computational overhead.  ...  We also embed the learned predictor into a mobile-device-based activity prompter and evaluate the app for 9 participants living in smart homes.  ...  Acknowledgments This material is based upon work supported by the National Science Foundation under Grants 0900781 and  ...
doi:10.1109/tkde.2017.2750669 pmid:29456436 pmcid:PMC5813841 fatcat:wzf5e452wff3zlyc6nnl5sjcwy

Storyline Representation of Egocentric Videos with an Application to Story-Based Search

Bo Xiong, Gunhee Kim, Leonid Sigal
2015 2015 IEEE International Conference on Computer Vision (ICCV)  
We construct such a storyline with very limited annotation data (a list of map locations and weak knowledge of what events may be possible at each location), by bootstrapping the process with data obtained  ...  objects and events, depicted on a timeline.  ...  Gunhee Kim is partially supported by Basic Science Research Program through National Research Foundation of Korea (2015R1C1A1A02036562).  ... 
doi:10.1109/iccv.2015.514 dblp:conf/iccv/XiongKS15 fatcat:ebbhrf7f3zai5efmtrtd7e34fq

A History and Theory of Textual Event Detection and Recognition

Yanping Chen, Zehua Ding, Qinghua Zheng, Yongbin Qin, Ruizhang Huang, Nazaraf Shah
2020 IEEE Access  
The tasks of the BioNLP are similar to those of the ACE, e.g., named entity recognition, relation recognition and event recognition.  ...  Finally, four subtasks for supporting frame event recognition are discussed individually: named entity recognition, coreference resolution, relation recognition and event recognition.  ...
doi:10.1109/access.2020.3034907 fatcat:ng7mbplve5dttao7ro6e2623ti

Attentional focus affects how events are segmented and updated in narrative reading

Heather R. Bailey, Christopher A. Kurby, Jesse Q. Sargent, Jeffrey M. Zacks
2017 Memory & Cognition  
In Experiment 2, participants read narratives and responded to recognition probes throughout the texts.  ...  In Experiment 1, participants read and segmented narrative texts into events. Some readers were oriented to pay specific attention to characters or space.  ...  The proportion of participants who segmented is plotted for each sentence in each story in Fig. S1 of the supplementary materials.  ... 
doi:10.3758/s13421-017-0707-2 pmid:28653273 pmcid:PMC8684441 fatcat:q4hzf3c73vgqtid2pkd3heeuoq

HAKE: A Knowledge Engine Foundation for Human Activity Understanding [article]

Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Zuoyu Qiu, Liang Xu, Yue Xu, Hao-Shu Fang, Cewu Lu
2022 arXiv   pre-print
Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.  ...  Object recognition-like solutions usually try to map pixels to semantics directly, but activity patterns are much different from object patterns, thus hindering a similar success.  ...  Besides, [52] proposed neural logic machines for relational reasoning and decision-making tasks.  ...
arXiv:2202.06851v1 fatcat:vibr3ur6n5dw3auofcdnvqrdfm

Investigating multimodal real-time patterns of joint attention in an HRI word learning task

Chen Yu, Matthias Scheutz, Paul Schermerhorn
2010 Proceeding of the 5th ACM/IEEE international conference on Human-robot interaction - HRI '10  
Using novel data analysis techniques, we are able to demonstrate that the temporal details of human attentional behavior are critical for understanding human expectations of joint attention in HRI and  ...  Joint attention -the idea that humans make inferences from observable behaviors of other humans by attending to the objects and events that these others humans attend tohas been recognized as a critical  ...  ACKNOWLEDGMENT The authors would like to thank You-Wei Cheah and Amanda Favata for working on data collection, and Thomas Smith, Ruj Akavipat, and Ikhyun Park for working on data preprocessing.  ... 
doi:10.1145/1734454.1734561 dblp:conf/hri/YuSS10 fatcat:hk3grjvcazfyhlayfgo5vveyki

Recollection rejection: False-memory editing in children and adults

C. J. Brainerd, V. F. Reyna, Ron Wright, A. H. Mojardin
2003 Psychological review  
Mechanisms for editing false events out of memory reports have fundamental implications for theories of false memory and for best practice in applied domains in which false reports must be minimized (e.g.,  ...  Empirical support comes from 2 qualitative phenomena: recollective suppression of semantic false memory and inverted-U relations between retrieval time and semantic false memory.  ...  ROC analysis involves plotting the hit rate for targets and the false-alarm rate for unrelated distractors as a joint function of confidence level.  ...
doi:10.1037/0033-295x.110.4.762 pmid:14599242 fatcat:6v6l5herybffpbeu2jodybqlju

Behavior and Personality Analysis in a Nonsocial Context Dataset

Dario Dotti, Mirela Popa, Stylianos Asteriadis
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
In this paper we introduce a novel dataset for behavior understanding and personality recognition in a nonsocial context.  ...  Forty-six participants were recorded in an unconstrained indoor space, related to a smart home environment, performing six tasks resembling Activities of Daily Living (ADL).  ...  Additionally, we would like to thank all the participants in our experiment, who were students as well as employees from the University of Maastricht (The Netherlands).  ... 
doi:10.1109/cvprw.2018.00312 dblp:conf/cvpr/DottiPA18 fatcat:a4d4qfi5bjgzdjdfheabjhvamm
Showing results 1 — 15 out of 34,861 results