Recognizing Simple Human Actions Using 3D Head Movement
2007
Computational Intelligence
Although human actions can be inferred from a wide range of data, it has been demonstrated that simple human actions can be recognized by tracking the movement of the head in 2D. ...
We present experimental results to demonstrate the potential of using 3D head trajectory information to distinguish among simple but common human actions independently of viewpoint. ...
CONCLUSIONS AND FUTURE WORK We presented a system for recognizing several simple human actions by analyzing the movement of the head in 3D. ...
doi:10.1111/j.1467-8640.2007.00317.x
fatcat:xwj6hkkspvh5jbi7qshptgawiy
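As a rough illustration of the idea in the entry above only (the paper's actual features and classifier are not reproduced here), simple viewpoint-independent cues such as speed and vertical displacement can be extracted from a 3D head trajectory like this:

```python
# Illustrative sketch, not the paper's method: simple viewpoint-independent
# features from a 3D head trajectory (positions assumed in meters, z up).
import numpy as np

def head_trajectory_features(points, fps=30.0):
    """points: (N, 3) array of head positions over time."""
    pts = np.asarray(points, dtype=float)
    dt = 1.0 / fps
    velocities = np.diff(pts, axis=0) / dt           # (N-1, 3)
    speeds = np.linalg.norm(velocities, axis=1)
    return {
        "mean_speed": speeds.mean(),                 # near zero when standing still
        "vertical_drop": pts[0, 2] - pts[-1, 2],     # large when sitting down or falling
        "path_length": speeds.sum() * dt,
    }
```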
Human Action Recognition by Inference of Stochastic Regular Grammars
[chapter]
2004
Lecture Notes in Computer Science
The recognition method is tested using 900 human upper-body actions. ...
In this paper, we present a new method of recognizing human actions by inference of stochastic grammars for the purpose of automatic analysis of nonverbal actions of human beings. ...
In this experiment we do not use the ANMC because the head actions all have the same movement complexity. 40 action samples are used for learning and 32 for testing. ...
doi:10.1007/978-3-540-27868-9_41
fatcat:7fjb3jiiqfdpjpwuwmayf5lfpi
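To make the grammar-based recognition idea in the entry above concrete, here is a toy scorer for a stochastic regular grammar; the states, symbols, and probabilities are invented for illustration and are not taken from the paper:

```python
# Toy stochastic regular grammar: each rule (state, symbol) -> (next_state, p).
# An observed symbol sequence is scored by summing the probabilities of all
# derivations that consume it and end in the accepting state "END".
RULES = {
    ("S", "raise_hand"): [("ARM_UP", 0.7), ("S", 0.3)],
    ("ARM_UP", "wave"):  [("ARM_UP", 0.6), ("END", 0.4)],
}

def sequence_probability(symbols, state="S"):
    if not symbols:
        return 1.0 if state == "END" else 0.0
    total = 0.0
    for next_state, p in RULES.get((state, symbols[0]), []):
        total += p * sequence_probability(symbols[1:], next_state)
    return total

print(sequence_probability(["raise_hand", "wave", "wave"]))  # 0.7 * 0.6 * 0.4 = 0.168
```

Recognition then amounts to scoring an observed action sequence under each action's grammar and picking the most probable one.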
Guest Editorial: Human–Computer Interaction: Real-Time Vision Aspects of Natural User Interfaces
2012
International Journal of Computer Vision
of head movement. ...
frames that must be viewed before the action can be recognized. • In their paper "Random forests for real time 3D face analysis," G. ...
doi:10.1007/s11263-012-0603-y
fatcat:rlcel2bdjba3tnwjigiccwvptu
A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities
2019
2019 International Conference on Multimodal Interaction on - ICMI '19
The avatar has lip syncing (phoneme control), head gesture and facial expression (using either facial action units or cardinal emotion categories) capabilities. ...
We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). ...
We use the facial action unit probability scores from the Facial Action Unit Recognizer to control the avatar's facial action controls and mimic the human facial expressions. ...
doi:10.1145/3340555.3353744
dblp:conf/icmi/AnejaMS19
fatcat:45fnoas545akzd2627pws4xtny
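The last snippet above maps facial action unit probability scores onto avatar expression controls. A minimal sketch of such a mapping, where the AU names and control names are assumptions rather than the released API:

```python
# Hypothetical AU-to-control mapping; names are illustrative assumptions,
# not the avatar's released API.
AU_TO_CONTROL = {"AU01": "brow_inner_up", "AU12": "mouth_smile", "AU26": "jaw_open"}

def au_scores_to_controls(au_scores, gain=1.0):
    """au_scores: {"AU12": 0.8, ...}, probabilities in [0, 1]."""
    return {AU_TO_CONTROL[au]: min(1.0, gain * p)
            for au, p in au_scores.items() if au in AU_TO_CONTROL}

print(au_scores_to_controls({"AU12": 0.8, "AU99": 0.5}))  # {'mouth_smile': 0.8}
```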
Human action recognition by fast dense trajectories
2013
Proceedings of the 21st ACM international conference on Multimedia - MM '13
We evaluate the method on the dataset of Huawei/3DLife -3D human reconstruction and action recognition Grand Challenge in ACM Multimedia 2013. ...
In this paper, we propose the fast dense trajectories algorithm for human action recognition. ...
The method proposed in [12] obtains an activity model from data captured from joint movements in order to recognize actions. ...
doi:10.1145/2502081.2508123
dblp:conf/mm/HaoZIS13
fatcat:jxnjch5y7zhehacuy363vzyjmu
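As background for the entry above: in dense-trajectory methods, points sampled on a regular grid are propagated from frame to frame through dense optical flow. A minimal OpenCV sketch of that tracking step (the paper's speed-ups and the HOG/HOF/MBH descriptors are omitted):

```python
# Minimal dense-trajectory tracking step: grid-sampled points displaced by
# Farneback dense optical flow; descriptors and trajectory filtering omitted.
import cv2
import numpy as np

def track_grid_points(prev_gray, next_gray, step=8):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    disp = flow[pts[:, 1].astype(int), pts[:, 0].astype(int)]  # (dx, dy) per point
    return pts, pts + disp  # chain over successive frames to build trajectories
```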
Body expression recognition from animated 3D skeleton
2016
2016 International Conference on 3D Imaging (IC3D)
We present a novel and generic framework for the recognition of body expressions using human postures. ...
Features proposed in this article are computationally simple and intuitive to understand. ...
Taking inspiration from the state of the art in psychology, we have proposed simple and representative features to detect body expression from temporal 3D postures, even in complex cases: jump, run, kick ...
doi:10.1109/ic3d.2016.7823448
dblp:conf/ic3d/CrennKMB16
fatcat:4z4r3p7ijbcffdyyrenjzkckfi
Developing visual competencies for socially assistive robots
2013
Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments - PETRA '13
We present the key modules of independent motion detection, object detection, body localization, person tracking, head pose estimation and action recognition and we explain how they serve the goal of natural ...
We show how we integrated several vision modules using a layered architectural scheme. ...
independent motion detection, object detection, body localization, tracking, head pose estimation and human action recognition. ...
doi:10.1145/2504335.2504395
dblp:conf/petra/PapoutsakisPNSZKA13
fatcat:hjfha7dwj5byfp4gzu3aql2tnm
Active eye contact for human-robot communication
2004
Extended abstracts of the 2004 conference on Human factors and computing systems - CHI '04
Thus, there has been a great deal of research on using gaze or eye movements for human interfaces, which can be considered as communication between humans and machines. ...
Then, we present a robot that can recognize hand gestures after making eye contact with the human to show the effectiveness of eye contact as a means of controlling communication. ...
We use an ActivMedia Pioneer 2 mobile robot. A laptop PC is placed on it so that a 3D CG human head is shown at an appropriate height. ...
doi:10.1145/985921.985998
dblp:conf/chi/MiyauchiSNK04
fatcat:m5q2n27c2nb65fdksnzef2vouq
Chinese Shadow Puppetry with an Interactive Interface Using the Kinect Sensor
[chapter]
2012
Lecture Notes in Computer Science
A performer can conduct simple actions such as turning the head, stretching the arms or kicking the legs. ...
Therefore we define some special postures to represent these difficult movements. ...
doi:10.1007/978-3-642-33863-2_35
fatcat:mnzjkmx6kbgblcw7inna5ek3my
Ballistic Hand Movements
[chapter]
2006
Lecture Notes in Computer Science
This puts appearance-based techniques at a disadvantage for modelling and recognizing them. Psychological studies indicate that these actions are ballistic in nature. ...
Their trajectories have simple structures and are determined to a great degree by the starting and ending positions. ...
Introduction We consider the problem of recognizing human actions commonly observed in surveillance situations. ...
doi:10.1007/11789239_16
fatcat:2ahoxddq4zemzjwh7ggwxf6ijy
A survey of advances in vision-based human motion capture and analysis
2006
Computer Vision and Image Understanding
Progress has also been made towards automatic understanding of human actions and behavior. ...
This survey reviews recent trends in video-based human capture and analysis, as well as discussing open problems for future research to achieve automatic visual analysis of human movement. ...
This gives rise to local and global descriptors that are used for recognizing simple actions. ...
doi:10.1016/j.cviu.2006.08.002
fatcat:7vsbfnczrzgsbdbbmlzvfprh54
Designing Robots With Movement in Mind
2014
Journal of Human-Robot Interaction
We then relate our approach to the design of non-anthropomorphic robots and robotic objects, a design strategy that could improve the feasibility of real-world human-robot interaction. ...
To illustrate our design approach, we discuss four case studies: a social head for a robotic musician, a robotic speaker dock listening companion, a desktop telepresence robot, and a service robot performing ...
Movement is a highly salient, yet widely under-recognized, aspect of human-robot interaction. ...
doi:10.5898/jhri.3.1.hoffman
fatcat:sdu5tyim4zh43f6v7wjsglpkka
Comprehensive Model and Image-Based Recognition of Hand Gestures for Interaction in 3D Environments
2011
International Journal of Virtual Reality
poses, movements and location and for segmentation. ...
The paper also describes an unencumbered gesture recognition system built using this model and recognition strategy, a single low-cost camera and relatively simple image-based algorithms to classify hand ...
Finally, the segmentation of the objects of interest in the image, the user's hands and head, is done with a simple pixel-by-pixel skin-color classification using band-pass filters for hue and saturation and ...
doi:10.20870/ijvr.2011.10.4.2825
fatcat:b4nwsvlk65eajfbjyqjopevmz4
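The last snippet above describes pixel-wise skin segmentation with band-pass filters on hue and saturation. A minimal sketch of that step; the threshold bands are illustrative assumptions, not the paper's calibrated values:

```python
# Pixel-wise skin segmentation via hue/saturation band-pass thresholds.
# The bands below are rough illustrative values, not the paper's settings.
import cv2
import numpy as np

def skin_mask(bgr_frame, hue_band=(0, 20), sat_band=(48, 255)):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_band[0], sat_band[0], 0], dtype=np.uint8)
    upper = np.array([hue_band[1], sat_band[1], 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)            # 255 where skin-like
    kernel = np.ones((5, 5), np.uint8)               # remove speckle noise
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```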
Swordplay: Innovating Game Development through VR
2006
IEEE Computer Graphics and Applications
This 3D interaction style provides a natural mapping from human movement to gameplay controls we could not realize with traditional controllers. ...
Interface With Swordplay, we emphasized player control using natural human movement rather than artificial control mechanisms. ...
doi:10.1109/mcg.2006.137
pmid:17120909
fatcat:atwosxnxebhaplarldhvspkvt4
Robot anticipation of human intentions through continuous gesture recognition
2013
2013 International Conference on Collaboration Technologies and Systems (CTS)
In this paper, we propose a method to recognize human body movements and we combine it with the contextual knowledge of human-robot collaboration scenarios provided by an object affordances framework that ...
We consider simple actions that characterize a human-robot collaboration scenario with objects being manipulated on a table: inspired from automatic speech recognition techniques, we train a statistical ...
[18] aims at recognizing complex actions (e.g. sitting on the floor, jumping) using angles between human body parts as features, then clustering them with Gaussian Mixture Models, partitioning the physical ...
doi:10.1109/cts.2013.6567232
dblp:conf/cts/SaponaroSB13
fatcat:bearzzwpjbe6nibca323gzycq4
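The approach attributed to [18] in the entry above (angles between body parts as features, clustered with Gaussian Mixture Models) can be sketched as follows; the feature layout and component count are assumptions:

```python
# Sketch of GMM clustering over per-frame joint-angle features; the angles
# here are synthetic stand-ins and the component count is an assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

angles = np.random.default_rng(0).uniform(0, np.pi, size=(500, 8))  # (frames, angles)
gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(angles)  # cluster id per frame: a coarse partition of pose space
```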