5,534 Hits in 11.1 sec

Full-motion recovery from multiple video cameras applied to face tracking and recognition

Josh Harguess, Changbo Hu, J. K. Aggarwal
2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)
We apply this approach to the tracking of faces in multiple video cameras and utilize the 3D cylinder model to realize the motion calculation.  ...  Our contribution in this work is a novel approach to object tracking by robustly and accurately recovering the full motion of the object from multiple cameras.  ...  Acknowledgments The authors would like to thank the reviewers for their valuable comments which have helped to improve the quality of the paper. This work is partially supported by Instituto  ... 
doi:10.1109/iccvw.2011.6130479 dblp:conf/iccvw/HarguessHA11 fatcat:gmdex7i4cbaqzlunxktpafrsd4

Fusing face recognition from multiple cameras

Josh Harguess, Changbo Hu, J. K. Aggarwal
2009 Workshop on Applications of Computer Vision (WACV)
We propose a methodology to use cylinder head models (CHMs) to track the face of a subject in multiple cameras.  ...  Results of tracking are further aggregated to produce 100% accuracy using video taken from two cameras in our lab.  ...  By fusing results from multiple cameras, the recognition of faces is improved from 67.4% to 94.4% in our videos.  ... 
doi:10.1109/wacv.2009.5403055 dblp:conf/wacv/HarguessHA09 fatcat:4kbfzheaubhstcy6xtxfm6ffz4

Occlusion robust multi-camera face tracking

Josh Harguess, Changbo Hu, J. K. Aggarwal
2011 CVPR Workshops
Comparisons are made between single-camera tracking, multi-camera tracking, and occlusion-robust multi-camera tracking using results from pose estimation.  ...  The approach is applied to face tracking using a 3D cylinder head model, but any 3D rigid object may be tracked using this approach.  ...  Introduction Face tracking, especially the full-motion recovery of the face (3 translations and 3 rotations), is necessary for many computer vision tasks such as human-computer interaction and surveillance  ...
doi:10.1109/cvprw.2011.5981790 dblp:conf/cvpr/HarguessHA11 fatcat:h5qh573c3bgptavuxanyzrakt4

Guest Editors' Introduction to the Special Issue on Multimodal Human Pose Recovery and Behavior Analysis

Sergio Escalera, Jordi Gonzalez, Xavier Baro, Jamie Shotton
2016 IEEE Transactions on Pattern Analysis and Machine Intelligence  
The set of 16 accepted papers can be split into three main categories within M2HuPBA: (i) human pose recovery and tracking; (ii) action and gesture recognition; and (iii) datasets.  ...  recognition, and driver assistance technology, to mention just a few.  ...  For information on obtaining reprints of this article, please send e-mail to: reprints@ieee.org, and reference the Digital Object Identifier below.  ... 
doi:10.1109/tpami.2016.2557878 fatcat:ee3j7nre4fgdtjrozavgexvhi4

Author Index

2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Generating Sharp Panoramas from Motion-blurred Videos · Huynh, Cong Phuoc: Shape and Refractive Index Recovery from Single-View Polarisation Images · Hwang, Sung Ju: Reading Between The Lines: Object Localization  ...  , Matthias: Motion Fields to Predict Play Evolution in Dynamic Sport Scenes · Discontinuous Seam-Carving for Video Retargeting · Player Localization Using Multiple Static Cameras for Sports Visualization  ...
doi:10.1109/cvpr.2010.5539913 fatcat:y6m5knstrzfyfin6jzusc42p54

Key Issues in Modeling of Complex 3D Structures from Video Sequences

Shengyong Chen, Yuehui Wang, Carlo Cattani
2012 Mathematical Problems in Engineering  
Reconstruction of a scene object from video sequences often relies on the basic principle of structure from motion with an uncalibrated camera.  ...  Construction of three-dimensional structures from video sequences has wide applications in intelligent video analysis.  ...  , 2010C33095, and Zhejiang Provincial Natural Science Foundation R1110679.  ...
doi:10.1155/2012/856523 fatcat:2tcnq4tllvf4rpgurxuli2vci4

What and Where: 3D Object Recognition with Accurate Pose [chapter]

Iryna Gordon, David G. Lowe
2006 Lecture Notes in Computer Science  
In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for  ...  This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter.  ...  Acknowledgements We would like to gratefully acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Institute for Robotics and Intelligent  ... 
doi:10.1007/11957959_4 fatcat:7t2lpab6avg3dmzufgwn2geoiq

A review of motion analysis methods for human Nonverbal Communication Computing

Dimitris Metaxas, Shaoting Zhang
2013 Image and Vision Computing  
They include face tracking, expression recognition, body reconstruction, and group activity analysis.  ...  It uses image sequences to detect and track people, and  ...  We also would like to thank our long time collaborators Judee Burgoon (UA), David Dinges (UPENN), and Carol Neidle (BU). Metaxas would like to thank his previous PhD students Ioannis Kakadiaris  ...
doi:10.1016/j.imavis.2013.03.005 fatcat:ylxt5bph2jfgrfd5a4c22qn66u

An Efficient Video to Video Face Recognition using Neural Networks

Wilson S., Lenin Fred
2017 International Journal of Computer Applications  
In biometrics, video-based face recognition is vital, and this paper proposes an efficient algorithm that achieves a high recognition rate.  ...  The face recognition system proposed in this paper comprises three stages: video partitioning, feature extraction, and a neural network for recognition.  ...  In recognizing people from videos, face, body traits, and motion are fused efficiently.  ...
doi:10.5120/ijca2017914924 fatcat:ulpjl4mrfrdjrn46t5mk5gvtnm

A Survey on Visual Surveillance of Object Motion and Behaviors

W. Hu, T. Tan, L. Wang, S. Maybank
2004 IEEE Transactions on Systems Man and Cybernetics Part C (Applications and Reviews)  
Index Terms-Behavior understanding and description, fusion of data from multiple cameras, motion detection, personal identification, tracking, visual surveillance.  ...  and description of behaviors, human identification, and fusion of data from multiple cameras.  ...  Xie, and G. Xu from the NLPR for their valuable suggestions and assistance in preparing this paper.  ... 
doi:10.1109/tsmcc.2004.829274 fatcat:cozxn2ogtrew3pybyuxcrj2rhi

Flexible surveillance system architecture for prototyping video content analysis algorithms

R. G. J. Wijnhoven, E. G. T. Jaspers, P. H. N. de With, Edward Y. Chang, Alan Hanjalic, Nicu Sebe
2006 Multimedia Content Analysis, Management, and Retrieval 2006  
From these requirements, specifications for a prototyping architecture are derived.  ...  To build flexible prototyping systems of low cost, a distributed system with scalable processing power is therefore required.  ...  The authors would like to thank X. Desurmont, J. Hamaide, B. Lienard, M. Barais, F. Zuo, R. Verhoeven and R. Albers for their valuable contributions.  ...
doi:10.1117/12.649520 fatcat:hmogs3ybfne67p7lbou4yzddbi

Gesture Control Using Single Camera for PC

N. Lalithamani
2016 Procedia Computer Science  
The face recognition model uses the Viola–Jones method for face detection and PCA (Principal Component Analysis) for recognition and identification.  ...  We use a single web camera as the input device to recognize hand gestures.  ...  With multi-view input from multiple cameras these are reduced and pose variations can be taken into consideration; the use of multiple cameras overcomes the challenges of a single camera.  ...
doi:10.1016/j.procs.2016.02.024 fatcat:7egtqlxplndaljrlmziu44dlbu

MoveBox: Democratizing MoCap for the Microsoft Rocketbox Avatar Library

Mar Gonzalez-Franco, Zelia Egan, Matthew Peachey, Angus Antley, Tanmay Randhavane, Payod Panda, Yaying Zhang, Cheng Yao Wang, Derek F. Reilly, Tabitha C Peck, Andrea Stevenson Won, Anthony Steed (+1 others)
2020 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)  
Motion capture is performed in real-time using a single depth sensor, such as Azure Kinect or Windows Kinect V2, or extracted from existing RGB videos offline leveraging deep-learning computer vision techniques  ...  A selection of avatars animated with the MoveBox system. a) Person doing live MoCap from Azure Kinect onto a Microsoft Rocketbox avatar. b) Playback of two avatar animations created to represent a social  ...  Recovery from Existing Videos: Movebox includes an external tool for 3D multi-person human pose estimation from RGB videos.  ... 
doi:10.1109/aivr50618.2020.00026 fatcat:rpqndibgsndllcieot6yj4gstm

The Visual Analysis of Human Movement: A Survey

D.M Gavrila
1999 Computer Vision and Image Understanding  
The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces.  ...  The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment.  ...  A first capability would be to sense if a human is indeed present. This might be followed by face recognition for the purpose of access control and person tracking across multiple cameras.  ... 
doi:10.1006/cviu.1998.0716 fatcat:ep75ekg4h5elzebmm3pasqlqdy

Action Datasets and MHI [chapter]

Md. Atiqur Rahman Ahad
2012 SpringerBriefs in Computer Science  
There are a number of benchmark datasets for action, activity, gesture, and gait recognition. In this chapter, we present mainly those which are used to evaluate the MHI or its variants.  ...  Rosales R, Sclaroff S (1999) 3D trajectory recovery for tracking multiple objects and trajectory guided recognition of actions. IEEE Comp Vis Pattern Recognit 2:117-123.  ...  However, [11] recognize the actions on a per-camera basis, and the average recognition results are 65.4, 70.0, 54.3, 66.0, and 33.6% from camera 1 to 5.  ...
doi:10.1007/978-1-4471-4730-5_4 fatcat:kvnpmq3zgbeftjrnnscq4gxed4
Showing results 1 — 15 out of 5,534 results