Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras
2015
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. ...
In combination, this enables tracking of full articulated joint angles with state-of-the-art accuracy and temporal stability using a very low number of cameras. ...
This paper describes a new method to fuse marker-less skeletal motion tracking with body part detections from a convolutional network (ConvNet) for efficient and accurate marker-less motion capture with ...
doi:10.1109/cvpr.2015.7299005
dblp:conf/cvpr/ElhayekAJTPABST15
fatcat:xbyvdqf7czh5lfuq6wyotg3ioe
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
[article]
2016
arXiv
pre-print
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. ...
Our inside-in method captures full-body motion in general indoor and outdoor scenes, and also crowded scenes with many people in close vicinity. ...
Introduction: Traditional optical skeletal motion-capture methods, both marker-based and marker-less, use several cameras typically placed around a scene in an outside-in arrangement, with camera views approximately ...
arXiv:1609.07306v1
fatcat:xatetlqtpbbsbclzgv3ik2fxee
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
2016
ACM Transactions on Graphics
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. ...
Our approach combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. ...
Introduction: Traditional optical skeletal motion-capture methods, both marker-based and marker-less, use several cameras typically placed around a scene in an outside-in arrangement, with camera views approximately ...
doi:10.1145/2980179.2980235
fatcat:kx3rcoljurb3xgewan4acsb2qa
A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation
2015
2015 IEEE International Conference on Computer Vision (ICCV)
We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras ...
Proper handling of occlusions is a big challenge, since the visibility function that indicates whether a surface point is seen from a camera often cannot be formulated in closed form and is in general discrete ...
We demonstrate the advantages of our approach in several scenarios: marker-less human motion capture with a low number of cameras compared to state-of-the-art methods that lack rigorous visibility modeling ...
doi:10.1109/iccv.2015.94
dblp:conf/iccv/RhodinRRST15
fatcat:xjcrrczs6vealamjkri4rvwooa
A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation
[article]
2016
arXiv
pre-print
We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras ...
Proper handling of occlusions is a big challenge, since the visibility function that indicates whether a surface point is seen from a camera often cannot be formulated in closed form and is in general discrete ...
We demonstrate the advantages of our approach in several scenarios: marker-less human motion capture with a low number of cameras compared to state-of-the-art methods that lack rigorous visibility modeling ...
arXiv:1602.03725v1
fatcat:yjrtut2qp5fqfnmjxubmaputpi
General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues
[article]
2016
arXiv
pre-print
The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. ...
Markerless motion capture algorithms require a 3D body model with properly personalized skeleton dimensions and/or body shape and appearance to successfully track a person. ...
Acknowledgements We thank PerceptiveCode, in particular Arjun Jain and Jonathan Tompson, for providing and installing the ConvNet detector, Ahmed Elhayek, Jürgen Gall, Peng Guan, Hansung Kim, Armin Mustafa ...
arXiv:1607.08659v2
fatcat:ye37f3vy7jgp5fsvekw7mcwe7m
Body joints regression using deep convolutional neural networks
2016
2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
The presented method relies on a state-of-the-art data generation pipeline to produce a large, realistic, and highly varied synthetic set of training images. ...
Deep convolutional neural networks are achieving state-of-the-art results in visual object recognition, localisation, and detection. ...
These results also agree with the quantitative scores in showing low sensitivity to hand and foot positions. None of these real images was included during the training phase. ...
doi:10.1109/smc.2016.7844740
dblp:conf/smc/AbobakrHN16
fatcat:37dko4bryrdafgc6rupyqcjhfu
Real-Time Continuous Pose Recovery of Human Hands Using Convolutional Networks
2014
ACM Transactions on Graphics
Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction ...
As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model. ...
Introduction: Inferring the pose of articulable objects from depth video data is a difficult problem in markerless motion capture. ...
doi:10.1145/2629500
fatcat:wnumayuaszho5gi7tnm2sfhyte
Acting thoughts: Towards a mobile robotic service assistant for users with limited communication skills
2017
2017 European Conference on Mobile Robots (ECMR)
In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. ...
As our results demonstrate, our system is capable of adapting to frequent changes in the environment and reliably completing given tasks within a reasonable amount of time. ...
A framework unifying decoding of neuronal signals, high-level task planning, low-level motion and manipulation planning, and scene perception, with a centralized knowledge base at its core. ...
doi:10.1109/ecmr.2017.8098658
dblp:conf/ecmr/BurgetFKVASDBNB17
fatcat:puy3lojgvzht7gpj7rvyw4gdty
Recovering 3D Human Mesh from Monocular Images: A Survey
[article]
2022
arXiv
pre-print
With the same goal of obtaining well-aligned and physically plausible mesh results, two paradigms have been developed to overcome challenges in the 2D-to-3D lifting process: i) an optimization-based paradigm ...
We start with the introduction of body models and then elaborate recovery frameworks and training objectives by providing in-depth analyses of their strengths and weaknesses. ...
Marker-less Multi-view MoCap: CMU Panoptic [212] is a large-scale multi-person dataset captured by 480 synchronized cameras in the Panoptic studio. ...
arXiv:2203.01923v2
fatcat:vb6xa5wdsrhdxd2ebvg54qq2m4
Learning Multi-Human Optical Flow
[article]
2019
arXiv
pre-print
We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images. ...
We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. ...
Acknowledgements We thank Yiyi Liao for helping us with optical flow evaluation. We thank Cristian Sminchisescu for the Human3.6M MoCap marker data. ...
arXiv:1910.11667v1
fatcat:weqfdsuygfatzkn64hskpdzw7m
RGB-D-based Human Motion Recognition with Deep Learning: A Survey
[article]
2018
arXiv
pre-print
In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. ...
As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. ...
HDM05 Motion Capture Database: HDM05 [94] (http://resources.mpi-inf.mpg.de/HDM05/) was captured with optical marker-based technology at a frequency of 120 Hz and contains 2337 sequences for 130 ...
arXiv:1711.08362v2
fatcat:cugugpqeffcshnwwto4z2aw4ti
Fusing Visual and Inertial Sensors with Semantics for 3D Human Pose Estimation
2018
International Journal of Computer Vision
We release the new hybrid MVV dataset (TotalCapture) comprising multi-viewpoint video, IMU data, and accurate 3D skeletal joint ground truth derived from a commercial motion capture system. ...
We propose an approach to accurately estimate 3D human pose by fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data, without optical markers, a complex hardware setup or ...
The work was supported in part by the Visual Media project (EU H2020 Grant 687800) and through the donation of GPU hardware by Nvidia. ...
doi:10.1007/s11263-018-1118-y
fatcat:n7k67bu5szaztgwqsft5lhvas4
Portable 3-D modeling using visual pose tracking
2018
Computers in industry (Print)
This work deals with the passive tracking of the pose of a close-range 3-D modeling device using its own high-rate images in real time, concurrently with customary 3-D modeling of the scene. ...
Ideally, objects are completely digitized by browsing around the scene; in the event of closing the motion loop, a hybrid graph optimization takes place, which delivers highly accurate motion history to ...
Actively projecting marker points onto a scene is inconvenient and, furthermore, limits flexibility since the cameras must see the markers the entire time. ...
doi:10.1016/j.compind.2018.03.009
fatcat:36vnyhc5o5dcvnfknndumb4ufi
Articulated Hand Pose Estimation Review
[article]
2016
arXiv
pre-print
With the increasing number of companies focusing on commercializing Augmented Reality (AR), Virtual Reality (VR), and wearable devices, the need for a hand-based input mechanism is becoming essential in order ...
Hand pose estimation has progressed drastically in recent years due to the introduction of commodity depth cameras. ...
Acknowledgment I would like to thank Professor John Ronald Kender for giving me the opportunity to investigate in-depth hand pose estimation algorithms and gain breadth of knowledge for most vision based ...
arXiv:1604.06195v1
fatcat:wbk4lzdkong2nfccgejtggq54a