2,257 Hits in 7.5 sec

Multisensor-fusion for 3D full-body human motion capture

Gerard Pons-Moll, Andreas Baak, Thomas Helten, Meinard Müller, Hans-Peter Seidel, Bodo Rosenhahn
2010 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition  
In this work, we present an approach to fuse video with orientation data obtained from extended inertial sensors to improve and stabilize full-body human motion capture.  ...  Therefore, we propose a hybrid tracker that combines video with a small number of inertial units to compensate for the drawbacks of each sensor type: on the one hand, we obtain drift-free and accurate  ...  This work has been supported by the German Research Foundation (DFG CL 64/5-1 and DFG MU 2686/3-1). Meinard Müller is funded by the Cluster of Excellence on Multimodal Computing and Interaction.  ... 
doi:10.1109/cvpr.2010.5540153 dblp:conf/cvpr/Pons-MollBHMSR10 fatcat:wpxk5cjhc5bffefzebb6u5pmrm
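The entry above describes fusing video-based tracking with orientation data from a small number of inertial units. As a purely illustrative, hypothetical sketch (not the authors' hybrid tracker), the following Python snippet blends a drift-free IMU orientation with a video-based orientation estimate by normalized quaternion interpolation; the (w, x, y, z) layout and the blend weight alpha are assumptions.

    # Hypothetical fusion sketch: blend video and IMU orientations (unit quaternions).
    import numpy as np

    def fuse_orientation(q_video, q_imu, alpha=0.98):
        """Blend two unit quaternions (w, x, y, z); alpha weights the video estimate."""
        q_video, q_imu = np.asarray(q_video, float), np.asarray(q_imu, float)
        if np.dot(q_video, q_imu) < 0.0:   # keep both quaternions in the same hemisphere
            q_imu = -q_imu
        q = alpha * q_video + (1.0 - alpha) * q_imu
        return q / np.linalg.norm(q)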

Pose estimation algorithm for mobile augmented reality based on inertial sensor fusion

Mir Suhail Alam, Malik Arman Morshidi, Teddy Surya Gunawan, Rashidah Funke Olanrewaju, Fatchul Arifin
2022 International Journal of Power Electronics and Drive Systems (IJPEDS)  
We evaluated the performance of augmenting the 3D object using vision-based techniques and by incorporating the sensor data with the video data.  ...  Augmented reality (AR) applications have become increasingly ubiquitous as they integrate virtual information such as images, 3D objects, video, and more into the real world, which further enhances  ...  The authors would also like to express their gratitude to the International Islamic University Malaysia, University of New South Wales, and Universitas Negeri Yogyakarta.  ... 
doi:10.11591/ijece.v12i4.pp3620-3631 fatcat:wgkp4mskhrhrbhxhnumtjhwgfi

Hierarchical Sampling based Particle Filter for Visual-inertial Gimbal in the Wild [article]

Xueyang Kang and Ariel Herrera and Henry Lema
2022 arXiv   pre-print
A gimbal is usually composed mainly of sensor and actuator parts. The orientation measurements from the sensor can be fed directly to the actuator to steer the camera towards the proper pose.  ...  The gimbal platform has been widely used in photogrammetry and robot perceptual modules to stabilize the camera pose, thereby improving the captured video quality.  ...  We are also grateful to the Chinese Scholarship Council (CSC) for funding Xueyang's PhD research, and to the EPN international mobility scholarship for funding Ariel's internship in Belgium.  ... 
arXiv:2206.10981v1 fatcat:iyz7bz3f2fehzmw2oeqshakf5q
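The snippet above mentions feeding orientation measurements to the actuator, and the paper's method is a hierarchical-sampling particle filter. Below is a minimal, generic particle-filter step for a single gimbal angle, written as a sketch under assumed noise parameters, not the paper's hierarchical sampler.

    # Generic particle filter step for one gimbal angle (illustrative only).
    import numpy as np

    def particle_filter_step(particles, control, measurement,
                             motion_noise=0.01, meas_noise=0.05,
                             rng=np.random.default_rng()):
        # Predict: propagate each particle with the control input plus process noise.
        particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
        # Update: weight particles by a Gaussian measurement likelihood.
        weights = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
        weights /= weights.sum()
        # Resample in proportion to the weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]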

Orientation tracking for outdoor augmented reality registration

S. You, U. Neumann, R. Azuma
1999 IEEE Computer Graphics and Applications  
Inertial gyroscope data can increase the robustness and computing efficiency of a vision system by providing a relative frame-to-frame estimate of camera orientation.  ...  System overview: The system fuses the outputs of these sensors to determine a user's orientation.  ... 
doi:10.1109/38.799738 fatcat:qd2exrtvufahtnsyo5mhx3r5aq
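The snippet notes that gyroscope data supplies a relative frame-to-frame orientation estimate for the vision system. A minimal sketch of that idea, assuming scipy is available and that angular rates are sampled between two consecutive video frames:

    # Integrate gyroscope rates over a frame interval to get a relative rotation prior.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def frame_to_frame_rotation(gyro_rates, timestamps):
        """gyro_rates: (N, 3) angular rates in rad/s; timestamps: (N,) seconds."""
        delta = R.identity()
        for w, dt in zip(gyro_rates[:-1], np.diff(timestamps)):
            delta = delta * R.from_rotvec(w * dt)   # compose small-angle increments
        return delta   # relative camera rotation between the two frames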

Video image registration evaluation for a layered sensing environment

Olga Mendoza-Schrock, James A. Patrick, Erik P. Blasch
2009 Proceedings of the IEEE 2009 National Aerospace & Electronics Conference (NAECON)  
A fundamental requirement in layered sensing is to first register, stabilize, and normalize the data from each of the individual sensors.  ...  In this paper, several methods to register and stabilize a motion imagery video sequence under the layered sensing concept are evaluated.  ...  Greg Arnold for technical direction and editorial help with this effort. Finally, they would also like to acknowledge Dr.  ... 
doi:10.1109/naecon.2009.5426624 fatcat:xocz5uvrc5btpf63uptk772o64

Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [article]

Vladimir Guzov, Aymen Mir, Torsten Sattler, Gerard Pons-Moll
2021 arXiv   pre-print
We introduce (HPS) Human POSEitioning System, a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment using wearable sensors.  ...  The former provides drift-free but noisy position and orientation estimates while the latter is accurate in the short-term but subject to drift over longer periods of time.  ...  The IMU pose $\theta^{I}_j$ and position $t^{I}_j$ estimate of each subsequent frame are aligned to the 3D scene reference frame by $\theta^{I,G}_j = \big(\log(R_A \exp(\theta^{I}_j))\big)^{\vee}$, $t^{I,G}_j = R_A\, t^{I}_j$ (Eq. 13).  ... 
arXiv:2103.17265v1 fatcat:niq57tnugbdvzox7lfa36eypmm
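A small sketch of the alignment step quoted above: applying a fixed alignment rotation R_A to bring each per-frame IMU rotation (axis-angle theta) and position t into the global scene frame. The use of scipy's rotation-vector conversions for the log/exp maps is an assumption, not the authors' code.

    # Align an IMU pose (axis-angle theta, position t) to the scene frame via R_A.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def align_imu_pose(theta_imu, t_imu, R_A):
        """theta_imu: (3,) axis-angle; t_imu: (3,) position; R_A: scipy Rotation."""
        theta_global = (R_A * R.from_rotvec(theta_imu)).as_rotvec()   # (log(R_A exp(theta)))^vee
        t_global = R_A.apply(t_imu)                                   # rotate the position
        return theta_global, t_global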

High Definition, Inexpensive, Underwater Mapping [article]

Bharat Joshi, Marios Xanthidis, Sharmin Rahman, Ioannis Rekleitis
2022 arXiv   pre-print
Data collected at an artificial wreck off the coast of South Carolina and in caverns and caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions.  ...  The GoPro 9 camera provides high definition video in synchronization with an Inertial Measurement Unit (IMU) data stream encoded in a single mp4 file.  ...  The authors are with the Computer Science and Engineering Department, University of South Carolina, Columbia, SC, USA, 29208, {bjoshi,mariosx,srahman}@email.sc.edu, yiannisr@cse.sc.edu.  ... 
arXiv:2203.05640v1 fatcat:twdkodjbg5ep7mgzrtuaiz2uqa

Preservation and Gamification of Traditional Sports [chapter]

Yvain Tisserand, Nadia Magnenat-Thalmann, Luis Unzueta, Maria T. Linaza, Amin Ahmadi, Noel E. O'Connor, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras
2017 Mixed Reality and Gamification for Cultural Heritage  
This chapter reviews an example of a preservation and gamification scenario applied to traditional sports. In the first section we describe a preservation technique to capture intangible content.  ...  It describes an interactive scenario integrated into a platform that includes a multi-modal capturing system, motion comparison and analysis, as well as a semantic-based feedback system.  ...  Acknowledgments This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement FP7-  ... 
doi:10.1007/978-3-319-49607-8_17 fatcat:mbrgeao23jb7tmsxt5u3nbhp4u

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

Se-Jong Heo, Ok-Shik Shin, Chan-Gook Park
2010 International Journal of Aeronautical and Space Sciences  
The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system.  ...  This algorithm is tested using synthetic and real data, and the results show that the sensor fusion is successful.  ...  Acknowledgement This work was supported by Defense Acquisition Program Administration and Agency for Defense Development under the contract UD070041AD and the Ministry of Education, Science and Technology  ... 
doi:10.5139/ijass.2010.11.1.031 fatcat:xczyh4sir5bynj7bqvuqf67egm

Video Stabilization and Mosaicing [chapter]

Mahesh Ramachandran, Ashok Veeraraghavan, Rama Chellappa
2009 The Essential Guide to Video Processing  
A sequence of temporal images acquired by a single sensor adds a whole new dimension to two-dimensional (2D) image data.  ...  The global motion that occurs across the entire image frame is typically a result of camera motion and can often be described in terms of a low-order model whose parameters are the unknowns.  ...  Acknowledgements This work was partially funded thanks to Army Research Office MURI ARMY-W911NF0410176 under the technical monitorship of Dr. Tom Doligalski. We also thank Dr. Gaurav Aggarwal and Dr.  ... 
doi:10.1016/b978-0-12-374456-2.00006-2 fatcat:4i7mebeqqvbcxl3d6mlzuihjrm
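The chapter summary above states that the global motion across a frame can often be described by a low-order parametric model. As a sketch of that idea (assuming point correspondences have already been tracked), the following fits a 2D affine model by linear least squares, one common low-order choice for stabilization:

    # Fit a 2D affine global-motion model to tracked point correspondences.
    import numpy as np

    def fit_affine(src, dst):
        """src, dst: (N, 2) matched points; returns a 2x3 affine A with dst ~ A @ [x, y, 1]."""
        ones = np.ones((len(src), 1))
        X = np.hstack([src, ones])                          # (N, 3) design matrix
        params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solve X @ params = dst
        return params.T                                     # 2x3 affine parameters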

Real-Time Body Tracking with One Depth Camera and Inertial Sensors

Thomas Helten, Meinard Müller, Hans-Peter Seidel, Christian Theobalt
2013 2013 IEEE International Conference on Computer Vision  
We also contribute new algorithmic solutions to best fuse depth and inertial data in both trackers.  ...  In contrast to previous work, both trackers employ data from a low number of inexpensive body-worn inertial sensors.  ...  For a qualitative evaluation of our tracker, also in comparison to previous approaches, we refer to Fig. 1 and the accompanying video.  ... 
doi:10.1109/iccv.2013.141 dblp:conf/iccv/HeltenMST13 fatcat:a4khe4py2vguboof2s6ycfh57q

THERMAL 3D MODELS ENHANCEMENT BASED ON INTEGRATION WITH VISIBLE IMAGERY

F. Dadras Javan, M. Savadkouhi
2019 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
However, the low geometric quality and resolution of thermal images is a main drawback encountered in 3D thermal modelling.  ...  Thermal imaging sensors are mostly used for interpretation and monitoring purposes because of their lower geometric resolution.  ...  Key Frame Selection: Extracting key frames of a video sequence is a useful way of video abstraction.  ... 
doi:10.5194/isprs-archives-xlii-4-w18-263-2019 fatcat:rm7duo6lizhbdbreha5r3tvhya
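The snippet above mentions key-frame extraction from a video sequence. One common, simple criterion (an illustrative assumption, not necessarily the paper's method) is to keep a frame when it differs enough from the last selected key frame:

    # Select key frames by mean absolute difference from the last key frame.
    import numpy as np

    def select_key_frames(frames, threshold=12.0):
        """frames: iterable of grayscale images as float numpy arrays."""
        key_frames, last = [], None
        for frame in frames:
            if last is None or np.mean(np.abs(frame - last)) > threshold:
                key_frames.append(frame)
                last = frame
        return key_frames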

Integration of Vision and Inertial Sensors for 3D Arm Motion Tracking in Home-based Rehabilitation

Yaqin Tao, Huosheng Hu, Huiyu Zhou
2007 The international journal of robotics research  
This paper introduces a real-time hybrid solution to articulated 3D arm motion tracking for home-based rehabilitation by combining visual and inertial sensors.  ...  The second is a probabilistic method based on an Extended Kalman Filter (EKF) in which data from two sensors is fused in a predict-correct manner in order to deal with sensor noise and model inaccuracy  ...  Acknowledgements We would like to thank Charnwood Dynamics for their CODA motion tracking system, and Dr Martin H.  ... 
doi:10.1177/0278364907079278 fatcat:vmimtym6sncffdhqvupv54t6ji
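The abstract above describes an EKF that fuses the two sensor streams in a predict-correct manner. A minimal predict-correct step is sketched below, linearized to a plain Kalman update for brevity; the matrices F, Q, H, R and the state layout are assumed to come from the arm model and sensor calibration, not from the paper.

    # One predict-correct step of a (linearized) Kalman filter fusing a measurement z.
    import numpy as np

    def kf_step(x, P, z, F, Q, H, R_cov):
        # Predict with the motion model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Correct with the fused vision/inertial measurement.
        S = H @ P_pred @ H.T + R_cov
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new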

Ambiguity-Free Optical–Inertial Tracking for Augmented Reality Headsets

Fabrizio Cutolo, Virginia Mamone, Nicola Carbonaro, Vincenzo Ferrari, Alessandro Tognetti
2020 Sensors  
To improve the accuracy of optical self-tracking and its resiliency to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor fusion approach based  ...  Experimental results show that the proposed solution improves the head-mounted display (HMD) tracking accuracy by one third and improves the robustness by also capturing the orientation of the target scene  ...  Orientation alignment between the optical frame and the IMU reference frame is obtained by solving a standard hand-eye calibration problem.  ... 
doi:10.3390/s20051444 pmid:32155808 pmcid:PMC7085738 fatcat:pdzbqktz4renzf7okmqvlbylie
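The last snippet mentions aligning the optical and IMU reference frames via hand-eye calibration. As a simplified illustration (not the paper's procedure), a fixed rotation between the two frames can be estimated from paired direction observations with the Kabsch/Wahba solution:

    # Estimate the fixed rotation mapping optical-frame vectors to IMU-frame vectors.
    import numpy as np

    def align_frames(v_optical, v_imu):
        """v_optical, v_imu: (N, 3) corresponding unit vectors observed in each frame."""
        H = v_optical.T @ v_imu                      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation R with v_imu ~ R @ v_optical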

Towards Urban 3D Reconstruction from Video

A. Akbarzadeh, J.-M. Frahm, P. Mordohai, B. Clipp, C. Engels, D. Gallup, P. Merrell, M. Phelps, S. Sinha, B. Talton, L. Wang, Q. Yang (+6 others)
2006 Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)  
The paper introduces a data collection system and a processing pipeline for automatic geo-registered 3D reconstruction of urban scenes from video.  ...  We present the main considerations in designing the system and the steps of the processing pipeline. We show results on real video sequences captured by our system.  ...  Acknowledgement This work is partially supported by DARPA under the UrbanScape project, which is led by the Geo-Spatial Technologies Information Division of SAIC.  ... 
doi:10.1109/3dpvt.2006.141 dblp:conf/3dpvt/AkbarzadehFMCEGMPSTWYSYWTNP06 fatcat:mcwdxl47rvfbzerp3zv43afexq
Showing results 1 — 15 out of 2,257 results