57 Hits in 3.1 sec

DeepMix: Mobility-aware, Lightweight, and Hybrid 3D Object Detection for Headsets [article]

Yongjie Guan and Xueyu Hou and Nan Wu and Bo Han and Tao Han
2022 arXiv   pre-print
Motivated by our analysis and evaluation of state-of-the-art 3D object detection models, DeepMix intelligently combines edge-assisted 2D object detection and novel, on-device 3D bounding box estimations  ...  DeepMix not only improves detection accuracy by 9.1–37.3% but also reduces end-to-end latency by 2.68–9.15×, compared to the baseline that uses existing 3D object detection models.  ...  In the cache, we store the 6DoF pose and 3D dimension of detected objects.  ... 
arXiv:2201.08812v2 fatcat:p2hltxagg5fddm3mn4fidv3kwu
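The DeepMix snippet above mentions caching the 6DoF pose and 3D dimensions of detected objects so that 3D estimation need not be rerun every frame. As a minimal sketch only (hypothetical names, not DeepMix's actual implementation), such a cache keyed by object class could look like this:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class CachedObject:
    """6DoF pose (rotation + translation) and 3D box size of a detected object."""
    rotation: np.ndarray     # 3x3 rotation matrix, object frame -> camera frame
    translation: np.ndarray  # (3,) translation in metres
    dimensions: np.ndarray   # (3,) width, height, depth of the 3D bounding box

class DetectionCache:
    """Hypothetical cache: reuse a previous 3D estimate when the 2D detector
    re-reports the same object class, instead of re-running 3D estimation."""
    def __init__(self) -> None:
        self._entries: dict[str, CachedObject] = {}

    def put(self, label: str, obj: CachedObject) -> None:
        self._entries[label] = obj

    def get(self, label: str) -> Optional[CachedObject]:
        return self._entries.get(label)
```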

Light Chisel: 6DOF Pen Tracking

V. Bubník, V. Havran
2015 Computer Graphics Forum  
(left middle) User interaction with the Light Chisel in an augmented reality setup, (right middle) a close-up of our direct 3D modeling application work space, and (right) improvement of the 6DOF pose  ...  Its form factor is well suited for a screwdriver or chisel grip, allowing the Light Chisel to be rolled between the fingers.  ...  We are indebted to David Sedláček for help with Opti-Track and with the video. We thank all the anonymous reviewers for their insightful comments, which helped to improve our paper.  ... 
doi:10.1111/cgf.12563 fatcat:6h3mlg5vbrafhdxupmow4kqxva

A leap-supported, hybrid AR interface approach

Holger Regenbrecht, Jonny Collins, Simon Hoermann
2013 Proceedings of the 25th Australian Computer-Human Interaction Conference on Augmentation, Application, Innovation, Collaboration - OzCHI '13  
A Leap motion controller is used to track the users' fingers and a webcam overlay allows for an augmented view.  ...  ABSTRACT We present a novel interface approach which combines 2D video-based AR with a partial voxel model allowing for more convincing interactions with 3D objects and worlds.  ...  ACKNOWLEDGMENTS We'd like to thank all people who helped to test and improve VoxelAR. Thanks to Leap Motion for letting us be a part of the motion controller developer program.  ... 
doi:10.1145/2541016.2541053 dblp:conf/ozchi/RegenbrechtCH13 fatcat:jarbozzwa5fddfkf2nr4urvhsy

AN S-PI VISION-BASED TRACKING SYSTEM FOR OBJECT MANIPULATION IN AUGMENTED REALITY

Ajune Wanis Ismail, Mark Billinghurst, Mohd Shahrizal Sunar
2015 Jurnal Teknologi  
This allows the user to look at virtual objects from different viewing angles in the AR interface and perform 3D object manipulation.  ...  A paddle pose pattern is constructed in a one-time calibration process, and through vertex-based calculation of the camera pose relative to the paddle we can show 3D graphics on top of it.  ...  Acknowledgement We would like to express our appreciation to the staff and students at the Human Interface Technology Laboratory New Zealand (HITLabNZ) at the University of Canterbury.  ... 
doi:10.11113/jt.v75.5060 fatcat:7pgoaj5j6jg4dni46dslsqqb7a
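The snippet above describes computing the camera pose relative to a paddle from its vertices. A common way to do this (a sketch, not necessarily the authors' exact method) is a PnP solve from known 3D corner positions and their detected 2D image locations, e.g. with OpenCV; the corner coordinates and intrinsics below are made-up placeholders.

```python
import numpy as np
import cv2

# 3D corners of an 8 cm square paddle pattern in its own coordinate frame (metres).
object_points = np.array([[-0.04, -0.04, 0.0],
                          [ 0.04, -0.04, 0.0],
                          [ 0.04,  0.04, 0.0],
                          [-0.04,  0.04, 0.0]], dtype=np.float32)

# Corresponding corner locations detected in the camera image (pixels) -- placeholders.
image_points = np.array([[312.0, 248.0],
                         [401.0, 252.0],
                         [398.0, 341.0],
                         [309.0, 336.0]], dtype=np.float32)

# Camera intrinsics from a prior calibration -- placeholder values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve for the paddle pose in the camera frame; rvec/tvec anchor virtual content on the paddle.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # convert to a 3x3 rotation matrix
```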

A Real Application of an Autonomous Industrial Mobile Manipulator within Industrial Context

Jose Luis Outón, Ibon Merino, Iván Villaverde, Aitor Ibarguren, Héctor Herrero, Paul Daelman, Basilio Sierra
2021 Electronics  
In this paper, we report how the project tackles a paradigmatic industrial application, combining accurate autonomous navigation with deep learning-based 3D perception for pose estimation to locate and manipulate different industrial objects in an unstructured environment.  ...  Our system uses PVN3D [40] to estimate the 6DoF pose.  ... 
doi:10.3390/electronics10111276 fatcat:wuzvqgbrl5bonexq6tjgapthom
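The entry above relies on PVN3D to obtain a 6DoF object pose. Independently of the specific network, a 6DoF pose is simply a rotation plus a translation; the minimal sketch below (not the PVN3D API, pose values are hypothetical) shows how such a pose maps object-model points into the camera frame for grasping or rendering.

```python
import numpy as np

def apply_pose(R: np.ndarray, t: np.ndarray, model_points: np.ndarray) -> np.ndarray:
    """Transform Nx3 model points from object coordinates into camera coordinates.

    R: 3x3 rotation matrix, t: (3,) translation -- together the 6DoF pose."""
    return model_points @ R.T + t

# Hypothetical estimator output: 30 degrees about Z, half a metre in front of the camera.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 0.0, 0.5])

corners = np.array([[0.1, 0.1, 0.0], [-0.1, 0.1, 0.0], [-0.1, -0.1, 0.0], [0.1, -0.1, 0.0]])
print(apply_pose(R, t, corners))  # object corners expressed in the camera frame
```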

Bare-Hand Volume Cracker for Raw Volume Data Analysis

Bireswar Laha, Doug A. Bowman, John J. Socha
2016 Frontiers in Robotics and AI  
Analysis of raw volume data generated from different scanning technologies faces a variety of challenges, related to search, pattern recognition, spatial understanding, quantitative estimation, and shape  ...  We evaluated our asymmetric BHVC technique against standard 2D and widely used 3DI techniques with experts analyzing scanned beetle datasets.  ...  Mapping the corresponding gestures directly with the Leap was not possible, because (1) the detection of a closed-fist orientation with the Leap was far from reliable, but the grab gesture needed 6DOF  ... 
doi:10.3389/frobt.2016.00056 fatcat:y27twl5zirhzplmuhcorxrfotu

MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images [article]

Benjamin Attal, Selena Ling, Aaron Gokaslan, Christian Richardt, James Tompkin
2020 arXiv   pre-print
Together, these can quickly lead to VR sickness when viewing content. One solution is to try to generate a format suitable for 6DoF rendering, such as by estimating depth.  ...  We introduce a method to convert stereo 360 (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six degree-of-freedom (6DoF) rendering.  ...  architecture and our desire to estimate an MSI at the center of the camera system from ODS imagery. Double-plane-sweep baseline.  ... 
arXiv:2008.06534v1 fatcat:vdgpevz7brc6tbaq5sgiqa6ylm
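The MatryODShka entry describes a multi-sphere image (MSI): concentric RGBA sphere layers that are composited for the current viewpoint. As a rough sketch of the rendering idea only (assumed per-ray layer samples, not the paper's implementation), back-to-front alpha compositing looks like this:

```python
import numpy as np

def composite_msi_samples(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Back-to-front 'over' compositing of one ray's samples through L sphere layers.

    colors: (L, 3) RGB sampled on each layer, ordered from the outermost (farthest)
    sphere to the innermost (nearest); alphas: (L,) opacity of each sample."""
    out = np.zeros(3)
    for rgb, a in zip(colors, alphas):
        out = rgb * a + out * (1.0 - a)  # nearer layers are applied last, so they dominate
    return out

# Two-layer toy example: an opaque far layer seen through a translucent near layer.
colors = np.array([[0.2, 0.4, 0.9],   # far sphere sample
                   [0.9, 0.1, 0.1]])  # near sphere sample
alphas = np.array([1.0, 0.3])
print(composite_msi_samples(colors, alphas))
```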

Automated avatar creation for 3D games

Andrew Hogue, Sunbir Gill, Michael Jenkin
2007 Proceedings of the 2007 conference on Future Play - Future Play '07  
We propose a dramatic leap forward in avatar customization through the use of an inexpensive, non-invasive, portable stereo video camera to extract model geometry of real objects, including people, and  ...  Current video games allow character customizability via techniques such as hue adjustment for stock models, or the ability to select from a variety of physical features, clothing and accessories in existing  ...  To produce an efficient model representation we use an automatic UV Parameterization algorithm to map the vertices of the mesh from 3D to a 2D plane. This is often referred to as a mesh unwrapping.  ... 
doi:10.1145/1328202.1328234 dblp:conf/fplay/HogueGJ07 fatcat:uaeguqxxxbalvb67bln56nu4py
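The avatar-creation snippet mentions UV parameterization ("mesh unwrapping"), i.e. assigning each 3D vertex a 2D texture coordinate. The authors use an automatic algorithm; the sketch below is only a naive spherical projection to illustrate the 3D-to-2D mapping, not their method.

```python
import numpy as np

def spherical_uv(vertices: np.ndarray) -> np.ndarray:
    """Naive unwrap: project Nx3 vertices onto a sphere around their centroid and
    use longitude/latitude as (u, v) in [0, 1]. Real unwrapping algorithms instead
    cut the mesh into charts and minimize stretch and seam distortion."""
    centered = vertices - vertices.mean(axis=0)
    x, y, z = centered[:, 0], centered[:, 1], centered[:, 2]
    r = np.linalg.norm(centered, axis=1) + 1e-9
    u = (np.arctan2(y, x) + np.pi) / (2.0 * np.pi)    # longitude -> [0, 1]
    v = np.arccos(np.clip(z / r, -1.0, 1.0)) / np.pi  # latitude  -> [0, 1]
    return np.stack([u, v], axis=1)
```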

Monte-Carlo Tree Search for Efficient Visually Guided Rearrangement Planning [article]

Yann Labbé, Sergey Zagoruyko, Igor Kalevatykh, Ivan Laptev, Justin Carpentier, Mathieu Aubry, Josef Sivic
2020 arXiv   pre-print
Second, to precisely localize movable objects in the scene, we develop an integrated approach for robust multi-object workspace state estimation from a single uncalibrated RGB camera using a deep neural  ...  We address the problem of visually guided rearrangement planning with many movable objects, i.e., finding a sequence of actions to move a set of objects from an initial arrangement to a desired one, while  ...  Predicting 6DoF pose of unseen objects precise enough for robotic manipulation remains an open problem. C.  ... 
arXiv:1904.10348v2 fatcat:3tdelx4l3zcotijt5dqeqb5xk4
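The rearrangement entry defines the task as finding an action sequence that moves objects from an initial to a desired arrangement. The paper uses Monte-Carlo tree search; purely for intuition, the much simpler baseline below (hypothetical slot names, distinct goal slots assumed) moves objects directly when their target slot is free and breaks blocking cycles through a buffer slot.

```python
def plan_rearrangement(current: dict, goal: dict, buffer_slot: str = "buffer"):
    """current/goal map object -> slot. Returns a list of (object, from, to) moves.

    Greedy baseline: move any object whose goal slot is free; if none can move,
    park one blocking object in the buffer slot to break the cycle."""
    current = dict(current)
    moves = []
    while current != goal:
        occupied = set(current.values())
        progressed = False
        for obj, slot in current.items():
            target = goal[obj]
            if slot != target and target not in occupied:
                moves.append((obj, slot, target))
                current[obj] = target
                occupied = set(current.values())
                progressed = True
        if not progressed:  # every misplaced object is blocked: use the buffer
            obj = next(o for o, s in current.items() if s != goal[o])
            moves.append((obj, current[obj], buffer_slot))
            current[obj] = buffer_slot
    return moves

# Toy example: two objects must swap slots; the buffer breaks the cycle.
print(plan_rearrangement({"A": "s1", "B": "s2"}, {"A": "s2", "B": "s1"}))
```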

RoboCup@Home: Summarizing achievements in over eleven years of competition

Mauricio Matamoros, Viktor Seib, Raphael Memmesheimer, Dietrich Paulus
2018 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)  
In this paper we summarize and discuss the differences between the achievements claimed by teams in their team description papers and the results observed during the competition, from a qualitative perspective  ...  We conclude with a set of important challenges to be conquered first in order to take robots to people's homes.  ...  However, markers were banned in the second year and the object recognition pipelines evolved from pure detection over color segmentation to well-known 2D feature descriptors like SIFT [15], [16].  ... 
doi:10.1109/icarsc.2018.8374181 dblp:conf/icarsc/MatamorosSMP18 fatcat:zqpxmj6l4fbuxokevj52nf2maa
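The RoboCup@Home snippet notes the move from color segmentation to 2D feature descriptors such as SIFT. For reference, a minimal SIFT matching pipeline with OpenCV (image paths are placeholders) looks like this:

```python
import cv2

# Placeholder image paths: a stored object template and the current camera frame.
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Nearest-neighbour matching with Lowe's ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative SIFT matches")
```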

Survey on Urban Warfare Augmented Reality

Xiong You, Weiwei Zhang, Meng Ma, Chen Deng, Jian Yang
2018 ISPRS International Journal of Geo-Information  
Accessing information via an Augmented Reality system can elevate combatants' situational awareness, effectively improving the efficiency of decision-making and reducing injuries.  ...  This paper begins with the concept of Urban Warfare Augmented Reality (UWAR) and illuminates the objectives of developing UWAR, i.e., transparent battlefield, intuitional perception and natural interaction  ...  (b) Estimating the camera pose based on the building outline.  ... 
doi:10.3390/ijgi7020046 fatcat:gsju6jtwwvfedfwd5vpayxpac4

Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence [article]

Jacky Cao, Kit-Yung Lam, Lik-Hang Lee, Xiaoli Liu, Pan Hui, Xiang Su
2021 arXiv   pre-print
MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities.  ...  Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices.  ...  Reducing energy drain for MAR [12]: object detection (DNN); HoloLens surgical navigation [243]: object detection and pose estimation (CNN); AR inspection framework for industry [177]: object detection (R-CNN)  ... 
arXiv:2106.08710v1 fatcat:ppqfr4nfljamxnyowklth57wae

The Implicit Values of A Good Hand Shake: Handheld Multi-Frame Neural Depth Refinement [article]

Ilya Chugunov, Yuxuan Zhang, Zhihao Xia, Xuaner Zhang, Jiawen Chen, Felix Heide
2022 arXiv   pre-print
-- textured objects at close range.  ...  Modern smartphones can continuously stream multi-megapixel RGB images at 60Hz, synchronized with high-quality 3D pose information and low-resolution LiDAR-driven depth estimates.  ...  This has culminated in their leap from industrial and automotive applications [44, 11] to the space of mobile phones.  ... 
arXiv:2111.13738v2 fatcat:ite3rpbtwrbkhe76vnimg6kyvq

Gracker: A Graph-Based Planar Object Tracker

Tao Wang, Haibin Ling
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
However, these approaches rarely utilize structure information of the object, and thus suffer from various perturbation factors.  ...  In this paper, we propose a graph-based tracker, named Gracker, which is able to fully explore the structure information of the object to enhance tracking performance.  ...  , but fail to obtain accurate pose estimation.  ... 
doi:10.1109/tpami.2017.2716350 pmid:28641246 fatcat:2nm7ikvqtvbahoxmj7up7f66gy
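Gracker's graph-based matching itself is not reproduced here, but the kind of plain keypoint baseline it improves on localizes a planar object by fitting a homography to matched points with RANSAC. A minimal sketch (assuming `kp_obj`, `kp_frame`, and `matches` come from a detector/matcher such as the SIFT pipeline sketched earlier):

```python
import numpy as np
import cv2

def locate_planar_object(kp_obj, kp_frame, matches, obj_w: float, obj_h: float):
    """Fit a homography from template keypoints to frame keypoints with RANSAC and
    project the template's corners into the frame to localize the planar object."""
    src = np.float32([kp_obj[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    corners = np.float32([[0, 0], [obj_w, 0], [obj_w, obj_h], [0, obj_h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H), inliers
```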

Single-Handed vs. Two Handed Manipulation in Virtual Reality: A Novel Metaphor and Experimental Comparisons [article]

Fabio Marco Caputo, Marco Emporio, Andrea Giachetti
2017 Smart Tools and Applications in Graphics  
Furthermore, it introduces a novel metaphor, the "knob", to map hand rotation onto object rotation around selected axes.  ...  The solution was tested with users on a classical visualization task related to finding a point of interest in a 3D object and compared with the well-known "Handlebar" metaphor.  ...  the task and gestural comfort to estimate the perceived fatigue.  ... 
doi:10.2312/stag.20171225 dblp:conf/egItaly/CaputoEG17 fatcat:nrjowsuohrh4thvkv7wgigkguq
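The "knob" metaphor maps hand rotation onto object rotation about a selected axis. One way to realize such a mapping (an illustrative assumption, not necessarily the authors' formulation) is a swing-twist decomposition that keeps only the rotation component about the chosen axis:

```python
import numpy as np

def twist_about_axis(q: np.ndarray, axis: np.ndarray) -> np.ndarray:
    """Extract the 'twist' part of quaternion q = [w, x, y, z] about a unit axis.

    The swing-twist decomposition projects the quaternion's vector part onto the
    axis; the resulting quaternion rotates only about that axis, so hand roll is
    transferred to the object while other hand motion is ignored."""
    w, v = q[0], q[1:]
    proj = np.dot(v, axis) * axis
    twist = np.array([w, *proj])
    n = np.linalg.norm(twist)
    if n < 1e-9:                       # hand rotation is orthogonal to the axis
        return np.array([1.0, 0.0, 0.0, 0.0])
    return twist / n

# Example: a 40-degree hand rotation about an oblique axis, constrained to the object's Z axis.
angle = np.deg2rad(40.0)
hand_axis = np.array([0.0, 0.6, 0.8])
q_hand = np.array([np.cos(angle / 2), *(np.sin(angle / 2) * hand_axis)])
print(twist_about_axis(q_hand, np.array([0.0, 0.0, 1.0])))
```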
Showing results 1 — 15 out of 57 results