12,174 Hits in 5.2 sec

3D Shape Reconstruction from Vision and Touch [article]

Edward J. Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal
2020 arXiv   pre-print
a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects.  ...  However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored.  ...  Acknowledgments We would like to acknowledge the NSERC Canadian Robotics Network, the Natural Sciences and Engineering Research Council, and the Fonds de recherche du Québec - Nature et Technologies for  ...
arXiv:2007.03778v2 fatcat:7ahxzvybbzb4tk6ka2ov65emsa
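
The visuotactile fusion discussed above can be illustrated with a minimal late-fusion sketch. The encoders below are placeholders (simple averages standing in for learned networks) and the array shapes are invented for illustration; this is not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def vision_features(image):
    """Placeholder vision encoder: per-channel mean (standing in for a CNN)."""
    return image.mean(axis=(0, 1))            # (3,)

def touch_features(touch_maps):
    """Placeholder touch encoder: mean imprint depth per finger sensor."""
    return touch_maps.mean(axis=(1, 2))       # (num_fingers,)

# Fake inputs standing in for one RGB view and four tactile depth maps.
image = rng.random((64, 64, 3))
touch_maps = rng.random((4, 16, 16))

# Late fusion: concatenate modality embeddings before a shared shape predictor.
fused = np.concatenate([vision_features(image), touch_features(touch_maps)])
print("fused feature vector shape:", fused.shape)   # (7,)
```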

Active 3D Shape Reconstruction from Vision and Touch [article]

Edward J. Smith and David Meger and Luis Pineda and Roberto Calandra and Jitendra Malik and Adriana Romero and Michal Drozdzal
2021 arXiv   pre-print
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch.  ...  In active touch sensing for 3D reconstruction, the goal is to actively select the tactile readings that maximize the improvement in shape reconstruction accuracy.  ...  Our pipeline for 3D object reconstruction from vision and touch.  ...
arXiv:2107.09584v2 fatcat:yzov7jgphnga7k3fzcpvd42mzq
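
A minimal sketch of the active-selection idea described in this entry: greedily pick the next touch that most reduces a reconstruction-error proxy. The proxy used here (mean distance to the nearest selected touch) and the candidate set are invented stand-ins, not the paper's learned criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

def reconstruction_error(selected, candidates):
    """Hypothetical proxy: mean distance from every candidate to its nearest selected touch."""
    if not selected:
        return 1.0
    chosen = candidates[selected]                                      # (k, 3)
    d = np.linalg.norm(candidates[:, None, :] - chosen[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

candidates = rng.random((50, 3))    # candidate touch locations on the object surface
selected = []

# Greedy active selection: at each step, pick the touch that most reduces the error proxy.
for _ in range(5):
    best = min(
        (i for i in range(len(candidates)) if i not in selected),
        key=lambda i: reconstruction_error(selected + [i], candidates),
    )
    selected.append(best)
    print(f"touch {best}, proxy error {reconstruction_error(selected, candidates):.3f}")
```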

3D Shape Perception from Monocular Vision, Touch, and Shape Priors [article]

Shaoxiong Wang, Jiajun Wu, Xingyuan Sun, Wenzhen Yuan, William T. Freeman, Joshua B. Tenenbaum, Edward H. Adelson
2018 arXiv   pre-print
We use vision first, applying neural networks with learned shape priors to predict an object's 3D shape from a single-view color image.  ...  In contrast, touch gets precise local shape information, though its efficiency for reconstructing the entire shape could be low.  ...  We reconstruct the 3D shapes of the objects from both vision and touch.  ...
arXiv:1808.03247v1 fatcat:a44oxvisgrdfpp2jyvtuopyq5q
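
A toy illustration of the vision-first, touch-refines-locally pipeline sketched in this abstract: a coarse global estimate (standing in for the network prediction) is corrected only inside the small patch covered by a simulated touch. The shapes, patch radius, and projection rule below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Coarse vision-based estimate: noisy points roughly on a unit sphere (stand-in for a network output).
pred = rng.normal(size=(2000, 3))
pred /= np.linalg.norm(pred, axis=1, keepdims=True)
pred += 0.05 * rng.normal(size=pred.shape)

# One simulated touch: a precise local measurement of a surface point and its normal.
contact = np.array([1.0, 0.0, 0.0])
normal = contact / np.linalg.norm(contact)
radius = 0.2                                   # assumed extent of the tactile patch

# Local refinement: project predicted points inside the patch onto the measured tangent plane.
near = np.linalg.norm(pred - contact, axis=1) < radius
offsets = (pred[near] - contact) @ normal
pred[near] -= offsets[:, None] * normal
print(f"refined {int(near.sum())} predicted points around the touch site")
```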

ObjectFolder: A Dataset of Objects with Implicit Visual, Auditory, and Tactile Representations [article]

Ruohan Gao, Yen-Yu Chang, Shivani Mall, Li Fei-Fei, Jiajun Wu
2021 arXiv   pre-print
, 3D reconstruction, and robotic grasping.  ...  Second, ObjectFolder employs a uniform, object-centric, and implicit representation for each object's visual textures, acoustic simulations, and tactile readings, making the dataset flexible to use and  ...  3D shape reconstruction [45].  ...
arXiv:2109.07991v3 fatcat:hlk3bplo3nfh7ob4rym2udqk2i
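
The object-centric implicit representation mentioned above can be sketched as a coordinate network that is queried at arbitrary 3D locations. The tiny random-weight MLP below is only a stand-in for ObjectFolder's trained implicit fields.

```python
import numpy as np

rng = np.random.default_rng(3)

# A tiny coordinate network with random weights, standing in for a trained implicit field.
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def implicit_field(points):
    """Map query coordinates (N, 3) to one scalar per point, e.g. a signed-distance value."""
    h = np.tanh(points @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# The object is stored as a function: query it anywhere instead of storing an explicit mesh.
queries = rng.uniform(-1.0, 1.0, size=(5, 3))
print(implicit_field(queries))
```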

3D Reconstruction Using a Linear Laser Scanner and a Camera [article]

Rui Wang
2021 arXiv   pre-print
With the rapid development of computer graphics and vision, several three-dimensional (3D) reconstruction techniques have been proposed and used to obtain the 3D representation of objects in the form of  ...  This study systematically reviews some basic types of 3D reconstruction technology and introduces an easy implementation using a linear laser scanner, a camera, and a turntable.  ...  [14], such as SFS (Shape from Shading) and PSFS (Perspective Shape from Shading).  ...
arXiv:2112.00557v1 fatcat:bpwedlpwhrfs5kqs47yb22pxvm
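
A minimal sketch of the laser-triangulation geometry behind such a scanner: intersect the camera ray through a laser-lit pixel with the calibrated laser plane, then rotate the point back by the turntable angle. The intrinsics, plane parameters, and turntable axis below are illustrative values, not a real calibration.

```python
import numpy as np

# Illustrative calibration values; a real setup would estimate all of these.
fx = fy = 800.0
cx, cy = 320.0, 240.0                     # pinhole intrinsics
n = np.array([0.8, 0.0, 0.6])             # laser-plane normal in camera coordinates, n . X = d
d = 0.5                                   # laser-plane offset (metres)
center = np.array([0.0, 0.0, 0.7])        # a point on the turntable's rotation axis

def rot_y(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def triangulate(u, v, turntable_angle):
    """Intersect the camera ray through pixel (u, v) with the laser plane,
    then rotate the point back into the fixed object frame."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    t = d / (n @ ray)                     # ray parameter where the ray meets the plane
    point_cam = t * ray
    return rot_y(-turntable_angle) @ (point_cam - center)

print(triangulate(400.0, 250.0, np.deg2rad(30)))
```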

Vision-Based Motion Capture of Interacting Multiple People [chapter]

Hiroaki Egashira, Atsushi Shimada, Daisaku Arita, Rin-ichiro Taniguchi
2009 Lecture Notes in Computer Science  
Several experimental studies show that the proposed method acquires human postures of multiple people correctly and efficiently even when they touch each other.  ...  Vision-based motion capture is getting popular for acquiring human motion information in various interactive applications.  ...  However, when two people are interacting with each other, they often touch each other and their reconstructed 3D shape becomes one connected component.  ...
doi:10.1007/978-3-642-04146-4_49 fatcat:gmauqjqyifb2hlb7monpbzwpwa
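
The merging problem noted in the last snippet can be detected with a simple connected-component check on the reconstructed volume; a toy voxel example, assuming SciPy is available:

```python
import numpy as np
from scipy import ndimage

# Toy voxel occupancy grid with two blobs standing in for two reconstructed people.
grid = np.zeros((40, 40, 40), dtype=bool)
grid[5:15, 10:30, 10:30] = True
grid[25:35, 10:30, 10:30] = True

labels, num = ndimage.label(grid)
print("components while apart:", num)     # 2

# When the subjects touch, the reconstructed volumes merge into a single component.
grid[15:25, 18:22, 18:22] = True          # a bridge created by the contact
labels, num = ndimage.label(grid)
print("components while touching:", num)  # 1, so the shape must be split downstream
```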

ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer [article]

Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu
2022 arXiv   pre-print
and shape reconstruction.  ...  ObjectFolder 2.0 offers a new path and testbed for multisensory learning in computer vision and robotics. The dataset is available at https://github.com/rhgao/ObjectFolder.  ...  We thank Sudharshan Suresh, Mark Rau, Doug James, and Stephen Tian for helpful discussions.  ... 
arXiv:2204.02389v1 fatcat:k4txwj4g7rh7bgl5niz7apc6du

Complex human motion estimation using visibility

Tomoyuki Mukasa, Arata Miyamoto, Shohei Nobuhara, Atsuto Maki, Takashi Matsuyama
2008 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition  
This paper presents a novel algorithm for estimating complex human motion from 3D video.  ...  Our algorithm shows improvements over a naive surface matching algorithm on both synthesized and real 3D video.  ...  Acknowledgements This research was supported by "Development of High Fidelity Digitization Software for Large-Scale and Intangible Cultural Assets" project and GCOE program: "Informatics Education and  ...
doi:10.1109/afgr.2008.4813309 dblp:conf/fgr/MukasaMNMM08 fatcat:57barkewkzczxjopjrzyte2g2e

Curiosity Driven Self-supervised Tactile Exploration of Unknown Objects [article]

Yujie Lu, Jianren Wang, Vikash Kumar
2022 arXiv   pre-print
We also established the generality of tSLAM by training only on 3D Warehouse objects and testing on ContactDB objects.  ...  To focus our investigation, we study the problem of scene reconstruction where touch is the only available sensing modality.  ...
arXiv:2204.00035v1 fatcat:gexv6rw2ifdgtkfgz4rbxbueuq
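
A generic sketch of a curiosity-style signal for choosing the next touch: probe where an ensemble of shape predictors disagrees most. The ensemble and its predictions below are random stand-ins; the snippet does not specify tSLAM's actual curiosity measure.

```python
import numpy as np

rng = np.random.default_rng(4)

candidates = rng.random((30, 3))      # candidate touch locations on the unknown object
K = 4                                 # size of a hypothetical ensemble of shape predictors

# Stand-in predictions: each model's occupancy estimate at every candidate location.
preds = rng.random((K, len(candidates)))

# Curiosity proxy: touch where the ensemble disagrees most (highest predictive spread).
disagreement = preds.std(axis=0)
best = int(disagreement.argmax())
print("next touch:", candidates[best], "disagreement:", float(disagreement[best]))
```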

Visual and Tactile 3D Point Cloud Data from Real Robots for Shape Modeling and Completion

Yasemin Bekiroglu, Mårten Björkman, Gabriela Zarzar Gandler, Carl Henrik Ek, Danica Kragic, Johannes Exner
2020 Data in Brief  
Representing 3D geometry for different tasks, e.g. rendering and reconstruction, is an important goal in different fields, such as computer graphics, computer vision and robotics.  ...  Robotic applications often require perception of object shape information extracted from sensory data that can be noisy and incomplete.  ...  Example readings from objects in the dataset, for the first robot setup. Tactile and visual readings are plotted in red and black, respectively, for box1, cyl1 and spray1.  ... 
doi:10.1016/j.dib.2020.105335 pmid:32258263 pmcid:PMC7125316 fatcat:c6rxbal2z5fs5fifheiezzm7ly
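
A small sketch of how such visual (black) and tactile (red) point clouds might be inspected together; the clouds below are synthetic stand-ins for the dataset's files.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)

# Synthetic stand-ins: a dense visual cloud on an object surface and a few sparse tactile contacts.
visual = rng.normal(size=(1000, 3))
visual /= np.linalg.norm(visual, axis=1, keepdims=True)
tactile = visual[rng.choice(len(visual), size=30, replace=False)]

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(*visual.T, s=2, c="black", label="visual")
ax.scatter(*tactile.T, s=25, c="red", label="tactile")
ax.legend()
plt.show()
```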

Haptic perception disambiguates visual perception of 3D shape

Maarten W. A. Wijntjes, Robert Volcic, Sylvia C. Pont, Jan J. Koenderink, Astrid M. L. Kappers
2009 Experimental Brain Research  
The results revealed that observers perceived a shape that was different from the vision-only sessions and closer to the veridical shape.  ...  Whereas, in general, vision is subject to ambiguities that arise from interpreting the retinal projection, our study shows that haptic input helps to disambiguate and reinterpret the visual input more  ...  Acknowledgments This research was supported by grants from the Netherlands Organisation for Scientific Research (NWO) and a grant from the EU (FP7-ICT-217077-Eyeshots).  ... 
doi:10.1007/s00221-009-1713-9 pmid:19199097 fatcat:4anfx3tvojf2pbff364huzhebq

Tac3D: A Novel Vision-based Tactile Sensor for Measuring Forces Distribution and Estimating Friction Coefficient Distribution [article]

Lunwei Zhang, Yue Wang, Yao Jiang
2022 arXiv   pre-print
In order to break through this predicament, we propose a new vision-based tactile sensor, the Tac3D sensor, for measuring the three-dimensional contact surface shape and contact force distribution.  ...  Further, combined with the global position of the tactile sensor, the 3D model of the object with friction coefficient distribution is reconstructed.  ...  The application of depth cameras greatly improved the accuracy and robustness in measuring the shape and displacement field and reduced the computational costs of 3D reconstruction.  ... 
arXiv:2202.06211v1 fatcat:cjpovh4rcvfrpielhu2l7rs4ba
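
Vision-based tactile sensors of this kind ultimately back-project a depth map of the gel surface into 3D contact geometry; a generic pinhole back-projection sketch with invented intrinsics (not Tac3D's calibration):

```python
import numpy as np

# Invented pinhole intrinsics for the sensor's internal depth camera.
fx = fy = 500.0
cx, cy = 160.0, 120.0

def depth_to_points(depth):
    """Back-project a depth image (H, W), in metres, to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat gel surface 30 mm from the camera, with a small indentation where contact occurs.
depth = np.full((240, 320), 0.030)
depth[100:140, 140:180] -= 0.002
points = depth_to_points(depth)
print(points.shape)        # (76800, 3): the measured contact-surface geometry
```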

Vision-Guided Active Tactile Perception for Crack Detection and Reconstruction [article]

Jiaqi Jiang, Guanqun Cao, Daniel Fernandes Gomes, Shan Luo
2021 arXiv   pre-print
In this paper, we propose a novel approach to detect and reconstruct cracks in concrete structures using vision-guided active tactile perception.  ...  To address the uncertainty in vision, human inspectors actively touch the surface of the structures, guided by vision, which has not been explored in autonomous crack detection.  ...  shape in 3D space.  ... 
arXiv:2105.06325v1 fatcat:qqceqf3gkfcjbgpxpgtaplu33m

Elastic Tactile Simulation Towards Tactile-Visual Perception [article]

Yikai Wang, Wenbing Huang, Bin Fang, Fuchun Sun, Chang Li
2021 arXiv   pre-print
The fusion method exhibits superiority regarding the 3D geometric reconstruction task.  ...  Tactile sensing plays an important role in robotic perception and manipulation tasks.  ...  Several previous works combine vision and touch for shape reconstruction, relying on given point cloud and depth data [1, 5, 13, 32].  ...
arXiv:2108.05013v2 fatcat:xdqzqinhzrajvizeneii2wvt5a

Toward spontaneous interaction with the Perceptive Workbench

L. Hedges, B. Singletary, J. Weeks, D. Krum, Z. Wartell, W. Ribarsky, T. Starner, B. Leibe
2000 IEEE Computer Graphics and Applications  
Acknowledgments This work is supported in part by a contract from the Army Research Lab, an NSF grant, an ONR AASert grant, and funding from Georgia Institute of Technology's Broadband Institute.  ...  We thank Brygg Ullmer, Jun Rekimoto, and Jim Davis for their discussions and assistance.  ...  contour, recognition and quantification of hand and arm gestures, and full 3D reconstruction of object shapes on the desk surface from shadows cast by the ceiling light sources.  ...
doi:10.1109/38.888008 fatcat:srjbgv4revgprag5ewm6z6a2u4
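
The shadow-based reconstruction mentioned in the last snippet can be sketched as shadow carving: a voxel above the desk is kept only if, for every calibrated light, the ray from that light through the voxel lands inside the observed shadow. The light positions, grid sizes, and shadow mask below are illustrative assumptions.

```python
import numpy as np

# Desk plane is z = 0; a coarse voxel grid sits above it (all sizes illustrative).
xs = ys = np.linspace(-0.5, 0.5, 40)
zs = np.linspace(0.01, 0.3, 20)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
occupied = np.ones(X.shape, dtype=bool)

def shadow_mask(px, py):
    """Placeholder observed shadow on the desk: a 0.2 m square around the origin."""
    return (np.abs(px) < 0.1) & (np.abs(py) < 0.1)

# Two assumed ceiling light positions (a real system would calibrate these).
for lx, ly, lz in [(-0.8, 0.0, 1.5), (0.8, 0.3, 1.5)]:
    t = lz / (lz - Z)                     # ray parameter where the light ray hits the desk
    px = lx + t * (X - lx)
    py = ly + t * (Y - ly)
    # A voxel can only belong to the object if it projects into this light's shadow.
    occupied &= shadow_mask(px, py)

print("voxels kept after shadow carving:", int(occupied.sum()))
```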
Showing results 1–15 out of 12,174 results