3,505 Hits in 2.1 sec

ODE-CNN: Omnidirectional Depth Extension Networks [article]

Xinjing Cheng, Peng Wang, Yanqi Zhou, Chenye Guan, Ruigang Yang
2020 arXiv   pre-print
To accurately recover the missing depths, we design an omnidirectional depth extension convolutional neural network (ODE-CNN), in which a spherical feature transform layer (SFTL) is embedded at the end of  ...  In this paper, we propose a low-cost 3D sensing system that combines an omnidirectional camera with a calibrated projective depth camera, where the depth from the limited FoV can be automatically extended  ...  To validate our proposed depth extension setup, we perform various studies over the recently proposed dataset with omnidirectional images [7], where we show first that by adopting one additional depth sensor  ... 
arXiv:2007.01475v1 fatcat:sz6eicbf6vbbnl6z6stockshdm

OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [article]

Changhee Won, Hochang Seok, Zhaopeng Cui, Marc Pollefeys, Jongwoo Lim
2020 arXiv   pre-print
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation, which are faster and more accurate than the existing networks.  ...  Evaluation of Omnidirectional Depth Estimation: We evaluate our proposed networks on the synthetic datasets of [15] with the ground-truth depths.  ... 
arXiv:2003.08056v1 fatcat:js4hjssugbfaldzrw2h3vyhwla

SweepNet: Wide-baseline Omnidirectional Depth Estimation [article]

Changhee Won, Jongbin Ryu, Jongwoo Lim
2019 arXiv   pre-print
In this paper, we propose a novel wide-baseline omnidirectional stereo algorithm which computes a dense depth estimate from fisheye images using a deep convolutional neural network.  ...  Instead of estimating depth maps from multiple sets of rectified images and stitching them, our approach directly generates one dense omnidirectional depth map with full 360-degree coverage at the rig  ...  Extensive experiments show that the proposed network outperforms conventional local matching methods.  ... 
arXiv:1902.10904v1 fatcat:73ikhhvghffklkyavbgmwdiivq
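
The spherical-sweep idea described above (hypothesize a set of radii around the rig, project the resulting sphere points into each fisheye view, and compare features to build a cost volume) can be sketched as follows. This is a minimal illustration of the geometry only, not the authors' code; the function name and the equirectangular angle conventions are assumptions.

```python
import numpy as np

def sphere_sweep_points(height, width, radii):
    """For each hypothesized radius, compute the 3D points of an
    equirectangular grid of rays from the rig center.  Projecting these
    points into each fisheye camera and comparing features across views
    would yield a cost volume over radii (the spherical-sweep idea)."""
    # Equirectangular angles: longitude in [-pi, pi), latitude in (-pi/2, pi/2)
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)            # shapes (height, width)
    # Unit ray directions on the sphere
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # One (height, width, 3) point set per hypothesized radius
    return np.stack([r * dirs for r in radii])

pts = sphere_sweep_points(4, 8, radii=[1.0, 2.0, 4.0])
print(pts.shape)  # (3, 4, 8, 3)
```

The per-radius point sets are what a matching network scores; the best-scoring radius per pixel becomes the omnidirectional depth estimate.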

Distortion-Tolerant Monocular Depth Estimation On Omnidirectional Images Using Dual-cubemap [article]

Zhijie Shen, Chunyu Lin, Lang Nie, Kang Liao, Yao Zhao
2022 arXiv   pre-print
Extensive experiments demonstrate the superiority of our method over other state-of-the-art solutions.  ...  In the DCDE module, we present a rotation-based dual-cubemap model to estimate accurate NFoV depth, reducing distortion at the cost of boundary discontinuity in omnidirectional depths.  ...  Then, the cubemap depth is converted back to the equirectangular format, and an encoder-decoder network revises the coarse omnidirectional depth in the boundary revision module.  ... 
arXiv:2203.09733v1 fatcat:i3ktfsa5hneonnpfrsuee3fpiu

OmniMVS: End-to-End Learning for Omnidirectional Stereo Matching [article]

Changhee Won, Jongbin Ryu, Jongwoo Lim
2019 arXiv   pre-print
In this paper, we propose a novel end-to-end deep neural network model for omnidirectional depth estimation from a wide-baseline multi-view stereo setup.  ...  The 3D encoder-decoder block takes the aligned feature volume to produce the omnidirectional depth estimate, with regularization on uncertain regions utilizing global context information.  ...  Omnidirectional Depth Estimation: Various algorithms and systems have been proposed for omnidirectional depth estimation [6, 25, 29], but very few use deep neural networks. Schönbein et al.  ... 
arXiv:1908.06257v1 fatcat:vutpubwcpzfs5jsxs3sccfbcry

GLPanoDepth: Global-to-Local Panoramic Depth Estimation [article]

Jiayang Bai, Shuichang Lai, Haoyu Qin, Jie Guo, Yanwen Guo
2022 arXiv   pre-print
In this paper, we propose a learning-based method for predicting dense depth values of a scene from a monocular omnidirectional image.  ...  However, the fully-convolutional networks that most current solutions rely on fail to capture rich global contexts from the panorama.  ...  With the popularity of omnidirectional cameras, research on depth estimation for omnidirectional images has emerged [48, 53, 58].  ... 
arXiv:2202.02796v2 fatcat:mtf5cdolwbhw5frhxublwycdwa

Dense disparity estimation from omnidirectional images

Zafer Arican, Pascal Frossard
2007 2007 IEEE Conference on Advanced Video and Signal Based Surveillance  
Omnidirectional imaging offers important advantages for the representation and processing of the plenoptic function in 3D scenes, for applications in localization or depth estimation for  ...  The proposed method shows promising performance for dense disparity estimation and can be extended efficiently to networks of several camera sensors.  ...  Experimental results are presented in Section 4, and the extension of the algorithm to networks of three cameras is presented in Section 5.  ... 
doi:10.1109/avss.2007.4425344 dblp:conf/avss/AricanF07 fatcat:6gayafdujbeb5klpn5js2c32le

PanoDepth: A Two-Stage Approach for Monocular Omnidirectional Depth Estimation [article]

Yuyan Li, Zhixin Yan, Ye Duan, Liu Ren
2022 arXiv   pre-print
In this paper, we propose a novel, model-agnostic, two-stage pipeline for omnidirectional monocular depth estimation.  ...  We conducted extensive experiments and ablation studies to evaluate PanoDepth with both the full pipeline and the individual modules in each stage.  ...  We conducted extensive experiments and ablation studies to evaluate PanoDepth with both the full pipeline and the individual networks in each stage on several public benchmark datasets.  ... 
arXiv:2202.01323v1 fatcat:nwbb2tiferfm5hmm4x3otphoye

Distortion-aware Monocular Depth Estimation for Omnidirectional Images [article]

Hong-Xiang Chen and Kunhong Li and Zhiheng Fu and Mengyi Liu and Zonghao Chen and Yulan Guo
2020 arXiv   pre-print
In this work, we propose a Distortion-Aware Monocular Omnidirectional (DAMO) dense depth estimation network to address this challenge on indoor panoramas in two steps.  ...  First, we introduce a distortion-aware module to extract calibrated semantic features from omnidirectional images.  ...  Experiments: We conduct extensive experiments on a widely used omnidirectional dataset to evaluate the performance of our DAMO network.  ... 
arXiv:2010.08942v2 fatcat:q6ipvyzlbvdxtjxue6qgbvcfiq

Learning to compose 6-DoF omnidirectional videos using multi-sphere images [article]

Jisheng Li, Yuze He, Yubin Hu, Yuxing Han, Jiangtao Wen
2021 arXiv   pre-print
The system utilizes conventional omnidirectional VR camera footage directly, without the need for a depth map or segmentation mask, thereby significantly simplifying the overall complexity of the 6-DoF omnidirectional video composition.  ...  Lately, studies that used convolutional neural networks (CNNs) have shown promising results for depth estimation and view synthesis [12] [13] [14].  ... 
arXiv:2103.05842v1 fatcat:hlkquhoixrg7lap6a6a3v4lbl4

HiMODE: A Hybrid Monocular Omnidirectional Depth Estimation Model [article]

Masum Shah Junayed, Arezoo Sadeghzadeh, Md Baharul Islam, Lai-Kuan Wong, Tarkan Aydin
2022 arXiv   pre-print
Monocular omnidirectional depth estimation is receiving considerable research attention due to its broad applications in sensing 360° surroundings.  ...  Extensive experiments conducted on three datasets (Stanford3D, Matterport3D, and SunCG) demonstrate that HiMODE can achieve state-of-the-art performance for 360° monocular depth estimation.  ...  This motivates researchers to conduct further studies on omnidirectional MDE. Several approaches based on Convolutional Neural Networks (CNNs) have been proposed for omnidirectional depth estimation.  ... 
arXiv:2204.05007v1 fatcat:drotdhhw2bcwlliiq6ujw4z3ya

OmniFlow: Human Omnidirectional Optical Flow [article]

Roman Seidel, André Apitzsch, Gangolf Hirtz
2021 arXiv   pre-print
Our paper presents OmniFlow: a new synthetic omnidirectional human optical flow dataset.  ...  Optical flow is the motion of a pixel between at least two consecutive video frames and can be estimated through an end-to-end trainable convolutional neural network.  ...  A dataset focusing on crowd analysis was created in [19]: an omnidirectional synthetic dataset which contains bounding boxes, segmentation masks, and depth maps in indoor scenarios  ... 
arXiv:2104.07960v1 fatcat:vv3kjlnjx5crtci3g5kgjkshiu

LGT-Net: Indoor Panoramic Room Layout Estimation with Geometry-Aware Transformer Network [article]

Zhigang Jiang, Zhongzheng Xiang, Jinhua Xu, Ming Zhao
2022 arXiv   pre-print
We show that using horizon-depth along with room height can provide omnidirectional-geometry awareness of the room layout in both horizontal and vertical directions.  ...  3D room layout estimation from a single panorama using deep neural networks has made great progress.  ...  The network estimates the room layout from a single panorama using an omnidirectional-geometry-aware loss on horizon-depth and room height and a planar-geometry-aware loss on normals and gradients of  ... 
arXiv:2203.01824v2 fatcat:swhrl3jstbhjjlrz3yeqhqvx5m
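
The horizon-depth-plus-room-height representation in the snippet above can be unpacked into explicit 3D wall boundaries: each panorama column contributes a horizontal distance to the wall, and a single room height lifts that boundary to floor and ceiling. A minimal sketch, assuming a fixed camera height above the floor (the function name and `cam_height` default are illustrative, not from the paper):

```python
import numpy as np

def horizon_depth_to_walls(depth, room_height, cam_height=1.6):
    """Convert per-column horizon depth (horizontal distance from the
    camera to the wall) plus a single room height into 3D floor and
    ceiling boundary points, one pair per panorama column."""
    n = len(depth)
    # Longitude of each panorama column
    lon = (np.arange(n) + 0.5) / n * 2 * np.pi - np.pi
    x = depth * np.sin(lon)
    z = depth * np.cos(lon)
    # Floor lies cam_height below the camera; ceiling sits room_height above the floor
    floor = np.stack([x, np.full(n, -cam_height), z], axis=-1)
    ceil = np.stack([x, np.full(n, room_height - cam_height), z], axis=-1)
    return floor, ceil
```

This makes explicit why the representation is geometry-aware in both directions: horizon depth fixes the horizontal wall placement, while room height fixes the vertical extent.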

Learning from THEODORE: A Synthetic Omnidirectional Top-View Indoor Dataset for Deep Transfer Learning

Tobias Scheck, Roman Seidel, Gangolf Hirtz
2020 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)  
We compare our synthetic dataset to state-of-the-art real-world datasets for omnidirectional images.  ...  Recent work on synthetic indoor datasets from perspective views has shown significant improvements in object detection results with Convolutional Neural Networks (CNNs).  ...  Beyond the segmentation and detection masks, we intend to create omnidirectional depth, skeleton, and optical flow ground truth from rendered scenes.  ... 
doi:10.1109/wacv45572.2020.9093563 dblp:conf/wacv/ScheckSH20 fatcat:rhukf4jervgmlf7sxfducbraia

Cinematic Virtual Reality With Motion Parallax From a Single Monoscopic Omnidirectional Image

Gregoire Dupont de Dinechin, Alexis Paljic
2018 2018 3rd Digital Heritage International Congress (DigitalHERITAGE) held jointly with 2018 24th International Conference on Virtual Systems & Multimedia (VSMM 2018)  
In this paper, we present a novel solution for cinematic VR with motion parallax that instead only uses a single monoscopic omnidirectional image as input.  ...  We notably propose using a VR interface to manually generate a 360-degree depth map, visualized as a 3D mesh and modified by the operator in real-time.  ...  mesh from the omnidirectional color-depth image pair.  ... 
doi:10.1109/digitalheritage.2018.8810116 dblp:conf/dh/DinechinP18 fatcat:jsh3homyzzb57a3bnpvj4tbv24