6 Hits in 1.8 sec

Learning Instance Motion Segmentation with Geometric Embedding

Zhen Leng, Jing Chen, Songnan Lin
2021 IEEE Access  
Recently, convolutional neural networks (CNNs) have been developed for motion segmentation and have shown promising results.  ...  Recently, [13] proposed a way to fuse deep semantic and deep optical flow networks.  ... 
doi:10.1109/access.2021.3062673 fatcat:6wq5eiubtfctblhl5lswnrq7ce
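
The excerpt above mentions fusing a deep semantic network with a deep optical-flow network. The sketch below shows one common fusion pattern (channel-wise concatenation of the two feature maps followed by a small convolutional head that predicts per-pixel motion labels); the module name, channel sizes, and toy inputs are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SemanticFlowFusion(nn.Module):
    """Illustrative fusion head: concatenates semantic and optical-flow
    feature maps and predicts a per-pixel moving/static mask."""
    def __init__(self, sem_channels=64, flow_channels=32, num_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(sem_channels + flow_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, sem_feat, flow_feat):
        # Both feature maps are assumed to share spatial resolution (B, C, H, W).
        x = torch.cat([sem_feat, flow_feat], dim=1)
        return self.fuse(x)  # per-pixel motion logits

# Toy usage with random features standing in for backbone outputs.
sem = torch.randn(1, 64, 60, 80)
flow = torch.randn(1, 32, 60, 80)
logits = SemanticFlowFusion()(sem, flow)
print(logits.shape)  # torch.Size([1, 2, 60, 80])
```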

Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via Geometry-Aware Adversarial Learning [article]

Borna Bešić, Abhinav Valada
2022 arXiv   pre-print
We introduce a large-scale hyperrealistic dataset with RGB-D images, semantic segmentation labels, camera poses, as well as ground-truth RGB-D information of occluded regions.  ...  We optimize our architecture using adversarial training to synthesize fine realistic textures, which enables it to hallucinate color and depth structure in occluded regions online in a spatially and temporally  ...  We represent f with a feed-forward deep neural network that operates in a recurrent manner.  ... 
arXiv:2008.05058v4 fatcat:vi37iulxizaxpmudq7zpo7hv5a
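
The abstract above describes adversarial training that hallucinates colour and depth in occluded regions. Below is a minimal sketch of one plausible generator objective for such RGB-D inpainting: an L1 reconstruction term restricted to the occlusion mask plus a non-saturating adversarial term. The function name, loss weighting, and tensor layout are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def generator_loss(pred_rgbd, target_rgbd, mask, disc_logits_fake, adv_weight=0.01):
    """Illustrative inpainting objective: L1 reconstruction inside the
    occluded (masked) region plus an adversarial term from a discriminator."""
    # Reconstruction only where mask == 1 (the region to be hallucinated).
    recon = F.l1_loss(pred_rgbd * mask, target_rgbd * mask)
    # Generator wants the discriminator to label its output as real (label 1).
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return recon + adv_weight * adv

# Toy tensors: 4-channel RGB-D images and a binary occlusion mask.
pred = torch.rand(2, 4, 64, 64, requires_grad=True)
target = torch.rand(2, 4, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
disc_fake = torch.randn(2, 1)
loss = generator_loss(pred, target, mask, disc_fake)
loss.backward()
print(float(loss))
```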

RP-VIO: Robust Plane-based Visual-Inertial Odometry for Dynamic Environments [article]

Karnik Ram, Chaitanya Kharyal, Sudarshan S. Harithas, K. Madhava Krishna
2021 arXiv   pre-print
Current best solutions merely filter dynamic objects as outliers based on the semantics of the object category.  ...  Such an approach does not scale as it requires semantic classifiers to encompass all possibly-moving object classes; this is hard to define, let alone deploy.  ...  Neira, “DynaSLAM II: Tightly-coupled multi-object tracking and SLAM”  ... 
arXiv:2103.10400v2 fatcat:tlpg4v6hxrcw5cw45ta5wdp4bm
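
The snippet above criticizes pipelines that simply discard features falling on semantically "possibly-moving" classes. The sketch below shows that baseline filtering step in NumPy; the class IDs, label map, and array shapes are hypothetical.

```python
import numpy as np

# Hypothetical semantic class IDs treated as "possibly moving".
DYNAMIC_CLASSES = {11, 12, 13}  # e.g. person, rider, car in some label map

def filter_dynamic_features(keypoints, semantic_map):
    """Baseline the snippet criticizes: drop any feature that lands on a
    pixel labelled with a possibly-moving class. keypoints: (N, 2) array
    of (row, col) pixel coordinates; semantic_map: (H, W) integer labels."""
    rows = keypoints[:, 0].astype(int)
    cols = keypoints[:, 1].astype(int)
    labels = semantic_map[rows, cols]
    keep = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return keypoints[keep]

# Toy example: 5 random keypoints on a 100x100 label map.
rng = np.random.default_rng(0)
kps = rng.integers(0, 100, size=(5, 2))
sem = rng.integers(0, 20, size=(100, 100))
print(filter_dynamic_features(kps, sem))
```

As the abstract argues, this filtering does not scale because the set of possibly-moving classes is open-ended.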

LiCaNext: Incorporating Sequential Range Residuals for Additional Advancement in Joint Perception and Motion Prediction

Yasser H. Khalil, Hussein T. Mouftah
2021 IEEE Access  
In contrast to LiCaNet, we introduce sequential range residual images into the multi-modal fusion network to further improve performance with a minimal increase in inference time.  ...  Autonomous driving can obtain accurate perception and reliable motion prediction with the support of multi-modal fusion.  ...  SMSnet [18] leverages a convolutional neural network (CNN) and relies on two sequential camera images to perform pixel-wise category labeling and motion detection.  ... 
doi:10.1109/access.2021.3123169 fatcat:drai35gmijclxf6be7qnrgy4li
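
LiCaNext's abstract refers to sequential range residual images fed into the fusion network. Assuming these residuals are per-pixel differences between consecutive LiDAR range images (an interpretation, not the paper's exact definition), a minimal computation looks like this:

```python
import numpy as np

def range_residuals(range_images):
    """Compute per-pixel residuals between consecutive LiDAR range images.
    range_images: (T, H, W) array of range values in metres; invalid
    returns are assumed to be encoded as 0."""
    residuals = []
    for t in range(1, range_images.shape[0]):
        prev, curr = range_images[t - 1], range_images[t]
        valid = (prev > 0) & (curr > 0)          # ignore missing returns
        res = np.where(valid, np.abs(curr - prev), 0.0)
        residuals.append(res)
    return np.stack(residuals)                   # (T-1, H, W)

# Toy sequence of three 64x1024 range images.
seq = np.random.uniform(0.0, 80.0, size=(3, 64, 1024)).astype(np.float32)
print(range_residuals(seq).shape)  # (2, 64, 1024)
```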

Learning Video Object Segmentation from Limited Labelled Data

Mennatullah Siam
2021
The massive computing requirements and annotation costs create a barrier to using deep learning with large-scale labelled data, e.g. in developing countries.  ...  However, the focus of current video semantic segmentation work is on learning from large-scale datasets.  ...  Fully Convolutional Networks (FCN): The initial direction in semantic segmentation using convolutional neural networks was towards patch-wise training to yield the final segmentation.  ... 
doi:10.7939/r3-knj9-f527 fatcat:mtomvoeqafeqtc3zsoww4zj3mi
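
The excerpt notes that early CNN-based semantic segmentation relied on patch-wise training rather than full-image prediction. Below is a minimal sketch of that setup, where each training sample is a small crop classified by the label of its centre pixel; the patch size, sampling scheme, and helper name are illustrative.

```python
import numpy as np

def sample_patches(image, labels, patch_size=32, num_patches=8, rng=None):
    """Patch-wise training setup the excerpt refers to: each training sample
    is a crop whose target is the class of its centre pixel."""
    rng = rng or np.random.default_rng(0)
    h, w = labels.shape
    half = patch_size // 2
    patches, targets = [], []
    for _ in range(num_patches):
        r = rng.integers(half, h - half)
        c = rng.integers(half, w - half)
        patches.append(image[r - half:r + half, c - half:c + half])
        targets.append(labels[r, c])             # centre-pixel class
    return np.stack(patches), np.array(targets)

# Toy image with 5 classes.
img = np.random.rand(128, 128, 3)
lab = np.random.randint(0, 5, size=(128, 128))
X, y = sample_patches(img, lab)
print(X.shape, y.shape)  # (8, 32, 32, 3) (8,)
```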

Exploiting Multi-Modal Fusion for Urban Autonomous Driving Using Latent Deep Reinforcement Learning [article]

Yasser Khalil, University of Ottawa
2022
To that end, this Ph.D. research initially develops LiCaNext, a novel real-time multi-modal fusion network that produces accurate joint perception and motion prediction at the pixel level.  ...  Deep reinforcement learning (DRL) has shown great potential in learning complex tasks. Recently, researchers have investigated various DRL-based approaches for autonomous driving.  ...  SMSnet [161] leverages a convolutional neural network (CNN) and relies on two sequential camera images to perform pixel-wise category labeling and motion detection.  ... 
doi:10.20381/ruor-27745 fatcat:nvpt2ufokfgxzlmdddm3vstpwm
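
The thesis abstract describes feeding a fused multi-modal latent representation into a DRL agent for driving. The sketch below shows a toy policy head that maps such a latent vector to continuous control commands; the latent dimensionality, action layout, and network shape are assumptions, not the thesis design.

```python
import torch
import torch.nn as nn

class LatentDrivingPolicy(nn.Module):
    """Illustrative actor head: maps a fused multi-modal latent vector to
    continuous steering and acceleration commands."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),   # [steering, acceleration]
            nn.Tanh(),           # bound both commands to [-1, 1]
        )

    def forward(self, latent):
        return self.net(latent)

# Toy rollout step: a fused latent state produced upstream by perception.
latent_state = torch.randn(1, 256)
action = LatentDrivingPolicy()(latent_state)
print(action)  # e.g. tensor([[ 0.12, -0.34]], grad_fn=...)
```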