1,904 Hits in 6.4 sec

Sparse Representations for Object and Ego-motion Estimation in Dynamic Scenes [article]

Hirak J Kashyap, Charless Fowlkes, Jeffrey L Krichmar
<span title="2019-03-09">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Dynamic scenes that contain both object motion and ego-motion are a challenge for monocular visual odometry (VO). ... Another issue with monocular VO is scale ambiguity, i.e., these methods cannot estimate scene depth and camera motion in real scale. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.03731v1">arXiv:1903.03731v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ftuxktlyqvh7lc7gmw47lah6ku">fatcat:ftuxktlyqvh7lc7gmw47lah6ku</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200914021339/https://arxiv.org/pdf/1903.03731v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c5/2c/c52c0cd3e6c5ce23a0f4a9555111b10133fa6a07.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.03731v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Sparse Representations for Object- and Ego-Motion Estimations in Dynamic Scenes

Hirak J. Kashyap, Charless C. Fowlkes, Jeffrey L. Krichmar
<span title="2020-07-20">2020</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/j6amxna35bbs5p42wy5crllu2i" style="color: black;">IEEE Transactions on Neural Networks and Learning Systems</a> </i> &nbsp;
Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is important for autonomous navigation and tracking. ... field (EMF) basis set, which eliminates the irrelevant components in both static and dynamic segments for the task of ego-motion estimation. ... Taken together, the existing monocular ego- and object-motion methods, except for [11], cannot estimate both 6DoF ego-motion and unconstrained pixelwise object motion in complex dynamic scenes. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tnnls.2020.3006467">doi:10.1109/tnnls.2020.3006467</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/32687472">pmid:32687472</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vyzezhpwzbhrrjebw6so5ury2q">fatcat:vyzezhpwzbhrrjebw6so5ury2q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210428192951/https://escholarship.org/content/qt7p53p8zv/qt7p53p8zv.pdf?t=qgb7mw" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4a/0c/4a0c0097978903d1790f578a0f81bdd43b8c7123.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tnnls.2020.3006467"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Robust Ego and Object 6-DoF Motion Estimation and Tracking [article]

Jun Zhang, Mina Henein, Robert Mahony, Viorela Ila
<span title="2020-07-28">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The problem of tracking self-motion as well as the motion of objects in the scene using information from a camera is known as multi-body visual odometry and is a challenging task. ... This paper proposes a robust solution to achieve accurate estimation and consistent trackability for dynamic multi-body visual odometry. ... ACKNOWLEDGMENT: This research is supported by the Australian Research Council through the Australian Centre of Excellence for Robotic Vision (CE140100016). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.13993v1">arXiv:2007.13993v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/3oitjujzencrtlyuc5lhu5vvsu">fatcat:3oitjujzencrtlyuc5lhu5vvsu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200730131542/https://arxiv.org/pdf/2007.13993v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.13993v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Static and Dynamic Objects Analysis as a 3D Vector Field

Cansen Jiang, Danda Pani Paudel, Yohan Fougerolle, David Fofi, Cedric Demonceaux
<span title="">2017</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/fnmyhnfycff7tiixnks6dqb6sy" style="color: black;">2017 International Conference on 3D Vision (3DV)</a> </i> &nbsp;
In the context of scene modelling, understanding, and landmark-based robot navigation, knowledge of the static scene parts and of the moving objects with their motion behaviours plays a vital role. ... Experiments show that the proposed Flow Field Analysis algorithm and Sparse Flow Clustering approach are highly effective for motion detection and segmentation, and yield high-quality reconstructed static ... For a mobile camera system, both foreground and background are observed as moving due to the camera ego-motion. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/3dv.2017.00035">doi:10.1109/3dv.2017.00035</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/3dim/JiangPFFD17.html">dblp:conf/3dim/JiangPFFD17</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jpwomwkbebg3nehtm4m4vxpaaq">fatcat:jpwomwkbebg3nehtm4m4vxpaaq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180727231424/https://hal.archives-ouvertes.fr/hal-01584238/file/Static%20and%20Dynamic%20Objects%20Analysis%20as%20a%203D%20Vector%20Field%20Jiang%20et%20al.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b0/b3/b0b3aff4f4f8dd49ab511c6d9d5f71a79f64f724.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/3dv.2017.00035"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Dynamic Dense RGB-D SLAM using Learning-based Visual Odometry [article]

Shihao Shen, Yilin Cai, Jiayi Qiu, Guangzhao Li
<span title="2022-05-12">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
TartanVO, like other direct methods rather than feature-based ones, estimates camera pose through dense optical flow, which applies only to static scenes and disregards dynamic objects. ... Moreover, we re-render the input frames so that the dynamic pixels are removed, and iteratively pass them back into the visual odometry to refine the pose estimate. ... Michael Kaess and his PhD student, Wei Dong, from Carnegie Mellon University for their advice. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2205.05916v1">arXiv:2205.05916v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/p6xojvqeebcanl3mmigrngwoza">fatcat:p6xojvqeebcanl3mmigrngwoza</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220514221612/https://arxiv.org/pdf/2205.05916v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f0/7f/f07f3f77b9c029d387cea265f7b083f353a1c0e7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2205.05916v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences [article]

René Schuster, Oliver Wasenmüller, Georg Kuschk, Christian Bailer, Didier Stricker
<span title="2017-10-27">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
For application in an automotive context, we further show that an optional ego-motion model helps to boost performance and blends smoothly into our approach to produce a segmentation of the scene into  ...  While most scene flow methods use either variational optimization or a strong rigid motion assumption, we show for the first time that scene flow can also be estimated by dense interpolation of sparse  ...  A rigid plane model performs poorly when applied to deformable objects, and ego-motion estimation for highly dynamic scenes is hard.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1710.10096v1">arXiv:1710.10096v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bw4rk6gq3fgy3j7wmrxmun54lm">fatcat:bw4rk6gq3fgy3j7wmrxmun54lm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191012223813/https://arxiv.org/pdf/1710.10096v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1e/82/1e822391a1a2f7c99dcf9a833ed681943068f4c2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1710.10096v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Self-Supervised Pillar Motion Learning for Autonomous Driving [article]

Chenxu Luo, Xiaodong Yang, Alan Yuille
<span title="2021-04-18">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we seek to answer the research question of whether abundant unlabeled data collections can be utilized for accurate and efficient motion learning. ... Current motion estimation methods usually require a vast amount of annotated training data from self-driving scenes. ... F^t_ego(u, v) is the motion caused by ego-vehicle motion, and F^t_obj(u, v) is the true object motion. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.08683v1">arXiv:2104.08683v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/g7xx35yltzafrobimjwnwxvcvm">fatcat:g7xx35yltzafrobimjwnwxvcvm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210421093339/https://arxiv.org/pdf/2104.08683v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/94/c2/94c22c98d38d983fdbd41d75488e2de5176082aa.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.08683v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Any Motion Detector: Learning Class-agnostic Scene Dynamics from a Sequence of LiDAR Point Clouds [article]

Artem Filatov, Andrey Rykov, Viacheslav Murashkin
<span title="2020-04-24">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Object detection and motion parameter estimation are crucial tasks for safe self-driving vehicle navigation in a complex urban environment. ... In this work we propose a novel real-time approach to temporal context aggregation for motion detection and motion parameter estimation based on 3D point cloud sequences. ... Ego-Motion Compensation Layer: To estimate the motion of dynamic objects in the scene, we need to aggregate temporal context. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.11647v1">arXiv:2004.11647v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ffohtbditjexldl2u2433ibuiq">fatcat:ffohtbditjexldl2u2433ibuiq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200501204708/https://arxiv.org/pdf/2004.11647v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.11647v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Enhanced Initialization for Track-before-Detection based Multibody Motion Segmentation from a Moving Camera

Hernan Gonzalez, Arjun Balakrishnan, Sergio A. Rodriguez F., Abdelhafid Elouardi
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/cfmch5qrm5ckxpkho4uhbkgznm" style="color: black;">2019 IEEE Intelligent Transportation Systems Conference (ITSC)</a> </i> &nbsp;
The method relies on epipolar geometry, a RANSAC formulation, and motion estimation for segmenting ego-motion and exo-motions. ... Vision-based motion segmentation provides key information for dynamic scene understanding and decision making in autonomous navigation. ... This stage outputs a first estimate of the size, location, and number of dynamic objects in the scene. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/itsc.2019.8917018">doi:10.1109/itsc.2019.8917018</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/itsc/GonzalezBFE19.html">dblp:conf/itsc/GonzalezBFE19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cjxqyuc4wvh63adx5jzm5rbtcu">fatcat:cjxqyuc4wvh63adx5jzm5rbtcu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200510154917/https://hal.archives-ouvertes.fr/hal-02393264/file/Enhanced_Initialization_for_Track_before_Detection_based_Multibody_Motion_Segmentation_from_a_Moving_Camera_VF_HAL_watermark.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1d/04/1d04affa290dd93821e0feb0637aea288012eab1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/itsc.2019.8917018"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Robust Stereo Visual Odometry Based on Probabilistic Decoupling Ego-Motion Estimation and 3D SSC

Yan Wang, Hui-qi Miao, Lei Guo
<span title="">2019</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
This paper presents a robust stereo visual odometry based on decoupled ego-motion estimation using probabilistic matches, rejecting the outliers of dynamic objects through motion segmentation. ... The results show that our method is more robust, as it detects outliers more accurately in dynamic environments and achieves higher precision in motion estimation. ... Hence, in this paper, we design a probabilistic decoupled framework for robust ego-motion estimation that estimates rotation first, with translation calculated after dynamic objects are rejected by motion ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2018.2886824">doi:10.1109/access.2018.2886824</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ucrph2egvjhtxawsyfwgmhkr44">fatcat:ucrph2egvjhtxawsyfwgmhkr44</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201107121056/https://ieeexplore.ieee.org/ielx7/6287639/8600701/08576514.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/84/7f/847f6043ba77f4afc9d77b4f908fd6ee0cf34798.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2018.2886824"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion [article]

Nicholas F. Y. Chen
<span title="2018-03-14">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We show, for the first time, event-based car detection under ego-motion in a real environment at 100 frames per second, with a test average precision of 40.3% relative to our annotated ground truth. ... However, event-based data sets are scarce, and labels are even rarer for tasks such as object detection. ... simple objects in a controlled environment or detecting objects without camera ego-motion. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1709.09323v3">arXiv:1709.09323v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sfkfbvgnv5hn3a27isk7zngno4">fatcat:sfkfbvgnv5hn3a27isk7zngno4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191022030951/https://arxiv.org/pdf/1709.09323v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fa/10/fa1041cb36549a9f3c08a8c85e607f4c0317bc5f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1709.09323v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Speed and Memory Efficient Dense RGB-D SLAM in Dynamic Scenes

Bruce Canovas, Michele Rombaut, Amaury Negre, Denis Pellerin, Serge Olympieff
<span title="2020-10-24">2020</span> <i title="IEEE"> 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) </i> &nbsp;
In this paper we propose a fast RGB-D SLAM built around a rough and lightweight 3D representation for dense compact mapping in dynamic indoor environments, targeting mainstream computing platforms ... Many of them are also limited to static environments and small inter-frame motions. ... Moving Object Detection — 1) Ego-Motion Compensation: To detect dynamic elements, we chose to model the camera ego-motion in image space as a 2D perspective transformation matrix H ∈ SE(2), because ego-motion ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iros45743.2020.9341542">doi:10.1109/iros45743.2020.9341542</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/na7wddpekzd6jei4zfbyta53yy">fatcat:na7wddpekzd6jei4zfbyta53yy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210716110313/https://hal.univ-grenoble-alpes.fr/hal-03143986/document" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/19/49/1949b03e72fc10d60074537b245dd19a16fd30c7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iros45743.2020.9341542"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Towards Semantic SLAM: 3D Position and Velocity Estimation by Fusing Image Semantic Information with Camera Motion Parameters for Traffic Scene Analysis

Mostafa Mansour, Pavel Davidson, Oleg Stepanov, Robert Piché
<span title="2021-01-23">2021</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/kay2tsbijbawliu45dnhvyvgsq" style="color: black;">Remote Sensing</a> </i> &nbsp;
In this paper, an EKF (extended Kalman filter)-based algorithm is proposed to estimate the 3D position and velocity components of different cars in a scene by fusing the semantic information and car model, extracted from successive frames, with camera motion parameters. ... By doing so, a 3D object-based map of the scene, with respect to the ego-car frame, can be created and updated over time. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/rs13030388">doi:10.3390/rs13030388</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gry5f6wq3bfkzg6d77vcqb7de4">fatcat:gry5f6wq3bfkzg6d77vcqb7de4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210129121946/https://res.mdpi.com/d_attachment/remotesensing/remotesensing-13-00388/article_deploy/remotesensing-13-00388-v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/50/47/50473cc2a15de2e6c96a4a4747c2525b4ff2e3f3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/rs13030388"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>

PointFlowNet: Learning Representations for Rigid Motion Estimation From Point Clouds

Aseem Behl, Despoina Paschalidou, Simon Donne, Andreas Geiger
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</a> </i> &nbsp;
In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. ... We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation-equivariant representation to circumvent this problem. ... the network splits into three branches for ego-motion estimation, 3D object detection, and 3D scene flow estimation, respectively. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.00815">doi:10.1109/cvpr.2019.00815</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/BehlPDG19.html">dblp:conf/cvpr/BehlPDG19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kskqbnc6grbwbjkzfytocasvqm">fatcat:kskqbnc6grbwbjkzfytocasvqm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190707151927/http://openaccess.thecvf.com/content_CVPR_2019/papers/Behl_PointFlowNet_Learning_Representations_for_Rigid_Motion_Estimation_From_Point_Clouds_CVPR_2019_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5d/24/5d2421c59d383ca3ce9b162c75d97c63a9a8a3db.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.00815"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Pseudo-Labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection Under Ego-Motion

Nicholas F. Y. Chen
<span title="">2018</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</a> </i> &nbsp;
We show, for the first time, event-based car detection under ego-motion in a real environment at 100 frames per second, with a test average precision of 40.3% relative to our annotated ground truth. ... However, event-based data sets are scarce, and labels are even rarer for tasks such as object detection. ... We hope that this work will encourage researchers to use pseudo-labels for supervised learning techniques on DVS data and advance the frontiers of this field, and to publish more data sets containing synchronized ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvprw.2018.00107">doi:10.1109/cvprw.2018.00107</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/Chen18.html">dblp:conf/cvpr/Chen18</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5qgvqmvogrcopacmbh3cg6yvgy">fatcat:5qgvqmvogrcopacmbh3cg6yvgy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321170459/http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w12/Chen_Pseudo-Labels_for_Supervised_CVPR_2018_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b8/33/b83351d861933a85a005ee0e2a6a605419905524.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvprw.2018.00107"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>
Showing results 1–15 of 1,904