An open-source navigation system for micro aerial vehicles
2013
Autonomous Robots
This paper presents an open-source indoor navigation system for quadrotor micro aerial vehicles (MAVs), implemented in the ROS framework. The system requires a minimal set of sensors, including a planar laser range-finder and an inertial measurement unit. We address the issues of autonomous control, state estimation, path planning, and teleoperation, and provide interfaces that allow the system to integrate seamlessly with existing ROS navigation tools for 2D SLAM and 3D mapping. All components run in real time onboard the MAV, with state estimation and control operating at 1 kHz. A major focus in our work is modularity and abstraction, allowing the system to be both flexible and hardware-independent. All the software and hardware components which we have developed, as well as documentation and test data, are available online.
doi:10.1007/s10514-012-9318-8
fatcat:cwh5qojkgnbr7lhwlc2eowv65y
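As a rough illustration of the laser-plus-IMU state estimation this abstract describes, here is a minimal Python sketch of a fixed-gain complementary filter that fuses high-rate gyro integration with drift-free headings from a laser scan matcher. It is not the authors' implementation; the gain, the 1 kHz loop rate, and all names are assumptions.

    import numpy as np

    ALPHA = 0.02     # correction gain per scan-matcher update (assumed)
    GYRO_DT = 0.001  # 1 kHz fast loop, matching the rate quoted above

    class YawEstimator:
        def __init__(self):
            self.yaw = 0.0

        def predict(self, gyro_z):
            """Integrate the body-rate gyro at the fast loop rate."""
            self.yaw += gyro_z * GYRO_DT

        def correct(self, yaw_scan):
            """Blend in an absolute heading from laser scan matching."""
            err = np.arctan2(np.sin(yaw_scan - self.yaw),
                             np.cos(yaw_scan - self.yaw))  # wrap to [-pi, pi]
            self.yaw += ALPHA * err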
Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs
2015
Sensors
... Roberto G. Valenti and Ivan Dryanovski conceived the mathematical derivation of the presented approach and wrote the implementation of the algorithm; ...
doi:10.3390/s150819302
pmid:26258778
pmcid:PMC4570372
fatcat:y77ugzn465a4didxuyscf53yym
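To make the flavor of such a filter concrete, here is a hedged Python sketch of a complementary quaternion filter: propagate orientation with the gyro, then nudge it toward the gravity direction measured by the accelerometer. The gain and helper names are illustrative assumptions, not the paper's notation or derivation.

    import numpy as np

    GAIN = 0.01  # accelerometer correction gain (assumed)

    def quat_mul(p, q):
        """Hamilton product of two quaternions (w, x, y, z)."""
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def predict(q, gyro, dt):
        """First-order quaternion integration of body angular rate (rad/s)."""
        dq = 0.5 * quat_mul(q, np.array([0.0, *gyro]))
        q = q + dq * dt
        return q / np.linalg.norm(q)

    def correct(q, accel):
        """Tilt correction: small rotation toward the accelerometer's
        gravity estimate (valid when external acceleration is small)."""
        g_body = accel / np.linalg.norm(accel)
        # gravity direction predicted by the current orientation
        w, x, y, z = q
        g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z),
                           w*w - x*x - y*y + z*z])
        # small-angle rotation aligning prediction with measurement
        axis = np.cross(g_pred, g_body)
        q_err = np.concatenate(([1.0], 0.5 * GAIN * axis))
        q = quat_mul(q, q_err)
        return q / np.linalg.norm(q)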
Semantic Indoor Navigation with a Blind-User Oriented Augmented Reality
2013
2013 IEEE International Conference on Systems, Man, and Cybernetics
The aim of this paper is to design an inexpensive, wearable navigation system that can aid the navigation of a visually impaired user. A novel approach of utilizing the floor plan map posted in buildings is used to acquire a semantic plan. Extracted landmarks such as room numbers and doors act as parameters to infer the waypoints to each room. This provides a mental mapping of the environment to design a navigation framework for future use. A human motion model is used to predict a path based on how real humans ambulate toward a goal while avoiding obstacles. We demonstrate the possibilities of augmented reality (AR) as a blind-user interface to perceive the physical constraints of the real world using haptic and voice augmentation. The haptic belt vibrates to direct the user toward the travel destination based on the metric localization at each step. Moreover, the travel route is presented using voice guidance, which is achieved by accurate estimation of the user's location and confirmed by extracting the landmarks, based on landmark localization. The results show that it is feasible to assist a blind user to travel independently by providing the constraints required for safe navigation with user-oriented augmented reality.
doi:10.1109/smc.2013.611
dblp:conf/smc/JosephZDXYT13
fatcat:uzkbpx75nfbzxo7klagza2wbuq
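The haptic guidance described above can be pictured with a toy Python sketch: the bearing from the user's estimated pose to the next waypoint selects which belt motor to pulse. The motor count and layout are assumptions, not the paper's hardware.

    import math

    N_MOTORS = 8  # evenly spaced around the waist, motor 0 facing forward

    def motor_for_waypoint(user_x, user_y, user_heading, wp_x, wp_y):
        """Pick the vibration motor pointing toward the next waypoint."""
        bearing = math.atan2(wp_y - user_y, wp_x - user_x) - user_heading
        bearing %= 2 * math.pi
        sector = 2 * math.pi / N_MOTORS
        return int((bearing + sector / 2) // sector) % N_MOTORS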
Chisel: Real Time Large Scale 3D Reconstruction Onboard a Mobile Device using Spatially Hashed Signed Distance Fields
2015
Robotics: Science and Systems XI
We describe CHISEL: a system for real-time house-scale (300 square meters or more) dense 3D reconstruction onboard a Google Tango [1] mobile device, using a dynamic spatially-hashed truncated signed distance field [2] for mapping and visual-inertial odometry for localization. By aggressively culling parts of the scene that do not contain surfaces, we avoid needless computation and wasted memory. Even under very noisy conditions, we produce high-quality reconstructions through the use of space carving. We are able to reconstruct and render very large scenes at a resolution of 2-3 cm in real time on a mobile device without the use of GPU computing. The user is able to view and interact with the reconstruction in real time through an intuitive interface. We provide both qualitative and quantitative results on publicly available RGB-D datasets [3], and on datasets collected in real time from two devices.
doi:10.15607/rss.2015.xi.040
dblp:conf/rss/KlingensmithDSX15
fatcat:pvjjptcp3jgphict4amakwjcwe
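The core data structure named in this abstract, a spatially-hashed TSDF, can be sketched in a few lines of Python: voxels are grouped into fixed-size chunks stored in a hash map, so memory is only allocated near observed surfaces. This is an illustrative sketch, not CHISEL's implementation; all parameter values are assumptions (the 3 cm voxel echoes the resolution quoted above).

    import numpy as np

    VOXEL = 0.03   # voxel edge length in metres (assumed)
    CHUNK = 16     # voxels per chunk side (assumed)
    TRUNC = 0.10   # truncation band in metres (assumed)

    class HashedTSDF:
        def __init__(self):
            # chunk key -> (sdf values, integration weights)
            self.chunks = {}

        def _voxel(self, p):
            v = np.floor(p / VOXEL).astype(int)
            key = tuple(v // CHUNK)   # which chunk holds this voxel
            idx = tuple(v % CHUNK)    # voxel index inside the chunk
            return key, idx

        def integrate_point(self, origin, hit):
            """Update voxels along the ray near an observed surface point."""
            depth = np.linalg.norm(hit - origin)
            direction = (hit - origin) / depth
            # sample only the truncation band around the surface
            for d in np.arange(depth - TRUNC, depth + TRUNC, VOXEL):
                p = origin + direction * d
                sdf = depth - d  # signed distance along the ray
                key, idx = self._voxel(p)
                if key not in self.chunks:
                    self.chunks[key] = (np.zeros((CHUNK,) * 3),
                                        np.zeros((CHUNK,) * 3))
                vals, wts = self.chunks[key]
                # weighted running average, clamped to the truncation band
                w = wts[idx]
                vals[idx] = (vals[idx] * w +
                             np.clip(sdf, -TRUNC, TRUNC)) / (w + 1)
                wts[idx] = w + 1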
Editor's note: Special Issue on Robotics: Science and Systems, 2015
2017
Autonomous Robots
... Jingru Luo, Kris Hauser • Autonomy infused teleoperation with application to brain ... • Large-scale, real-time 3D scene reconstruction on a mobile device, Ivan Dryanovski, Matthew Klingensmith, Siddhartha S ...
doi:10.1007/s10514-017-9647-8
fatcat:xmdpgbob6rdcbkqqu7ew2t5baq
TSDF-based change detection for consistent long-term dense reconstruction and dynamic object discovery
2017
Fig. 1: Change Detection Algorithm: (left) one of 10 reconstructed scene observations; (center) reconstruction of the static environment after 10 observations; (right) discovered dynamic objects.
Robots that are operating for extended periods of time need to be able to deal with changes in their environment and represent them adequately in their maps. In this paper, we present a novel 3D reconstruction algorithm based on an extended Truncated Signed Distance Function (TSDF) that allows us to continuously refine the static map while simultaneously obtaining 3D reconstructions of dynamic objects in the scene. This is a challenging problem because map updates happen incrementally and are often incomplete. Previous work typically performs change detection on point clouds, surfels, or maps, which are not able to distinguish between unexplored and empty space. In contrast, our TSDF-based representation naturally contains this information and thus allows us to solve the scene-differencing problem more robustly. We demonstrate the algorithm's performance as part of a system for unsupervised object discovery and class recognition. We evaluated our algorithm on challenging datasets that we recorded over several days with RGB-D enabled tablets. To stimulate further research in this area, all of our datasets are publicly available.
doi:10.3929/ethz-b-000189737
fatcat:hhmzduwsvzfypjhd7qzqg74ri4
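The key property claimed above, that a TSDF can separate unexplored space from observed-empty space, is easy to see in a minimal Python sketch of scene differencing over two voxel grids. The threshold and array layout are assumptions, not the paper's method.

    import numpy as np

    SDF_CHANGE = 0.05  # metres of SDF disagreement that counts as a change

    def detect_changes(sdf_a, w_a, sdf_b, w_b):
        """Return a boolean mask of voxels that changed between two scans.

        Each grid stores a signed distance and an integration weight.
        Voxels with zero weight were never observed, so they are excluded
        and new geometry in previously unseen space is not misreported as
        dynamic; point-cloud differencing cannot make this distinction.
        """
        observed = (w_a > 0) & (w_b > 0)
        changed = np.abs(sdf_a - sdf_b) > SDF_CHANGE
        return observed & changed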
Visual-Inertial Teach and Repeat for Aerial Inspection
[article]
2018
arXiv
pre-print
... presented experiments received support from members of the Autonomous Systems Lab and Google Tango, most importantly: Michael Burri, Helen Oleynikova, Zachary Taylor, Fabian Blöchliger, Mingyang Li, Ivan Dryanovski, Simon Lynen, and Konstantine Tsotsos. ...
arXiv:1803.09650v1
fatcat:46urxwwabfe4xgwqb6nbhs6t4q
A Humanoid Robot Companion for Wheelchair Users
[chapter]
2013
Lecture Notes in Computer Science
Node written by Ivan Dryanovski and William Morris, available from http://www.ros.org/wiki/laser_scan_matcher. Node written by Brian Gerkey and Andrew Howard, available from http://www.ros.org/wiki ...
doi:10.1007/978-3-319-02675-6_43
fatcat:r5zorl22kreytni3yaz4njp6pa
Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis
2017
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We want to thank Ivan Dryanovski and Jürgen Sturm for their valuable feedback and help during this project, and Wenzel Jakob for the Mitsuba raytracer [13]. ...
doi:10.1109/cvpr.2017.693
dblp:conf/cvpr/DaiQN17
fatcat:th4cmo4jlnh4fpe4ilcbdm6hsq
Building Optimal Radio-Frequency Signal Maps
2014
2014 22nd International Conference on Pattern Recognition
ACKNOWLEDGEMENTS The authors would like to acknowledge the helpful contributions by Ravishankar Palaniappan, Ivan Dryanovski and Hao Tong during the experimental (data acquisition) phase. ...
doi:10.1109/icpr.2014.178
dblp:conf/icpr/MirowskiHW14
fatcat:bfpdwlmmqfbjtaaniqn2amsh7u
A flight altitude estimator for multirotor UAVs in dynamic and unstructured indoor environments
2017
2017 International Conference on Unmanned Aircraft Systems (ICUAS)
Ivan Dryanovski et al. [16] propose an approach for robust altitude estimation of the UAV by deflecting the rays from a horizontally mounted laser to the ground using a mirror. ...
doi:10.1109/icuas.2017.7991467
fatcat:fvgjdsixjjfr3duymxh2rfp24u
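A hedged sketch of the mirror-deflection idea cited in this snippet: a beam of a horizontal planar laser is bent downward by a mirror, and the returned range is projected onto the gravity axis using the vehicle's attitude. The function name and the perfect 90-degree deflection are illustrative assumptions.

    import numpy as np

    def altitude_from_deflected_beam(range_m, roll, pitch):
        """Project a mirror-deflected, downward-pointing laser range
        onto the vertical axis given the vehicle's roll and pitch."""
        return range_m * np.cos(roll) * np.cos(pitch)

    # e.g. a 2.0 m return at 5 degrees of roll and pitch:
    # altitude_from_deflected_beam(2.0, np.radians(5), np.radians(5)) ~= 1.985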
A Wearable Indoor Navigation System with Context Based Decision Making For Visually Impaired
2016
International Journal of Advanced Robotics and Automation
Ivan Dryanovski for his work on visual odometry. Prof. ...
doi:10.15226/2473-3032/1/3/00115
fatcat:uaixslfwdzbr3jleasdf4rx65u
Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis
[article]
2017
arXiv
pre-print
We want to thank Ivan Dryanovski and Jürgen Sturm for their valuable feedback and help during this project, and Wenzel Jakob for the Mitsuba raytracer [15]. ...
arXiv:1612.00101v2
fatcat:j2y5z5hv6vbypf767eu2kbrzj4
Learning Shared Control by Demonstration for Personalized Wheelchair Assistance
2018
IEEE Transactions on Haptics
Software, written by Ivan Dryanovski and William Morris, is available from http://www.ros.org/wiki/laser_scan_matcher. ...
doi:10.1109/toh.2018.2804911
pmid:29994370
fatcat:gm5qv7ugtzdnbi4ie474qfpvwq
NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
[article]
2021
arXiv
pre-print
Barron, Neal Wadhwa, Max Dzitsiuk, Michael Schoenberg, Vivek Verma, Ambrus Csaszar, Eric Turner, Ivan Dryanovski, Joao Afonso, Jose Pascoal, Konstantine Tsotsos, Mira Leung, Mirko Schmidt ...
arXiv:2104.00681v1
fatcat:gsuhqfhiirbhdeelqolnbwn6hu
Showing results 1 — 15 out of 17 results