A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
A Model-free Deep Reinforcement Learning Approach To Maneuver A Quadrotor Despite Single Rotor Failure
[article]
2021
arXiv
pre-print
In this paper, we develop a model-free deep reinforcement learning approach for a quadrotor to recover from a single rotor failure. ...
The approach is based on Soft Actor-Critic, which enables the vehicle to hover, land, and perform complex maneuvers. ...
CONCLUSIONS In this paper, we modeled and evaluated a model-free deep reinforcement learning algorithm that uses Soft Actor-Critic methods to handle a single rotor failure in quadrotors. ...
arXiv:2109.10488v1
fatcat:2yxq6cyaejhk7enex6zxf3wzsm
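The Soft Actor-Critic method named in this record maximizes an entropy-regularized return. As a minimal illustration (not the paper's implementation), the soft state value that SAC's critic targets, V(s) = E[Q(s,a) - α·log π(a|s)], can be computed for a toy discrete action distribution; all names here are illustrative:

```python
import numpy as np

def soft_state_value(q_values, log_probs, alpha):
    """Entropy-regularized (soft) state value used in Soft Actor-Critic:
    V(s) = E_{a~pi}[ Q(s, a) - alpha * log pi(a|s) ],
    shown here for a discrete action distribution."""
    probs = np.exp(log_probs)
    return float(np.sum(probs * (q_values - alpha * log_probs)))

# Toy example: two equally likely actions with identical Q-values.
q = np.array([1.0, 1.0])
logp = np.log(np.array([0.5, 0.5]))
v = soft_state_value(q, logp, alpha=0.1)  # 1.0 plus an entropy bonus
```

The temperature α trades off expected return against policy entropy; larger α rewards more exploratory policies, which is relevant when the vehicle must discover recovery behaviors after a rotor failure.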
Model Predictive Control for Micro Aerial Vehicles: A Survey
[article]
2020
arXiv
pre-print
Furthermore, an overview of recent research trends on the combined application of modern deep reinforcement learning techniques and model predictive control for multirotor vehicles is presented. ...
... learning methods have been utilized and whether the controller refers to free flight or other tasks such as physical interaction or load transportation. ...
... and deep-neural-network-based reinforcement learning approaches. ...
arXiv:2011.11104v1
fatcat:dil4kdnfcvfmxc7j6n3pqytlyi
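Model predictive control, the subject of this survey, repeatedly optimizes over a short horizon and applies only the first action before re-planning. A minimal sketch under toy assumptions (1D double-integrator dynamics, a three-element discrete action set, and brute-force enumeration instead of a real solver; all names and weights are illustrative):

```python
import itertools
import numpy as np

def step(state, u, dt=0.1):
    """1D double-integrator dynamics: state is [position, velocity],
    the input u is an acceleration command."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + u * dt])

def mpc_action(state, horizon=4, actions=(-1.0, 0.0, 1.0)):
    """Receding-horizon control: enumerate action sequences over a short
    horizon, score each with a quadratic cost, apply only the first action."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(actions, repeat=horizon):
        s, cost = np.array(state, dtype=float), 0.0
        for u in seq:
            s = step(s, u)
            cost += s[0] ** 2 + 0.1 * s[1] ** 2 + 0.01 * u ** 2  # drive to origin
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop: start away from the origin and let MPC regulate toward it.
state = np.array([1.0, 0.0])
for _ in range(50):
    state = step(state, mpc_action(state))
```

Real multirotor MPC replaces the enumeration with a continuous optimizer (e.g. a QP or nonlinear solver) and a full rigid-body model, but the receding-horizon structure is the same.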
Deep Reinforcement Learning-Based Adaptive Controller for Trajectory Tracking and Altitude Control of an Aerial Robot
2022
Applied Sciences
The proposed controlling approach employs a reinforcement learning-based algorithm to actively estimate the controller parameters of the aerial robot. ...
This research study presents a new adaptive attitude and altitude controller for an aerial robot. ...
In addition to the aforementioned optimal control and active tuning approaches (for PIDs), Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) are other new approaches that have recently ...
doi:10.3390/app12094764
fatcat:hl7lsicz4zfjrd5m5pdtsdmdl4
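The adaptive controller in this record tunes controller gains online with RL. As a hedged sketch of just the underlying PID law being tuned (the gains `kp`, `ki`, `kd` and the first-order plant below are fixed and chosen arbitrarily for illustration, not learned):

```python
class PID:
    """Minimal discrete PID controller; the gains (kp, ki, kd) are the
    parameters an adaptive or RL-based tuner would adjust online."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate a toy altitude model (pure integrator plant) toward a 1.0 m setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02)
alt = 0.0
for _ in range(1000):          # 20 s of simulated time
    u = pid.update(1.0 - alt)
    alt += u * 0.02            # altitude responds directly to the command
```

An RL-based tuner, as surveyed here, would treat `(kp, ki, kd)` as actions and the tracking error as (negative) reward, re-estimating the gains as flight conditions change.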
A Review on Comparative Remarks, Performance Evaluation and Improvement Strategies of Quadrotor Controllers
2021
Technologies
This paper conducts a thorough analysis of the current literature on the effects of multiple controllers on quadrotors, focusing on two separate approaches: (i) controller hybridization and (ii) controller ...
The quadrotor is an ideal platform for testing control strategies because of its non-linearity and under-actuated configuration, allowing researchers to evaluate and verify control strategies. ...
Freddi et al. (2014) designed a quadrotor model for any rotor-failure case using feedback linearization. ...
doi:10.3390/technologies9020037
fatcat:ju7url6qo5bkhkses7a2raom7m
A Survey on Fault Diagnosis and Fault-Tolerant Control Methods for Unmanned Aerial Vehicles
2021
Machines
Therefore, a fault-monitoring system must be specifically designed to supervise and debug each of these subsystems, so that any faults can be addressed before they lead to disastrous consequences. ...
Typically, a UAV consists of three types of subsystems: actuators, main structure and sensors. ...
A deep learning approach that utilized a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) technique was developed in [71] for the fault diagnosis of actuators on a six-rotor ...
doi:10.3390/machines9090197
fatcat:53feevbq25ggzdbe4kx3x7kdl4
Learning Deep Control Policies for Autonomous Aerial Vehicles with MPC-Guided Policy Search
[article]
2016
arXiv
pre-print
Reinforcement learning can in principle forego the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is difficult to apply to unstable systems that ...
This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. ...
However, model-free RL is difficult to apply to unstable systems such as quadrotors, due to the possibility of catastrophic failure during training. ...
arXiv:1509.06791v2
fatcat:oy2gr3zsxbhazfbp7eqotih63e
Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search
2016
2016 IEEE International Conference on Robotics and Automation (ICRA)
Reinforcement learning can in principle forego the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is difficult to apply to unstable systems that ...
This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. ...
However, model-free RL is difficult to apply to unstable systems such as quadrotors, due to the possibility of catastrophic failure during training. ...
doi:10.1109/icra.2016.7487175
dblp:conf/icra/ZhangKLA16
fatcat:4mryusn4sbfxdpbifmo7gmguau
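In MPC-guided policy search, as described in these two records, an MPC teacher generates state-action data and a policy is then trained on it by supervised learning. A deliberately simplified sketch of that distillation step, assuming a synthetic linear teacher in place of a real MPC controller (the paper trains a deep network on raw sensor observations; a least-squares fit keeps the example minimal):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "teacher": a stand-in for MPC actions on sampled states
# (in the paper this would be a trajectory-optimizing controller).
states = rng.normal(size=(200, 2))          # [position, velocity] samples
teacher_gain = np.array([-1.5, -0.8])       # assumed linear teacher policy
actions = states @ teacher_gain

# Distill the teacher into a student policy via least squares.
student_gain, *_ = np.linalg.lstsq(states, actions, rcond=None)
```

Because the supervised student never explores on its own, the guided approach sidesteps the catastrophic-failure problem that both snippets note for model-free RL on unstable platforms.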
Laser-Based Reactive Navigation for Multirotor Aerial Robots using Deep Reinforcement Learning
2018
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
In this paper, we present a fast reactive navigation algorithm using Deep Reinforcement Learning applied to multirotor aerial robots. ...
Taking as input the 2D-laser range measurements and the relative position of the aerial robot with respect to the desired goal, the proposed algorithm is successfully trained in a Gazebo-based simulation ...
ACKNOWLEDGMENT The authors would like to thank the UPM and the MON-CLOA Campus of International Excellence for funding the predoctoral contract of the corresponding author. ...
doi:10.1109/iros.2018.8593706
dblp:conf/iros/SampedroBRPC18
fatcat:zaywz6tdp5gqpk5jf7jcmjhire
Final Program
2020
2020 International Conference on Unmanned Aircraft Systems (ICUAS)
... and a great pleasure to welcome you to this year's conference. ...
... and in my capacity as the President of the Association, it is a privilege, a great pleasure and an honor to welcome you to the 2020 International Conference on Unmanned Aircraft Systems (ICUAS'20). ...
This approach searches for obstacle-free, low computational cost, smooth, and dynamically feasible paths by analyzing a point cloud of the target environment, using a modified connect RRT-based path planning ...
doi:10.1109/icuas48674.2020.9214039
fatcat:7jr6chhfija47kgtwoxqmfmmoe
Drone Deep Reinforcement Learning: A Review
2021
Electronics
In this paper, we described the state of the art of one subset of these algorithms: the deep reinforcement learning (DRL) techniques. ...
To name a few: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition ...
Acknowledgments: We would like to show our gratitude to Prince Sultan University, Riyadh, Kingdom of Saudi Arabia. ...
doi:10.3390/electronics10090999
doaj:57ededb7d1a0445eaf34975cb6625c1f
fatcat:kya3fbblszd27i4exlybnji4ni
Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems
[article]
2020
arXiv
pre-print
Recent successes combine reinforcement learning algorithms and deep neural networks, despite reinforcement learning not yet being widely applied to robotics and real-world scenarios. ...
This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and ...
... model-free approach. ...
arXiv:2008.13221v1
fatcat:aofoenmwcvckvagbttrkskevty
A Review on IoT Deep Learning UAV Systems for Autonomous Obstacle Detection and Collision Avoidance
2019
Remote Sensing
In this regard, Deep Learning (DL) techniques have arisen as a promising alternative for improving real-time obstacle detection and collision avoidance for highly autonomous UAVs. ...
Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, offer unprecedented opportunities to boost a wide array of large-scale Internet of Things (IoT) applications. ...
... and videos from a DL-UAV), but it is also more expensive and heavier than alternatives like single-rotor UAVs. ...
doi:10.3390/rs11182144
fatcat:54xs26xnvzf7rfa5b64tuzkz44
A Survey on Swarming With Micro Air Vehicles: Fundamental Challenges and Constraints
2020
Frontiers in Robotics and AI
Once the operations of the single MAV are sufficiently secured for a task, the subsequent challenge is to allow the MAVs to sense one another within a neighborhood of interest. ...
Robustness is often hailed as a pillar of swarm robotics, and a minimum level of local reliability is needed for it to propagate to the global level. ...
In this approach, a clustering algorithm is used to learn a model of the robots' expected behavior. ...
doi:10.3389/frobt.2020.00018
pmid:33501187
pmcid:PMC7806031
fatcat:p3zp5y3r65cn7nilxzu6gedxaq
Survey on Coverage Path Planning with Unmanned Aerial Vehicles
2019
Drones
The surveyed coverage approaches are classified according to a classical taxonomy, such as no decomposition, exact cellular decomposition, and approximate cellular decomposition. ...
This paper aims to explore and analyze the existing studies in the literature related to the different approaches employed in coverage path planning problems, especially those using UAVs. ...
The first step is to build a 3D terrain model using control points in order to obtain an analytical model. ...
doi:10.3390/drones3010004
fatcat:j3nsrywfnjcy3aw3zrug6xzmyu
Application Specific Drone Simulators: Recent Advances and Challenges
2019
Simulation modelling practice and theory
A performance evaluation through a relevant drone simulator becomes an indispensable procedure to test features, configurations, and designs, demonstrating superiority over comparative schemes and suitability ...
However, incidents such as fatal system failures, malicious attacks, and disastrous misuses have raised concerns in the recent past. ...
... and reinforcement learning algorithms for various autonomous drones. ...
doi:10.1016/j.simpat.2019.01.004
fatcat:oy4rssrl5fagtixx5wrr747apm
Showing results 1–15 of 36