Learning to Fly via Deep Model-Based Reinforcement Learning
[article] · 2020 · arXiv pre-print
We show that "learning to fly" can be achieved with less than 30 minutes of experience with a single drone, and can be deployed solely using onboard computational resources and sensors, on a self-built ...
In this work, by leveraging a learnt probabilistic model of drone dynamics, we learn a thrust-attitude controller for a quadrotor through model-based reinforcement learning. ...
Our principal contribution is the learning of a controller capable of flying a self-built drone to marked positions (see Figure 1 ) using model-based reinforcement learning (MBRL) methods. ...
arXiv:2003.08876v3 · fatcat:gonirfi77rdvxbditv2pnop5lu
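The model-based recipe in this entry — fit a dynamics model from flight experience, then use it to choose actions — can be sketched in miniature. The 1-D linear "drone", the least-squares model, and the random-shooting planner below are illustrative stand-ins, not the paper's probabilistic model or controller:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "drone": true dynamics s' = s + 0.1*a (unknown to the learner).
def true_step(s, a):
    return s + 0.1 * a

# 1) Collect random transitions (the analogue of the "30 minutes of experience").
S = rng.uniform(-1, 1, size=200)
A = rng.uniform(-1, 1, size=200)
S_next = true_step(S, A)

# 2) Fit a linear dynamics model s' ~ w0*s + w1*a by least squares.
X = np.stack([S, A], axis=1)
w, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def model_step(s, a):
    return w[0] * s + w[1] * a

# 3) Random-shooting planner: sample candidate actions and pick the one
#    whose *predicted* next state is closest to the goal.
def plan(s, goal, n_candidates=256):
    cands = rng.uniform(-1, 1, size=n_candidates)
    preds = model_step(s, cands)
    return cands[np.argmin((preds - goal) ** 2)]

s0, goal = 0.0, 0.05
a = plan(s0, goal)
# true_step(s0, a) should land close to the goal
```

Real MBRL controllers replace the least-squares fit with a learnt probabilistic network and plan over action sequences rather than a single step, but the fit-then-plan loop is the same.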
Learning to Fly: A Distributed Deep Reinforcement Learning Framework for Software-Defined UAV Network Control
2021 · IEEE Open Journal of the Communications Society
To address these challenges, this article introduces a new architectural framework to control and optimize UAV networks based on Deep Reinforcement Learning (DRL). ...
and provides a scalable solution for large UAV networks. ...
For these reasons, in this work, we select a data-driven approach and aim to solve the UAV network control problem through Deep Reinforcement Learning (DRL). ...
doi:10.1109/ojcoms.2021.3092690 · fatcat:oq4zsqljhvdabi6jhrtyzdmpbu
Deep Learning and Reinforcement Learning for Autonomous Unmanned Aerial Systems: Roadmap for Theory to Deployment
[article] · 2020 · arXiv pre-print
Then we discuss how reinforcement learning is explored for using this information to provide autonomous control and navigation for UAS. ...
The current UAS state-of-the-art still depends on a remote human controller with robust wireless links to perform several of these applications. ...
In [47] , Fitted Value Iteration (FVI) is used to design a velocity control system for a quadrotor UAV. ...
arXiv:2009.03349v2 · fatcat:5ylreoukrfcrtorzzp44mntjum
A Survey on Applications of Reinforcement Learning in Flying Ad-Hoc Networks
2021 · Electronics
Hence, many researchers have introduced reinforcement learning (RL) algorithms in FANETs to overcome these shortcomings. ...
Flying ad-hoc networks (FANET) are one of the most important branches of wireless ad-hoc networks, consisting of multiple unmanned air vehicles (UAVs) performing assigned tasks and communicating with each ...
RLSRP with PPMAC: A reinforcement-learning-based self-learning routing protocol (RLSRP) with position-prediction-based directional MAC (PPMAC) is a hybrid communication protocol proposed in [49] wherein ...
doi:10.3390/electronics10040449 · fatcat:74luhksatzadplhluwzeszltuq
Survey on Q-Learning-Based Position-Aware Routing Protocols in Flying Ad Hoc Networks
2022 · Electronics
However, designing an efficient multihop routing protocol for FANETs is challenging due to high mobility, dynamic topology, limited energy, and short transmission range. ...
Recently, owing to the advantages of multi-objective optimization, Q-learning (QL)-based position-aware routing protocols have improved the performance of routing in FANETs. ...
In addition, multi-rotor UAVs fly in confined areas. Typically, each UAV has four major modules: flight control, energy management, computation, and communication modules. ...
doi:10.3390/electronics11071099 · fatcat:k76imusc65enrojpl3aqfsvg2e
Learning to estimate UAV created turbulence from scene structure observed by onboard cameras
[article] · 2022 · arXiv pre-print
We learn a mapping from control input and images captured by onboard cameras to turbulence. ...
While machine learning has been used in the past to estimate UAV created turbulence, this was restricted to flat grounds or diffuse in-flight air turbulences, both without taking into account obstacles ...
RELATED WORK: Control theory is classically used to design UAV controllers [15], [16]. ...
arXiv:2203.14726v1 · fatcat:54kdt2cjjjd5ze7vtzwz773s7e
On the Application of Machine Learning to the Design of UAV-Based 5G Radio Access Networks
2020 · Electronics
In this paper, we discuss why, how, and which types of ML methods are useful for designing U-RANs, by focusing in particular on supervised and reinforcement learning strategies. ...
To this aim, a cost-effective and flexible strategy consists of complementing terrestrial RANs with unmanned aerial vehicles (UAVs). ...
Table 1. Basic reinforcement learning elements in U-RANs. ...
doi:10.3390/electronics9040689 · fatcat:q6bab7cmnjgh3ntkpovdshd7tm
In Situ MIMO-WPT Recharging of UAVs Using Intelligent Flying Energy Sources
2021 · Drones
We propose an intelligent trajectory selection algorithm for the tUAVs based on a deep reinforcement learning model called Proximal Policy Optimization (PPO) to optimize the energy transfer gain. ...
Therefore, developing efficient mechanisms for in situ power transfer to recharge UAV batteries holds potential to extend their mission time. ...
doi:10.3390/drones5030089 · fatcat:vtlob3pt7vgyjoc4hzvipaaqxq
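PPO, the algorithm this trajectory-selection scheme builds on, optimizes a clipped surrogate objective that caps how far one update can move the policy. The sketch below computes that loss on made-up numbers; the UAV-specific states, actions, and rewards of the paper are not reproduced here:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017).

    ratio     -- pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage -- advantage estimate for each sample
    Returns the quantity to *minimize* (negative of the clipped objective).
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# A large probability ratio with positive advantage is clipped at 1 + eps,
# so a single lucky sample cannot drag the policy arbitrarily far.
ratio = np.array([0.5, 1.0, 3.0])
adv = np.array([1.0, 1.0, 1.0])
loss = ppo_clip_loss(ratio, adv)  # contributions 0.5, 1.0, and 1.2 (clipped)
```

In a full implementation this loss is backpropagated through the policy network; here it only illustrates the clipping behaviour.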
An Intelligent Cluster-Based Routing Scheme in 5G Flying Ad Hoc Networks
2022 · Applied Sciences
Compared to the traditional reinforcement learning approach, the proposed DQN-based vertical routing scheme has shown to increase network lifetime by up to 60%, reduce energy consumption by up to 20%, ...
Flying ad hoc network (FANET) is an application of 5G access network, which consists of unmanned aerial vehicles or flying nodes with scarce resources and high mobility rates. ...
Reinforcement Learning: Reinforcement learning (RL), as shown in Algorithm 2, is also embedded in CC for comparison. ...
doi:10.3390/app12073665 · fatcat:5nxtcbwg7bewrjr7o6jjm7prsu
QMR: Q-learning based Multi-objective optimization Routing protocol for Flying Ad Hoc Networks
2020 · Computer Communications
A network with reliable and rapid communication is critical for Unmanned Aerial Vehicles (UAVs). Flying Ad Hoc Networks (FANETs) consisting of UAVs is a new paradigm of wireless communication. ...
However, the highly dynamic topology of FANETs and limited energy of UAVs have brought great challenges to the routing design of FANETs. ...
To achieve more complex applications which are very difficult for traditional Mobile Ad Hoc Networks (MANETs) or individual UAV, Flying Ad Hoc Networks (FANETs) consisting of UAVs have been intensively ...
doi:10.1016/j.comcom.2019.11.011 · fatcat:ag5krnikq5c6pfx6kmctm2aer4
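The Q-learning backbone shared by QMR and similar QL-based routing protocols is a temporal-difference backup over (node, next-hop) pairs. The four-node topology, reward numbers, and hyperparameters below are invented for illustration, not taken from QMR:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-node FANET: 0 is the source, 3 the destination.
neighbors = {0: [1, 2], 1: [3], 2: [3]}

# Per-hop reward: +10 on delivery, otherwise a cost; link 0->2 is pricier
# (standing in for QMR's multi-objective link-quality terms).
def reward(i, j):
    return 10.0 if j == 3 else (-2.0 if (i, j) == (0, 2) else -1.0)

Q = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(300):           # repeated packet deliveries
    node = 0
    while node != 3:
        # epsilon-greedy next-hop selection
        if rng.random() < eps:
            nxt = rng.choice(neighbors[node])
        else:
            nxt = max(neighbors[node], key=lambda j: Q[(node, j)])
        # TD backup: value of the best onward hop from the chosen neighbor
        future = max((Q[(nxt, k)] for k in neighbors.get(nxt, [])), default=0.0)
        Q[(node, nxt)] += alpha * (reward(node, nxt) + gamma * future - Q[(node, nxt)])
        node = nxt

# After training, greedy routing at node 0 prefers the cheaper 0 -> 1 -> 3 route.
```

Real protocols distribute this table across nodes and fold position, mobility, and energy into the reward; the update rule itself is unchanged.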
Monitoring System-Based Flying IoT in Public Health and Sports Using Ant-Enabled Energy-Aware Routing
2021 · Journal of Healthcare Engineering
In recent decades, the Internet of flying networks has made significant progress. Several aerial vehicles communicate with one another to form flying ad hoc networks. ...
WBAN can be merged with aerial vehicles to collect data regarding health and transfer it to a base station. ...
Ant-based reinforcement works on pheromone modeling to choose the route for flying vehicles. The approach "AntHocNet" [26] is a hybrid algorithm having both reactive and proactive components. ...
doi:10.1155/2021/1686946 · pmid:34306586 · pmcid:PMC8270719 · fatcat:7hp3qbhpyjgyza27v2526n33ay
5G Network on Wings: A Deep Reinforcement Learning Approach to UAV-based Integrated Access and Backhaul
[article] · 2022 · arXiv pre-print
A deep reinforcement learning algorithm is developed to jointly optimize the tilt of the access and backhaul antennas of the UAV-BS as well as its three-dimensional placement. ...
In this paper, we study how to control a UAV-BS in both static and dynamic environments. ...
This can relax the hardware requirements and reduce the computational complexity for the UAV-BS, which can in turn reduce the power consumption and increase the flying time of the UAV-BS. ...
arXiv:2202.02006v2 · fatcat:rihctqidffcehjt7ucgnz3jahq
Federated Learning-Based Cognitive Detection of Jamming Attack in Flying Ad-Hoc Network
2019 · IEEE Access
Third, given a huge number of UAV clients, the global model may need to choose a sub-group of UAV clients for providing a timely global update. ...
Flying Ad-hoc Network (FANET) is a decentralized communication system solely formed by Unmanned Aerial Vehicles (UAVs). ...
In [20] , smart jamming detection in a UAV network was proposed using reinforcement learning. ...
doi:10.1109/access.2019.2962873 · fatcat:u72rla7pljfl7gojdmzoekocdu
A Survey on Energy Optimization Techniques in UAV-Based Cellular Networks: From Conventional to Machine Learning Approaches
[article] · 2022 · arXiv pre-print
The main idea is to deploy base stations on UAVs and operate them as flying base stations, thereby bringing additional capacity to where it is needed. ...
In this survey, we investigate different energy optimization techniques with a top-level classification in terms of the optimization algorithm employed; conventional and machine learning (ML). ...
arXiv:2204.07967v1 · fatcat:2x7dyojlvjfknibfsliiq4ozw4
Deep Reinforcement Learning for End-to-End Local Motion Planning of Autonomous Aerial Robots in Unknown Outdoor Environments: Real-Time Flight Experiments
2021 · Sensors
The proposed system uses an actor–critic-based reinforcement learning technique to train the aerial robot in a Gazebo simulator to perform a point-goal navigation task by directly mapping the noisy MAV's ...
Intensive simulations and real-time experiments were conducted and compared with a nonlinear model predictive control technique to show the generalization capabilities to new unseen environments, and robustness ...
To successfully perform the desired PANCA task, a hybrid reward function r(t) (shaped: with respect to flight time; sparse: with respect to laser scan data and the current distance from the goal) was designed ...
doi:10.3390/s21072534 · pmid:33916624 · fatcat:w4r3kha6sjgtbddk6i3s3y7gde
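The hybrid reward r(t) described in this entry mixes a dense time-shaping term with sparse terminal terms from the laser scans and the distance to the goal. A minimal sketch, with thresholds and magnitudes that are assumptions rather than the authors' values:

```python
def hybrid_reward(dist_to_goal, min_laser_range, dt,
                  goal_radius=0.5, collision_radius=0.3, time_penalty=0.05):
    """Hypothetical hybrid reward in the spirit of the paper's r(t).

    Returns (reward, episode_done). Sparse terms fire on collision or
    goal arrival; otherwise a shaped per-step penalty discourages long
    flight times. All constants here are illustrative.
    """
    if min_laser_range < collision_radius:   # sparse: too close to an obstacle
        return -100.0, True
    if dist_to_goal < goal_radius:           # sparse: point-goal reached
        return 100.0, True
    return -time_penalty * dt, False         # shaped: penalize elapsed time
```

The shaping term gives the actor-critic learner a gradient on every step, while the sparse terms define success and failure without hand-tuning intermediate waypoints.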
Showing results 1 — 15 out of 1,007 results