118 Hits in 1.4 sec

Safe Exploration for Interactive Machine Learning [article]

Matteo Turchetta, Felix Berkenkamp, Andreas Krause
2019 arXiv   pre-print
To exclude these decisions, we use the ergodicity operator introduced by Turchetta et al.  ...  In our experiments, we set β_t = 3 for all t ≥ 1, as suggested by Turchetta et al. (2016).  ...  See Lemma 1 in Turchetta et al. (2016). We proceed by induction. For n = 1, we have R_reach(S) ⊆ R_reach(R) by Lemma 8 of Turchetta et al. (2016).  ... 
arXiv:1910.13726v1 fatcat:xw6z3cf5t5cyfpcqfyluubud4a

GoSafe: Globally Optimal Safe Robot Learning [article]

Dominik Baumann and Alonso Marco and Matteo Turchetta and Sebastian Trimpe
2021 arXiv   pre-print
When learning policies for robotic systems from data, safety is a major concern, as violation of safety constraints may cause hardware damage. SafeOpt is an efficient Bayesian optimization (BO) algorithm that can learn policies while guaranteeing safety with high probability. However, its search space is limited to an initially given safe region. We extend this method by exploring outside the initial safe area while still guaranteeing safety with high probability. This is achieved by learning a set of initial conditions from which we can recover safely using a learned backup controller in case of a potential failure. We derive conditions for guaranteed convergence to the global optimum and validate GoSafe in hardware experiments.
arXiv:2105.13281v1 fatcat:kas62vsd7bdnpacwymhu2imln4
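
As context for the abstract above: GoSafe builds on SafeOpt's mechanism of expanding a safe set using Gaussian process confidence bounds. Below is a minimal sketch of that underlying mechanism only, assuming a 1-D policy parameter, a scikit-learn GP, and a hypothetical constraint f(x) ≥ 0; the backup controllers and global exploration that distinguish GoSafe are not reproduced here.

```python
# Sketch of the SafeOpt-style safe-set expansion that GoSafe extends.
# Assumptions (not from the paper): 1-D parameter, RBF kernel, and the
# safety condition "objective f(x) >= h" with threshold h = 0.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Hypothetical black-box performance measure; "safe" means f(x) >= h.
    return np.sin(3 * x) + 0.5

X_grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
h, beta = 0.0, 2.0                       # safety threshold, confidence scaling
X_obs = np.array([[0.5]])                # initially known safe parameter
y_obs = f(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
for _ in range(10):
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_grid, return_std=True)
    lower, upper = mu - beta * sigma, mu + beta * sigma
    safe = lower >= h                    # pessimistic safe set
    if not safe.any():
        break
    # Evaluate the safe point with the highest optimistic value.
    idx = np.flatnonzero(safe)[np.argmax(upper[safe])]
    X_obs = np.vstack([X_obs, X_grid[idx:idx + 1]])
    y_obs = np.append(y_obs, f(X_grid[idx, 0]))

print("best safe parameter found:", X_obs[np.argmax(y_obs), 0])
```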

Robust Model-free Reinforcement Learning with Multi-objective Bayesian Optimization [article]

Matteo Turchetta, Andreas Krause, Sebastian Trimpe
2019 arXiv   pre-print
In reinforcement learning (RL), an autonomous agent learns to perform complex tasks by maximizing an exogenous reward signal while interacting with its environment. In real-world applications, test conditions may differ substantially from the training scenario and, therefore, focusing on pure reward maximization during training may lead to poor results at test time. In these cases, it is important to trade off between performance and robustness while learning a policy. While several results exist for robust, model-based RL, the model-free case has not been widely investigated. In this paper, we cast the robust, model-free RL problem as a multi-objective optimization problem. To quantify the robustness of a policy, we use the delay margin and gain margin, two robustness indicators that are common in control theory. We show how these metrics can be estimated from data in the model-free setting. We use multi-objective Bayesian optimization (MOBO) to efficiently solve this expensive-to-evaluate, multi-objective optimization problem. We show the benefits of our robust formulation in both sim-to-real and pure hardware experiments to balance a Furuta pendulum.
arXiv:1910.13399v1 fatcat:ih5b2qnjw5depp74yspgrtkc54
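
The delay margin and gain margin mentioned in the abstract have standard frequency-domain definitions. The sketch below computes both with SciPy for an assumed loop transfer function L(s) = 4 / (s(s+1)(s+2)); the paper itself estimates these margins from data in a model-free setting, so this is only a textbook illustration of the two robustness indicators.

```python
# Sketch: gain and delay margins of an assumed loop transfer function,
# purely to illustrate the robustness objectives named in the abstract.
import numpy as np
from scipy import signal

L = signal.TransferFunction([4.0], [1.0, 3.0, 2.0, 0.0])  # 4 / (s(s+1)(s+2))
w = np.logspace(-2, 2, 2000)
w, mag_db, phase_deg = signal.bode(L, w)

# Gain margin: 1/|L| at the phase-crossover frequency (phase = -180 deg).
i_pc = np.argmin(np.abs(phase_deg + 180.0))
gain_margin_db = -mag_db[i_pc]

# Delay margin: phase margin (rad) / gain-crossover frequency (rad/s).
i_gc = np.argmin(np.abs(mag_db))          # |L| = 1  <=>  0 dB
phase_margin_rad = np.deg2rad(180.0 + phase_deg[i_gc])
delay_margin = phase_margin_rad / w[i_gc]

print(f"gain margin: {gain_margin_db:.2f} dB, delay margin: {delay_margin:.3f} s")
```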

Safe Reinforcement Learning via Curriculum Induction [article]

Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, Alekh Agarwal
2021 arXiv   pre-print
In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, pages 53-59. [44] Turchetta, M., Berkenkamp, F., and Krause, A. (2016).  ...  In Advances in Neural Information Processing Systems, pages 4312-4320. [45] Turchetta, M., Berkenkamp, F., and Krause, A. (2019). Safe exploration for interactive machine learning.  ... 
arXiv:2006.12136v2 fatcat:nrk22liksfdg3derjntf7mqgum

Mixed-Variable Bayesian Optimization [article]

Erik Daxberger, Anastasia Makarova, Matteo Turchetta, Andreas Krause
2019 arXiv   pre-print
Matteo Turchetta was supported through the ETH-MPI Center for Learning Systems.  ...  Dosovitskiy, A., Tobias Springenberg, J., and Brox, T. (2015).  ...  Mixed-Variable Bayesian Optimization Erik Daxberger∗,† Anastasia Makarova∗ Matteo  ... 
arXiv:1907.01329v3 fatcat:srdpkrqzpvc4bjdypnlrzupxza

Mixed-Variable Bayesian Optimization

Erik Daxberger, Anastasia Makarova, Matteo Turchetta, Andreas Krause
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
The optimization of expensive-to-evaluate, black-box, mixed-variable functions, i.e. functions that have continuous and discrete inputs, is a difficult and yet pervasive problem in science and engineering. In Bayesian optimization (BO), special cases of this problem that consider fully continuous or fully discrete domains have been widely studied. However, few methods exist for mixed-variable domains, and none of them can handle discrete constraints that arise in many real-world applications. In this paper, we introduce MiVaBo, a novel BO algorithm for the efficient optimization of mixed-variable functions that combines a linear surrogate model based on expressive feature representations with Thompson sampling. We propose an effective method to optimize its acquisition function, a challenging problem for mixed-variable domains, making MiVaBo the first BO method that can handle complex constraints over the discrete variables. Moreover, we provide the first convergence analysis of a mixed-variable BO algorithm. Finally, we show that MiVaBo is significantly more sample efficient than state-of-the-art mixed-variable BO algorithms on several hyperparameter tuning tasks, including the tuning of deep generative models.
doi:10.24963/ijcai.2020/361 dblp:conf/ijcai/LuoHG20 fatcat:ql5jucnuqzaxvk2fcecnn2jf5m
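
To make the surrogate-plus-Thompson-sampling idea concrete, here is a minimal sketch under strong simplifying assumptions: one continuous and one discrete variable, a hand-rolled toy feature map and objective, and a grid/enumeration acquisition optimizer in place of MiVaBo's constraint-aware one.

```python
# Sketch of the MiVaBo idea: a Bayesian linear surrogate over mixed
# features plus Thompson sampling. All names below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def features(c, d):
    # Toy joint feature map over one continuous and one discrete variable
    # (the paper uses richer, constraint-aware feature representations).
    return np.array([1.0, c, c**2, float(d), c * d])

def objective(c, d):
    # Hypothetical mixed-variable black box to be maximized.
    return -(c - 0.3 * d) ** 2 + 0.1 * d

dim, noise = 5, 0.1
A = np.eye(dim)                       # posterior precision of the weights
b = np.zeros(dim)                     # precision-weighted mean accumulator
cont_grid = np.linspace(-1.0, 1.0, 101)
disc_vals = [0, 1, 2]

for t in range(25):
    # Thompson sampling: draw one weight vector from the linear-model posterior.
    cov = np.linalg.inv(A)
    w = rng.multivariate_normal(cov @ b, cov)
    # Maximize the sampled model by enumerating the discrete variable and
    # grid-searching the continuous one (a stand-in for MiVaBo's acquisition
    # optimizer, which also handles discrete constraints).
    _, c_star, d_star = max((w @ features(c, d), c, d)
                            for c in cont_grid for d in disc_vals)
    y = objective(c_star, d_star) + rng.normal(0.0, noise)
    phi = features(c_star, d_star)
    A += np.outer(phi, phi) / noise**2
    b += phi * y / noise**2

print("final query:", round(c_star, 3), d_star)
```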

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes

Matteo Turchetta, Felix Berkenkamp, Andreas Krause
2016 arXiv   pre-print
In classical reinforcement learning, when exploring an environment, agents accept arbitrary short-term loss for long-term gain. This is infeasible for safety-critical applications, such as robotics, where even a single unsafe action may cause system failure. In this paper, we address the problem of safely exploring finite Markov decision processes (MDP). We define safety in terms of an a priori unknown safety constraint that depends on states and actions. We aim to explore the MDP under this constraint, assuming that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop a novel algorithm for this task and prove that it is able to completely explore the safely reachable part of the MDP without violating the safety constraint. To achieve this, it cautiously explores safe states and actions in order to gain statistical confidence about the safety of unvisited state-action pairs from noisy observations collected while navigating the environment. Moreover, the algorithm explicitly considers reachability when exploring the MDP, ensuring that it does not get stuck in any state with no safe way out. We demonstrate our method on digital terrain models for the task of exploring an unknown map with a rover.
arXiv:1606.04753v2 fatcat:rbi4n4eruva6le7j42ubzob54q
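
A minimal sketch of the safe-exploration loop described above, on a 1-D chain of states: the GP's pessimistic lower bound defines the safe set, and a crude one-step adjacency check stands in for the paper's reachability and returnability operators. The terrain feature, kernel, and parameters are illustrative assumptions.

```python
# Sketch of safe exploration on a 1-D chain of states (toy stand-in for
# the paper's MDP setting; all constants here are assumptions).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

n, h, beta = 30, 0.0, 2.0
states = np.arange(n, dtype=float).reshape(-1, 1)
safety = 1.0 - 0.1 * np.abs(states.ravel() - 15.0)   # hypothetical terrain feature
visited = [15]                                       # known safe start state

gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), alpha=1e-4)
for _ in range(20):
    gp.fit(states[visited], safety[visited])
    mu, sigma = gp.predict(states, return_std=True)
    safe = (mu - beta * sigma) >= h                  # pessimistic safe set
    # Candidate states must be safe, new, and adjacent to a visited state,
    # so they are reachable and allow a safe step back (a one-step stand-in
    # for the paper's reachability and returnability operators).
    cands = [s for s in range(n)
             if safe[s] and s not in visited
             and any(abs(s - v) == 1 for v in visited)]
    if not cands:
        break
    visited.append(max(cands, key=lambda s: sigma[s]))  # most informative

print("safely explored:", sorted(visited))
```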

Learning-Based Model Predictive Control for Safe Exploration

Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause
2018 2018 IEEE Conference on Decision and Control (CDC)  
Reinforcement learning has been successfully used to solve difficult tasks in complex unknown environments. However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications. In this paper, we attempt to bridge the gap between learning-based techniques, which are scalable and highly autonomous but often unsafe, and robust control techniques, which have a solid theoretical foundation that guarantees safety but often require extensive expert knowledge to identify the system and estimate disturbance sets. We combine a provably safe learning-based MPC scheme that allows for input-dependent uncertainties with techniques from model-based RL to solve tasks with only limited prior knowledge. We evaluate the resulting algorithm on a reinforcement learning task in a simulated cart-pole dynamical system with safety constraints.
doi:10.1109/cdc.2018.8619572 dblp:conf/cdc/KollerBT018 fatcat:omofexgb6vbzrnuluspinjmnmu
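
The following sketch illustrates the general idea of planning with a learned model while rejecting uncertain, potentially unsafe plans: a GP one-step model, random-shooting optimization, and a crude additive uncertainty inflation in place of the paper's rigorous robust MPC machinery. The scalar dynamics and all bounds are assumptions for illustration.

```python
# Sketch: plan with a learned one-step model and only accept action
# sequences whose uncertainty-inflated predictions respect |x| <= x_max.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def true_dynamics(x, u):
    # Hypothetical scalar system standing in for the cart-pole.
    return 0.9 * x + 0.5 * u + 0.05 * np.sin(x)

# Fit a GP model of x_{t+1} from randomly collected (x, u) data.
X = rng.uniform(-1, 1, size=(40, 2))
Y = true_dynamics(X[:, 0], X[:, 1])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4).fit(X, Y)

def safe_mpc_action(x0, horizon=5, n_samples=200, beta=2.0, x_max=0.8):
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        u_seq = rng.uniform(-1, 1, horizon)
        x, spread, feasible, cost = x0, 0.0, True, 0.0
        for u in u_seq:
            mu, sigma = gp.predict([[x, u]], return_std=True)
            x, spread = mu[0], spread + beta * sigma[0]  # crude inflation
            if abs(x) + spread > x_max:                  # worst-case check
                feasible = False
                break
            cost += x**2 + 0.1 * u**2
        if feasible and cost < best_cost:
            best_u, best_cost = u_seq[0], cost
    return best_u  # falls back to 0.0 if no sampled sequence is feasible

print("first safe action from x0 = 0.5:", safe_mpc_action(0.5))
```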

Safe and Efficient Model-free Adaptive Control via Bayesian Optimization [article]

Christopher König, Matteo Turchetta, John Lygeros, Alisa Rupenyan, Andreas Krause
2021 arXiv   pre-print
Adaptive control approaches yield high-performance controllers when a precise system model or suitable parametrizations of the controller are available. Existing data-driven approaches for adaptive control mostly augment standard model-based methods with additional information about uncertainties in the dynamics or about disturbances. In this work, we propose a purely data-driven, model-free approach for adaptive control. Tuning low-level controllers based solely on system data raises concerns about the underlying algorithm's safety and computational performance. Thus, our approach builds on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization. We introduce several computational and algorithmic modifications to GoOSE that enable its practical use on a rotational motion system. We numerically demonstrate for several types of disturbances that our approach is sample efficient, outperforms constrained Bayesian optimization in terms of safety, and achieves the performance optima computed by grid evaluation. We further demonstrate the proposed adaptive control approach experimentally on a rotational motion system.
arXiv:2101.07825v2 fatcat:657dg4dnk5b2lj2ft5nofgtqiy
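
GoOSE, which this paper builds on, wraps an unconstrained optimizer with a safe-set check: a proposed candidate is evaluated only once it is provably safe, and otherwise informative safe points are queried to expand the safe set. Below is a minimal 1-D sketch of that decision rule, with an assumed safety function and GP setup:

```python
# Sketch of the GoOSE-style decision rule: certify or reject a candidate
# proposed by a standard BO step, expanding the safe set in between.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):
    # Hypothetical safety measure; the tuning is safe where g(x) >= h.
    return 1.0 - abs(x - 0.5)

X_grid = np.linspace(-1.0, 2.0, 300).reshape(-1, 1)
h, beta = 0.5, 2.0
X_obs, y_obs = [[0.5]], [g(0.5)]
target = np.array([[0.9]])     # candidate proposed by the unconstrained BO step

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
for step in range(15):
    gp.fit(np.array(X_obs), np.array(y_obs))
    mu_t, sigma_t = gp.predict(target, return_std=True)
    if mu_t[0] - beta * sigma_t[0] >= h:
        print(f"step {step}: target provably safe, evaluate it")
        break
    if mu_t[0] + beta * sigma_t[0] < h:
        print(f"step {step}: target provably unsafe, reject it")
        break
    # Otherwise expand the safe set: query the most uncertain point that
    # is already known (pessimistically) to be safe.
    mu, sigma = gp.predict(X_grid, return_std=True)
    pess = mu - beta * sigma >= h
    if not pess.any():
        break
    idx = np.flatnonzero(pess)[np.argmax(sigma[pess])]
    X_obs.append([X_grid[idx, 0]])
    y_obs.append(g(X_grid[idx, 0]))
```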

Learning-based Model Predictive Control for Safe Exploration [article]

Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause
2018 arXiv   pre-print
Email: kollert@informatik.uni-freiburg.de Felix Berkenkamp, Matteo Turchetta and Andreas Krause are with the Learning & Adaptive Systems Group, Department of Computer Science, ETH Zurich, Switzerland.  ... 
arXiv:1803.08287v3 fatcat:7v6bk4w7wzb67cv4ll4wsix32a

Information Directed Reward Learning for Reinforcement Learning [article]

David Lindner and Matteo Turchetta and Sebastian Tschiatschek and Kamil Ciosek and Andreas Krause
2022 arXiv   pre-print
For many reinforcement learning (RL) applications, specifying a reward is difficult. This paper considers an RL setting where the agent obtains information about the reward only by querying an expert that can, for example, evaluate individual states or provide binary preferences over trajectories. From such expensive feedback, we aim to learn a model of the reward that allows standard RL algorithms to achieve high expected returns with as few expert queries as possible. To this end, we propose Information Directed Reward Learning (IDRL), which uses a Bayesian model of the reward and selects queries that maximize the information gain about the difference in return between plausibly optimal policies. In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types. Moreover, it achieves similar or better performance with significantly fewer queries by shifting the focus from reducing the reward approximation error to improving the policy induced by the reward model. We support our findings with extensive evaluations in multiple environments and with different query types.
arXiv:2102.12466v3 fatcat:cgcewu473zcjdjmgokcwwfhb4i
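
The query-selection rule described above can be made concrete under a Bayesian linear reward model (a simplification; the paper uses more general GP models): pick the query that most reduces the posterior variance of the return difference between two candidate policies. Everything below (feature dimension, visitation vectors, one-hot queries) is an illustrative assumption.

```python
# Sketch of an IDRL-style query rule under a Bayesian linear reward model.
import numpy as np

rng = np.random.default_rng(2)
D = 6                              # reward feature dimension (assumed)
Sigma = np.eye(D)                  # posterior covariance over reward weights
noise = 0.5

mu_pi1 = rng.uniform(0, 1, D)      # feature expectations of two
mu_pi2 = rng.uniform(0, 1, D)      # plausibly optimal policies
d = mu_pi1 - mu_pi2                # return-difference direction

candidate_queries = np.eye(D)      # querying state i reveals feature i

def posterior_after(phi, Sigma, noise):
    # Rank-one Bayesian update of the weight covariance for one query.
    s = Sigma @ phi
    return Sigma - np.outer(s, s) / (phi @ s + noise**2)

for t in range(3):
    var_now = d @ Sigma @ d        # current uncertainty about the return gap
    gains = [var_now - d @ posterior_after(phi, Sigma, noise) @ d
             for phi in candidate_queries]
    best = int(np.argmax(gains))
    print(f"query {t}: ask expert about state {best} "
          f"(variance {var_now:.3f} -> {var_now - gains[best]:.3f})")
    Sigma = posterior_after(candidate_queries[best], Sigma, noise)
```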

Safe Model-based Reinforcement Learning with Stability Guarantees [article]

Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, Andreas Krause
2017 arXiv   pre-print
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied to safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
arXiv:1705.08551v3 fatcat:d2kydamlrvarvmduonwymjh2oy
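
A toy version of the Lyapunov certification step the abstract describes: a state is certified only if the worst-case prediction within the model's confidence interval still decreases the Lyapunov function, and the region of attraction is the largest level set on which this holds. The scalar system, the error bound, and v(x) = x² are assumptions for illustration.

```python
# Sketch of a Lyapunov-based safety certificate with model uncertainty.
import numpy as np

def model(x):
    # Learned mean dynamics of a scalar system (hypothetical).
    return 0.8 * x + 0.05 * x**2

def error_bound(x):
    # Assumed state-dependent confidence bound on the model error.
    return 0.05 * np.abs(x)

def v(x):
    # Lyapunov candidate.
    return x**2

xs = np.linspace(-2.0, 2.0, 401)
# Worst case of v over the prediction interval [model - e, model + e]
# (v is convex, so the maximum sits at one of the endpoints).
worst_next_v = np.maximum(v(model(xs) - error_bound(xs)),
                          v(model(xs) + error_bound(xs)))
decreasing = worst_next_v < v(xs)

# Certify the largest Lyapunov level set on which v provably decreases
# everywhere except at the equilibrium itself.
certified = 0.0
for c in np.sort(v(xs)):
    inside = v(xs) <= c
    if np.all(decreasing[inside] | (xs[inside] == 0.0)):
        certified = c
print(f"certified region of attraction: v(x) <= {certified:.2f}")
```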

Biohumoral Indicators Influenced by Physical Activity in the Elderly

Chiara Fossati, Guglielmo Torre, Paolo Borrione, Arrigo Giombini, Federica Fagnani, Matteo Turchetta, Erika Albo, Maurizio Casasco, Attilio Parisi, Fabio Pigozzi
2020 Journal of Clinical Medicine  
In the scientific landscape, there is a growing interest in defining the role of several biomolecules and humoral indicators of the aging process and in the modifications of these biomarkers induced by physical activity and exercise. The main aim of the present narrative review is to collect the available evidence on the biohumoral indicators that could be modified by physical activity (PA) in the elderly. Online databases including PubMed, Web of Science (Medline), and Scopus were searched for relevant articles published in the last five years in English. Keywords and combinations thereof used for the search were the following: "biological", "indicators", "markers", "physical", "activity", and "elderly". Thirty-four papers were analyzed for inclusion. Twenty-nine studies were included and divided into four categories: cardiovascular (CV) biomarkers, metabolic biomarkers, inflammatory markers and oxidative stress molecules, and other markers. There are many distinct biomarkers influenced by PA in the elderly, with promising results concerning the metabolic and CV indexes, as a growing number of studies demonstrate the role of PA in improving parameters related to heart function and CV risk, like an atherogenic lipid profile. Furthermore, it is also a verified hypothesis that PA is able to modify the inflammatory status of the subject by decreasing the levels of pro-inflammatory cytokines, including interleukin-1 (IL-1), interleukin-6 (IL-6), and tumor necrosis factor-alpha (TNF-α). PA seems also to be able to have a direct effect on the immune system. There is strong evidence of a positive effect of PA on the health of elderly people that could be evidenced and "quantified" by the modifications of the levels of several biohumoral indicators.
doi:10.3390/jcm9041115 pmid:32295038 pmcid:PMC7231282 fatcat:eqvlmoepf5bexfgg7qkq47mkba

Injection-Based Management of Osteoarthritis of the Knee: A Systematic Review of Guidelines

Vito Pavone, Andrea Vescio, Matteo Turchetta, Serena Maria Chiara Giardina, Annalisa Culmone, Gianluca Testa
2021 Frontiers in Pharmacology  
Osteoarthritis (OA) is a leading cause of disability among older adults. Numerous pharmaceutical and nonpharmaceutical interventions have been described. Intra-articular injections are commonly the first-line treatment. There are several articles reporting the outcomes of corticosteroids (CS), hyaluronic acid (HA), and platelet-rich plasma (PRP). The aim of the study is to highlight the usefulness, indication, and efficacy of the intra-articular injection of the principal drugs. CSs have been shown to reduce the severity of pain, but care should be taken with repeated injections because of potential harm. HA reported good outcomes both for pain reduction and functional improvement. Different national societies' guidelines do not recommend PRP intra-articular injection in the management of knee OA for lack of evidence. In conclusion, the authors affirm that there is some evidence that intra-articular steroids are efficacious, but their benefit may be relatively short-lived (<4 weeks). Most of the positive outcomes were limited to the studies, or parts of the studies, that considered the injection of high-molecular-weight HA as visco-supplementation, with a course of two to four injections a year.
doi:10.3389/fphar.2021.661805 pmid:33959026 pmcid:PMC8096293 fatcat:aove3xtqnzgspmisusp2dnn3ci

Intra-Articular Injections in Knee Osteoarthritis: A Review of Literature

Gianluca Testa, Serena Maria Chiara Giardina, Annalisa Culmone, Andrea Vescio, Matteo Turchetta, Salvatore Cannavò, Vito Pavone
2021 Journal of Functional Morphology and Kinesiology  
Knee osteoarthritis (OA) is a chronic, degenerative, and progressive disease of articular cartilage, producing discomfort and physical disability in older adults. Thirteen percent of elderly people complain of knee OA. Management options for knee OA can be divided into the following categories: conservative, pharmacological, procedural, and surgical. Joint replacement is the gold standard, reserved for severe grades of knee OA, due to its complication rate and increased risk of joint […]. A nonsurgical approach is the first choice in the adult population with cartilage damage and knee OA. Yearly, more than 10% of knee OA-affected patients undergo intra-articular injections of different drugs, especially within three months after OA diagnosis. Several molecules, such as corticosteroid injections, hyaluronic acid (HA), and platelet-rich plasma (PRP), are used to reduce the symptoms of patients with knee OA. The aim of this review was to offer an overview of intra-articular injections used for the treatment of OA and report the conventional pharmacological products used.
doi:10.3390/jfmk6010015 pmid:33546408 fatcat:avud7b3rn5hwlia2zh56esxcsa
Showing results 1 — 15 out of 118 results