21 Hits in 6.0 sec

Flatland-RL : Multi-Agent Reinforcement Learning on Trains [article]

Sharada Mohanty, Erik Nygren, Florian Laurent, Manuel Schneider, Christian Scheller, Nilabha Bhattacharya, Jeremy Watson, Adrian Egli, Christian Eichenberger, Christian Baumberger, Gereon Vienken, Irene Sturm (+2 others)
2020 arXiv   pre-print
Flatland not only reduces the complexity of the full physical simulation, but also provides an easy-to-use interface to test novel approaches for the VRSP, such as Reinforcement Learning (RL) and Imitation Learning (IL).  ...  In order to probe the potential of Machine Learning (ML) research on Flatland, we (1) ran a first series of RL and IL experiments and (2) designed and executed a public benchmark at NeurIPS 2020 to engage  ...  We test masking out all invalid and no-op actions so that the agents can focus on relevant actions only. We evaluated skipping "no-choice" cells for both DQN Ape-X and PPO agents.  ... 
arXiv:2012.05893v2 fatcat:au2esg6tgndj5dnqe4qvejvlym
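The action masking mentioned in the snippet above can be illustrated with a minimal sketch in Python/NumPy. The five-action layout mirrors Flatland's discrete action space, but the specific logits, the mask, and the function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def masked_action_probs(logits, valid_mask):
    """Turn raw policy logits into a distribution over valid actions only.

    logits: per-action scores from a policy network (hypothetical values here).
    valid_mask: True for actions actually available in the current cell, so
    invalid and no-op actions receive zero probability.
    """
    masked = np.where(valid_mask, logits, -np.inf)  # invalid actions -> -inf
    shifted = masked - masked.max()                 # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()                          # softmax over valid actions only

# Example: Flatland's 5 actions (DO_NOTHING, LEFT, FORWARD, RIGHT, STOP);
# here DO_NOTHING and RIGHT are masked out at this cell (assumed for illustration).
logits = np.array([0.3, 1.2, 0.7, 2.0, -0.5])
valid = np.array([False, True, True, False, True])
print(masked_action_probs(logits, valid))
```

The masked actions end up with probability zero, so a sampling or greedy policy can only pick moves that are meaningful in the current cell.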

Flatland: a Lightweight First-Person 2-D Environment for Reinforcement Learning [article]

Hugo Caselles-Dupré, Louis Annabi, Oksana Hagen, Michael Garcia-Ortiz, David Filliat
2018 arXiv   pre-print
We experiment with three reinforcement learning baseline agents and show that they can rapidly solve a navigation task in Flatland.  ...  Flatland is a simple, lightweight environment for fast prototyping and testing of reinforcement learning agents. It is of lower complexity compared to similar 3D platforms (e.g.  ...  They are used for testing Reinforcement Learning (RL) agents on tasks and scenarios that require advanced capabilities in terms of perception, planning, representation of space for navigation-related tasks  ... 
arXiv:1809.00510v2 fatcat:gqej5j2kdvdxtl6ryhv5devkm4

Improving Sample Efficiency and Multi-Agent Communication in RL-based Train Rescheduling [article]

Dano Roost, Ralph Meier, Stephan Huschauer, Erik Nygren, Adrian Egli, Andreas Weiler, Thilo Stadelmann
2020 arXiv   pre-print
We present preliminary results from our sixth-placed entry to the Flatland international competition for train rescheduling, including two improvements for optimized reinforcement learning (RL) training  ...  of high-consequence environments; second, that learning explicit communication actions (an emerging machine-to-machine language, so to speak) might offer a remedy.  ...  Therefore, SBB has created Flatland, a simulation environment and international data science competition to solicit research into the area of multi-agent reinforcement learning as an alternative [2].  ... 
arXiv:2004.13439v1 fatcat:juvdb6un3vcmxhlqrcfqh6zdsy

Aiding vehicle Scheduling and rescheduling using Machine Learning

Jonas Wälter, Farhad D. Mehta, Xiaolu Rao
2020 International Journal on Transport Development and Integration  
well-trained and experienced personnel to provide practical solutions to these problems. Over the last couple of years, novel techniques based on machine learning have been used to propose solutions to  ...  technique called reinforcement learning. The solutions obtained using this technique are compared with solutions obtained using classical algorithmic and constraint-based search techniques. The initial  ...  Agent: An agent in the Flatland environment corresponds to a train on a railway network.  ... 
doi:10.2495/tdi-v4-n4-308-320 fatcat:ot7kikwjergz5febailqmg5piy

An Introduction to Multi-Agent Reinforcement Learning and Review of its Application to Autonomous Mobility [article]

Lukas M. Schmidt, Johanna Brosig, Axel Plinge, Bjoern M. Eskofier, Christopher Mutschler
2022 arXiv   pre-print
Multi-Agent Reinforcement Learning (MARL) is a research field that aims to find optimal solutions for multiple agents that interact with each other.  ...  Recent advances in behavioral planning use Reinforcement Learning to find effective and performant behavior strategies.  ...  MULTI-AGENT REINFORCEMENT LEARNING: In MARL, multiple agents are concurrently optimized to find optimal policies.  ... 
arXiv:2203.07676v1 fatcat:vzmuhswdlbeu7hedsqmp474nzu

Mava: a research framework for distributed multi-agent reinforcement learning [article]

Arnu Pretorius, Kale-ab Tessera, Andries P. Smit, Claude Formanek, St John Grimbly, Kevin Eloff, Siphelele Danisa, Lawrence Francis, Jonathan Shock, Herman Kamper, Willie Brink, Herman Engelbrecht (+2 others)
2021 arXiv   pre-print
Breakthrough advances in reinforcement learning (RL) research have led to a surge in the development and application of RL.  ...  We provide experimental results for these implementations on a wide range of multi-agent environments and highlight the benefits of distributed system training.  ...  Figure 1: Multi-agent reinforcement learning system-environment interaction loop. Figure 2: Multi-agent reinforcement learning systems and the executor-trainer paradigm.  ... 
arXiv:2107.01460v1 fatcat:nuymlvbeendtpe3thy77qqp4ba
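The executor-trainer split named in the figure captions can be sketched generically. The class names, the dict-based multi-agent environment API, and the deque replay buffer below are assumptions for illustration and do not reflect Mava's actual interfaces:

```python
import random
from collections import deque

class RandomPolicy:
    """Placeholder per-agent policy; a real system would hold a neural network."""
    def __init__(self, n_actions):
        self.n_actions = n_actions
    def select_action(self, observation):
        return random.randrange(self.n_actions)
    def update(self, batch):
        pass  # a real trainer would compute gradient updates from the sampled batch

class Executor:
    """Acting side: steps all agents through the environment and logs joint transitions."""
    def __init__(self, env, policies, buffer):
        self.env, self.policies, self.buffer = env, policies, buffer
    def run_episode(self):
        obs = self.env.reset()  # assumed to return {agent_id: observation}
        done = False
        while not done:
            actions = {aid: self.policies[aid].select_action(o) for aid, o in obs.items()}
            next_obs, rewards, done, _ = self.env.step(actions)
            self.buffer.append((obs, actions, rewards, next_obs, done))
            obs = next_obs

class Trainer:
    """Learning side: samples from the shared buffer and updates every agent's policy."""
    def __init__(self, policies, buffer, batch_size=32):
        self.policies, self.buffer, self.batch_size = policies, buffer, batch_size
    def step(self):
        if len(self.buffer) >= self.batch_size:
            batch = random.sample(list(self.buffer), self.batch_size)
            for policy in self.policies.values():
                policy.update(batch)

# A shared replay buffer connects the two sides; in a distributed setting many
# executors would feed it concurrently while one or more trainers consume it.
buffer = deque(maxlen=100_000)
```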

Continual State Representation Learning for Reinforcement Learning using Generative Replay [article]

Hugo Caselles-Dupré, Michael Garcia-Ortiz, David Filliat
2018 arXiv   pre-print
The learned features are then fed to a Reinforcement Learning algorithm to learn a policy.  ...  The resulting model is capable of incrementally learning information without using past data and with a bounded system size.  ...  Introduction: Building agents capable of learning over extended periods of time in the real world is a long-standing challenge of Reinforcement Learning (RL) research, with direct applications in Robotics  ... 
arXiv:1810.03880v3 fatcat:47h6z7zjbrfnlpy5y7x6pl2nje
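The generative-replay pattern behind this entry can be sketched with a toy stand-in: instead of storing past observations, the current generative model produces pseudo-samples that are mixed with new data before refitting, which keeps system size bounded. The Gaussian model below is an illustrative substitute for the paper's actual generative model (a VAE-style network), and all names and numbers are assumptions:

```python
import numpy as np

class GaussianFeatureModel:
    """Toy stand-in for a generative state-representation model (e.g. a VAE)."""
    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)

    def fit(self, data):
        self.mean = data.mean(axis=0)
        self.cov = np.cov(data, rowvar=False) + 1e-6 * np.eye(data.shape[1])

    def sample(self, n):
        return np.random.multivariate_normal(self.mean, self.cov, size=n)

def generative_replay_update(model, new_data, n_replay):
    """Refit the model on new data plus self-generated pseudo-samples,
    so no past observations need to be stored."""
    replay = model.sample(n_replay)            # pseudo-samples stand in for past data
    combined = np.vstack([new_data, replay])   # mix generated "old" data with new data
    model.fit(combined)
    return model

# Usage: two successive "environments" with shifted observation statistics.
rng = np.random.default_rng(0)
model = GaussianFeatureModel(dim=4)
model.fit(rng.normal(0.0, 1.0, size=(500, 4)))                 # first environment
model = generative_replay_update(model, rng.normal(3.0, 1.0, size=(500, 4)), n_replay=500)
```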

Megaverse: Simulating Embodied Agents at One Million Experiences per Second [article]

Aleksei Petrenko, Erik Wijmans, Brennan Shacklett, Vladlen Koltun
2021 arXiv   pre-print
We present Megaverse, a new 3D simulation platform for reinforcement learning and embodied AI research.  ...  We use Megaverse to build a new benchmark that consists of several single-agent and multi-agent tasks covering a variety of cognitive challenges.  ...  The agent was trained on a total of 2 × 10⁹ frames of experience, which is equivalent to 2.5 × 10⁸ frames on each of the environments.  ... 
arXiv:2107.08170v2 fatcat:6vrk7dj5gfhshd7i6qnp2gtwfm

State representation learning with recurrent capsule networks [article]

Louis Annabi, Michael Garcia Ortiz
2019 arXiv   pre-print
Unsupervised learning of compact and relevant state representations has proved very useful for solving complex reinforcement learning tasks.  ...  In this paper, we propose a recurrent capsule network that learns such representations by trying to predict the future observations in an agent's trajectory.  ...  To further verify the quality of our model for reinforcement learning, we would need to compare it with other state representation learning methods (like Ha and Schmidhuber (2018)) and on several environments  ... 
arXiv:1812.11202v4 fatcat:skushzzwojaohmicurozp6mvsi

NeurIPS 2021 Competition IGLU: Interactive Grounded Language Understanding in a Collaborative Environment [article]

Julia Kiseleva, Ziming Li, Mohammad Aliannejadi, Shrestha Mohanty, Maartje ter Hoeve, Mikhail Burtsev, Alexey Skrynnik, Artem Zholus, Aleksandr Panov, Kavya Srinet, Arthur Szlam, Yuxuan Sun (+3 others)
2021 arXiv   pre-print
Learning (RL).  ...  The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative  ...  His research focuses on model-based reinforcement learning with application to robotics.  ... 
arXiv:2110.06536v2 fatcat:jyyjlgbmpbdtjf6ctqssl2xpf4

Improving sample efficiency and multi-agent communication in RL-based train rescheduling

Dano Roost, Ralph Meier, Stephan Huschauer, Erik Nygren, Adrian Egli, Andreas Weiler, Thilo Stadelmann
2020
Index Terms-multi-agent deep reinforcement learning  ...  We present preliminary results from our sixth placed entry to the Flatland international competition for train rescheduling, including two improvements for optimized reinforcement learning (RL) training  ...  Therefore, SBB has created Flatland, a simulation environment and international data science competition to solicit research into the area of multi-agent reinforcement learning as an alternative [2] .  ... 
doi:10.21256/zhaw-19978 fatcat:ttenodueyjdsxiursq3y6vdya4

Unsupervised Emergence of Spatial Structure from Sensorimotor Prediction [article]

Alban Laflaquière, Michael Garcia Ortiz
2018 arXiv   pre-print
Under certain exploratory conditions, spatial representations should thus emerge as a byproduct of learning to predict.  ...  Moreover, it is hypothesized that capturing these invariants is beneficial for a naive agent trying to predict its sensorimotor experience.  ...  The problem of spatial knowledge acquisition is often conceptualized in a supervised or reinforcement learning (RL) framework.  ... 
arXiv:1810.01344v2 fatcat:asjshzecorezdnhzshuhbtfkji

On the Origin of Species of Self-Supervised Learning [article]

Samuel Albanie, Erika Lu, Joao F. Henriques
2021 arXiv   pre-print
In the quiet backwaters of cs.CV, cs.LG and stat.ML, a cornucopia of new learning systems is emerging from a primordial soup of mathematics: learning systems with no need for external supervision.  ...  form of tweets and vestigial plumage such as press releases) communicates dramatic changes; (3) We propose a unifying theory of self-supervised machine evolution and compare to other unifying theories on  ...  From top left to bottom right: (i) The seminal multi-layered cake metaphor introduced by LeCun (2016), linking reinforcement learning, supervised learning and predictive learning, (ii) a chef's revision  ... 
arXiv:2103.17143v1 fatcat:kgnkjn4yhbfd5oe32gide3hwuu

Simulating humans: computer graphics animation and control

1994 ChoiceReviews  
Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it.  ...  Likewise we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language.  ...  Jane, turn tglJ-1 on. John, look at tglJ-2. Jane, look at twf-2. Jane, turn twf-2 to state 1. John, look at twf-2. John, look at Jane. Jane, look at John.  ... 
doi:10.5860/choice.31-2728 fatcat:r2oehymzrvb27mchwharo6fesm

Philosophy of Education

Michael Taylor
1977 Social Theory and Practice  
To accomplish this, our students must develop and utilize: ▪ intellectual curiosity and eagerness for lifelong learning ▪ a positive self-image based on a realistic acceptance of self ▪ the knowledge,  ...  habits and responsible behavior ▪ an understanding of a variety of processes that can be used in decision-making situations ▪ interpersonal and group dynamic skills ▪ ethical and moral behavior based on  ...  The profile of a true honors student is multi-dimensional.  ... 
doi:10.5840/soctheorpract1977437 fatcat:ly7e6vjrirckrdmrqgowitmesa
Showing results 1 — 15 out of 21 results