Warning
This package is in maintenance mode; please use Stable-Baselines3 (SB3) for an up-to-date version. You can find a migration guide in the SB3 documentation.
Projects
This is a list of projects using stable-baselines. Please tell us if you want your project to appear on this page ;)
Stable Baselines for TensorFlow 2
A fork of the original stable-baselines repo that works with TF2.x.
Slime Volleyball Gym Environment
A simple environment for benchmarking single- and multi-agent reinforcement learning algorithms on a clone of the Slime Volleyball game. Its only dependencies are gym and numpy. Both state and pixel observation environments are available. The motivation of this environment is to easily enable trained agents to play against each other, and also to facilitate training agents directly in a multi-agent setting, thus adding an extra dimension for evaluating an agent's performance.
Uses stable-baselines to train RL agents for both the state and pixel observation versions of the task. A tutorial is also provided on modifying stable-baselines for self-play using PPO.
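The self-play pattern mentioned above can be sketched as follows. This is a toy illustration, not the slimevolleygym or stable-baselines API: `Policy`, `train_against`, and `win_rate` are hypothetical stand-ins for a real policy, a PPO update, and an evaluation rollout.

```python
import copy
import math

# Toy sketch of a self-play loop: the learner trains against a frozen
# snapshot of itself, and the snapshot is promoted to be the new opponent
# whenever the learner starts beating it consistently.

class Policy:
    """Stand-in for a trained policy, reduced to a scalar 'skill'."""
    def __init__(self, skill=0.0):
        self.skill = skill

def train_against(agent, opponent):
    # Stand-in for one PPO update: the learner adapts toward (and
    # slightly past) its opponent's skill level.
    agent.skill += 0.1 * (opponent.skill - agent.skill) + 0.05

def win_rate(agent, opponent):
    # Stand-in for an evaluation rollout: logistic in the skill gap.
    return 1.0 / (1.0 + math.exp(opponent.skill - agent.skill))

agent = Policy()
opponent = copy.deepcopy(agent)       # start against a frozen copy of itself
generations = 0
for step in range(200):
    train_against(agent, opponent)
    if win_rate(agent, opponent) > 0.6:
        opponent = copy.deepcopy(agent)   # promote snapshot to new opponent
        generations += 1
```

The key design point, which carries over to the real tutorial, is that the opponent is a frozen copy rather than the live learner, so the learning target stays stationary between snapshot promotions.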
Learning to drive in a day
Implementation of a reinforcement learning approach to make a Donkey Car learn to drive. Uses DDPG on VAE features (reproducing the paper from wayve.ai).
Donkey Gym
An OpenAI Gym environment for the donkeycar simulator.
Self-driving FZERO Artificial Intelligence
A series of videos on how to make a self-driving FZERO artificial intelligence using the reinforcement learning algorithms PPO2 and A2C.
S-RL Toolbox
S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics. Stable-Baselines was originally developed for this project.
Roboschool simulations training on Amazon SageMaker
“In this notebook example, we will make HalfCheetah learn to walk using the stable-baselines […]”
MarathonEnvs + OpenAi.Baselines
Experimental: using OpenAI Baselines with MarathonEnvs (ML-Agents).
Learning to drive smoothly in minutes
Implementation of a reinforcement learning approach to make a car learn to drive smoothly in minutes. Uses SAC on VAE features.
Making Roboy move with elegance
A project around Roboy, a tendon-driven robot, that enabled it to move its shoulder in simulation to reach a pre-defined point in 3D space. The agent used Proximal Policy Optimization (PPO) or Soft Actor-Critic (SAC) and was tested on the real hardware.
Train a ROS-integrated mobile robot (differential drive) to avoid dynamic objects
The RL agent serves as a local planner and is trained in a simulator, a fusion of the Flatland simulator and the crowd simulator Pedsim. It was tested on a real mobile robot. The Proximal Policy Optimization (PPO) algorithm is applied.
Adversarial Policies: Attacking Deep Reinforcement Learning
Uses Stable Baselines to train adversarial policies that attack pre-trained victim policies in zero-sum multi-agent environments. May be useful as an example of how to integrate Stable Baselines with Ray to perform distributed experiments and with Sacred for experiment configuration and monitoring.
WaveRL: Training RL agents to perform active damping
Reinforcement learning is used to train agents to control pistons attached to a bridge in order to cancel out vibrations. The bridge is modeled as a one-dimensional oscillating system and its dynamics are simulated using a finite-difference solver. Agents were trained using Proximal Policy Optimization. See the presentation for environment details.
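The setup described above can be illustrated with a minimal finite-difference simulation. This is a hedged sketch of the kind of dynamics involved (a 1-D wave equation with velocity-opposing control forces at a few "piston" nodes); the grid size, constants, and the naive control law are illustrative assumptions, not WaveRL's actual model or trained policy.

```python
import numpy as np

def simulate(steps=2000, n=101, c=1.0, dx=0.01, dt=0.004, gain=0.0):
    """Explicit finite-difference integration of u_tt = c^2 u_xx + f on a
    pinned 1-D 'bridge', returning the total mechanical energy at the end.
    The forcing f acts only at three piston nodes and opposes the local
    velocity there (a hand-written controller, standing in for an RL agent).
    """
    u_prev = np.sin(np.linspace(0.0, np.pi, n))  # initial deflection
    u = u_prev.copy()                            # starts at rest
    pistons = [n // 4, n // 2, 3 * n // 4]       # actuator locations
    r2 = (c * dt / dx) ** 2                      # squared Courant number (stable if <= 1)
    for _ in range(steps):
        vel = (u - u_prev) / dt
        force = np.zeros(n)
        force[pistons] = -gain * vel[pistons]    # oppose local velocity
        u_next = (2 * u - u_prev
                  + r2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
                  + dt ** 2 * force)
        u_next[0] = u_next[-1] = 0.0             # fixed (pinned) ends
        u_prev, u = u, u_next
    vel = (u - u_prev) / dt
    kinetic = 0.5 * dx * float(np.sum(vel ** 2))
    strain = 0.5 * (c ** 2 / dx) * float(np.sum(np.diff(u) ** 2))
    return kinetic + strain

undamped = simulate(gain=0.0)   # energy is (approximately) conserved
damped = simulate(gain=50.0)    # pistons extract energy every step
```

With the velocity-opposing controller the pistons only ever do negative work, so the final energy is strictly lower than in the uncontrolled run; an RL agent would instead learn when and how hard to push from reward.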
Fenics-DRL: Fluid mechanics and Deep Reinforcement Learning
Deep Reinforcement Learning is used to control the position or the shape of obstacles in different fluids in order to optimize drag or lift. FEniCS is used for the fluid mechanics part, and Stable Baselines is used for the DRL.
Air Learning: An AI Research Platform for Algorithm-Hardware Benchmarking of Autonomous Aerial Robots
Aerial robotics is a cross-layer, interdisciplinary field. Air Learning is an effort to bridge these seemingly disparate fields.
Designing an autonomous robot to perform a task involves interactions across boundaries spanning from modeling the environment down to the choice of onboard computer platform available in the robot. Our goal in building Air Learning is to provide researchers with a cross-domain infrastructure that allows them to holistically study and evaluate reinforcement learning algorithms for autonomous aerial machines. We use stable-baselines to train the UAV agent with the Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) algorithms.
Snake Game AI
An AI that plays the classic Snake game. The agent was trained using PPO2 from stable-baselines and then exported to TensorFlow.js to run directly in the browser.
Pwnagotchi
Pwnagotchi is an A2C-based “AI” powered by bettercap and running on a Raspberry Pi Zero W that learns from its surrounding WiFi environment in order to maximize the crackable WPA key material it captures (either through passive sniffing or by performing deauthentication and association attacks). This material is collected on disk as PCAP files containing any form of handshake supported by hashcat, including full and half WPA handshakes as well as PMKIDs.
Quantized Reinforcement Learning (QuaRL)
QuaRL is an open-source framework for studying the effects of quantization on a broad spectrum of reinforcement learning algorithms. The RL algorithms used in this study are from stable-baselines.
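The kind of quantization being studied can be illustrated with a generic affine int8 quantizer applied to a stand-in weight matrix. This is an assumption-laden sketch of the general post-training quantization technique, not QuaRL's exact scheme or API.

```python
import numpy as np

# Generic affine quantization: map float32 weights onto 8-bit integers
# plus a (scale, offset) pair, then map back. The reconstruction error is
# bounded by half a quantization step, which is the precision/memory
# trade-off quantized RL explores for policy networks.

def quantize(weights, n_bits=8):
    """float32 -> uint8 codes plus the (scale, offset) needed to decode."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / (2 ** n_bits - 1) or 1.0  # guard against a constant tensor
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in policy layer
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)
max_err = float(np.abs(w - w_hat).max())  # at most ~scale / 2
```

Storing `q` instead of `w` cuts the layer's memory footprint by 4x; how much the induced error costs in episode reward is exactly the kind of question such a framework measures across algorithms.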
PPO_CPP: C++ version of the Deep Reinforcement Learning algorithm PPO
Executes PPO at the C++ level, yielding notable execution speedups. Uses Stable Baselines to create a computational graph, which is then used for training with custom environments by a machine-code-compiled binary.
Learning Agile Robotic Locomotion Skills by Imitating Animals
Learning locomotion gaits by imitating animals. It uses PPO1 and AWR.
Imitation Learning Baseline Implementations
This project aims to provide clean implementations of imitation learning algorithms. Currently we have implementations of AIRL and GAIL, and intend to add more in the future.