Welcome to Stable Baselines docs! - RL Baselines Made Easy

Stable Baselines is a set of improved implementations of Reinforcement Learning (RL) algorithms based on OpenAI Baselines.

Github repository: https://github.com/hill-a/stable-baselines

You can read a detailed presentation of Stable Baselines in the Medium article: link
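As a quick taste of the library, here is a minimal training sketch. It assumes stable-baselines and gym are installed; the PPO2 algorithm and the CartPole-v1 environment are illustrative choices, and any other algorithm/environment pair works the same way:

import gym

from stable_baselines import PPO2

# Create the Gym environment and instantiate a PPO2 agent
# with a multilayer-perceptron policy.
env = gym.make('CartPole-v1')
model = PPO2('MlpPolicy', env, verbose=1)

# Train the agent for a fixed number of timesteps.
model.learn(total_timesteps=10000)

# Run the trained agent in the environment.
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()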

Citing Stable Baselines

To cite this project in publications:

@misc{stable-baselines,
  author = {Hill, Ashley and Raffin, Antonin and Traore, Rene and Dhariwal, Prafulla and Hesse, Christopher and Klimov, Oleg and Nichol, Alex and Plappert, Matthias and Radford, Alec and Schulman, John and Sidor, Szymon and Wu, Yuhuai},
  title = {Stable Baselines},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/hill-a/stable-baselines}},
}

Contributing

To anyone interested in making the RL baselines better: there are still some improvements that need to be done, such as good-to-have features like support for continuous actions in ACER, and more documentation on the RL algorithms.

If you want to contribute, please open an issue first and then propose your pull request on GitHub at https://github.com/hill-a/stable-baselines.
