# PPO1

The Proximal Policy Optimization (PPO) algorithm combines ideas from A2C (having multiple workers) and TRPO (using a trust region to improve the actor).

The main idea is that after an update, the new policy should not be too far from the old policy. To that end, PPO uses clipping to avoid too large an update.
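
For reference, this is the clipped surrogate objective from the paper, where epsilon corresponds to the clip_param argument below and the hatted A is the advantage estimate:

$$
L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
\qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
$$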

Note

PPO1 requires OpenMPI. If OpenMPI isn’t enabled, then PPO1 isn’t imported into the stable_baselines module.

Note

PPO1 uses MPI for multiprocessing unlike PPO2, which uses vectorized environments. PPO2 is the implementation OpenAI made for GPU.

## Notes

• Original paper: https://arxiv.org/abs/1707.06347
• Clear explanation of PPO on Arxiv Insights channel: https://www.youtube.com/watch?v=5P7I-xPq8u8
• OpenAI blog post: https://blog.openai.com/openai-baselines-ppo/
• mpirun -np 8 python -m stable_baselines.ppo1.run_atari runs the algorithm for 40M frames = 10M timesteps on an Atari game. See help (-h) for more options.
• python -m stable_baselines.ppo1.run_mujoco runs the algorithm for 1M frames on a Mujoco environment.
• Train mujoco 3d humanoid (with optimal-ish hyperparameters): mpirun -np 16 python -m stable_baselines.ppo1.run_humanoid --model-path=/path/to/model
• Render the 3d humanoid: python -m stable_baselines.ppo1.run_humanoid --play --model-path=/path/to/model

## Can I use?

• Recurrent policies: ❌
• Multi processing: ✔️ (using MPI)
• Gym spaces:

| Space | Action | Observation |
| --- | --- | --- |
| Discrete | ✔️ | ✔️ |
| Box | ✔️ | ✔️ |
| MultiDiscrete | ✔️ | ✔️ |
| MultiBinary | ✔️ | ✔️ |

## Example

```python
import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines import PPO1

env = gym.make('CartPole-v1')

model = PPO1(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=25000)
model.save("ppo1_cartpole")

# Enjoy the trained agent
obs = env.reset()
while True:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
```
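
To reload the saved agent later (the environment can be omitted when you only need predictions, see `load` below):

```python
from stable_baselines import PPO1

# Load the trained agent back from disk
model = PPO1.load("ppo1_cartpole")
```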


## Parameters

class stable_baselines.ppo1.PPO1(policy, env, gamma=0.99, timesteps_per_actorbatch=256, clip_param=0.2, entcoeff=0.01, optim_epochs=4, optim_stepsize=0.001, optim_batchsize=64, lam=0.95, adam_epsilon=1e-05, schedule='linear', verbose=0, tensorboard_log=None, _init_setup_model=True, policy_kwargs=None, full_tensorboard_log=False, seed=None, n_cpu_tf_sess=1)

Proximal Policy Optimization algorithm (MPI version). Paper: https://arxiv.org/abs/1707.06347

Parameters:

• env – (Gym environment or str) The environment to learn from (if registered in Gym, can be str)
• policy – (ActorCriticPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, CnnLstmPolicy, …)
• timesteps_per_actorbatch – (int) timesteps per actor per update
• clip_param – (float) clipping parameter epsilon
• entcoeff – (float) the entropy loss weight
• optim_epochs – (float) the optimizer's number of epochs
• optim_stepsize – (float) the optimizer's stepsize
• optim_batchsize – (int) the optimizer's batch size
• gamma – (float) discount factor
• lam – (float) advantage estimation factor (the lambda of GAE)
• adam_epsilon – (float) the epsilon value for the adam optimizer
• schedule – (str) The type of scheduler for the learning rate update ('linear', 'constant', 'double_linear_con', 'middle_drop' or 'double_middle_drop')
• verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
• tensorboard_log – (str) the log location for tensorboard (if None, no logging)
• _init_setup_model – (bool) Whether or not to build the network at the creation of the instance
• policy_kwargs – (dict) additional arguments to be passed to the policy on creation
• full_tensorboard_log – (bool) enable additional logging when using tensorboard. WARNING: this logging can take a lot of space quickly
• seed – (int) Seed for the pseudo-random generators (python, numpy, tensorflow). If None (default), use a random seed. Note that if you want completely deterministic results, you must set n_cpu_tf_sess to 1.
• n_cpu_tf_sess – (int) The number of threads for TensorFlow operations. If None, the number of CPUs of the current machine will be used.
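
As an illustrative sketch (the values below are not tuned hyperparameters), overriding a few of these defaults at construction time looks like:

```python
import gym
from stable_baselines.common.policies import MlpPolicy
from stable_baselines import PPO1

env = gym.make('CartPole-v1')

model = PPO1(MlpPolicy, env,
             timesteps_per_actorbatch=512,  # more samples per actor per update
             clip_param=0.2,                # the epsilon of the clipped objective
             optim_epochs=10,
             schedule='constant',           # fixed learning rate
             verbose=1)
```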
action_probability(observation, state=None, mask=None, actions=None, logp=False)

If actions is None, then get the model’s action probability distribution from a given observation.

Depending on the action space the output is:
• Discrete: probability for each possible action
• Box: mean and standard deviation of the action output

However, if actions is not None, this function will instead return the probability that the given actions are taken with the given parameters (observation, state, …) on this model. For discrete action spaces, it returns the probability mass; for continuous action spaces, the probability density. This is because the probability mass will always be zero in continuous spaces; see http://blog.christianperone.com/2019/01/ for a good explanation.

Parameters:

• observation – (np.ndarray) the input observation
• state – (np.ndarray) The last states (can be None, used in recurrent policies)
• mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
• actions – (np.ndarray) (OPTIONAL) For calculating the likelihood that the given actions are chosen by the model for each of the given parameters. Must have the same number of actions and observations. (set to None to return the complete action probability distribution)
• logp – (bool) (OPTIONAL) When specified with actions, returns probability in log-space. This has no effect if actions is None.

Returns: (np.ndarray) the model's (log) action probability
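
As a minimal sketch, continuing the CartPole example above (Discrete action space, single observation):

```python
import numpy as np

obs = env.reset()

# Full distribution: one probability per discrete action, e.g. something like array([0.49, 0.51])
probs = model.action_probability(obs)

# Probability mass of a specific action (one action per observation)
p_left = model.action_probability(obs, actions=np.array([0]))

# The same quantity in log-space
logp_left = model.action_probability(obs, actions=np.array([0]), logp=True)
```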
get_env()

returns the current environment (can be None if not defined)

Returns: (Gym Environment) The current environment
get_parameter_list()

Get tensorflow Variables of model’s parameters

Returns: (list) List of tensorflow Variables
get_parameters()

Get current model parameters as dictionary of variable name -> ndarray.

Returns: (OrderedDict) Dictionary of variable name -> ndarray of model’s parameters.
get_vec_normalize_env() → Optional[stable_baselines.common.vec_env.vec_normalize.VecNormalize]

Return the VecNormalize wrapper of the training env if it exists.

Returns: Optional[VecNormalize] The VecNormalize env.
learn(total_timesteps, callback=None, log_interval=100, tb_log_name='PPO1', reset_num_timesteps=True)

Return a trained model.

Parameters:

• total_timesteps – (int) The total number of samples to train on
• callback – (Union[callable, [callable], BaseCallback]) function called at every step with the state of the algorithm. It takes the local and global variables. If it returns False, training is aborted. When the callback inherits from BaseCallback, you will have access to additional stages of the training (training start/end); please read the documentation for more details.
• log_interval – (int) The number of timesteps before logging.
• tb_log_name – (str) the name of the run for tensorboard log
• reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)

Returns: (BaseRLModel) the trained model
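
For instance, a minimal functional callback that stops training early; this sketch relies on the num_timesteps counter that stable-baselines models expose via the `self` local variable:

```python
def early_stop_callback(locals_, globals_):
    # Called at every step with the algorithm's local/global variables;
    # returning False aborts training.
    return locals_['self'].num_timesteps < 20000

model.learn(total_timesteps=100000, callback=early_stop_callback)
```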
classmethod load(load_path, env=None, custom_objects=None, **kwargs)

Load the model from file.

Parameters:

• load_path – (str or file-like) the saved parameter location
• env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
• custom_objects – (dict) Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in the file that cannot be deserialized.
• kwargs – extra arguments to change the model when loading
load_parameters(load_path_or_dict, exact_match=True)

Load model parameters from a file or a dictionary

Dictionary keys should be tensorflow variable names, which can be obtained with the get_parameters function. If exact_match is True, the dictionary should contain keys for all the model's parameters; otherwise a RuntimeError is raised. If False, only variables included in the dictionary will be updated.

This does not load agent’s hyper-parameters.

Warning

This function does not update trainer/optimizer variables (e.g. momentum). As such, training after using this function may lead to less-than-optimal results.

Parameters:

• load_path_or_dict – (str or file-like or dict) Save parameter location or dict of parameters as variable.name -> ndarrays to be loaded.
• exact_match – (bool) If True, expects the load dictionary to contain keys for all variables in the model. If False, loads parameters only for variables mentioned in the dictionary. Defaults to True.
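
A small sketch of a snapshot/restore round trip using get_parameters together with this method:

```python
# Snapshot the current weights: OrderedDict of variable name -> ndarray
params = model.get_parameters()

# ... train further, then roll the policy weights back to the snapshot
model.learn(total_timesteps=5000)
model.load_parameters(params, exact_match=True)
```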
predict(observation, state=None, mask=None, deterministic=False)

Get the model’s action from an observation

Parameters:

• observation – (np.ndarray) the input observation
• state – (np.ndarray) The last states (can be None, used in recurrent policies)
• mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
• deterministic – (bool) Whether or not to return deterministic actions.

Returns: (np.ndarray, np.ndarray) the model's action and the next state (used in recurrent policies)
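
Continuing the example above, the stochastic default versus deterministic actions:

```python
obs = env.reset()

# Sample an action from the policy distribution (the default behaviour)
action, _states = model.predict(obs)

# Always take the most likely action instead
action, _states = model.predict(obs, deterministic=True)
```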
pretrain(dataset, n_epochs=10, learning_rate=0.0001, adam_epsilon=1e-08, val_interval=None)

Pretrain a model using behavior cloning: supervised learning given an expert dataset.

NOTE: only Box and Discrete spaces are supported for now.

Parameters:

• dataset – (ExpertDataset) Dataset manager
• n_epochs – (int) Number of iterations on the training set
• learning_rate – (float) Learning rate
• adam_epsilon – (float) the epsilon value for the adam optimizer
• val_interval – (int) Report training and validation losses every n epochs. By default, every 10th of the maximum number of epochs.

Returns: (BaseRLModel) the pretrained model
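
A sketch of behavior cloning with an ExpertDataset (from stable_baselines.gail); the file expert_cartpole.npz is a hypothetical pre-recorded expert trajectory file:

```python
from stable_baselines.gail import ExpertDataset

# 'expert_cartpole.npz' is a hypothetical pre-recorded expert dataset
dataset = ExpertDataset(expert_path='expert_cartpole.npz',
                        traj_limitation=1, batch_size=128)

model = PPO1(MlpPolicy, 'CartPole-v1', verbose=1)
model.pretrain(dataset, n_epochs=1000)

# Optionally fine-tune with RL afterwards
model.learn(total_timesteps=25000)
```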
save(save_path, cloudpickle=False)

Save the current parameters to file

Parameters:

• save_path – (str or file-like) The save location
• cloudpickle – (bool) Use older cloudpickle format instead of zip-archives.
set_env(env)

Checks the validity of the environment and, if it is coherent, sets it as the current environment.

Parameters: env – (Gym Environment) The environment for learning a policy
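
For example, to continue training the same policy on a fresh environment instance:

```python
# Swap in a new environment and keep training the same policy
model.set_env(gym.make('CartPole-v1'))
model.learn(total_timesteps=10000)
```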
set_random_seed(seed: Optional[int]) → None

Parameters: seed – (Optional[int]) Seed for the pseudo-random generators. If None, do not change the seeds.
setup_model()

Create all the functions and tensorflow graphs necessary to train the model