Base RL Class

Common interface for all the RL algorithms

class stable_baselines.common.base_class.BaseRLModel(policy, env, verbose=0, *, requires_vec_env, policy_base, policy_kwargs=None)[source]

The base RL model

Parameters:
  • policy – (BasePolicy) Policy object
  • env – (Gym environment) The environment to learn from (if registered in Gym, can be str. Can be None for loading trained models)
  • verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
  • requires_vec_env – (bool) Does this model require a vectorized environment
  • policy_base – (BasePolicy) the base policy used by this method
  • policy_kwargs – (dict) additional arguments to be passed to the policy on creation
action_probability(observation, state=None, mask=None, actions=None)[source]

If actions is None, this returns the model’s action probability distribution for a given observation.

Depending on the action space, the output is:
  • Discrete: probability for each possible action
  • Box: mean and standard deviation of the action output

However, if actions is not None, this function returns the probability that the given actions are taken with the given parameters (observation, state, …) by this model.

Warning

When working with continuous probability distributions (e.g. a Gaussian distribution for continuous actions), the probability of taking a particular action is exactly zero. See http://blog.christianperone.com/2019/01/ for a good explanation

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • actions – (np.ndarray) (OPTIONAL) For computing the likelihood that the given actions are chosen by the model for the given parameters. Must contain as many actions as there are observations (set to None to return the complete action probability distribution)
Returns:

(np.ndarray) the model’s action probability
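
A minimal usage sketch, assuming A2C and the Gym environment CartPole-v1 (which has a Discrete action space) as example choices; any BaseRLModel subclass works the same way:

    import gym
    from stable_baselines import A2C

    env = gym.make('CartPole-v1')
    model = A2C('MlpPolicy', env, verbose=0)

    obs = env.reset()
    # Full distribution over the discrete actions, e.g. array([0.49, 0.51])
    action_probs = model.action_probability(obs)
    # Probability that action 0 would be taken for this observation
    prob_action_0 = model.action_probability(obs, actions=[0])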

get_env()[source]

Returns the current environment (can be None if not defined).

Returns: (Gym Environment) The current environment
learn(total_timesteps, callback=None, seed=None, log_interval=100, tb_log_name='run', reset_num_timesteps=True)[source]

Return a trained model.

Parameters:
  • total_timesteps – (int) The total number of samples to train on
  • seed – (int) The initial seed for training, if None: keep current seed
  • callback – (function (dict, dict) -> boolean) function called at every step with the state of the algorithm. It takes the local and global variables. If it returns False, training is aborted.
  • log_interval – (int) The number of timesteps before logging.
  • tb_log_name – (str) the name of the run for tensorboard log
  • reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)
Returns:

(BaseRLModel) the trained model
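
A short sketch of a training call with an early-stopping callback; PPO2, CartPole-v1 and the callback itself are illustrative assumptions:

    from stable_baselines import PPO2

    def early_stop_callback(locals_, globals_):
        # Called regularly during training with the local and global variables
        # of the algorithm. Returning False aborts training; here we stop
        # once 5000 timesteps have been collected.
        return locals_['self'].num_timesteps < 5000

    model = PPO2('MlpPolicy', 'CartPole-v1', verbose=1)
    model.learn(total_timesteps=10000, callback=early_stop_callback, log_interval=10)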

classmethod load(load_path, env=None, **kwargs)[source]

Load the model from file

Parameters:
  • load_path – (str or file-like) the saved parameter location
  • env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
  • kwargs – extra arguments to change the model when loading
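
A sketch of loading a saved model; PPO2 and the file name "ppo2_cartpole" are example choices:

    from stable_baselines import PPO2

    # env can be omitted (None) if the model is only used for prediction
    model = PPO2.load("ppo2_cartpole")

    # kwargs can override saved settings when loading, e.g. the verbosity level
    model = PPO2.load("ppo2_cartpole", verbose=1)
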
predict(observation, state=None, mask=None, deterministic=False)[source]

Get the model’s action from an observation

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

(np.ndarray, np.ndarray) the model’s action and the next state (used in recurrent policies)
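
A sketch of an evaluation loop; PPO2 and CartPole-v1 are example choices, and the loop length is arbitrary:

    import gym
    from stable_baselines import PPO2

    env = gym.make('CartPole-v1')
    model = PPO2('MlpPolicy', env).learn(total_timesteps=10000)

    obs = env.reset()
    state = None  # only needed for recurrent policies
    for _ in range(1000):
        action, state = model.predict(obs, state=state, deterministic=True)
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()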

pretrain(dataset, n_epochs=10, learning_rate=0.0001, adam_epsilon=1e-08, val_interval=None)[source]

Pretrain a model using behavior cloning: supervised learning given an expert dataset.

NOTE: only Box and Discrete spaces are supported for now.

Parameters:
  • dataset – (ExpertDataset) Dataset manager
  • n_epochs – (int) Number of iterations on the training set
  • learning_rate – (float) Learning rate
  • adam_epsilon – (float) the epsilon value for the adam optimizer
  • val_interval – (int) Report training and validation losses every n epochs. By default, this is every tenth of the total number of epochs.
Returns:

(BaseRLModel) the pretrained model
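
A sketch of behavior cloning followed by RL fine-tuning; PPO2, CartPole-v1 and the expert trajectory file 'expert_cartpole.npz' are assumptions:

    from stable_baselines import PPO2
    from stable_baselines.gail import ExpertDataset

    # Expert trajectories recorded beforehand (e.g. with generate_expert_traj)
    dataset = ExpertDataset(expert_path='expert_cartpole.npz',
                            traj_limitation=10, batch_size=128)

    model = PPO2('MlpPolicy', 'CartPole-v1', verbose=1)
    model.pretrain(dataset, n_epochs=1000)

    # The pretrained model can then be fine-tuned with RL as usual
    model.learn(total_timesteps=10000)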

save(save_path)[source]

Save the current parameters to file

Parameters: save_path – (str or file-like object) the save location
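
A minimal sketch; PPO2 and the path "ppo2_cartpole" are example choices:

    from stable_baselines import PPO2

    model = PPO2('MlpPolicy', 'CartPole-v1', verbose=1)
    model.learn(total_timesteps=10000)
    model.save("ppo2_cartpole")  # a file-like object can be passed instead of a path
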
set_env(env)[source]

Checks the validity of the environment and, if it is coherent, sets it as the current environment.

Parameters: env – (Gym Environment) The environment for learning a policy
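
A sketch of swapping in a fresh (compatible) environment and continuing training; PPO2, CartPole-v1 and the explicit DummyVecEnv wrapping are example choices:

    import gym
    from stable_baselines import PPO2
    from stable_baselines.common.vec_env import DummyVecEnv

    model = PPO2('MlpPolicy', 'CartPole-v1', verbose=0)
    model.learn(total_timesteps=10000)

    # The new environment must have the same observation and action spaces
    new_env = DummyVecEnv([lambda: gym.make('CartPole-v1')])
    model.set_env(new_env)
    model.learn(total_timesteps=10000, reset_num_timesteps=False)
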
setup_model()[source]

Create all the functions and tensorflow graphs necessary to train the model