ACKTR

Actor Critic using Kronecker-Factored Trust Region (ACKTR) uses Kronecker-factored approximate curvature (K-FAC) for trust region optimization.

Notes

  • Original paper: https://arxiv.org/abs/1708.05144

Can I use?

  • Recurrent policies: ✔️
  • Multi processing: ✔️
  • Gym spaces:
    Space          Action  Observation
    Discrete       ✔️       ✔️
    Box            ✔️       ✔️
    MultiDiscrete  ❌       ✔️
    MultiBinary    ❌       ✔️

Example

import gym

from stable_baselines.common.policies import MlpPolicy, MlpLstmPolicy, MlpLnLstmPolicy
from stable_baselines.common import make_vec_env
from stable_baselines import ACKTR

# multiprocess environment
env = make_vec_env('CartPole-v1', n_envs=4)

model = ACKTR(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=25000)
model.save("acktr_cartpole")

del model # remove to demonstrate saving and loading

model = ACKTR.load("acktr_cartpole")

obs = env.reset()
while True:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()

Parameters

class stable_baselines.acktr.ACKTR(policy, env, gamma=0.99, nprocs=None, n_steps=20, ent_coef=0.01, vf_coef=0.25, vf_fisher_coef=1.0, learning_rate=0.25, max_grad_norm=0.5, kfac_clip=0.001, lr_schedule='linear', verbose=0, tensorboard_log=None, _init_setup_model=True, async_eigen_decomp=False, kfac_update=1, gae_lambda=None, policy_kwargs=None, full_tensorboard_log=False, seed=None, n_cpu_tf_sess=1)[source]

The ACKTR (Actor Critic using Kronecker-Factored Trust Region) model class, https://arxiv.org/abs/1708.05144

Parameters:
  • policy – (ActorCriticPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, CnnLstmPolicy, …)
  • env – (Gym environment or str) The environment to learn from (if registered in Gym, can be str)
  • gamma – (float) Discount factor
  • nprocs – (int) The number of threads for TensorFlow operations. Deprecated since version 2.9.0: use n_cpu_tf_sess instead.
  • n_steps – (int) The number of steps to run for each environment
  • ent_coef – (float) The weight for the entropy loss
  • vf_coef – (float) The weight for the loss on the value function
  • vf_fisher_coef – (float) The weight for the fisher loss on the value function
  • learning_rate – (float) The initial learning rate for the K-FAC optimizer
  • max_grad_norm – (float) The clipping value for the maximum gradient
  • kfac_clip – (float) gradient clipping for the Kullback-Leibler divergence in the K-FAC update
  • lr_schedule – (str) The type of scheduler for the learning rate update (‘linear’, ‘constant’, ‘double_linear_con’, ‘middle_drop’ or ‘double_middle_drop’)
  • verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
  • tensorboard_log – (str) the log location for tensorboard (if None, no logging)
  • _init_setup_model – (bool) Whether or not to build the network at the creation of the instance
  • async_eigen_decomp – (bool) Use async eigen decomposition
  • kfac_update – (int) update kfac after kfac_update steps
  • policy_kwargs – (dict) additional arguments to be passed to the policy on creation
  • gae_lambda – (float) Factor for the trade-off of bias vs variance for the Generalized Advantage Estimator. If None (default), the classic advantage will be used instead of GAE
  • full_tensorboard_log – (bool) enable additional logging when using tensorboard WARNING: this logging can take a lot of space quickly
  • seed – (int) Seed for the pseudo-random generators (python, numpy, tensorflow). If None (default), use random seed. Note that if you want completely deterministic results, you must set n_cpu_tf_sess to 1.
  • n_cpu_tf_sess – (int) The number of threads for TensorFlow operations. If None, the number of CPUs of the current machine will be used.
action_probability(observation, state=None, mask=None, actions=None, logp=False)

If actions is None, then get the model’s action probability distribution from a given observation.

Depending on the action space the output is:
  • Discrete: probability for each possible action
  • Box: mean and standard deviation of the action output

However, if actions is not None, this function will return the probability that the given actions are taken with the given parameters (observation, state, …) on this model. For discrete action spaces, it returns the probability mass; for continuous action spaces, the probability density, since the probability mass of any single action is always zero in a continuous space. See http://blog.christianperone.com/2019/01/ for a good explanation.

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • actions – (np.ndarray) (OPTIONAL) For calculating the likelihood that the given actions are chosen by the model for each of the given parameters. Must have the same number of actions and observations. (set to None to return the complete action probability distribution)
  • logp – (bool) (OPTIONAL) When specified with actions, returns probability in log-space. This has no effect if actions is None.
Returns: (np.ndarray) the model's (log) action probability

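For illustration, a minimal sketch continuing the CartPole example above; the exact shapes depend on the action space and the number of vectorized environments, so treat this as an assumption-laden usage sketch rather than exact output:

import numpy as np

obs = env.reset()
# full distribution: one row of action probabilities per environment (Discrete space)
probs = model.action_probability(obs)
# probability (mass) of taking action 1 in each environment
chosen = np.ones(env.num_envs, dtype=int)
p = model.action_probability(obs, actions=chosen)
# same quantity in log-space
log_p = model.action_probability(obs, actions=chosen, logp=True)
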
get_env()

Returns the current environment (can be None if not defined).

Returns:(Gym Environment) The current environment
get_parameter_list()

Get the tensorflow Variables of the model's parameters.

This includes all variables necessary for continuing training (saving / loading).

Returns:(list) List of tensorflow Variables
get_parameters()

Get current model parameters as dictionary of variable name -> ndarray.

Returns:(OrderedDict) Dictionary of variable name -> ndarray of model’s parameters.
learn(total_timesteps, callback=None, log_interval=100, tb_log_name='ACKTR', reset_num_timesteps=True)[source]

Return a trained model.

Parameters:
  • total_timesteps – (int) The total number of samples to train on
  • callback – ((dict, dict) -> boolean) function called at every step with the state of the algorithm. It takes the local and global variables as arguments; if it returns False, training is aborted (see the sketch below).
  • log_interval – (int) The number of timesteps before logging.
  • tb_log_name – (str) the name of the run for tensorboard log
  • reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)
Returns: (BaseRLModel) the trained model

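As a hedged sketch of the callback interface described above (assuming, as in the built-in algorithms, that _locals exposes the model under the 'self' key and that the model tracks num_timesteps):

def stop_callback(_locals, _globals):
    # returning False aborts training; here we stop once 50k timesteps are reached
    return _locals['self'].num_timesteps < 50000

model.learn(total_timesteps=100000, callback=stop_callback)
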
classmethod load(load_path, env=None, custom_objects=None, **kwargs)

Load the model from file

Parameters:
  • load_path – (str or file-like) the saved parameter location
  • env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
  • custom_objects – (dict) Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in file that can not be deserialized.
  • kwargs – extra arguments to change the model when loading
load_parameters(load_path_or_dict, exact_match=True)

Load model parameters from a file or a dictionary

Dictionary keys should be TensorFlow variable names, which can be obtained with the get_parameters function. If exact_match is True, the dictionary should contain keys for all of the model's parameters; otherwise a RuntimeError is raised. If exact_match is False, only the variables included in the dictionary will be updated.

This does not load the agent's hyper-parameters.

Warning

This function does not update trainer/optimizer variables (e.g. momentum). As such, training after using this function may lead to less-than-optimal results.

Parameters:
  • load_path_or_dict – (str or file-like or dict) Save parameter location or dict of parameters as variable.name -> ndarrays to be loaded.
  • exact_match – (bool) If True, expects load dictionary to contain keys for all variables in the model. If False, loads parameters only for variables mentioned in the dictionary. Defaults to True.
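A minimal sketch of a parameter round trip; the 'pi' substring filter is only an illustrative way to select a subset of variables and depends on the policy's variable naming:

params = model.get_parameters()        # OrderedDict: variable name -> ndarray
model.load_parameters(params)          # exact_match=True: every variable must be present

# update only a subset of variables, e.g. those belonging to the policy network
subset = {name: value for name, value in params.items() if 'pi' in name}
model.load_parameters(subset, exact_match=False)
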
predict(observation, state=None, mask=None, deterministic=False)

Get the model’s action from an observation

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns: (np.ndarray, np.ndarray) the model's action and the next state (used in recurrent policies)

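For example, a short evaluation sketch using deterministic actions (a recurrent policy would also need to feed state and mask back in on every call):

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
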
pretrain(dataset, n_epochs=10, learning_rate=0.0001, adam_epsilon=1e-08, val_interval=None)

Pretrain a model using behavior cloning: supervised learning given an expert dataset.

NOTE: only Box and Discrete spaces are supported for now.

Parameters:
  • dataset – (ExpertDataset) Dataset manager
  • n_epochs – (int) Number of iterations on the training set
  • learning_rate – (float) Learning rate
  • adam_epsilon – (float) the epsilon value for the adam optimizer
  • val_interval – (int) Report training and validation losses every n epochs. By default, every 10th of the maximum number of epochs.
Returns: (BaseRLModel) the pretrained model

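A hedged sketch of behavior cloning before RL training, assuming an expert dataset has already been recorded to the placeholder path expert_cartpole.npz (for example with stable_baselines.gail.generate_expert_traj):

from stable_baselines.gail import ExpertDataset

dataset = ExpertDataset(expert_path='expert_cartpole.npz',
                        traj_limitation=-1, batch_size=64)
model = ACKTR(MlpPolicy, env, verbose=1)
model.pretrain(dataset, n_epochs=100)
model.learn(total_timesteps=25000)  # optionally continue with regular RL training afterwards
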
save(save_path, cloudpickle=False)[source]

Save the current parameters to file

Parameters:
  • save_path – (str or file-like) The save location
  • cloudpickle – (bool) Use older cloudpickle format instead of zip-archives.
set_env(env)

Checks the validity of the environment and, if it is coherent, sets it as the current environment.

Parameters:env – (Gym Environment) The environment for learning a policy
set_random_seed(seed)
Parameters:seed – (int) Seed for the pseudo-random generators. If None, do not change the seeds.
setup_model()[source]

Create all the functions and tensorflow graphs necessary to train the model