SAC

Soft Actor-Critic (SAC): Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor.

Warning

The SAC model does not support stable_baselines.common.policies because it uses double Q-values and value estimation; as a result, it must use its own policy models (see SAC Policies).
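
For example, import the policy from the SAC-specific module rather than from stable_baselines.common.policies:

from stable_baselines.sac.policies import MlpPolicy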

Available Policies

MlpPolicy Policy object that implements actor critic, using a MLP (2 layers of 64)
LnMlpPolicy Policy object that implements actor critic, using a MLP (2 layers of 64), with layer normalisation
CnnPolicy Policy object that implements actor critic, using a CNN (the nature CNN)
LnCnnPolicy Policy object that implements actor critic, using a CNN (the nature CNN), with layer normalisation

Notes

Note

In our implementation, we use an entropy coefficient (as in OpenAI Spinning Up or Facebook Horizon), which is equivalent to the inverse of the reward scale in the original SAC paper. The main reason is that it avoids excessively large errors when updating the Q functions.
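
For instance, the entropy coefficient can be fixed instead of using the default automatic tuning; a minimal sketch, where the value 0.2 is illustrative and plays the role of a reward scale of 5 in the original paper's convention:

from stable_baselines import SAC

# ent_coef = 1 / reward_scale (illustrative value)
model = SAC('MlpPolicy', 'Pendulum-v0', ent_coef=0.2)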

Note

The default policies for SAC differ from the MlpPolicy of other algorithms: they use ReLU instead of tanh activation, to match the original paper.
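
If you want tanh activations nonetheless, they can be passed through policy_kwargs; a sketch, assuming the act_fun keyword accepted by the SAC feedforward policies:

import tensorflow as tf

from stable_baselines import SAC

# Override the default ReLU activation of the SAC policies
model = SAC('MlpPolicy', 'Pendulum-v0', policy_kwargs=dict(act_fun=tf.nn.tanh))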

Can I use?

  • Recurrent policies: ❌
  • Multi processing: ❌
  • Gym spaces:
Space          Action   Observation
Discrete       ❌       ✔️
Box            ✔️       ✔️
MultiDiscrete  ❌       ✔️
MultiBinary    ❌       ✔️

Example

import gym

from stable_baselines.sac.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import SAC

env = gym.make('Pendulum-v0')
env = DummyVecEnv([lambda: env])

model = SAC(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=50000, log_interval=10)
model.save("sac_pendulum")

del model # remove to demonstrate saving and loading

model = SAC.load("sac_pendulum")

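# Enjoy the trained agent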
obs = env.reset()
while True:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()

Parameters

class stable_baselines.sac.SAC(policy, env, gamma=0.99, learning_rate=0.0003, buffer_size=50000, learning_starts=100, train_freq=1, batch_size=64, tau=0.005, ent_coef='auto', target_update_interval=1, gradient_steps=1, target_entropy='auto', verbose=0, tensorboard_log=None, _init_setup_model=True, policy_kwargs=None)[source]

Soft Actor-Critic (SAC): Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. This implementation borrows code from the original implementation (https://github.com/haarnoja/sac), from OpenAI Spinning Up (https://github.com/openai/spinningup) and from the Softlearning repo (https://github.com/rail-berkeley/softlearning/).

Paper: https://arxiv.org/abs/1801.01290
Introduction to SAC: https://spinningup.openai.com/en/latest/algorithms/sac.html

Parameters:
  • policy – (SACPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, LnMlpPolicy, …)
  • env – (Gym environment or str) The environment to learn from (if registered in Gym, can be str)
  • gamma – (float) the discount factor
  • learning_rate – (float or callable) learning rate for the Adam optimizer; the same learning rate will be used for all networks (Q-values, actor and value function). It can be a function of the current progress (from 1 to 0)
  • buffer_size – (int) size of the replay buffer
  • batch_size – (int) Minibatch size for each gradient update
  • tau – (float) the soft update coefficient (“polyak update”, between 0 and 1)
  • ent_coef – (str or float) Entropy regularization coefficient (equivalent to the inverse of the reward scale in the original SAC paper). It controls the exploration/exploitation trade-off. Set it to ‘auto’ to learn it automatically (and ‘auto_0.1’ to use 0.1 as the initial value); see the sketch after this parameter list
  • train_freq – (int) Update the model every train_freq steps.
  • learning_starts – (int) how many steps of the model to collect transitions for before learning starts
  • target_update_interval – (int) update the target network every target_update_interval steps.
  • gradient_steps – (int) How many gradient updates to perform after each step
  • target_entropy – (str or float) target entropy when learning ent_coef (ent_coef = ‘auto’)
  • verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
  • tensorboard_log – (str) the log location for tensorboard (if None, no logging)
  • _init_setup_model – (bool) Whether or not to build the network at the creation of the instance
  • policy_kwargs – (dict) additional arguments to be passed to the policy on creation
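
For example, a minimal sketch combining a learning rate schedule with an entropy coefficient learned from an initial value of 0.1 (the linear schedule below is illustrative):

from stable_baselines import SAC

def linear_schedule(progress):
    # progress decreases from 1 (beginning of training) to 0 (end)
    return 3e-4 * progress

model = SAC('MlpPolicy', 'Pendulum-v0',
            learning_rate=linear_schedule, ent_coef='auto_0.1')
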
action_probability(observation, state=None, mask=None, actions=None)[source]

If actions is None, then get the model’s action probability distribution from a given observation.

Depending on the action space, the output is:
  • Discrete: probability for each possible action
  • Box: mean and standard deviation of the action output

However, if actions is not None, this function will return the probability that the given actions are taken with the given parameters (observation, state, …) on this model.

Warning

When working with continuous probability distributions (e.g. a Gaussian distribution for continuous actions), the probability of taking a particular action is exactly zero. See http://blog.christianperone.com/2019/01/ for a good explanation.

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • actions – (np.ndarray) (OPTIONAL) For calculating the likelihood that the given actions are chosen by the model for each of the given parameters. Must have the same number of actions and observations. (set to None to return the complete action probability distribution)
Returns:

(np.ndarray) the model’s action probability

get_env()

Returns the current environment (can be None if not defined).

Returns: (Gym Environment) The current environment
learn(total_timesteps, callback=None, seed=None, log_interval=4, tb_log_name='SAC')[source]

Return a trained model.

Parameters:
  • total_timesteps – (int) The total number of samples to train on
  • seed – (int) The initial seed for training, if None: keep current seed
  • callback – (function (dict, dict) -> boolean) function called at every step with the state of the algorithm. It takes the local and global variables. If it returns False, training is aborted (see the sketch below).
  • log_interval – (int) The number of timesteps before logging.
  • tb_log_name – (str) the name of the run for tensorboard log
Returns:

(BaseRLModel) the trained model
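
A minimal callback sketch (model as in the Example above); returning False aborts training:

def callback(locals_, globals_):
    # Called at every step with the algorithm's local and global variables
    return True

model.learn(total_timesteps=50000, callback=callback)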

classmethod load(load_path, env=None, **kwargs)[source]

Load the model from file

Parameters:
  • load_path – (str or file-like) the saved parameter location
  • env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
  • kwargs – extra arguments to change the model when loading
predict(observation, state=None, mask=None, deterministic=True)[source]

Get the model’s action from an observation

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

(np.ndarray, np.ndarray) the model’s action and the next state (used in recurrent policies)
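
A short usage sketch (model and env as in the Example above):

obs = env.reset()
# Sample a stochastic action instead of the deterministic one
action, _states = model.predict(obs, deterministic=False)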

save(save_path)[source]

Save the current parameters to file

Parameters: save_path – (str or file-like object) the save location
set_env(env)

Checks the validity of the environment and, if it is coherent, sets it as the current environment.

Parameters: env – (Gym Environment) The environment for learning a policy
setup_model()[source]

Create all the functions and TensorFlow graphs necessary to train the model

SAC Policies

class stable_baselines.sac.MlpPolicy(sess, ob_space, ac_space, n_env=1, n_steps=1, n_batch=None, reuse=False, **_kwargs)[source]

Policy object that implements actor critic, using a MLP (2 layers of 64)

Parameters:
  • sess – (TensorFlow session) The current TensorFlow session
  • ob_space – (Gym Space) The observation space of the environment
  • ac_space – (Gym Space) The action space of the environment
  • n_env – (int) The number of environments to run
  • n_steps – (int) The number of steps to run for each environment
  • n_batch – (int) The number of batches to run (n_env * n_steps)
  • reuse – (bool) If the policy is reusable or not
  • _kwargs – (dict) Extra keyword arguments for the nature CNN feature extraction
make_actor(obs=None, reuse=False, scope='pi')

Creates an actor object

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name of the actor
Returns:

(TensorFlow Tensor) the output tensor

make_critics(obs=None, action=None, reuse=False, scope='values_fn', create_vf=True, create_qf=True)

Creates the two Q-value approximators along with the value function

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • action – (TensorFlow Tensor) The action placeholder
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name
  • create_vf – (bool) Whether to create Value fn or not
  • create_qf – (bool) Whether to create Q-Values fn or not
Returns:

([tf.Tensor]) the output tensors (Q-values and value function)

proba_step(obs, state=None, mask=None)

Returns the action probability params (mean, std) for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
Returns:

([float], [float]) the mean and standard deviation of the action distribution

step(obs, state=None, mask=None, deterministic=False)

Returns the policy for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

([float]) actions

class stable_baselines.sac.LnMlpPolicy(sess, ob_space, ac_space, n_env=1, n_steps=1, n_batch=None, reuse=False, **_kwargs)[source]

Policy object that implements actor critic, using a MLP (2 layers of 64), with layer normalisation

Parameters:
  • sess – (TensorFlow session) The current TensorFlow session
  • ob_space – (Gym Space) The observation space of the environment
  • ac_space – (Gym Space) The action space of the environment
  • n_env – (int) The number of environments to run
  • n_steps – (int) The number of steps to run for each environment
  • n_batch – (int) The number of batches to run (n_env * n_steps)
  • reuse – (bool) If the policy is reusable or not
  • _kwargs – (dict) Extra keyword arguments for the nature CNN feature extraction
make_actor(obs=None, reuse=False, scope='pi')

Creates an actor object

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name of the actor
Returns:

(TensorFlow Tensor) the output tensor

make_critics(obs=None, action=None, reuse=False, scope='values_fn', create_vf=True, create_qf=True)

Creates the two Q-value approximators along with the value function

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • action – (TensorFlow Tensor) The action placeholder
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name
  • create_vf – (bool) Whether to create Value fn or not
  • create_qf – (bool) Whether to create Q-Values fn or not
Returns:

([tf.Tensor]) the output tensors (Q-values and value function)

proba_step(obs, state=None, mask=None)

Returns the action probability params (mean, std) for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
Returns:

([float], [float]) the mean and standard deviation of the action distribution

step(obs, state=None, mask=None, deterministic=False)

Returns the policy for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

([float]) actions

class stable_baselines.sac.CnnPolicy(sess, ob_space, ac_space, n_env=1, n_steps=1, n_batch=None, reuse=False, **_kwargs)[source]

Policy object that implements actor critic, using a CNN (the nature CNN)

Parameters:
  • sess – (TensorFlow session) The current TensorFlow session
  • ob_space – (Gym Space) The observation space of the environment
  • ac_space – (Gym Space) The action space of the environment
  • n_env – (int) The number of environments to run
  • n_steps – (int) The number of steps to run for each environment
  • n_batch – (int) The number of batches to run (n_env * n_steps)
  • reuse – (bool) If the policy is reusable or not
  • _kwargs – (dict) Extra keyword arguments for the nature CNN feature extraction
make_actor(obs=None, reuse=False, scope='pi')

Creates an actor object

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name of the actor
Returns:

(TensorFlow Tensor) the output tensor

make_critics(obs=None, action=None, reuse=False, scope='values_fn', create_vf=True, create_qf=True)

Creates the two Q-value approximators along with the value function

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • action – (TensorFlow Tensor) The action placeholder
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name
  • create_vf – (bool) Whether to create Value fn or not
  • create_qf – (bool) Whether to create Q-Values fn or not
Returns:

([tf.Tensor]) the output tensors (Q-values and value function)

proba_step(obs, state=None, mask=None)

Returns the action probability params (mean, std) for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
Returns:

([float], [float]) the mean and standard deviation of the action distribution

step(obs, state=None, mask=None, deterministic=False)

Returns the policy for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

([float]) actions

class stable_baselines.sac.LnCnnPolicy(sess, ob_space, ac_space, n_env=1, n_steps=1, n_batch=None, reuse=False, **_kwargs)[source]

Policy object that implements actor critic, using a CNN (the nature CNN), with layer normalisation

Parameters:
  • sess – (TensorFlow session) The current TensorFlow session
  • ob_space – (Gym Space) The observation space of the environment
  • ac_space – (Gym Space) The action space of the environment
  • n_env – (int) The number of environments to run
  • n_steps – (int) The number of steps to run for each environment
  • n_batch – (int) The number of batches to run (n_env * n_steps)
  • reuse – (bool) If the policy is reusable or not
  • _kwargs – (dict) Extra keyword arguments for the nature CNN feature extraction
make_actor(obs=None, reuse=False, scope='pi')

Creates an actor object

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name of the actor
Returns:

(TensorFlow Tensor) the output tensor

make_critics(obs=None, action=None, reuse=False, scope='values_fn', create_vf=True, create_qf=True)

Creates the two Q-value approximators along with the value function

Parameters:
  • obs – (TensorFlow Tensor) The observation placeholder (can be None for default placeholder)
  • action – (TensorFlow Tensor) The action placeholder
  • reuse – (bool) whether or not to reuse parameters
  • scope – (str) the scope name
  • create_vf – (bool) Whether to create Value fn or not
  • create_qf – (bool) Whether to create Q-Values fn or not
Returns:

([tf.Tensor]) the output tensors (Q-values and value function)

proba_step(obs, state=None, mask=None)

Returns the action probability params (mean, std) for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
Returns:

([float], [float]) the mean and standard deviation of the action distribution

step(obs, state=None, mask=None, deterministic=False)

Returns the policy for a single step

Parameters:
  • obs – ([float] or [int]) The current observation of the environment
  • state – ([float]) The last states (used in recurrent policies)
  • mask – ([float]) The last masks (used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

([float]) actions

Custom Policy Network

Similarly to the example given in the examples page, you can easily define a custom architecture for the policy network:

import gym

from stable_baselines.sac.policies import FeedForwardPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import SAC

# Custom MLP policy of three layers of size 128 each
class CustomSACPolicy(FeedForwardPolicy):
    def __init__(self, *args, **kwargs):
        super(CustomSACPolicy, self).__init__(*args, **kwargs,
                                           layers=[128, 128, 128],
                                           layer_norm=False,
                                           feature_extraction="mlp")

# Create and wrap the environment
env = gym.make('Pendulum-v0')
env = DummyVecEnv([lambda: env])

model = SAC(CustomSACPolicy, env, verbose=1)
# Train the agent
model.learn(total_timesteps=100000)
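
Alternatively, the same architecture can be passed at model creation through policy_kwargs; a sketch, assuming the layers keyword of the SAC feedforward policies:

model = SAC('MlpPolicy', env, policy_kwargs=dict(layers=[128, 128, 128]), verbose=1)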