
Hindsight Experience Replay (HER)

Warning

HER is not refactored yet. We are looking for contributors to help us.

How to use Hindsight Experience Replay

Getting started

Training an agent is very simple:

python -m stable_baselines.her.experiment.train

This will train a DDPG+HER agent on the FetchReach environment. You should see the success rate go up quickly to 1.0, which means that the agent achieves the desired goal in 100% of the cases. The training script logs other diagnostics as well and pickles the best policy so far (w.r.t. its test success rate), the latest policy, and, if enabled, a history of policies every K epochs.

To inspect what the agent has learned, use the play script:

python -m stable_baselines.her.experiment.play /path/to/an/experiment/policy_best.pkl

You can try it right now with the results of the training step (the script prints out the path for you). This should visualize the current policy for 10 episodes and will also print statistics.

Reproducing results

In order to reproduce the results from Plappert et al. (2018), run the following command:

python -m stable_baselines.her.experiment.train --num_cpu 19

This requires a machine with a sufficient number of physical CPU cores. In our experiments, we used Azure’s D15v2 instances, which have 20 physical cores. We only scheduled the experiment on 19 of those to leave some headroom on the system.

Parameters

class stable_baselines.her.HER(policy, env, verbose=0, _init_setup_model=True)[source]
action_probability(observation, state=None, mask=None, actions=None)[source]

If actions is None, this returns the model’s action probability distribution for a given observation.

Depending on the action space, the output is:
  • Discrete: probability for each possible action
  • Box: mean and standard deviation of the action output

However, if actions is not None, this function will return the probability that the given actions are taken with the given parameters (observation, state, …) on this model.

Warning

When working with continuous probability distributions (e.g. a Gaussian distribution for continuous actions), the probability of taking a particular action is exactly zero. See http://blog.christianperone.com/2019/01/ for a good explanation.

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • actions – (np.ndarray) (OPTIONAL) For calculating the likelihood that the given actions are chosen by the model for each of the given parameters. Must have the same number of actions and observations. (set to None to return the complete action probability distribution)
Returns:

(np.ndarray) the model’s action probability
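
For illustration only, here is a minimal sketch of querying the action distribution. It assumes that model is an already-trained HER instance with a Box action space and that obs is a single observation from its environment; both names are placeholders, not part of the original docs.

import numpy as np

# `model` is assumed to be a trained stable_baselines.her.HER instance and
# `obs` a single observation from its environment (placeholders).
dist = model.action_probability(obs)  # Box space: mean and std of the action distribution

# Density of a specific (hypothetical) action under the same observation;
# for continuous spaces this is a density value, not a true probability
# (see the warning above).
some_action = np.zeros(model.action_space.shape)
prob = model.action_probability(obs, actions=some_action)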

learn(total_timesteps, callback=None, seed=None, log_interval=100, tb_log_name='HER')[source]

Return a trained model.

Parameters:
  • total_timesteps – (int) The total number of samples to train on
  • seed – (int) The initial seed for training, if None: keep current seed
  • callback – (function (dict, dict) -> boolean) function called at every step with the state of the algorithm. It takes the local and global variables. If it returns False, training is aborted.
  • log_interval – (int) The number of timesteps before logging.
  • tb_log_name – (str) the name of the run for tensorboard log
Returns:

(BaseRLModel) the trained model
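
As a rough, hedged sketch of training through the class API rather than the experiment script: the policy name and environment id below are assumptions, and, as noted in the warning above, this unrefactored HER interface may not accept them as-is.

import gym
from stable_baselines.her import HER

# Placeholder choices: policy name and environment id are assumptions.
env = gym.make("FetchReach-v1")
model = HER("MlpPolicy", env, verbose=1)

def stop_callback(locals_, globals_):
    # Called at every step with the algorithm's local and global variables.
    # Returning False aborts training.
    return True

model.learn(total_timesteps=100000, callback=stop_callback, log_interval=100)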

classmethod load(load_path, env=None, **kwargs)[source]

Load the model from file

Parameters:
  • load_path – (str or file-like) the saved parameter location
  • env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
  • kwargs – extra arguments to change the model when loading
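
A hedged sketch of reloading a saved model; the save path and environment id are placeholders carried over from the training sketch above.

import gym
from stable_baselines.her import HER

# Placeholder path and environment id.
env = gym.make("FetchReach-v1")
model = HER.load("her_fetchreach", env=env)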
predict(observation, state=None, mask=None, deterministic=False)[source]

Get the model’s action from an observation

Parameters:
  • observation – (np.ndarray) the input observation
  • state – (np.ndarray) The last states (can be None, used in recurrent policies)
  • mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
  • deterministic – (bool) Whether or not to return deterministic actions.
Returns:

(np.ndarray, np.ndarray) the model’s action and the next state (used in recurrent policies)
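
For illustration, a simple evaluation loop, assuming model and env from the sketches above and that the model can consume the environment’s observations directly:

# `model` and `env` are assumed to come from the previous sketches.
obs = env.reset()
for _ in range(1000):
    # Deterministic actions are usually preferable at evaluation time.
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()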

save(save_path)[source]

Save the current parameters to file

Parameters:
  • save_path – (str or file-like object) the save location
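
And the corresponding save call (the filename is arbitrary and only for illustration):

# `model` is assumed to be a trained HER instance.
model.save("her_fetchreach")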
setup_model()[source]

Create all the functions and TensorFlow graphs necessary to train the model