GAIL¶
The Generative Adversarial Imitation Learning (GAIL) algorithm uses expert trajectories to recover a cost function and then learn a policy.
Learning a cost function from expert demonstrations is called Inverse Reinforcement Learning (IRL). The connection between GAIL and Generative Adversarial Networks (GANs) is that GAIL uses a discriminator that tries to separate expert trajectories from trajectories of the learned policy, which plays the role of the generator here.
Notes¶
- Original paper: https://arxiv.org/abs/1606.03476
Warning
Images are not yet handled properly by the current implementation
If you want to train an imitation learning agent¶
Step 1: Generate expert data¶
You can either train an RL algorithm in a classic setting, use another controller (e.g. a PID controller), or collect human demonstrations.
We recommend you take a look at the pre-training section, or look directly at the stable_baselines/gail/dataset/
folder, to learn more about the expected format of the dataset.
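The recorded archive is a NumPy .npz file that can be inspected directly. Here is a minimal sketch (the key names, e.g. 'obs' and 'actions', are the ones usually written by generate_expert_traj and may differ between versions):
import numpy as np
# Inspect a recorded expert dataset (a NumPy .npz archive)
data = np.load('expert_pendulum.npz')
for key in data.files:
    print(key, data[key].shape)
# Typically contains arrays such as 'obs', 'actions', 'rewards',
# 'episode_returns' and 'episode_starts'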
Here is an example of training a Soft Actor-Critic model to generate expert trajectories for GAIL:
from stable_baselines import SAC
from stable_baselines.gail import generate_expert_traj
# Generate expert trajectories (train expert)
model = SAC('MlpPolicy', 'Pendulum-v0', verbose=1)
# Train for 60000 timesteps and record 10 trajectories
# all the data will be saved in 'expert_pendulum.npz' file
generate_expert_traj(model, 'expert_pendulum', n_timesteps=60000, n_episodes=10)
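The expert does not have to be an RL model: generate_expert_traj also accepts a plain callable that maps an observation to an action, in which case the environment must be passed explicitly. A minimal sketch with a random "expert" standing in for, e.g., a PID controller:
import gym
from stable_baselines.gail import generate_expert_traj
env = gym.make('Pendulum-v0')
def dummy_expert(_obs):
    # Replace this with any controller, e.g. a PID controller
    return env.action_space.sample()
# When the expert is a callable (not an RL model), pass the env explicitly
generate_expert_traj(dummy_expert, 'dummy_expert_pendulum', env, n_episodes=10)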
Step 2: Run GAIL¶
In case you want to run Behavior Cloning (BC), use the .pretrain() method (cf. guide).
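For example, here is a minimal sketch that pretrains a SAC agent by behavior cloning on the dataset recorded in step 1, before optionally fine-tuning it with RL:
from stable_baselines import SAC
from stable_baselines.gail import ExpertDataset
# Behavior Cloning: supervised learning on the expert dataset
dataset = ExpertDataset(expert_path='expert_pendulum.npz', traj_limitation=10, batch_size=128)
model = SAC('MlpPolicy', 'Pendulum-v0', verbose=1)
model.pretrain(dataset, n_epochs=1000)
# The pretrained policy can then be fine-tuned with RL
model.learn(total_timesteps=10000)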
Others
Thanks to the following open source implementations:
- @openai/imitation
- @carpedm20/deep-rl-tensorflow
Can I use?¶
- Recurrent policies: ❌
- Multi processing: ✔️ (using MPI)
- Gym spaces:
| Space | Action | Observation |
| --- | --- | --- |
| Discrete | ✔️ | ✔️ |
| Box | ✔️ | ✔️ |
| MultiDiscrete | ❌ | ✔️ |
| MultiBinary | ❌ | ✔️ |
Example¶
import gym
from stable_baselines import GAIL, SAC
from stable_baselines.gail import ExpertDataset, generate_expert_traj
# Generate expert trajectories (train expert)
model = SAC('MlpPolicy', 'Pendulum-v0', verbose=1)
generate_expert_traj(model, 'expert_pendulum', n_timesteps=100, n_episodes=10)
# Load the expert dataset
dataset = ExpertDataset(expert_path='expert_pendulum.npz', traj_limitation=10, verbose=1)
model = GAIL("MlpPolicy", 'Pendulum-v0', dataset, verbose=1)
# Note: in practice, you need to train for 1M steps to have a working policy
model.learn(total_timesteps=1000)
model.save("gail_pendulum")
del model # remove to demonstrate saving and loading
model = GAIL.load("gail_pendulum")
env = gym.make('Pendulum-v0')
obs = env.reset()
while True:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
Parameters¶
- class stable_baselines.gail.GAIL(policy, env, expert_dataset=None, hidden_size_adversary=100, adversary_entcoeff=0.001, g_step=3, d_step=1, d_stepsize=0.0003, verbose=0, _init_setup_model=True, **kwargs)[source]¶
Generative Adversarial Imitation Learning (GAIL)
Warning
Images are not yet handled properly by the current implementation
Parameters: - policy – (ActorCriticPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, CnnLstmPolicy, …)
- env – (Gym environment or str) The environment to learn from (if registered in Gym, can be str)
- expert_dataset – (ExpertDataset) the dataset manager
- gamma – (float) the discount value
- timesteps_per_batch – (int) the number of timesteps to run per batch (horizon)
- max_kl – (float) the Kullback-Leibler divergence threshold
- cg_iters – (int) the number of iterations for the conjugate gradient calculation
- lam – (float) GAE factor
- entcoeff – (float) the weight for the entropy loss
- cg_damping – (float) the conjugate gradient damping factor
- vf_stepsize – (float) the value function stepsize
- vf_iters – (int) the number of iterations for learning the value function
- hidden_size – ([int]) the hidden dimension for the MLP
- g_step – (int) number of steps to train policy in each epoch
- d_step – (int) number of steps to train discriminator in each epoch
- d_stepsize – (float) the reward giver stepsize
- verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
- _init_setup_model – (bool) Whether or not to build the network at the creation of the instance
- full_tensorboard_log – (bool) enable additional logging when using tensorboard WARNING: this logging can take a lot of space quickly
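A minimal sketch of a customized instantiation; the hyperparameter values below are illustrative only, and the extra arguments listed above (gamma, timesteps_per_batch, max_kl, …) are passed through **kwargs:
from stable_baselines import GAIL
from stable_baselines.gail import ExpertDataset
dataset = ExpertDataset(expert_path='expert_pendulum.npz', traj_limitation=10, verbose=1)
# Illustrative hyperparameters, not tuned recommendations
model = GAIL('MlpPolicy', 'Pendulum-v0', dataset,
             g_step=3, d_step=1,
             hidden_size_adversary=100, adversary_entcoeff=1e-3,
             # parameters such as gamma or timesteps_per_batch go through **kwargs
             gamma=0.99, timesteps_per_batch=1024,
             verbose=1)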
- action_probability(observation, state=None, mask=None, actions=None)[source]¶
If actions is None, then get the model’s action probability distribution from a given observation. Depending on the action space, the output is:
- Discrete: probability for each possible action
- Box: mean and standard deviation of the action output
However, if actions is not None, this function will return the probability that the given actions are taken with the given parameters (observation, state, …) on this model.
Warning
When working with a continuous probability distribution (e.g. a Gaussian distribution for continuous actions), the probability of taking a particular action is exactly zero. See http://blog.christianperone.com/2019/01/ for a good explanation
Parameters: - observation – (np.ndarray) the input observation
- state – (np.ndarray) The last states (can be None, used in recurrent policies)
- mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
- actions – (np.ndarray) (OPTIONAL) For calculating the likelihood that the given actions are chosen by the model for each of the given parameters. Must have the same number of actions and observations. (set to None to return the complete action probability distribution)
Returns: (np.ndarray) the model’s action probability
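For example, a minimal sketch that queries the action distribution of the model trained and saved in the example above (with a Box action space the output is the mean and standard deviation):
import gym
from stable_baselines import GAIL
model = GAIL.load("gail_pendulum")
env = gym.make('Pendulum-v0')
obs = env.reset()
# actions=None: return the distribution parameters (mean and std for a Box space)
mean_and_std = model.action_probability(obs)
print(mean_and_std)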
- get_env()¶
Returns the current environment (can be None if not defined)
Returns: (Gym Environment) The current environment
- learn(total_timesteps, callback=None, seed=None, log_interval=100, tb_log_name='GAIL', reset_num_timesteps=True)[source]¶
Return a trained model.
Parameters: - total_timesteps – (int) The total number of samples to train on
- seed – (int) The initial seed for training, if None: keep current seed
- callback – (function (dict, dict) -> boolean) function called at every step with the state of the algorithm. It takes the local and global variables. If it returns False, training is aborted.
- log_interval – (int) The number of timesteps before logging.
- tb_log_name – (str) the name of the run for tensorboard log
- reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)
Returns: (BaseRLModel) the trained model
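For example, a minimal sketch of a callback that stops training early; it assumes the model exposes a num_timesteps attribute, as stable-baselines models do:
def stop_callback(_locals, _globals):
    # Called at every step; return False to abort training
    return _locals['self'].num_timesteps < 50000

model.learn(total_timesteps=100000, callback=stop_callback, log_interval=10)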
- classmethod load(load_path, env=None, **kwargs)[source]¶
Load the model from file
Parameters: - load_path – (str or file-like) the saved parameter location
- env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
- kwargs – extra arguments to change the model when loading
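For example, a minimal sketch that reloads the model saved earlier and attaches a fresh environment for prediction:
import gym
from stable_baselines import GAIL
env = gym.make('Pendulum-v0')
model = GAIL.load("gail_pendulum", env=env)
obs = env.reset()
action, _states = model.predict(obs)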
- predict(observation, state=None, mask=None, deterministic=False)[source]¶
Get the model’s action from an observation
Parameters: - observation – (np.ndarray) the input observation
- state – (np.ndarray) The last states (can be None, used in recurrent policies)
- mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
- deterministic – (bool) Whether or not to return deterministic actions.
Returns: (np.ndarray, np.ndarray) the model’s action and the next state (used in recurrent policies)
- pretrain(dataset, n_epochs=10, learning_rate=0.0001, adam_epsilon=1e-08, val_interval=None)[source]¶
Pretrain a model using behavior cloning: supervised learning given an expert dataset.
NOTE: only Box and Discrete spaces are supported for now.
Parameters: - dataset – (ExpertDataset) Dataset manager
- n_epochs – (int) Number of iterations on the training set
- learning_rate – (float) Learning rate
- adam_epsilon – (float) the epsilon value for the adam optimizer
- val_interval – (int) Report training and validation losses every n epochs. By default, every 10th of the maximum number of epochs.
Returns: (BaseRLModel) the pretrained model
- save(save_path)[source]¶
Save the current parameters to file
Parameters: save_path – (str or file-like object) the save location