An experimentation system for Reinforcement Learning using OpenAI Gym, TensorFlow, and Keras.
OpenAI Lab is created to do Reinforcement Learning (RL) like science: theorize, experiment. It provides an easy interface to OpenAI Gym and Keras, with an automated experimentation and evaluation framework.
This is motivated by the problems we faced in RL research: the difficulty of building on others' work, the lack of rigor in comparing research results, and the inertia that hinders pursuing a high-level vision.
The Lab aims to make RL research more efficient and to encourage experimentation by doing three things:
- Handles the basic RL environment and algorithm setups.
- Provides a standard, extensible platform with reusable components for developing deep reinforcement learning algorithms.
- Provides a rigorous experimentation system with logs, plots, and analytics for testing new RL algorithms. Experiment settings are stored in a standardized format for reproducibility and comparison (a sketch of such a spec follows this list).
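As an illustration, a standardized experiment spec might look like the following Python mapping. This is a hypothetical sketch, not the Lab's actual schema: the keys (`problem`, `Agent`, `Memory`, `Policy`, `param`) and the component names are assumptions chosen for the example.

```python
# Hypothetical experiment spec (illustrative only; not the Lab's actual schema).
# Each key names a reusable component; "param" holds tunable hyperparameters.
experiment_spec = {
    "dqn_cartpole": {
        "problem": "CartPole-v0",         # OpenAI Gym environment id
        "Agent": "DQN",                   # algorithm component
        "Memory": "ReplayMemory",         # experience-replay component
        "Policy": "EpsilonGreedyPolicy",  # exploration-policy component
        "param": {
            "lr": 0.02,                   # learning rate
            "gamma": 0.99,                # discount factor
            "hidden_layers": [32],        # network architecture
        },
    }
}
```

Keeping every setting in one declarative spec is what makes an experiment reproducible: rerunning it requires nothing beyond the spec itself.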
With OpenAI Lab, we can focus on researching the essential elements of reinforcement learning, such as the algorithm, policy, memory, and parameter tuning. It lets us build agents efficiently from existing components and implement research ideas quickly; we can then test research hypotheses systematically by running experiments, as in the component sketch below.
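To make the component pattern concrete, here is a minimal sketch of an agent assembled from interchangeable memory and policy components. It is an illustration under stated assumptions, not the Lab's actual classes: it assumes the classic Gym API in which `env.step()` returns a 4-tuple, and the placeholder `q_values()` stands in for a real Keras value network.

```python
import random
from collections import deque

import gym


class ReplayMemory:
    """Minimal experience-replay component (illustrative stand-in)."""

    def __init__(self, max_size=10000):
        self.buffer = deque(maxlen=max_size)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling; a PER component would override this method.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))


class EpsilonGreedyPolicy:
    """Minimal exploration-policy component (illustrative stand-in)."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon

    def select_action(self, q_values, action_space):
        if random.random() < self.epsilon:
            return action_space.sample()  # explore
        return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit


class Agent:
    """An agent assembled from interchangeable components."""

    def __init__(self, env, memory, policy):
        self.env = env
        self.memory = memory
        self.policy = policy

    def q_values(self, state):
        # Placeholder: a real agent would query its Keras value network here.
        return [0.0] * self.env.action_space.n

    def run_episode(self):
        state = self.env.reset()  # classic Gym API (assumption)
        total_reward, done = 0.0, False
        while not done:
            action = self.policy.select_action(self.q_values(state), self.env.action_space)
            next_state, reward, done, _ = self.env.step(action)  # classic 4-tuple return
            self.memory.add(state, action, reward, next_state, done)
            state = next_state
            total_reward += reward
        return total_reward


env = gym.make('CartPole-v0')
agent = Agent(env, ReplayMemory(), EpsilonGreedyPolicy(epsilon=0.1))
print('episode reward:', agent.run_episode())
```

Because the memory and policy are plain objects passed into the agent, swapping in a prioritized memory or a different exploration policy is a one-line change, which is exactly the kind of component reuse described above.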
Ultimately, the Lab is a generalized framework for doing reinforcement learning, agnostic of OpenAI Gym and Keras. PyTorch-based implementations, for example, are on the roadmap.
The implemented algorithms are listed below; see the Fitness Matrix for their evaluation results in the OpenAI Gym environments.
| algorithm | implementation | eval score (pending) |
|---|---|---|
| DQN | DQN | - |
| Double DQN | DoubleDQN | - |
| Dueling DQN | - | - |
| Sarsa | DeepSarsa | - |
| Off-Policy Sarsa | OffPolicySarsa | - |
| PER (Prioritized Experience Replay) | PrioritizedExperienceReplay | - |
| CEM (Cross Entropy Method) | next | - |
| REINFORCE | - | - |
| DPG (Deterministic Policy Gradient) off-policy actor-critic | ActorCritic | - |
| DDPG (Deep-DPG) actor-critic with target networks | DDPG | - |
| A3C (Asynchronous Advantage Actor-Critic) | - | - |
| Dyna | next | - |
| TRPO | - | - |
| Q*(lambda) | - | - |
| Retrace(lambda) | - | - |
| Neural Episodic Control (NEC) | - | - |
| EWC (Elastic Weight Consolidation) | - | - |
Next, see Installation and jump to Quickstart.
Timelapse of OpenAI Lab solving CartPole-v0.