Here we benchmark various reinforcement learning algorithms available in RLlib on the MineRL environment.
RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. RLlib natively supports TensorFlow, TensorFlow Eager, and PyTorch, but most of its internals are framework agnostic.
Make sure you have JDK 1.8 installed on your system for MineRL.
Python 3.7 or 3.8 is required.
conda create --name minerl-rllib python=3.8
conda activate minerl-rllib
pip install poetry
poetry install
Install PyTorch separately with the CUDA version matching your system.
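For example, for a CUDA 11.8 system (the `cu118` wheel index here is an assumption; use the selector at pytorch.org to find the index URL matching your CUDA version):

```shell
# Assumes CUDA 11.8; swap the wheel index for your CUDA version.
pip install torch --index-url https://download.pytorch.org/whl/cu118
```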
Make sure you have the environment variable MINERL_DATA_ROOT set; otherwise it defaults to the data folder.
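For example, assuming the dataset was downloaded to /data/minerl (a placeholder path):

```shell
# Point MineRL at the dataset location (path is a placeholder).
export MINERL_DATA_ROOT=/data/minerl
```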
Follow the official instructions: https://minerl.io/dataset/
If you download the data to ./data, you do not need to set MINERL_DATA_ROOT in your environment variables.
Run python train.py --help to see all options.
The following command trains the SAC algorithm on offline data in the MineRLObtainDiamondVectorObf-v0 environment.
python train.py -f config/sac-offline.yaml
This repository comes with a modular configuration system: configuration YAML files are written according to the RLlib specification.
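For illustration, such a file follows RLlib's tuned-example YAML layout. The experiment name, stop criterion, and settings below are hypothetical, not the actual contents of config/sac-offline.yaml:

```yaml
# Hypothetical experiment spec in RLlib's tuned-example YAML format.
minerl-sac-offline:
  run: SAC
  env: MineRLObtainDiamondVectorObf-v0
  stop:
    timesteps_total: 1000000
  config:
    framework: torch
    # Offline training reads logged experience instead of stepping the live env.
    input: /path/to/offline/data  # placeholder path
```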