A collection of GPU/TPU-accelerated parallel game simulators for reinforcement learning (RL)
Brax, a JAX-native physics engine, provides extremely high-speed parallel simulation for RL in continuous state spaces. But what about RL in discrete state spaces like Chess, Shogi, and Go? Pgx provides a wide variety of JAX-native game simulators! Highlighted features include:
- JAX-native. All `step` functions are JIT-able
- Super fast parallel execution on accelerators
- Various game support including Backgammon, Chess, Shogi, and Go
- Beautiful visualization in SVG format
```sh
pip install pgx
```
```py
import jax
import pgx

env = pgx.make("go_19x19")
init = jax.jit(jax.vmap(env.init))  # vectorize and JIT-compile
step = jax.jit(jax.vmap(env.step))

batch_size = 1024
keys = jax.random.split(jax.random.PRNGKey(42), batch_size)
state = init(keys)  # vectorized states
while not state.terminated.all():
    action = model(state.current_player, state.observation, state.legal_action_mask)
    state = step(state, action)  # each state carries a per-player reward of shape (2,)
```

Here, `model` is any user-provided policy that maps observations and legal-action masks to actions.
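As a stand-in for the `model` in the loop above, a uniformly random policy over legal actions can be written in pure JAX. The helper below is illustrative, not part of the Pgx API; it masks illegal actions with `-inf` logits and samples with `jax.random.categorical`:

```python
import jax
import jax.numpy as jnp

def random_legal_action(rng_key, legal_action_mask):
    """Sample uniformly among the legal actions given a boolean mask."""
    logits = jnp.where(legal_action_mask, 0.0, -jnp.inf)
    return jax.random.categorical(rng_key, logits)

# Batched version: one independent action per parallel environment.
batched = jax.jit(jax.vmap(random_legal_action))

keys = jax.random.split(jax.random.PRNGKey(0), 4)
mask = jnp.array([[True, False, True]] * 4)  # actions 0 and 2 are legal
actions = batched(keys, mask)  # every sampled action is 0 or 2
```

Because the helper is JIT-able and `vmap`-able, it composes with the vectorized `init`/`step` functions in the same way a learned policy would.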
⚠️ Pgx is currently in the beta version. Therefore, API is subject to change without notice. We aim to release v1.0.0 in April 2023. Opinions and comments are more than welcome!
Use `pgx.available_games()` to see the list of currently available games.
| Game | Environment | Visualization |
|---|---|---|
| 2048 | ✅ | ✅ |
| Animal Shogi | ✅ | ✅ |
| Backgammon | ✅ | ✅ |
| Bridge Bidding | 🚧 | ✅ |
| Chess | ✅ | ✅ |
| Connect Four | ✅ | ✅ |
| Go | ✅ | ✅ |
| Hex | ✅ | ✅ |
| Kuhn Poker | ✅ | ✅ |
| Leduc hold'em | ✅ | ✅ |
| Mahjong | 🚧 | 🚧 |
| MinAtar/Asterix | ✅ | ✅ |
| MinAtar/Breakout | ✅ | ✅ |
| MinAtar/Freeway | ✅ | ✅ |
| MinAtar/Seaquest | ✅ | ✅ |
| MinAtar/SpaceInvaders | ✅ | ✅ |
| Othello | ✅ | ✅ |
| Shogi | ✅ | ✅ |
| Sparrow Mahjong | ✅ | ✅ |
| Tic-tac-toe | ✅ | ✅ |
Pgx is intended to complement these JAX-native environments with (classic) board game suites:
- RobertTLange/gymnax: JAX implementation of popular RL environments (classic control, bsuite, MinAtar, etc) and meta RL tasks
- google/brax: Rigidbody physics simulation in JAX and continuous-space RL tasks (ant, fetch, humanoid, etc)
- instadeepai/jumanji: A suite of diverse and challenging RL environments in JAX (bin-packing, routing problems, etc)
Combining Pgx with these JAX-native algorithms/implementations might be an interesting direction:
- Anakin framework: Highly efficient RL framework that works with JAX-native environments on TPUs
- deepmind/mctx: JAX-native MCTS implementations, including AlphaZero and MuZero
- deepmind/rlax: JAX-native RL components
- google/evojax: Hardware-accelerated neuroevolution
- RobertTLange/evosax: JAX-native evolution strategy (ES) implementations
- adaptive-intelligent-robotics/QDax: JAX-native Quality-Diversity (QD) algorithms
```bibtex
@article{koyamada2023pgx,
  title={Pgx: Hardware-accelerated parallel game simulation for reinforcement learning},
  author={Koyamada, Sotetsu and Okano, Shinri and Nishimori, Soichiro and Murata, Yu and Habara, Keigo and Kita, Haruka and Ishii, Shin},
  journal={arXiv preprint arXiv:2303.17503},
  year={2023}
}
```
Apache-2.0