GridWorlds

A package for creating grid world environments for reinforcement learning in Julia. This package is designed to be lightweight and fast.

This package is inspired by gym-minigrid. In order to cite this package, please refer to the file CITATION.bib. Starring the repository on GitHub is also appreciated. For benchmarks, refer to benchmarks/benchmarks.md.

Table of contents:

List of Environments

  1. SingleRoomUndirected
  2. SingleRoomDirected
  3. GridRoomsUndirected
  4. GridRoomsDirected
  5. SequentialRoomsUndirected
  6. SequentialRoomsDirected
  7. MazeUndirected
  8. MazeDirected
  9. GoToTargetUndirected
  10. GoToTargetDirected
  11. DoorKeyUndirected
  12. DoorKeyDirected
  13. CollectGemsUndirected
  14. CollectGemsDirected
  15. CollectGemsMultiAgentUndirected
  16. DynamicObstaclesUndirected
  17. DynamicObstaclesDirected
  18. SokobanUndirected
  19. SokobanDirected
  20. Snake
  21. Catcher
  22. TransportUndirected
  23. TransportDirected
  24. FrozenLakeUndirected

Getting Started

import GridWorlds as GW

# Each environment `Env` lives in its own module `EnvModule`
# For example, the `SingleRoomUndirected` environment lives inside the `SingleRoomUndirectedModule` module

env = GW.SingleRoomUndirectedModule.SingleRoomUndirected()

# reset the environment. All environments are randomized

GW.reset!(env)

# get names of actions that can be performed in this environment

GW.get_action_names(env)

# perform actions in the environment

GW.act!(env, 1) # move up
GW.act!(env, 2) # move down
GW.act!(env, 3) # move left
GW.act!(env, 4) # move right

# play an environment interactively inside the terminal

GW.play!(env)

# play and record the interaction in a file called recording.txt

GW.play!(env, file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER")

# manually step through the frames in the recording

GW.replay(file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER")

# replay the recording inside the terminal at a given frame rate

GW.replay(file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER", frame_rate = 2)

# use the RLBase API

import ReinforcementLearningBase as RLBase

# wrap a game instance from this package to create an RLBase compatible environment

rlbase_env = GW.RLBaseEnv(env)

# perform RLBase operations on the wrapped environment

RLBase.reset!(rlbase_env)
state = RLBase.state(rlbase_env)
action_space = RLBase.action_space(rlbase_env)
reward = RLBase.reward(rlbase_env)
done = RLBase.is_terminated(rlbase_env)

rlbase_env(1) # move up
rlbase_env(2) # move down
rlbase_env(3) # move left
rlbase_env(4) # move right

Notes

Reinforcement Learning

This package does not intend to reinvent a fully usable reinforcement learning API. Instead, all the games in this package provide the bare minimum needed for the game logic: the ability to reset an environment using GW.reset!(env) and to perform actions in the environment using GW.act!(env, action). In order to use such a game for reinforcement learning, you would typically rely on a higher level reinforcement learning API, for example the RLBase API offered by the ReinforcementLearning.jl package. As of this writing, all the environments provide a default implementation of the RLBase API, which means that you can easily wrap a game from GridWorlds.jl and use it directly with the rest of the ReinforcementLearning.jl ecosystem.
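
For example, a bare-bones interaction loop that uses only this package's own API could look like the following (a minimal sketch; the fixed step count is arbitrary):

import GridWorlds as GW

env = GW.SingleRoomUndirectedModule.SingleRoomUndirected()
GW.reset!(env)

# actions are integers from 1 to the number of action names
num_actions = length(GW.get_action_names(env))

# take a few random actions (the step count here is arbitrary)
for _ in 1:10
    GW.act!(env, rand(1:num_actions))
end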

  1. States

    There are a few possible options for representing the state/observation of an environment. You can use the entire tile map. You can also augment it with other environment-specific information, like the agent's direction, the target (in GoToTargetUndirected), etc. In several games, you can also use the GW.get_sub_tile_map! function to get a partial view of the tile map to use as the observation.

    All environments provide a default implementation of the RLBase.state function. Before running reinforcement learning experiments with an environment, it is recommended that you carefully understand the information contained in its state representation.

  2. Actions

    As of this writing, all actions in all environments are discrete. To keep things simple and consistent, they are represented by elements of Base.OneTo(NUM_ACTIONS) (integers from 1 to NUM_ACTIONS). To find out which action does what, call GW.get_action_names(env) to get a list of descriptive names. For example:

    julia> env = GW.SingleRoomUndirectedModule.SingleRoomUndirected();
    
    julia> GW.get_action_names(env)
    (:MOVE_UP, :MOVE_DOWN, :MOVE_LEFT, :MOVE_RIGHT)

    The order of names in this list corresponds to the order of the actions: the first name describes action 1, the second describes action 2, and so on.

  3. Rewards and Termination

    As mentioned before, for reinforcement learning experiments you would mostly be using a higher level API like RLBase, which already provides a way to get these values. For example, in RLBase, rewards can be accessed using RLBase.reward(env), and whether an environment has terminated can be checked by calling RLBase.is_terminated(env); see the sketch right after this list. In case you are using some other API and need more direct control, it is best to look at the implementation of that environment to see how rewards and termination are handled.
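
For instance, here is a minimal sketch of a random episode that accumulates rewards through the RLBase wrapper until termination, using only the functions shown above:

import GridWorlds as GW
import ReinforcementLearningBase as RLBase

env = GW.RLBaseEnv(GW.SingleRoomUndirectedModule.SingleRoomUndirected())
RLBase.reset!(env)

total_reward = zero(RLBase.reward(env))

# step with random actions until the environment terminates
while !RLBase.is_terminated(env)
    env(rand(RLBase.action_space(env)))
    global total_reward += RLBase.reward(env)
end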

Tile Map

Each environment contains a tile map, which is a BitArray{3} that encodes the presence or absence of objects in the grid world. It is of size (num_objects, height, width). The second and third dimensions correspond to positions along the height and width of the tile map. The first dimension is a multi-hot encoding of which objects are present at a particular position. You can get the names and ordering of objects along the first dimension of the tile map using the following method:

julia> env = GW.SingleRoomUndirectedModule.SingleRoomUndirected();

julia> GW.get_object_names(env)
(:AGENT, :WALL, :GOAL)
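
Since the tile map is just a BitArray{3}, you can inspect it with ordinary array indexing. The following is a minimal sketch; accessing the tile map through an env.tile_map field is an assumption here, so check the environment's implementation for the exact accessor:

# the `tile_map` field name is an assumption; see the environment's source for the exact accessor
tile_map = env.tile_map

num_objects, height, width = size(tile_map)

# is there a WALL (object 2 in the ordering above) at row 1, column 1?
is_wall_at_1_1 = tile_map[2, 1, 1]

# find the agent's position (the AGENT is object 1 in the ordering above)
agent_position = findfirst(tile_map[1, :, :])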

Navigation

Several environments contain the word Undirected or Directed in their name. This refers to the navigation style of the agent. Undirected means that the agent has no direction associated with it and navigates by directly moving up, down, left, or right on the tile map. Directed means that the agent has a direction associated with it and navigates by moving forward or backward along its current direction, or by turning left or right with respect to its current direction. There are 4 directions: UP, DOWN, LEFT, and RIGHT.
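
This difference is reflected in the action names. For example, a directed environment exposes turn and move actions instead of the four absolute moves. The exact output below is illustrative; call GW.get_action_names on your environment for the authoritative list:

julia> env = GW.SingleRoomDirectedModule.SingleRoomDirected();

julia> GW.get_action_names(env)
(:MOVE_FORWARD, :MOVE_BACKWARD, :TURN_LEFT, :TURN_RIGHT)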

Interactive Playing and Recording

All the environments can be played directly inside the REPL. These interactive sessions can also be recorded in plain text files and replayed in the terminal. There are two ways to replay a recording:

  1. The default way is to manually step through each recorded frame. This allows you to move through the frames one by one at your own pace using keyboard inputs.
  2. The second way is to replay the frames at a given frame rate. This loops through all the frames once and then exits the replay.

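For example, a recording session could look like the following (a minimal sketch reusing the calls from Getting Started):

import GridWorlds as GW

env = GW.SingleRoomUndirectedModule.SingleRoomUndirected()

# play interactively and record the session to a file
GW.play!(env, file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER")

# step through the recorded frames manually
GW.replay(file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER")

# or replay the recording at 2 frames per second
GW.replay(file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER", frame_rate = 2)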

Programmatic Recording of Agent's Behavior

In order to programmatically record the behavior of an agent during an episode, you can simply log the string representation of the environment at each step, prefixed with a delimiter. You can also log other arbitrary information if you want, like the total reward so far, for example. You can then use the GW.replay function to replay the recording inside the terminal. The string representation of an environment can be obtained using repr(MIME"text/plain"(), env). Here is an example:

import GridWorlds as GW
import ReinforcementLearningBase as RLBase

game = GW.SingleRoomUndirectedModule.SingleRoomUndirected()
env = GW.RLBaseEnv(game)

total_reward = zero(RLBase.reward(env))
frame_number = 1

str = ""

str = str * "FRAME_START_DELIMITER"
str = str * "frame_number: $(frame_number)\n"
str = str * repr(MIME"text/plain"(), env)
str = str * "\ntotal_reward: $(total_reward)"

while !RLBase.is_terminated(env)
    action = rand(RLBase.action_space(env))
    env(action)
    reward = RLBase.reward(env)

    global total_reward += reward
    global frame_number += 1

    global str = str * "FRAME_START_DELIMITER"
    global str = str * "frame_number: $(frame_number)\n"
    global str = str * repr(MIME"text/plain"(), env)
    global str = str * "\ntotal_reward: $(total_reward)"
end

write("recording.txt", str)

GW.replay(file_name = "recording.txt", frame_start_delimiter = "FRAME_START_DELIMITER")

In ReinforcementLearning.jl, you can create a hook for recording the agent's behavior at any point during training.
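
A minimal sketch of such a hook is shown below. It assumes the callable-hook interface of ReinforcementLearning.jl, where a hook is a subtype of AbstractHook invoked at stages such as PostActStage; the exact stage types and call signature differ between versions, so treat this as an illustration rather than a drop-in implementation:

import ReinforcementLearning as RL

# hypothetical hook that appends the environment's text representation after every action
mutable struct RecordingHook <: RL.AbstractHook
    str::String
    frame_number::Int
end

RecordingHook() = RecordingHook("", 0)

function (hook::RecordingHook)(::RL.PostActStage, policy, env)
    hook.frame_number += 1
    hook.str *= "FRAME_START_DELIMITER"
    hook.str *= "frame_number: $(hook.frame_number)\n"
    hook.str *= repr(MIME"text/plain"(), env)
    return nothing
end

# after an experiment is run with this hook, the recording can be written to a file:
# write("recording.txt", hook.str)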

List of Environments

  1. SingleRoomUndirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  2. SingleRoomDirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  3. GridRoomsUndirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  4. GridRoomsDirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  5. SequentialRoomsUndirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  6. SequentialRoomsDirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  7. MazeUndirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  8. MazeDirected

    The objective of the agent is to navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates.

  9. GoToTargetUndirected

    The objective of the agent is to navigate its way to the desired target. When the agent reaches the desired target, it receives a reward of 1. When the agent reaches the other target, it receives a reward of -1. In either case, the environment terminates upon reaching a target.

  10. GoToTargetDirected

    The objective of the agent is to navigate its way to the desired target. When the agent reaches the desired target, it receives a reward of 1. When the agent reaches the other target, it receives a reward of -1. In either case, the environment terminates upon reaching a target.

  11. DoorKeyUndirected

    The objective of the agent is to collect the key and navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates. Without picking up the key, the agent will not be able to pass through the door that separates the agent and the goal.

  12. DoorKeyDirected

    The objective of the agent is to collect the key and navigate its way to the goal. When the agent reaches the goal, it receives a reward of 1 and the environment terminates. Without picking up the key, the agent will not be able to pass through the door that separates the agent and the goal.

  13. CollectGemsUndirected

    The objective of the agent is to collect all the randomly scattered gems. When the agent collects a gem, it receives a reward of 1. The environment terminates when the agent has collected all the gems.

  14. CollectGemsDirected

    The objective of the agent is to collect all the randomly scattered gems. When the agent collects a gem, it receives a reward of 1. The environment terminates when the agent has collected all the gems.

  15. CollectGemsMultiAgentUndirected

    The objective of the agents is to collect all the randomly scattered gems. The agents take turns performing actions. When an agent collects a gem, the environment gives a reward of 1. The environment terminates when the agents have collected all the gems.

  16. DynamicObstaclesUndirected

    The objective of the agent is to navigate its way to the goal while avoiding collision with obstacles. When the agent reaches the goal, it receives a reward of 1 and the environment terminates. If the agent collides with an obstacle, the agent receives a reward of -1 and the environment terminates.

  17. DynamicObstaclesDirected

    The objective of the agent is to navigate its way to the goal while avoiding collision with obstacles. When the agent reaches the goal, it receives a reward of 1 and the environment terminates. If the agent collides with an obstacle, the agent receives a reward of -1 and the environment terminates.

  18. SokobanUndirected

    The agent needs to push the boxes onto the target positions. The levels are taken from https://github.com/deepmind/boxoban-levels. Upon each reset, a level is randomly selected from https://github.com/deepmind/boxoban-levels/blob/master/medium/train/000.txt. The level dataset can be dynamically swapped during runtime in case more levels are needed. One way to achieve this while using ReinforcementLearning.jl is with the help of hooks.

  19. SokobanDirected

    The agent needs to push the boxes onto the target positions. The levels are taken from https://github.com/deepmind/boxoban-levels. Upon each reset, a level is randomly selected from https://github.com/deepmind/boxoban-levels/blob/master/medium/train/000.txt. The level dataset can be dynamically swapped during runtime in case more levels are needed. One way to achieve this while using ReinforcementLearning.jl is with the help of hooks.

  20. Snake

    The objective of the agent is to eat as many food pellets as possible. As soon as the agent eats a food pellet, the length of its body increases by one and it receives a reward of 1. When the agent tries to move into a wall or into its own body, it receives a reward of -(tile_map_height * tile_map_width) and the environment terminates. When the agent collects all the food pellets possible, it receives a reward of tile_map_height * tile_map_width + 1 (for the last food pellet it ate).

  21. Catcher

    The objective of the agent is to keep catching the falling gems for as long as possible. It receives a reward of 1 when it catches a gem and a new gem gets spawned in the next step. When the agent misses catching a gem, it receives a reward of -1 and the environment terminates.

  22. TransportUndirected

    The objective of the agent is to pick up the gem and drop it to the target location. When the agent drops the gem at the target location, it receives a reward of 1 and the environment terminates.

  23. TransportDirected

    The objective of the agent is to pick up the gem and drop it to the target location. When the agent drops the gem at the target location, it receives a reward of 1 and the environment terminates.

  24. FrozenLakeUndirected

    The objective of the agent is to navigate its way to the goal while avoiding falling into the holes in the lake. When the agent reaches the goal, it receives a reward of 1 and the environment terminates. If the agent falls into a hole, it receives a reward of -1 and the environment terminates. The probability of moving in the direction chosen by the agent is 1/3, with a 1/3 chance of moving in each perpendicular direction (for example, if the agent chooses up, there is a 1/3 chance of moving up, a 1/3 chance of moving left, and a 1/3 chance of moving right). The scenario is based on the Frozen Lake environment in Python's gymnasium. The Python version has two preset maps, "4x4" and "8x8". The GridWorlds implementation includes the walls as part of the dimensions, so the equivalent maps in GridWorlds are "6x6" and "10x10" respectively. The start, goal, and holes are located in the same positions in the lake as in the Python version. If you specify a custom height and width, keep in mind that walls are added all around the map, so the actual surface of the lake is (height - 2, width - 2).
