solving CartPole-V1 using deep Q-Learning and Experience Replay

Ashkanfld/CartPole-V1

Deep Q-Learning

This program implements deep Q-learning for the CartPole-V1 environment.
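The two ideas named in the title can be sketched without any particular framework: an experience replay buffer stores past transitions and samples random minibatches to break the temporal correlation between consecutive steps, and each sampled transition is regressed toward the Bellman target r + γ·max Q(s′, a′). This is an illustrative sketch under those standard definitions, not the actual code in "Cart Pole.py":

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=10000):
        # deque with maxlen discards the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch decorrelates consecutive environment steps
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


def q_learning_target(reward, next_q_values, done, gamma=0.99):
    """Bellman target r + gamma * max_a' Q(s', a'); no bootstrapping at terminal states."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)
```

During training, the Q-network's prediction for the taken action is pushed toward this target for every transition in the sampled minibatch.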

Environment

This code uses the CartPole-V1 environment from the gymnasium library. CartPole-V1 is the latest version of the Cart Pole environment; its version history is documented at https://www.gymlibrary.dev/environments/classic_control/cart_pole/.

How to use

  1. Download the Python file ("Cart Pole.py").

  2. Install the required packages if they are not already on your system (using pip install is recommended).

  3. Run the script.

  4. The trained Q-network weights are saved in HDF5 format at dir/Cart_Pole_final_result.h5.
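To reuse the saved weights, the Q-network architecture has to be rebuilt first. The sketch below assumes the weights were saved with Keras; the layer sizes are assumptions for illustration and must match whatever "Cart Pole.py" actually defines for `load_weights` to succeed:

```python
import os
from tensorflow import keras

# Assumed architecture -- must match the network in "Cart Pole.py"
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),             # CartPole-V1 observation: 4 floats
    keras.layers.Dense(24, activation="relu"),
    keras.layers.Dense(24, activation="relu"),
    keras.layers.Dense(2),                      # one Q-value per action (left, right)
])

# Load the trained weights if the HDF5 file is present
if os.path.exists("Cart_Pole_final_result.h5"):
    model.load_weights("Cart_Pole_final_result.h5")
```

With the weights loaded, acting greedily (picking the arg-max Q-value for the current observation) reproduces the trained policy.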
