DDPG on Discrete Action Space #285
Update: I've tried to implement my idea and inserted an extra line just before the point where the error is raised. The error I'm now faced with is triggered after that: action_batch.shape is (32,), while the right-hand side is (32, 2).
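A shape mismatch like (32,) vs. (32, 2) usually means the stored discrete action indices need to be expanded to the action dimension before they are combined with the state in the critic. A minimal sketch of one way to do that, assuming PyTorch and a 2-action environment like CartPole (the shapes and names below are illustrative, not taken from ddpg.py):

```python
import torch
import torch.nn.functional as F

batch_size, n_actions = 32, 2

# action_batch holds discrete action indices, shape (32,)
action_batch = torch.randint(0, n_actions, (batch_size,))

# One-hot encode so the tensor matches the (32, 2) shape expected on the right-hand side
action_one_hot = F.one_hot(action_batch, num_classes=n_actions).float()
print(action_one_hot.shape)  # torch.Size([32, 2])
```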
Hi.
In the source file, the same error shows up for me during testing, also at around line 300 or so; in other words, add it at the location where the error is reported.
Hi.
I wanted to train an Actor-Critic on the CartPole environment (new Deep RL student here :) ).
I know that sounds overkill, I just wanted to try.
This is my actor:
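(The original snippet is not reproduced here; as a rough, hypothetical sketch, a DDPG-style actor for CartPole might look something like the following, assuming PyTorch. The layer sizes and names are illustrative only.)

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a CartPole observation (4 values) to one continuous score per action."""
    def __init__(self, state_dim=4, action_dim=2, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
            nn.Tanh(),  # DDPG-style bounded continuous output in [-1, 1]
        )

    def forward(self, state):
        return self.net(state)
```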
However, this is what happens when I try to run the file:
From what I understand, the error occurs because the Actor outputs floating-point values, while the env expects a discrete action when env.step(action) is called.
Does anybody know a workaround? Currently, I'm thinking of modifying ddpg.py and changing select_action() to loop through the output and return either a 0 or a 1 for CartPole. Please advise.
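For what it's worth, one common workaround along those lines is to keep the actor's continuous output and simply discretize it before calling env.step(), e.g. by taking the argmax over the action dimension. A minimal sketch, assuming PyTorch and the hypothetical Actor above (this is not the actual select_action() in ddpg.py):

```python
import torch

def select_discrete_action(actor, state):
    """Run the actor and map its continuous output to a discrete action (0 or 1)."""
    with torch.no_grad():
        state_t = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        scores = actor(state_t)               # shape (1, 2), continuous values
    return int(scores.argmax(dim=1).item())   # index usable by env.step()

# Usage (hypothetical): action = select_discrete_action(actor, obs)
```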
Thanks and Best Regards.