We would like to perform training in contexts where the typical calling sequence, which transfers data at every time step, is problematic.
For example, we have a hardware-in-the-loop system where we would ideally run a full episode of training, collate the results, and process them as a block. This is desirable because there are communication and synchronization issues that make transferring the data on every step problematic.
The same can be true of other situations, where there simply isn't a good bridge between the training-environment software and Python that can easily operate on every time step.
Therefore my question is: is there any capability to achieve this within Stable Baselines? If not, how difficult would it be to modify Stable Baselines to work this way? As we understand it, some of the algorithms effectively operate in this way already, i.e. learning is based on the actions and rewards gathered from a full episode.
This is a question, but I can't add the question tag.
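For reference, a configuration sketch of how episode-block collection is set up in SB3's off-policy algorithms. The argument names (`train_freq`, the successor to the older `n_episodes_rollout`, and `gradient_steps`) come from the SB3 documentation; treat the environment name and values as illustrative assumptions to check against your installed version:

```python
# Sketch only: assumes stable_baselines3 is installed.
from stable_baselines3 import SAC

model = SAC(
    "MlpPolicy",
    "Pendulum-v0",              # illustrative env name; use your own
    train_freq=(1, "episode"),  # collect one full episode before each update
    gradient_steps=-1,          # as many gradient steps as env steps collected
)
model.learn(total_timesteps=10_000)
```

With `train_freq=(1, "episode")`, the algorithm gathers a whole episode from the environment before any training update, which is the block-wise pattern described above.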
Thanks for this; the repo you linked looks very helpful and very close to what we want to do, so it could be a solution for us.
Is there any example of the n_episodes_rollout parameter in SB3? I searched and found it in the documentation, but it is a bit too terse for me to understand how to use it in practice.
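In case a minimal illustration helps, here is a hedged sketch of the "run a full episode, then hand over the block" loop that n_episodes_rollout-style collection corresponds to. The environment and policy below are toy stand-ins, not SB3 API; on the hardware-in-the-loop side, the same shape lets all transitions be transferred at once after the episode ends:

```python
class ToyEnv:
    """Toy stand-in environment: episode lasts exactly 3 steps."""

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return float(self.t), 1.0, done  # (next_obs, reward, done)


def run_episode(env, policy):
    """Collect one full episode and return all transitions as one block."""
    transitions = []
    obs = env.reset()
    done = False
    while not done:
        action = policy(obs)
        next_obs, reward, done = env.step(action)
        transitions.append((obs, action, reward, next_obs, done))
        obs = next_obs
    return transitions


# The whole episode is now available as a single block of transitions,
# ready to be processed (or pushed into a replay buffer) in one go.
episode = run_episode(ToyEnv(), lambda obs: 0)
```

Here `episode` holds three `(obs, action, reward, next_obs, done)` tuples, with `done` true only on the last one.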
@pstansell