Description
We would like to perform training in contexts where the typical calling sequence of transferring data at every time step is problematic.
For example, we have a hardware-in-the-loop system where we would ideally run a full episode, collate the results, and process them as a block. This is desirable because communication and synchronization issues make transferring the data on every step problematic.
The same can be true of other situations, where there simply isn't a good bridge between the training environment software and Python that can easily operate on every time step.
Therefore my question is: is there any capability to achieve this within Stable Baselines? If not, how difficult would it be to modify Stable Baselines to work this way? As we understand it, some of the algorithms effectively operate in this way already, i.e. learning is based on the actions and rewards gathered over a full episode.
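To clarify what we mean by processing an episode as a block, here is a rough sketch of the kind of workaround we had in mind. It is not Stable Baselines API, just a hypothetical gym environment (`EpisodeReplayEnv` is our own name) that replays a pre-recorded episode step by step, so the library's usual per-step collection loop would consume data that was actually gathered offline on the hardware-in-the-loop system:

```python
import gym
import numpy as np
from gym import spaces


class EpisodeReplayEnv(gym.Env):
    """Hypothetical env that replays one pre-recorded episode.

    Assumes the classic gym API (reset() -> obs, step() -> 4-tuple),
    as used by Stable Baselines.
    """

    def __init__(self, observations, actions, rewards):
        # observations: (T+1, obs_dim); actions: (T, act_dim); rewards: (T,)
        self.observations = np.asarray(observations, dtype=np.float32)
        self.actions = np.asarray(actions, dtype=np.float32)
        self.rewards = np.asarray(rewards, dtype=np.float32)
        obs_dim = self.observations.shape[1]
        act_dim = self.actions.shape[1]
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(obs_dim,), dtype=np.float32)
        self.action_space = spaces.Box(-np.inf, np.inf, shape=(act_dim,), dtype=np.float32)
        self.t = 0

    def reset(self):
        self.t = 0
        return self.observations[0]

    def step(self, action):
        # The learner's action is ignored; the recorded transition is
        # returned instead, so a full episode is effectively replayed
        # as a block without talking to the hardware at every step.
        reward = float(self.rewards[self.t])
        self.t += 1
        done = self.t >= len(self.rewards)
        return self.observations[self.t], reward, done, {}
```

We realize this only replays the data rather than letting the learner's actions drive the hardware, which is why we are asking whether there is a supported way to do this properly.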
This is a question, but I can't add the question tag.