
Reset seed at the beginning of each epoch. #221

Merged: 2 commits merged into k2-fsa:master on Feb 21, 2022

Conversation

@csukuangfj (Collaborator)

No description provided.

@danpovey (Collaborator) left a comment

I'm thinking it might be safer to set it to a different value per epoch, e.g. 42 + epoch?
One of my concerns is that it might interact badly with lhotse, which might rely on the seed being different for each epoch.
But also on general principles.

@csukuangfj (Collaborator, Author)

> I'm thinking it might be safer to set it to a different value per epoch, e.g. 42 + epoch? One of my concerns is that it might interact badly with lhotse, which might rely on the seed being different for each epoch. But also on general principles.

Thanks! It is fixed now.
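
For illustration, the per-epoch reseeding discussed above might look roughly like this (a minimal sketch, not the actual diff; `set_seed`, `base_seed`, and `num_epochs` are placeholder names):

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed the Python, NumPy, and PyTorch RNGs (hypothetical helper)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


base_seed = 42   # placeholder base seed
num_epochs = 3   # placeholder epoch count

for epoch in range(num_epochs):
    # Reset the seed at the beginning of each epoch, but use a different
    # value per epoch (e.g. 42 + epoch) so shuffling and augmentation
    # still differ across epochs.
    set_seed(base_seed + epoch)
    # ... training for this epoch would run here ...
```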

@danpovey (Collaborator)

Cool!

@csukuangfj merged commit 1c35ae1 into k2-fsa:master on Feb 21, 2022
@csukuangfj deleted the fix-seed branch on February 21, 2022 07:16
@pzelasko (Collaborator)

This reminds me: we may want to ensure that the RNG used for data augmentation in dataloading workers is seeded differently per worker; otherwise they might all apply the same augmentations. Dynamic batch sizes probably mitigate that partially, though.

The samplers instantiate their own RNG, which they seed at the beginning of each epoch, so changing global settings won't affect them.
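
As an aside, a common recipe for making augmentation RNGs differ per dataloading worker (just a sketch, not necessarily what lhotse or icefall does) is to reseed them in a `worker_init_fn`, using the per-worker seed that PyTorch already derives for each worker:

```python
import random

import numpy as np
import torch


def worker_init_fn(worker_id: int) -> None:
    # Inside a worker, torch.initial_seed() is already distinct per worker
    # (the DataLoader's base seed plus worker_id). Propagate it to the
    # other RNGs used for augmentation.
    seed = torch.initial_seed() % 2**32
    random.seed(seed)
    np.random.seed(seed)


# Hypothetical usage:
# loader = torch.utils.data.DataLoader(
#     dataset, num_workers=4, worker_init_fn=worker_init_fn
# )
```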

@danpovey (Collaborator)

Is dataloading done in the main process?
Could maybe do " + 100 * rank" or something.

@pzelasko (Collaborator)

No, in a multi-GPU training scheme there are (at least) three levels of processes:

  1. top-level script that spawns one subproc per GPU
  2. per-GPU process, spawns X subprocesses for dataloading and runs the sampler and the training loop
  3. X dataloading worker subprocesses for each per-GPU process, each runs I/O, augmentation, and collation

Probably something like "+ 100 * rank + worker_idx" would work, yeah -- I'll check it out eventually.
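
A rough sketch of the "+ 100 * rank + worker_idx" idea (the helper name, the base seed, and the constant 100 are all illustrative; this is not the eventual fix):

```python
import random

import numpy as np
import torch


def make_worker_init_fn(base_seed: int, rank: int):
    """Build a worker_init_fn that seeds each dataloading worker
    differently per GPU rank and per worker index (illustrative only)."""

    def worker_init_fn(worker_id: int) -> None:
        # Level 2 (the per-GPU process) contributes `rank`;
        # level 3 (the dataloading worker) contributes `worker_id`.
        seed = base_seed + 100 * rank + worker_id
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

    return worker_init_fn


# Hypothetical usage inside the per-GPU training process:
# loader = torch.utils.data.DataLoader(
#     dataset, num_workers=4,
#     worker_init_fn=make_worker_init_fn(42, rank),
# )
```

With fewer than 100 workers per process, seeds from different ranks can never collide.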
