adding a parallelisation test for DeepEnsemble #898

Open: wants to merge 3 commits into develop
Conversation

@hstojic (Collaborator) commented on Jan 23, 2025

We explicitly test that increasing the number of networks in the ensemble, once we account for the difference in the number of parameters, leads to a similar training time.
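A minimal sketch of the parameter-matching idea behind the test. All names, layer counts, and sizes here are illustrative assumptions, not the PR's actual code: the point is only that a smaller ensemble can be given more units per layer so that total parameter counts roughly match, making a training-time comparison fair.

```python
# Hypothetical helper for matching total parameter counts across ensembles;
# the architecture (fully connected MLPs with biases) is an assumption.
def param_count(ensemble_size, units, n_hidden=2, d_in=1, d_out=2):
    """Total trainable parameters for an ensemble of identical MLPs.

    Each network: d_in -> units -> ... -> units -> d_out, with biases.
    """
    per_network = (
        d_in * units + units                        # input layer
        + (n_hidden - 1) * (units * units + units)  # remaining hidden layers
        + units * d_out + d_out                     # output layer
    )
    return ensemble_size * per_network

# A 5-network ensemble with 25 units per layer roughly matches a
# 2-network ensemble with 40 units per layer, so any remaining
# training-time gap reflects ensemble size, not model capacity.
big = param_count(ensemble_size=5, units=25)    # 3760 parameters
small = param_count(ensemble_size=2, units=40)  # 3604 parameters
assert abs(big - small) / big < 0.05  # within 5% of each other
```

With parameter counts matched this closely, a timing test can then assert that training the larger ensemble takes a similar wall-clock time to training the smaller one.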

@hstojic hstojic requested a review from uri-granta January 23, 2025 11:56
@uri-granta (Collaborator) left a comment


Good idea for a test! A couple of questions:

  1. how long does it take to run? If it's slow you could mark it with @pytest.mark.slow.
  2. why does the smaller ensemble have more units per layer? wouldn't that increase the run time and make the comparison less fair?
  3. just to be certain, have you manually confirmed that the training is happening in parallel (e.g. by setting a breakpoint, or by disabling parallelisation and seeing a change)?

@hstojic (Author) commented on Jan 23, 2025

Good idea for a test! Couple of questions:

  1. how long does it take to run? If it's slow you could mark it with @pytest.mark.slow.

About a minute on CPU, and probably similar on GPU. I'll do that.

  2. why does the smaller ensemble have more units per layer? wouldn't that increase the run time and make the comparison less fair?

That's specifically done to equalise the number of parameters across the models, so this is the fair comparison.

  3. just to be certain, have you manually confirmed that the training is happening in parallel (e.g. by setting a breakpoint, or by disabling parallelisation and seeing a change)?

It's not possible to do that, I think; disabling parallelisation would require a completely different DeepEnsemble architecture.
