Add `gpytorch.metrics` and tests #1870

Conversation
…le_coverage_error`

test/metrics/test_metrics.py (Outdated)

    model.train()
    likelihood.train()
    for i in range(50):
Will this pass if it's 2-3 steps (or even 1)?
All metrics except `quantile_coverage_error` pass with step=1. For now, I have reduced 50 to 20.
Will this work at 5 or 10 steps? Lower is better so the unit tests for these don't take a really long time.
Done @wjmaddox.
This looks pretty much ready other than the nit above. Do you mind adding a notebook describing the usage to
Sure @wjmaddox! I welcome your suggestions on the notebook.
@wjmaddox I have added the metrics and additional tests. I have also added the entries in
I realized I should have taken a deeper look before now. But, these should be all that we need to address (modulo checking for an assert error in the quantile function).
Thanks a lot for your help in resolving the issues @wjmaddox.
Is this something that still needs to be addressed? If so, can you please elaborate a bit?
By that I mean putting a unit test to ensure that the error is raised when we think it should be. I'll go ahead and do it.
Great! Thank you @wjmaddox
Sorry about the linting disasters previously.
Ugh, sorry @patel-zeel, can you fix the lint? This is pretty much impossible to do in browser mode. Hopefully I didn't break the docs somehow with that commit.
Done, @wjmaddox. It seems that everything is working fine. |
As discussed in #1857 with @gpleiss.
Added the following metrics with tests:
gpytorch.metrics.coverage_error
gpytorch.metrics.mean_standardized_log_loss
gpytorch.metrics.negative_log_predictive_density
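For illustration only, the semantics of a quantile coverage error can be sketched in plain Python for a predictive distribution with independent Gaussian marginals. This is a hypothetical stand-in, not gpytorch's actual implementation (which operates on a `MultivariateNormal`); the parameter names `mus`, `sigmas`, and `quantile` are assumptions:

```python
from statistics import NormalDist

def quantile_coverage_error(mus, sigmas, test_y, quantile=95.0):
    """Absolute difference between the requested central-interval
    coverage (quantile, in percent) and the empirical coverage of
    test_y. Illustrative sketch for independent Gaussian marginals."""
    alpha = (1.0 - quantile / 100.0) / 2.0
    covered = 0
    for mu, sigma, y in zip(mus, sigmas, test_y):
        d = NormalDist(mu, sigma)
        lo, hi = d.inv_cdf(alpha), d.inv_cdf(1.0 - alpha)
        covered += lo <= y <= hi
    return abs(covered / len(test_y) - quantile / 100.0)
```

A well-calibrated model should cover roughly 95% of test points with its 95% central interval, so lower values of this metric are better.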
Initially, I thought of the following version:

    metric(model, likelihood, test_x, test_y)

But I had to drop that idea for the following reason: the metric would then have to decide how predictions are generated from `model` and `likelihood`. For example, use of `gpytorch.settings.fast_pred_var()` may not be desired for all models. Thus, currently, we have the following version, suitable for all models:

    metric(pred_dist, test_y)
Please let me know if corrections/improvements are required.
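To illustrate the `metric(pred_dist, test_y)` calling convention, here is a minimal sketch of a negative log predictive density for a predictive distribution with independent Gaussian marginals. The `GaussianPred` class is a hypothetical stand-in for a real predictive distribution (gpytorch's version takes a `MultivariateNormal`), so treat this as a shape of the API rather than the library's implementation:

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GaussianPred:
    # Hypothetical stand-in for a predictive distribution:
    # one mean and one standard deviation per test point.
    means: List[float]
    stds: List[float]

def negative_log_predictive_density(pred_dist, test_y):
    """Average negative Gaussian log-likelihood of test_y under
    pred_dist, following the metric(pred_dist, test_y) convention."""
    total = 0.0
    for mu, sd, y in zip(pred_dist.means, pred_dist.stds, test_y):
        # -log N(y | mu, sd^2) for each point, then averaged.
        total += 0.5 * math.log(2 * math.pi * sd ** 2) \
            + (y - mu) ** 2 / (2 * sd ** 2)
    return total / len(test_y)
```

The key design point from the discussion above is visible here: the metric only consumes an already-computed predictive distribution, so any settings (such as `fast_pred_var`) stay under the caller's control.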