
fix confidence bounds issues on binary setting #86

Merged: 4 commits, Jun 3, 2022

Conversation

@rfrenoy (Contributor) commented Jun 1, 2022

Fixes #41

@rfrenoy (Contributor, Author) commented Jun 1, 2022

I am open to suggestions on the current implementation. In particular, I would like feedback on the metric_lower_bound and metric_upper_bound attributes I added to the _BinaryClassificationCBPE object, both to avoid hard-coding the values in the _estimate method and to make them easy to monkeypatch in the unit tests.
It works, but I would like a second opinion. Thanks 🙂

@nnansters nnansters marked this pull request as ready for review June 2, 2022 08:39
@nnansters nnansters requested review from nnansters and nikml as code owners June 2, 2022 08:39
@nnansters (Contributor) left a comment:

Great work so far!

@codecov bot commented Jun 2, 2022

Codecov Report

Merging #86 (19e038c) into main (934b49e) will increase coverage by 3.49%.
The diff coverage is 88.88%.

@@            Coverage Diff             @@
##             main      #86      +/-   ##
==========================================
+ Coverage   66.56%   70.06%   +3.49%     
==========================================
  Files          49       49              
  Lines        2623     2629       +6     
  Branches      506      506              
==========================================
+ Hits         1746     1842      +96     
+ Misses        839      737     -102     
- Partials       38       50      +12     
Impacted Files                                             Coverage Δ
...performance_estimation/confidence_based/results.py      61.53% <ø> (ø)
nannyml/plots/_step_plot.py                                15.38% <0.00%> (ø)
...on/confidence_based/_cbpe_binary_classification.py      98.90% <100.00%> (+0.01%) ⬆️
...onfidence_based/_cbpe_multiclass_classification.py      59.43% <100.00%> (+40.29%) ⬆️
...ml/performance_estimation/confidence_based/cbpe.py      95.00% <0.00%> (+2.50%) ⬆️
nannyml/preprocessing.py                                   95.65% <0.00%> (+13.04%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@nnansters nnansters removed the request for review from nikml June 3, 2022 08:08
@nnansters nnansters merged commit eebeaeb into NannyML:main Jun 3, 2022
@nnansters (Contributor) commented:

Brilliant! This definitely was somewhat more convoluted than expected; job well done, @rfrenoy!

@rfrenoy rfrenoy deleted the fix-confidence_bounds branch June 3, 2022 08:56
Development

Successfully merging this pull request may close these issues:
Confidence bounds on CBPE plot go above 1.0 when ROC-AUC is 1.0