Change FFT import according to the interface of torch #246
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master     #246      +/-   ##
==========================================
+ Coverage   95.66%   95.68%   +0.02%
==========================================
  Files          29       29
  Lines        1798     1807       +9
==========================================
+ Hits         1720     1729       +9
  Misses         78       78

Flags with carried forward coverage won't be shown.
To unblock merging, the protection rules of the master branch need to be updated (required tests). I suggest doing that after the review.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
One question from my side. The rest looks good.
@@ -53,7 +53,7 @@ def brisque(x: torch.Tensor,
     assert kernel_size % 2 == 1, f'Kernel size must be odd, got [{kernel_size}]'
     _validate_input([x, ], dim_range=(4, 4), data_range=(0, data_range))

-    x = x / data_range * 255
+    x = x / float(data_range) * 255
Why is this explicit conversion necessary?
If x is torch.uint8 and data_range is an int, the result will be a Byte tensor on torch==1.5.1, because `/` performs classic division instead of true division there. This zeroes the input and can trigger exceptions from PyTorch layers that are incompatible with torch.Byte.
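For illustration, a minimal sketch of the behaviour described above (the tensor shape and values are assumptions, not taken from the PR):

```python
import torch

x = torch.randint(0, 256, (1, 3, 32, 32), dtype=torch.uint8)  # image-like uint8 input
data_range = 255  # an int, as the public API allows

# On torch==1.5.1, `/` between a uint8 tensor and a Python int performs classic
# (integer) division, so the result stays torch.uint8: x / 255 is 0 or 1 and the
# subsequent * 255 zeroes almost the whole input. On torch>=1.6 this line would
# already use true division and return a float tensor.
y_broken = x / data_range * 255

# Casting data_range to float forces true division and promotes the result to
# torch.float32 on every supported torch version.
y_fixed = x / float(data_range) * 255

print(y_broken.dtype, y_fixed.dtype)
```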
Agh, I see. This is unfortunate because on the one hand we don't check the user's inputs, but on the other we require data_range to be int or float. I think this is a good compromise though.
Closes #245
Proposed Changes
- Change the FFT import according to the interface of the installed torch version (tested with torch=={'1.5.1', '1.8.1'}, torchvision=={'0.6.1', '0.9.1'}); see the compatibility sketch below
- Require torch>=1.5.1
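To make the proposed change concrete, here is a hedged compatibility sketch; the helper name fft2c and the exact calls are assumptions for illustration, not taken from the PR diff:

```python
import torch

try:
    # Recent torch (e.g. 1.8.1) exposes FFT as the torch.fft module.
    import torch.fft

    def fft2c(x: torch.Tensor) -> torch.Tensor:
        # torch.fft.fft2 returns a complex tensor; view it as (..., H, W, 2)
        # real/imaginary pairs to match the layout of the old interface.
        return torch.view_as_real(torch.fft.fft2(x))
except ImportError:
    # Older torch (e.g. 1.5.1): torch.fft is still a function, so importing the
    # module fails and we fall back to torch.rfft (input must be floating point).
    def fft2c(x: torch.Tensor) -> torch.Tensor:
        return torch.rfft(x, 2, onesided=False)
```

Either branch yields a real tensor of shape (..., H, W, 2), so downstream code does not need to branch on the torch version.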