Unsure whether the behavior is expected #16
Akim Tsvigun opened the issue:

Hi,

I am inspecting the fields of application of BatchBALD. I am using the get_batchbald_batch function from https://github.com/BlackHC/batchbald_redux/blob/master/01_batchbald.ipynb

Here is an example: we have 3 samples, each with 2 MC inferences over 4 classes. The first two examples are completely identical, while the third is entirely different yet has a slightly lower BALD score (because its probabilities fluctuate less across MC runs):

log_probs_N_K_C = torch.Tensor([
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.4, 0.3, 0.2, 0.1], [0.4, 0.3, 0.16, 0.14]],
]).log()

I hoped BatchBALD would be useful in such a case, since it queries diverse instances for the batch. Yet it queries the two duplicate examples: get_batchbald_batch(log_probs_N_K_C, batch_size=2, num_samples=3) outputs [0, 1].

I wonder whether this is the expected behavior of the algorithm or whether there may be bugs in the code.

Thank you for your attention!
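(The per-example BALD scores can be checked from scratch. The sketch below is a minimal implementation of the standard BALD definition, entropy of the MC-averaged prediction minus the MC-average of the per-sample entropies; it is not the repository's implementation.)

import torch

log_probs_N_K_C = torch.tensor([
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.4, 0.3, 0.2, 0.1], [0.4, 0.3, 0.16, 0.14]],
]).log()
probs_N_K_C = log_probs_N_K_C.exp()

# Entropy of the MC-averaged predictive distribution, H[E_k p(y | w_k)].
mean_probs_N_C = probs_N_K_C.mean(dim=1)
entropy_of_mean_N = -(mean_probs_N_C * mean_probs_N_C.log()).sum(dim=-1)

# Average entropy across MC samples, E_k H[p(y | w_k)].
mean_entropy_N = -(probs_N_K_C * log_probs_N_K_C).sum(dim=-1).mean(dim=1)

# BALD score (mutual information) per example: 0 and 1 tie, 2 is slightly
# lower, matching the description above.
print(entropy_of_mean_N - mean_entropy_N)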
Comments

Andreas replied (Wed, Mar 23, 2022):

Try not setting num_samples=3. num_samples is the number of MC samples of configurations drawn in the importance-weighted code path.

Thanks,
Andreas
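(A hypothetical re-run with a much larger value, reusing log_probs_N_K_C from the issue; this sketch assumes the CandidateBatch return value with .scores and .indices shown in the notebook.)

# Hypothetical: draw far more configuration samples for the
# importance-weighted joint-entropy estimator.
candidate_batch = get_batchbald_batch(log_probs_N_K_C, batch_size=2, num_samples=100000)
print(candidate_batch.indices, candidate_batch.scores)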
Akim Tsvigun replied:

This does not seem to affect it: the following code runs successfully.

log_probs_N_K_C = torch.Tensor([
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.4, 0.3, 0.2, 0.1], [0.4, 0.3, 0.16, 0.14]],
]).log()
for n_samples in range(2, 100):
    assert get_batchbald_batch(log_probs_N_K_C, batch_size=2, num_samples=n_samples).indices == [0, 1]
Andreas replied:

I cannot comment further for the time being, sadly.
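(For a batch of size 2, the joint mutual information that BatchBALD maximizes can also be computed exactly, with no configuration sampling at all. Below is a minimal from-scratch sketch using the standard definitions, not the repository's code; it compares the duplicate pair {0, 1} against the diverse pair {0, 2}. If the exact score of {0, 1} comes out higher, selecting [0, 1] is consistent with the algorithm's definition rather than a sampling artifact: with only K=2 MC samples, the redundancy between the duplicates can be smaller than the gap between their individual BALD scores.)

import torch

log_probs_N_K_C = torch.tensor([
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.4, 0.3, 0.2, 0.1], [0.4, 0.3, 0.16, 0.14]],
]).log()
probs_N_K_C = log_probs_N_K_C.exp()

def exact_pair_mi(i, j):
    # I((y_i, y_j); w) with w uniform over the K MC samples.
    p_i, p_j = probs_N_K_C[i], probs_N_K_C[j]  # each of shape K x C
    K = p_i.shape[0]
    # Joint predictive p(y_i, y_j) = E_k[p(y_i | w_k) p(y_j | w_k)], a C x C table.
    joint_C_C = torch.einsum("kc,kd->cd", p_i, p_j) / K
    joint_entropy = -(joint_C_C * joint_C_C.log()).sum()
    # Given w_k, y_i and y_j are independent:
    # H(y_i, y_j | w_k) = H(y_i | w_k) + H(y_j | w_k).
    cond_K = -(p_i * p_i.log()).sum(-1) - (p_j * p_j.log()).sum(-1)
    return (joint_entropy - cond_K.mean()).item()

print(exact_pair_mi(0, 1), exact_pair_mi(0, 2))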