Description
📚 Documentation/Examples
Thanks for this awesome library! I'm really enjoying learning it, but I did get a little confused while
reading the documentation and would like to suggest an improvement.
In the example with uncertain inputs, the final graph is confusing.
Here, I am talking about the uncertain-inputs example notebook:
The setup is that we have a sine function with input noise that decreases as x increases:
```python
# Training data is 20 points in [0, 1] inclusive, regularly spaced
train_x_mean = torch.linspace(0, 1, 20)
# We'll assume the standard deviation shrinks the closer we get to 1
train_x_stdv = torch.linspace(0.03, 0.01, 20)
```
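For context, here is a self-contained sketch of how the training targets could be generated from these noisy inputs. The sine target and the sampling step are my assumptions about the setup, not quoted from the notebook:

```python
import math
import torch

# 20 regularly spaced training input means in [0, 1]
train_x_mean = torch.linspace(0, 1, 20)
# Assumed input standard deviation, shrinking as x approaches 1
train_x_stdv = torch.linspace(0.03, 0.01, 20)

# Hypothetical data generation: sample noisy input locations,
# then observe the sine function at those sampled locations
train_x_sample = train_x_mean + train_x_stdv * torch.randn_like(train_x_mean)
train_y = torch.sin(train_x_sample * (2 * math.pi))
```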
But when we look at the final graph, the confidence bands become much wider as the training noise decreases, which is the opposite of what I would expect! I think this happens because of the noise assumed for the test set.
In the final cell you have:
```python
...
test_x_distributional = torch.stack((test_x, (1e-2 * torch.ones_like(test_x)).log()), dim=1)
...
```
This assumed noise level for the test data has a huge impact on the final graph:

- If you set the test noise level to e.g. 1e-3, you get what I would have expected: wider bands on the left, then narrower bands on the right as the training noise decreases (see the sketch after this list for trying different levels).
- If you set the test noise level to e.g. 1e-1, this flattens everything and you get a single wide band.
- For the existing noise, I'm not quite sure I understand why the bands are wider on the right, but I think the logic is that, given the training noise, a test noise of 1e-2 is easy to explain on the left, where the training noise is 3e-2, but harder to explain on the right, where the training noise is only 1e-2.
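To make the comparison concrete, here is a minimal sketch for sweeping the assumed test noise and comparing the resulting confidence bands. It assumes `model`, `likelihood`, and `test_x` are already defined as in the notebook; the loop and the band-width printout are my additions, not part of the example:

```python
import torch
import gpytorch

# Assumes `model`, `likelihood`, and `test_x` exist as in the notebook
model.eval()
likelihood.eval()

for test_noise in (1e-3, 1e-2, 1e-1):
    # Stack the test means with the log of the assumed input noise level
    test_x_distributional = torch.stack(
        (test_x, (test_noise * torch.ones_like(test_x)).log()), dim=1
    )
    with torch.no_grad(), gpytorch.settings.fast_pred_var():
        pred = likelihood(model(test_x_distributional))
    lower, upper = pred.confidence_region()
    # Average width of the 2-sigma confidence band as a rough summary
    print(f"test noise {test_noise:.0e}: mean band width "
          f"{(upper - lower).mean().item():.3f}")
```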
Recommendations:
I would recommend changing the test noise to 1e-3. Or, if the intent is really to show the interaction between similar training and test noise levels, maybe emphasize this and discuss how training and test noise interact?