
Improve Kronecker example #653

Merged
2 commits merged into main from improve-kron-gp on May 30, 2024
Conversation

@AlexAndorra (Collaborator) commented Apr 1, 2024

Various updates and improvements to the Kronecker-structured GP example:

  • Improved plots
  • More informative priors to improve sampling (it's still brittle, but it wasn't converging at all before)
  • Sampling with more than one chain
  • Showcasing how to sample with numpyro and how to draw posterior predictive samples with JAX (see the sketch below)
  • Increased the true noise standard deviation to help a bit with convergence

📚 Documentation preview 📚: https://pymc-examples--653.org.readthedocs.build/en/653/
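For context, here is a minimal, hypothetical sketch of the workflow the description refers to: a Kronecker-structured marginal GP sampled with the numpyro NUTS sampler (which requires numpyro and JAX to be installed), with posterior predictive draws compiled through JAX. The grid sizes, prior values, and variable names are illustrative rather than taken from the notebook, and the `compile_kwargs={"mode": "JAX"}` option is an assumption about recent PyMC versions.

```python
import numpy as np
import pymc as pm

# Hypothetical 2D grid: the Kronecker structure comes from inputs that
# live on the Cartesian product of two 1D grids.
x1 = np.linspace(0, 1, 30)[:, None]
x2 = np.linspace(0, 2, 25)[:, None]
y = np.random.default_rng(0).normal(size=30 * 25)  # placeholder data

with pm.Model() as model:
    # Lengthscale priors kept away from zero (bounds are illustrative)
    ell1 = pm.TruncatedNormal("ell1", mu=1.0, sigma=0.5, lower=0.1)
    ell2 = pm.TruncatedNormal("ell2", mu=1.0, sigma=0.5, lower=0.1)
    eta = pm.HalfNormal("eta", 0.5)
    sigma = pm.HalfNormal("sigma", 0.5)

    cov1 = eta**2 * pm.gp.cov.ExpQuad(1, ls=ell1)
    cov2 = pm.gp.cov.ExpQuad(1, ls=ell2)

    # MarginalKron exploits the Kronecker structure of the covariance
    gp = pm.gp.MarginalKron(cov_funcs=[cov1, cov2])
    gp.marginal_likelihood("y_obs", Xs=[x1, x2], y=y, sigma=sigma)

    # Sample with the numpyro NUTS sampler and more than one chain
    idata = pm.sample(nuts_sampler="numpyro", chains=4)

    # Posterior predictive at new points, compiled to JAX (assumed option)
    Xnew = np.random.default_rng(1).uniform(size=(10, 2))
    fnew = gp.conditional("fnew", Xnew)
    ppc = pm.sample_posterior_predictive(
        idata, var_names=["fnew"], compile_kwargs={"mode": "JAX"}
    )
```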

@AlexAndorra added the enhancement label Apr 1, 2024
@AlexAndorra requested a review from bwengals April 1, 2024 22:10
@AlexAndorra self-assigned this Apr 1, 2024

@AlexAndorra (Collaborator, Author)

@bwengals can we merge this one please?

review-notebook-app bot commented May 27, 2024

bwengals commented on 2024-05-27T05:04:43Z
----------------------------------------------------------------

The truncated normal makes sense to me for the lengthscales, but it seems a bit strange for eta and sigma, because it won't let either the GP or the noise go to zero. I think it'd make more sense to use HalfNormals or something like that for those parameters. Does it still sample with those priors?


AlexAndorra commented on 2024-05-27T18:25:51Z
----------------------------------------------------------------

Yep, it still samples with pm.HalfNormal("sigma", 0.5) and pm.HalfNormal("eta", 0.5).

But eta and sigma can't be equal to zero anyway, can they? What I'm trying to do is avoid the near-zero region. Does that make sense?

bwengals commented on 2024-05-27T22:32:07Z
----------------------------------------------------------------

I think it's analogous to doing a linear regression and putting a Normal(0, sigma) prior on the beta coefficients, and a half-normal or something on the likelihood sigma, right? Or I'm missing something.

Like it'd be strange to do y ~ N(beta0 + beta1 * x, sigma) with truncated normal priors on beta0, beta1, and sigma, unless you had a very particular reason.

AlexAndorra commented on 2024-05-28T13:40:24Z
----------------------------------------------------------------

Yep, agreed. I've definitely done the second case -- it's very useful when you have prior information about the parameters. Here though it's hard to argue for or against, as it's a simulated case.

Anyway, that's not a blocker and it runs fine now, so we can go ahead.
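To make the two options in this exchange concrete, here is a small, hypothetical sketch contrasting the prior choices discussed above; the mu, sigma, and lower-bound values are illustrative and not the notebook's.

```python
import pymc as pm

# Illustrative only: the two prior choices discussed above, side by side.
with pm.Model():
    # Option used in the example: truncated normals that keep eta and
    # sigma away from the near-zero region (bounds are illustrative).
    eta_tn = pm.TruncatedNormal("eta_tn", mu=1.0, sigma=0.5, lower=0.1)
    sigma_tn = pm.TruncatedNormal("sigma_tn", mu=1.0, sigma=0.5, lower=0.1)

    # Option suggested in the review: half-normals, which place prior mass
    # near zero but still let the sampler explore small values.
    eta_hn = pm.HalfNormal("eta_hn", 0.5)
    sigma_hn = pm.HalfNormal("sigma_hn", 0.5)
```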

@bwengals (Collaborator)

Ah sorry for missing this one, left one quick Q then all good.


@AlexAndorra (Collaborator, Author)

Thanks @bwengals! Just pushed the changes. I also had a small question on your comment, but it's not a blocker, so feel free to merge.


@AlexAndorra (Collaborator, Author)

Can you approve and merge @bwengals if that looks good to you?


@AlexAndorra merged commit 475a1b4 into main May 30, 2024
2 checks passed
@AlexAndorra deleted the improve-kron-gp branch May 30, 2024 17:01