Move to v4 for Rethinking 2 #194
Conversation
**aloctavodia** commented on 2022-06-02: This needs to be fixed: https://github.com/pymc-devs/pymc/issues/5443. See how you are overestimating the standard deviation.
**aloctavodia** commented on 2022-06-02: Line #2: `sns.kdeplot(x[4, :].flatten(), bw_method=0.01, ax=ax[0])`. Alternatively, we can use `az.plot_kde`.
Issue pymc-devs/pymc#5443 seems to be over my pay grade; going to exclude it for now. Yes indeed, using
Sorry for not being clear, this is the change needed:

```python
import numpy as np
import pymc as pm

data = np.repeat((0, 1), (3, 6))

with pm.Model() as m:
    p = pm.Uniform("p", 0, 1)  # uniform prior
    w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())  # binomial likelihood
    mean_q = pm.find_MAP()

    # remove the interval transform so the Hessian is computed on the
    # untransformed p; otherwise the standard deviation is overestimated
    p_value = m.rvs_to_values[p]
    p_value.tag.transform = None
    p_value.name = p.name

    std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]

# display summary of quadratic approximation
print("  Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
```
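As a hypothetical cross-check (not part of the PR), the corrected quadratic approximation can be verified against the analytic posterior without PyMC: a uniform prior with 6 successes in 9 trials gives a Beta(7, 4) posterior, whose mode and log-posterior curvature yield roughly 0.67 and 0.16.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical cross-check (not part of the PR): with a uniform prior and
# 6 successes in 9 trials, the exact posterior is Beta(7, 4)
post = beta(7, 4)

# MAP estimate: mode of Beta(a, b) is (a - 1) / (a + b - 2)
p_map = (7 - 1) / (7 + 4 - 2)

# Quadratic std: curvature of the log posterior at the mode,
# approximated here with a central finite difference
eps = 1e-5
d2 = (post.logpdf(p_map + eps) - 2 * post.logpdf(p_map) + post.logpdf(p_map - eps)) / eps**2
std_q = np.sqrt(-1 / d2)

print(f"p ~ {p_map:.2f} +/- {std_q:.2f}")
```

The analytic curvature at the mode is -(6/p² + 3/(1-p)²) = -40.5, so the quadratic standard deviation is sqrt(1/40.5) ≈ 0.16, matching the value the corrected model should report.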
**aloctavodia** commented on 2022-06-03: This cell also needs to be fixed, similar to the previous one.
Move to v4 for Rethinking 2
Starting with Statistical Rethinking 2, as this is the one I know best.
I was using the script from pymc-examples (`scripts/rerun.py`). Some of the files were transformed automatically:
- `Rethinking_2/Chp_02.ipynb`
- `Rethinking_2/Chp_03.ipynb`
- `Rethinking_2/Chp_10.ipynb`
- `Rethinking_2/End_of_chapter_problems/Chapter_2.ipynb`
- `Rethinking_2/End_of_chapter_problems/Chapter_11.ipynb`
This PR includes #152.
For `Rethinking_2/Chp_04.ipynb`: added `return_inferencedata=True` to `pm.sample`, so the traces are `az.InferenceData` objects. Changed `trace['var']` to `trace.posterior['var']` when possible; when an array with all the samples in one array (rather than one array per chain) was expected, I used `az.extract_dataset`.
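The chain-stacking that `az.extract_dataset` performs can be sketched with plain NumPy (toy shapes and a made-up variable, not the actual notebooks):

```python
import numpy as np

# Toy posterior mimicking trace.posterior["var"]: 4 chains x 1000 draws
rng = np.random.default_rng(0)
posterior = rng.normal(size=(4, 1000))

# trace.posterior["var"] keeps a (chain, draw) layout;
# az.extract_dataset stacks both into a single "sample" dimension
flat = posterior.reshape(-1)
print(posterior.shape, flat.shape)
```

Code that previously assumed one flat array of draws sees `flat` (4000 samples), while the InferenceData view keeps chains separate.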
Another change needed: when a pandas Series was used to define a model, it triggered an `Elementwise` error; adding `.values` turned out to be enough. As the traces are `az.InferenceData`, calculating the mean returns an xarray array, so I added `.item(0)` to keep compatibility with the formulas as they exist; not sure this is the most elegant solution. Same thing for the size of the posterior distribution: you cannot just do `len(trace['var'])`, so I used `.sizes["sample"]`; again not sure this is the most elegant solution, but it works.

For #152:
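A minimal sketch of those two accessors on a toy xarray object (assuming `xarray` is installed; the dimension name and values are made up):

```python
import numpy as np
import xarray as xr

# Toy stand-in for a stacked posterior variable with one "sample" dimension
da = xr.DataArray(np.linspace(0.0, 1.0, 4000), dims=["sample"])

n = da.sizes["sample"]   # sample count, replacing len(trace["var"])
mu = da.mean().item(0)   # .mean() returns a 0-d array; .item(0) unwraps it to a float
print(n, round(mu, 3))
```

`.item(0)` yields a plain Python float, so the existing formulas that expect scalars keep working unchanged.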
Changed `bw` to `bw_method` in `sns.kdeplot`, following the seaborn deprecation warning. Fixed the simulation of field trips, as it was only doing 15 steps (index 0 is when no step has been done, as far as I understood it).
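Since seaborn forwards `bw_method` to `scipy.stats.gaussian_kde`, the renamed parameter can be sketched without plotting (toy data; not from the notebooks):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=500)

# sns.kdeplot's bw_method is passed through to gaussian_kde; a small value
# like 0.01 gives a much narrower bandwidth than the Scott's-rule default
kde_default = gaussian_kde(x)
kde_narrow = gaussian_kde(x, bw_method=0.01)
print(kde_default.factor, kde_narrow.factor)
```

The old `bw=` keyword meant the same thing; only the parameter name changed in seaborn 0.11.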