fix indentation in bvar calc_ar (pymc-devs#525)
* [fix BVAR pymc-devs#523] fix indentation in bvar calc_ar

* [fix BVAR pymc-devs#523] remove az.extract_dataset calls
NathanielF authored Feb 21, 2023
1 parent 214f1ba commit c0621b6
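The second commit message item tracks an ArviZ API rename: `az.extract_dataset` was deprecated in favor of `az.extract`, which takes the same arguments in the calls touched here. A minimal sketch of the migration, using ArviZ's bundled `centered_eight` dataset purely for illustration:

```python
import arviz as az

# Any InferenceData object works; centered_eight is a bundled example dataset.
idata = az.load_arviz_data("centered_eight")

# Before (deprecated):
# post = az.extract_dataset(idata, group="posterior", num_samples=100)

# After: same call signature for this usage; chains and draws are stacked
# into a single "sample" dimension.
post = az.extract(idata, group="posterior", num_samples=100)
print(post["mu"].sizes)  # a single "sample" dimension of length 100
```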
Showing 2 changed files with 6,215 additions and 6,675 deletions.
12,875 changes: 6,207 additions & 6,668 deletions examples/time_series/bayesian_var_model.ipynb

Large diffs are not rendered by default.

15 changes: 8 additions & 7 deletions examples/time_series/bayesian_var_model.myst.md
@@ -5,9 +5,9 @@ jupytext:
format_name: myst
format_version: 0.13
kernelspec:
-display_name: Python 3.9.0 ('pymc_ar_ex')
+display_name: myjlabenv
language: python
-name: python3
+name: myjlabenv
---

(Bayesian Vector Autoregressive Models)=
@@ -145,7 +145,7 @@ def calc_ar_step(lag_coefs, n_eqs, n_lags, df):
],
axis=0,
)
-    ars.append(ar)
+        ars.append(ar)
beta = pm.math.stack(ars, axis=-1)
return beta
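For orientation, a sketch of the corrected helper with `ars.append(ar)` inside the per-equation loop. Only the tail of the function appears in the hunk above, so the loop body and the smoke test below are reconstructed from the notebook's structure and should be read as an approximation:

```python
import numpy as np
import pandas as pd
import pymc as pm


def calc_ar_step(lag_coefs, n_eqs, n_lags, df):
    ars = []
    for j in range(n_eqs):
        # Sum the lagged contributions for equation j across all lags.
        ar = pm.math.sum(
            [
                pm.math.sum(lag_coefs[i, j] * df.values[n_lags - (i + 1) : -(i + 1)], axis=-1)
                for i in range(n_lags)
            ],
            axis=0,
        )
        # The indentation fix: appending inside the loop collects one AR term
        # per equation instead of only the last one.
        ars.append(ar)
    beta = pm.math.stack(ars, axis=-1)
    return beta


# Tiny smoke test with fake data (2 equations, 2 lags).
df = pd.DataFrame(np.random.randn(50, 2), columns=["x", "y"])
with pm.Model():
    lag_coefs = pm.Normal("lag_coefs", 0, 1, shape=(2, 2, 2))
    beta = calc_ar_step(lag_coefs, n_eqs=2, n_lags=2, df=df)
    print(beta.shape.eval())  # [48  2]: one AR mean column per equation
```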
@@ -275,7 +275,7 @@ az.summary(idata_fake_data, var_names=["alpha", "lag_coefs", "noise_chol_corr"])
```

```{code-cell} ipython3
-az.plot_posterior(idata_fake_data, var_names=["alpha"], ref_val=[18, 8]);
+az.plot_posterior(idata_fake_data, var_names=["alpha"], ref_val=[8, 18]);
```
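The `ref_val` swap matters because reference values are matched to the flattened plot panels in order, so they must follow the order of `alpha`'s coordinates; the fix implies the fake data was simulated with intercepts of 8 and 18 in that order. A minimal sketch with simulated draws (values and shapes are illustrative assumptions):

```python
import numpy as np
import arviz as az

# Simulated posterior for a 2-element alpha, centred on the assumed true values.
rng = np.random.default_rng(0)
draws = rng.normal([8, 18], 0.5, size=(4, 500, 2))  # (chain, draw, alpha_dim)
idata = az.from_dict(posterior={"alpha": draws})

# ref_val entries are consumed panel by panel, so their order must match
# the order of alpha's elements.
az.plot_posterior(idata, var_names=["alpha"], ref_val=[8, 18]);
```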

Next we'll plot the posterior predictive distribution to check that the fitted model can capture the patterns in the observed data. This is the primary test of goodness of fit.
@@ -311,7 +311,7 @@ def plot_ppc(idata, df, group="posterior_predictive"):
fig, axs = plt.subplots(2, 1, figsize=(25, 15))
df = pd.DataFrame(idata_fake_data["observed_data"]["obs"].data, columns=["x", "y"])
axs = axs.flatten()
-ppc = az.extract_dataset(idata, group=group, num_samples=100)["obs"]
+ppc = az.extract(idata, group=group, num_samples=100)["obs"]
# Minus the lagged terms and the constant
shade_background(ppc, axs, 0, "inferno")
axs[0].plot(np.arange(ppc.shape[0]), ppc[:, 0, :].mean(axis=1), color="cyan", label="Mean")
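`shade_background` is defined earlier in the notebook and untouched by this diff. For readers of this excerpt, a hedged reconstruction of the idea: shade nested posterior-predictive percentile bands for one variable index onto the matching axis (the exact percentile grid and alpha are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt


def shade_background(ppc, axs, idx, palette="cividis"):
    """Shade nested percentile bands of ppc[:, idx, :] onto axs[idx].

    Expects ppc with dimensions (time, variable, sample); a reconstruction
    of the notebook helper, not the verbatim original.
    """
    cmap = plt.get_cmap(palette)
    percs = np.linspace(51, 99, 100)
    colors = (percs - percs.min()) / (percs.max() - percs.min())
    for color_val, p in zip(colors, percs[::-1]):
        upper = np.percentile(ppc[:, idx, :], p, axis=1)
        lower = np.percentile(ppc[:, idx, :], 100 - p, axis=1)
        axs[idx].fill_between(
            np.arange(ppc.shape[0]), lower, upper, color=cmap(color_val), alpha=0.1
        )
```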
@@ -409,7 +409,7 @@ def plot_ppc_macro(idata, df, group="posterior_predictive"):
df = pd.DataFrame(idata["observed_data"]["obs"].data, columns=["dl_gdp", "dl_cons"])
fig, axs = plt.subplots(2, 1, figsize=(20, 10))
axs = axs.flatten()
-ppc = az.extract_dataset(idata, group=group, num_samples=100)["obs"]
+ppc = az.extract(idata, group=group, num_samples=100)["obs"]
shade_background(ppc, axs, 0, "inferno")
axs[0].plot(np.arange(ppc.shape[0]), ppc[:, 0, :].mean(axis=1), color="cyan", label="Mean")
@@ -704,7 +704,7 @@ for ax, country in zip(axs, countries):
idata_full_test["observed_data"][f"obs_{country}"].data,
columns=["dl_gdp", "dl_cons", "dl_gfcf"],
)
-ppc = az.extract_dataset(idata_full_test, group="posterior_predictive", num_samples=100)[
+ppc = az.extract(idata_full_test, group="posterior_predictive", num_samples=100)[
f"obs_{country}"
]
if country == "Ireland":
@@ -754,6 +754,7 @@ In the next post in this series we will spend some time digging into the implied

## Authors
* Adapted from the PyMC Labs [blog post](https://www.pymc-labs.io/blog-posts/bayesian-vector-autoregression/) and Jim Savage's discussion [here](https://rpubs.com/jimsavage/hierarchical_var) by [Nathaniel Forde](https://nathanielf.github.io/) in November 2022 ([pymc-examples#456](https://github.com/pymc-devs/pymc-examples/pull/456))
+* Reexecuted by Nathaniel Forde in February 2023 ([pymc-examples#523](https://github.com/pymc-devs/pymc-examples/issues/523))

+++

