Commit 3f7e71b8 authored by Mirko Birbaumer's avatar Mirko Birbaumer

added example MC_3_1

parent b66aa7a5
Pipeline #664567 passed
-1.081 9.357
-0.888 8.322
-0.623 8.745
-0.480 7.346
-0.505 7.028
-0.345 4.032
0.052 1.490
0.061 3.265
0.215 4.525
0.395 3.339
0.643 2.045
0.871 -1.336
0.776 1.264
0.833 -0.813
1.224 -0.335
1.300 0.152
1.563 -1.797
1.669 -4.772
1.967 -3.043
1.950 -5.293
2.030 -4.401
2.284 -5.294
2.534 -2.912
2.531 -3.680
2.765 -3.975
2.913 -7.202
3.168 -4.317
3.119 -7.854
3.385 -5.167
3.488 -5.135
3.538 -6.754
3.900 -6.662
3.970 -6.080
%% Cell type:code id:2bc64f1b-8f55-40b3-bfe2-38e144d482f1 tags:
``` python
import arviz as az
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
import numpy as np
import pandas as pd
import pymc as pm
```
%% Cell type:code id:3677ec51-54fd-48cd-b379-ce7161c0c1f9 tags:
``` python
df = pd.read_csv("./Daten/chemical_shifts.csv", header=None)
#df = df.iloc[:,0]
az.style.use("arviz-darkgrid")
df.head()
```
%% Output
0
0 51.06
1 55.12
2 53.73
3 50.24
4 52.05
%% Cell type:code id:2db6d34d-e1f4-4e8b-b001-fac7c7ab8910 tags:
``` python
# Gaussian likelihood
with pm.Model() as model_g:
    μ = pm.Uniform('μ', lower=40, upper=70)
    σ = pm.HalfNormal('σ', sigma=10)
    y = pm.Normal('y', mu=μ, sigma=σ, observed=df)
    trace_g = pm.sample(2000, idata_kwargs={"log_likelihood": True}, random_seed=4591)
    trace_g.extend(pm.sample_posterior_predictive(trace_g, random_seed=4591))

# Student-t likelihood (heavier tails, robust to outliers)
with pm.Model() as model_t:
    μ = pm.Uniform('μ', 40, 75)
    σ = pm.HalfNormal('σ', sigma=10)
    ν = pm.Exponential('ν', 1/30)
    y = pm.StudentT('y', mu=μ, sigma=σ, nu=ν, observed=df)
    trace_t = pm.sample(2000, idata_kwargs={"log_likelihood": True}, random_seed=4591)
    trace_t.extend(pm.sample_posterior_predictive(trace_t, random_seed=4591))
```
%% Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [μ, σ]
Sampling 4 chains for 1_000 tune and 2_000 draw iterations (4_000 + 8_000 draws total) took 3 seconds.
Sampling: [y]
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [μ, σ, ν]
Sampling 4 chains for 1_000 tune and 2_000 draw iterations (4_000 + 8_000 draws total) took 4 seconds.
Sampling: [y]
%% Cell type:code id:0e05f801-5548-4533-9c46-7ffc290582e9 tags:
``` python
cmp_df = az.compare({"model_g": trace_g, "model_t": trace_t})
# cmp_df.to_markdown()
cmp_df
```
%% Output
/opt/conda/lib/python3.10/site-packages/arviz/stats/stats.py:792: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/arviz/stats/stats.py:792: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.
warnings.warn(
              rank    elpd_loo     p_loo  elpd_diff        weight         se       dse  warning scale
    model_t      0 -122.479620  3.940166   0.000000  1.000000e+00   9.110407  0.000000     True   log
    model_g      1 -131.694721  5.685827   9.215101  1.243450e-14  12.510035  5.110462     True   log
%% Cell type:code id:82618723-2f74-4c98-8a0e-011f9889cd6e tags:
``` python
az.plot_compare(cmp_df)
```
%% Output
<Axes: title={'center': 'Model comparison\nhigher is better'}, xlabel='elpd_loo (log)', ylabel='ranked models'>
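%% Cell type:markdown tags:

The LOO comparison above clearly favors `model_t`. A quick way to see why is to compare the log-density penalty each likelihood assigns to an outlier: the Student-t's heavier tails punish extreme observations far less, so outliers pull its posterior around much less. The following is a minimal pure-NumPy sketch (hand-rolled densities, not part of the original commit) illustrating this with a point 5 standard deviations from the mean.

%% Cell type:code tags:

``` python
import numpy as np
from math import lgamma

def normal_logpdf(x, mu, sigma):
    """Log-density of a Normal(mu, sigma) at x."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def studentt_logpdf(x, mu, sigma, nu):
    """Log-density of a Student-t with nu degrees of freedom, location mu, scale sigma."""
    z = (x - mu) / sigma
    return (lgamma((nu + 1) / 2) - lgamma(nu / 2)
            - 0.5 * np.log(nu * np.pi * sigma**2)
            - (nu + 1) / 2 * np.log1p(z**2 / nu))

# The same 5-sigma point is penalized far less under the Student-t:
print(normal_logpdf(5.0, 0.0, 1.0))         # ≈ -13.42
print(studentt_logpdf(5.0, 0.0, 1.0, 3.0))  # ≈ -5.47
```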