Only group-level autoregressive coefficients possible? #81
-
Hi there! Thanks so much for making mvgam – I've recently discovered it and absolutely love its flexibility and capabilities. I have some data that I'm trying to model with VAR() and gp(), and I have some questions.

What I'm interested in is the auto- and cross-correlation of time series for some experience sampling data, where we have sampled the emotional valence of partners within couples 6 times a day for 7 days. Because trajectories of emotional valence across these seven days are highly individual, they have different shapes and degrees of fluctuation – and GP models seem able to capture these dynamics really nicely. What I'm interested in, however, is the coupling of individual time series within couples – do we see leader and follower dynamics, etc.? Here is a visualisation of what the data look like:

At the moment, I have tried to model the data as follows (responses are restricted to between 0 and 1, therefore Beta family for the time being):

varmod <- mvgam(

When I run this model on a subset of the data, it gives me autoregressive structures within and between participants, but it estimates the degree of cross-dependence for ALL of the individual participants. What I'm a little nervous about is scaling the above model up to 120 participants or so (which would create a huge autoregressive structure) – is there a way to estimate the VAR component only for time series within each CoupleID? I've tried to use trend_map to incorporate a potential latent process (i.e., to create a dependence between partners within couples), but this made the model incapable of capturing individual trajectories. This is an example of the data I'm currently dealing with:

I'd really appreciate your help with this, as I'm slightly stuck on how to proceed. A solution may very well involve changing the trend_formula, as I'm a little unclear on how to get hierarchical structure (i.e., some sort of (0 + Role | CoupleID)) within this sort of framework. I hope this makes sense – looking forward to hearing more!
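[Editor's note: the poster's model code is truncated above. As a rough, hedged sketch only (not the poster's actual call), the unpooled approach being described would look something like the following, where every participant is treated as its own series and `dat` is hypothetical long-format data with one `series` level per participant:]

# Hedged sketch: an unpooled VAR in which cross-dependencies are estimated
# among ALL participant series, not just within couples
varmod <- mvgam(formula = Valence ~ 1,
                trend_model = VAR(cor = TRUE),
                data = dat,        # hypothetical data with 'series' and 'time' columns
                family = betar())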
-
Thanks for the nice comments on the package @4CCoxAU. This is a very timely question, as I've had similar requests from colleagues and I'm currently implementing methods to handle this type of setup (see issue #75). There is even a branch on GitHub dedicated to working through all the internals so that I can make these models possible. The idea will be to allow dynamic trend models such as VAR() to accept grouping arguments, so that autoregressive and cross-dependence parameters are only estimated among series within the same group. When these groupings are supplied, the package will check to ensure that there is currently no series variable in the data, since the series identifiers will be constructed from the group / subgroup combinations.
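[For illustration, this is the call structure the feature ends up taking in the reprex in the next reply; gr names the grouping factor and subgr names the series within each group:]

# Grouped VAR: dynamics are estimated only among the Role series
# within each CoupleID, rather than across all participants
mvgam(formula = Valence ~ Role + CoupleID,
      trend_model = VAR(gr = CoupleID, subgr = Role),
      data = all_dat,
      family = betar())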
-
Hi again @4CCoxAU. The latest push to the hierarchical_cors branch should now be able to handle these models (installable using devtools::install_github("nicholasjclark/mvgam@hierarchical_cors")). But please note that I'll need to do a fair amount more testing before I bring this over to the main development branch and then onto CRAN. Below is a reprex to show how you can use it, though it would be great if you could try it out with your own data to see if it all makes sense and doesn't fail. There are probably several downstream functions that don't work, but I haven't had the time yet to fully test it out.

library(mvgam)
#> Welcome to mvgam. Please cite as: Clark, NJ, and Wells, K. 2022. Dynamic Generalized Additive Models (DGAMs) for forecasting discrete ecological time series. Methods in Ecology and Evolution, 2022, https://doi.org/10.1111/2041-210X.13974
# Simulate three couples with varying VAR dynamics and Beta observations
set.seed(0)
simdat1 <- sim_mvgam(trend_model = VAR(cor = TRUE),
prop_trend = 0.95,
n_series = 2,
mu = c(1, 0),
family = betar())
simdat2 <- sim_mvgam(trend_model = VAR(cor = TRUE),
prop_trend = 0.95,
n_series = 2,
mu = c(1, 0),
family = betar())
simdat3 <- sim_mvgam(trend_model = VAR(cor = TRUE),
prop_trend = 0.95,
n_series = 2,
mu = c(1, 0),
family = betar())
# Set up the data but DO NOT include 'series'
all_dat <- rbind(simdat1$data_train %>%
dplyr::mutate(CoupleID = '65'),
simdat2$data_train %>%
dplyr::mutate(CoupleID = '55'),
simdat3$data_train %>%
dplyr::mutate(CoupleID = '45')) %>%
dplyr::rowwise() %>%
dplyr::mutate(Role = if(series == 'series_1'){
'Father'
} else {
'Mother'
},
Role = as.factor(Role),
CoupleID = as.factor(CoupleID),
Valence = y) %>%
dplyr::select(-series, -year, -season, -y)
dplyr::glimpse(all_dat)
#> Rows: 450
#> Columns: 4
#> Rowwise:
#> $ time <int> 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10,…
#> $ CoupleID <fct> 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 6…
#> $ Role <fct> Father, Mother, Father, Mother, Father, Mother, Father, Mothe…
#> $ Valence <dbl> 0.7458002, 0.5236628, 0.6941480, 0.6616783, 0.3908517, 0.5712…
# Check that priors work
get_mvgam_priors(formula = Valence ~ Role + CoupleID,
trend_model = VAR(gr = CoupleID, subgr = Role),
data = all_dat)
#> param_name param_length
#> 1 (Intercept) 1
#> 2 RoleMother 1
#> 3 CoupleID55 1
#> 4 CoupleID45 1
#> 5 vector<lower=0>[n_series] sigma; 6
#> 6 real es[1]; 1
#> 7 real es[2]; 1
#> 8 real<lower=0> fs[1]; 1
#> 9 real<lower=0> fs[2]; 1
#> 10 real<lower=0> gs[1]; 1
#> 11 real<lower=0> gs[2]; 1
#> 12 real<lower=0> hs[1]; 1
#> 13 real<lower=0> hs[2]; 1
#> param_info
#> 1 (Intercept)
#> 2 RoleMother fixed effect
#> 3 CoupleID55 fixed effect
#> 4 CoupleID45 fixed effect
#> 5 trend sd
#> 6 diagonal autocorrelation population mean
#> 7 off-diagonal autocorrelation population mean
#> 8 diagonal autocorrelation population variance
#> 9 off-diagonal autocorrelation population variance
#> 10 shape1 for diagonal autocorrelation precision
#> 11 shape1 for off-diagonal autocorrelation precision
#> 12 shape2 for diagonal autocorrelation precision
#> 13 shape2 for off-diagonal autocorrelation precision
#> prior example_change
#> 1 (Intercept) ~ student_t(3, -0.4, 2.5); (Intercept) ~ normal(0, 1);
#> 2 RoleMother ~ student_t(3, 0, 2); RoleMother ~ normal(0, 1);
#> 3 CoupleID55 ~ student_t(3, 0, 2); CoupleID55 ~ normal(0, 1);
#> 4 CoupleID45 ~ student_t(3, 0, 2); CoupleID45 ~ normal(0, 1);
#> 5 sigma ~ student_t(3, 0, 2.5); sigma ~ exponential(0.14);
#> 6 es[1] = 0; es[1] = 0.5;
#> 7 es[2] = 0; es[2] = 0.1;
#> 8 fs[1] = sqrt(0.455); fs[1] = 0.6;
#> 9 fs[2] = sqrt(0.455); fs[2] = 0.3;
#> 10 gs[1] = 1.365; gs[1] = 1.1;
#> 11 gs[2] = 1.365; gs[2] = 1.07;
#> 12 hs[1] = 0.071175; hs[1] = 0.08;
#> 13 hs[2] = 0.071175; hs[2] = 0.1;
#> new_lowerbound new_upperbound
#> 1 NA NA
#> 2 NA NA
#> 3 NA NA
#> 4 NA NA
#> 5 NA NA
#> 6 NA NA
#> 7 NA NA
#> 8 NA NA
#> 9 NA NA
#> 10 NA NA
#> 11 NA NA
#> 12 NA NA
#> 13 NA NA
# Fit the hierarchical VAR model
mod <- mvgam(formula = Valence ~ Role + CoupleID,
trend_model = VAR(gr = CoupleID, subgr = Role),
priors = c(prior(std_normal(), class = Intercept),
prior(std_normal(), class = b),
prior(exponential(2.5), class = sigma)),
data = all_dat,
family = betar())
#> Compiling Stan program using cmdstanr
#>
#> Start sampling
#> Running MCMC with 4 parallel chains...
#>
#> Chain 1 Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 2 Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 3 Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 4 Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 4 Iteration: 100 / 1000 [ 10%] (Warmup)
#> Chain 2 Iteration: 100 / 1000 [ 10%] (Warmup)
#> Chain 3 Iteration: 100 / 1000 [ 10%] (Warmup)
#> Chain 1 Iteration: 100 / 1000 [ 10%] (Warmup)
#> Chain 2 Iteration: 200 / 1000 [ 20%] (Warmup)
#> Chain 3 Iteration: 200 / 1000 [ 20%] (Warmup)
#> Chain 4 Iteration: 200 / 1000 [ 20%] (Warmup)
#> Chain 1 Iteration: 200 / 1000 [ 20%] (Warmup)
#> Chain 2 Iteration: 300 / 1000 [ 30%] (Warmup)
#> Chain 1 Iteration: 300 / 1000 [ 30%] (Warmup)
#> Chain 4 Iteration: 300 / 1000 [ 30%] (Warmup)
#> Chain 2 Iteration: 400 / 1000 [ 40%] (Warmup)
#> Chain 3 Iteration: 300 / 1000 [ 30%] (Warmup)
#> Chain 4 Iteration: 400 / 1000 [ 40%] (Warmup)
#> Chain 1 Iteration: 400 / 1000 [ 40%] (Warmup)
#> Chain 2 Iteration: 500 / 1000 [ 50%] (Warmup)
#> Chain 2 Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 3 Iteration: 400 / 1000 [ 40%] (Warmup)
#> Chain 4 Iteration: 500 / 1000 [ 50%] (Warmup)
#> Chain 4 Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 1 Iteration: 500 / 1000 [ 50%] (Warmup)
#> Chain 1 Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 2 Iteration: 600 / 1000 [ 60%] (Sampling)
#> Chain 4 Iteration: 600 / 1000 [ 60%] (Sampling)
#> Chain 1 Iteration: 600 / 1000 [ 60%] (Sampling)
#> Chain 3 Iteration: 500 / 1000 [ 50%] (Warmup)
#> Chain 3 Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 2 Iteration: 700 / 1000 [ 70%] (Sampling)
#> Chain 4 Iteration: 700 / 1000 [ 70%] (Sampling)
#> Chain 1 Iteration: 700 / 1000 [ 70%] (Sampling)
#> Chain 2 Iteration: 800 / 1000 [ 80%] (Sampling)
#> Chain 4 Iteration: 800 / 1000 [ 80%] (Sampling)
#> Chain 1 Iteration: 800 / 1000 [ 80%] (Sampling)
#> Chain 3 Iteration: 600 / 1000 [ 60%] (Sampling)
#> Chain 2 Iteration: 900 / 1000 [ 90%] (Sampling)
#> Chain 4 Iteration: 900 / 1000 [ 90%] (Sampling)
#> Chain 1 Iteration: 900 / 1000 [ 90%] (Sampling)
#> Chain 2 Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 2 finished in 20.7 seconds.
#> Chain 3 Iteration: 700 / 1000 [ 70%] (Sampling)
#> Chain 4 Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 4 finished in 21.7 seconds.
#> Chain 1 Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 1 finished in 22.1 seconds.
#> Chain 3 Iteration: 800 / 1000 [ 80%] (Sampling)
#> Chain 3 Iteration: 900 / 1000 [ 90%] (Sampling)
#> Chain 3 Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 3 finished in 32.4 seconds.
#>
#> All 4 chains finished successfully.
#> Mean chain execution time: 24.2 seconds.
#> Total execution time: 32.6 seconds.
# Check standard outputs
code(mod)
#> // Stan model code generated by package mvgam
#> functions {
#> /* Function to compute a partially pooled correlation matrix */
#>
#> /* https://discourse.mc-stan.org/t/hierarchical-prior-for-partial-pooling-on-correlation-matrices*/
#> matrix combine_cholesky(matrix global_chol_cor, matrix local_chol_cor,
#> real alpha) {
#> int dim = rows(local_chol_cor);
#> matrix[dim, dim] global_cor = multiply_lower_tri_self_transpose(global_chol_cor);
#> matrix[dim, dim] local_cor = multiply_lower_tri_self_transpose(local_chol_cor);
#> matrix[dim, dim] combined_chol_cor;
#> combined_chol_cor = cholesky_decompose(alpha * global_cor
#> + (1 - alpha) * local_cor);
#> return combined_chol_cor;
#> }
#> /* Function to compute the matrix square root */
#>
#> /* see Heaps 2022 for details (https://doi.org/10.1080/10618600.2022.2079648)*/
#> matrix sqrtm(matrix A) {
#> int m = rows(A);
#> vector[m] root_root_evals = sqrt(sqrt(eigenvalues_sym(A)));
#> matrix[m, m] evecs = eigenvectors_sym(A);
#> matrix[m, m] eprod = diag_post_multiply(evecs, root_root_evals);
#> return tcrossprod(eprod);
#> }
#> /* Function to transform P_real to P */
#>
#> /* see Heaps 2022 for details (https://doi.org/10.1080/10618600.2022.2079648)*/
#> matrix P_realtoP(matrix P_real) {
#> int m = rows(P_real);
#> matrix[m, m] B = tcrossprod(P_real);
#> for (i in 1 : m) {
#> B[i, i] += 1.0;
#> }
#> return mdivide_left_spd(sqrtm(B), P_real);
#> }
#> /* Function to perform the reverse mapping*/
#>
#> /* see Heaps 2022 for details (https://doi.org/10.1080/10618600.2022.2079648)*/
#> array[,] matrix rev_mapping(array[] matrix P, matrix Sigma) {
#> int p = size(P);
#> int m = rows(Sigma);
#> array[p, p] matrix[m, m] phi_for;
#> array[p, p] matrix[m, m] phi_rev;
#> array[p + 1] matrix[m, m] Sigma_for;
#> array[p + 1] matrix[m, m] Sigma_rev;
#> matrix[m, m] S_for;
#> matrix[m, m] S_rev;
#> array[p + 1] matrix[m, m] S_for_list;
#> array[p + 1] matrix[m, m] Gamma_trans;
#> array[2, p] matrix[m, m] phiGamma;
#>
#> // Step 1:
#> Sigma_for[p + 1] = Sigma;
#> S_for_list[p + 1] = sqrtm(Sigma);
#> for (s in 1 : p) {
#> // In this block of code S_rev is B^{-1} and S_for is a working matrix
#> S_for = -tcrossprod(P[p - s + 1]);
#> for (i in 1 : m) {
#> S_for[i, i] += 1.0;
#> }
#> S_rev = sqrtm(S_for);
#> S_for_list[p - s + 1] = mdivide_right_spd(mdivide_left_spd(S_rev,
#> sqrtm(
#> quad_form_sym(
#> Sigma_for[
#> p - s + 2],
#> S_rev))),
#> S_rev);
#> Sigma_for[p - s + 1] = tcrossprod(S_for_list[p - s + 1]);
#> }
#>
#> // Step 2:
#> Sigma_rev[1] = Sigma_for[1];
#> Gamma_trans[1] = Sigma_for[1];
#> for (s in 0 : (p - 1)) {
#> S_for = S_for_list[s + 1];
#> S_rev = sqrtm(Sigma_rev[s + 1]);
#> phi_for[s + 1, s + 1] = mdivide_right_spd(S_for * P[s + 1], S_rev);
#> phi_rev[s + 1, s + 1] = mdivide_right_spd(S_rev * P[s + 1]', S_for);
#> Gamma_trans[s + 2] = phi_for[s + 1, s + 1] * Sigma_rev[s + 1];
#> if (s >= 1) {
#> for (k in 1 : s) {
#> phi_for[s + 1, k] = phi_for[s, k]
#> - phi_for[s + 1, s + 1] * phi_rev[s, s - k + 1];
#> phi_rev[s + 1, k] = phi_rev[s, k]
#> - phi_rev[s + 1, s + 1] * phi_for[s, s - k + 1];
#> }
#> for (k in 1 : s) {
#> Gamma_trans[s + 2] = Gamma_trans[s + 2]
#> + phi_for[s, k] * Gamma_trans[s + 2 - k];
#> }
#> }
#> Sigma_rev[s + 2] = Sigma_rev[s + 1]
#> - quad_form_sym(Sigma_for[s + 1],
#> phi_rev[s + 1, s + 1]');
#> }
#> for (i in 1 : p) {
#> phiGamma[1, i] = phi_for[p, i];
#> }
#> for (i in 1 : p) {
#> phiGamma[2, i] = Gamma_trans[i]';
#> }
#> return phiGamma;
#> }
#> vector rep_each(vector x, int K) {
#> int N = rows(x);
#> vector[N * K] y;
#> int pos = 1;
#> for (n in 1 : N) {
#> for (k in 1 : K) {
#> y[pos] = x[n];
#> pos += 1;
#> }
#> }
#> return y;
#> }
#> }
#> data {
#> int<lower=0> total_obs; // total number of observations
#> int<lower=0> n; // number of timepoints per series
#> int<lower=0> n_groups; // number of groups (correlations apply within grouping levels)
#> int<lower=0> n_subgroups; // number of subgroups (units whose errors will be correlated)
#> int<lower=0> n_series; // total number of unique series (n_groups * n_subgroups)
#> array[n_groups, n_subgroups] int<lower=1> group_inds; // indices of group membership
#> int<lower=0> num_basis; // total number of basis coefficients
#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#> int<lower=0> n_nonmissing; // number of nonmissing observations
#> vector<lower=0, upper=1>[n_nonmissing] flat_ys; // flattened nonmissing observations
#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> transformed data {
#> vector[n_series] trend_zeros = rep_vector(0.0, n_series);
#>
#> // exchangeable partial autocorrelation hyperparameters
#>
#> // see Heaps 2022 for details (https://doi.org/10.1080/10618600.2022.2079648)
#> vector[2] es;
#> vector<lower=0>[2] fs;
#> vector<lower=0>[2] gs;
#> vector<lower=0>[2] hs;
#> es[1] = 0;
#> es[2] = 0;
#> fs[1] = sqrt(0.455);
#> fs[2] = sqrt(0.455);
#> gs[1] = 1.365;
#> gs[2] = 1.365;
#> hs[1] = 0.071175;
#> hs[2] = 0.071175;
#> }
#> parameters {
#> // raw basis coefficients
#> vector[num_basis] b_raw;
#>
#> // Beta precision parameters
#> vector<lower=0>[n_series] phi;
#>
#> // latent trend variance parameters
#> cholesky_factor_corr[n_subgroups] L_Omega_global;
#> array[n_groups] cholesky_factor_corr[n_subgroups] L_deviation_group;
#> real<lower=0, upper=1> alpha_cor;
#> vector<lower=0>[n_series] sigma;
#>
#> // unconstrained VAR1 partial autocorrelations
#> array[n_groups] matrix[n_subgroups, n_subgroups] P_real_group;
#>
#> // partial autocorrelation hyperparameters
#> vector[2] Pmu;
#> vector<lower=0>[2] Pomega;
#>
#> // raw latent trends
#> array[n] vector[n_series] trend_raw;
#> }
#> transformed parameters {
#> // latent trend VAR1 autoregressive terms
#> array[n_groups] matrix[n_subgroups, n_subgroups] A_group;
#> matrix[n_series, n_series] A;
#>
#> // LKJ form of covariance matrix
#> matrix[n_series, n_series] L_Sigma;
#>
#> // computed error covariance matrix
#> array[n_groups] cov_matrix[n_subgroups] Sigma_group;
#> matrix[n_series, n_series] Sigma;
#>
#> // initial trend covariance
#> array[n_groups] cov_matrix[n_subgroups] Gamma_group;
#> matrix[n_series, n_series] Gamma;
#>
#> // basis coefficients
#> vector[num_basis] b;
#>
#> // trend estimates in matrix-form
#> matrix[n, n_series] trend;
#> for (i in 1 : n) {
#> trend[i, 1 : n_series] = to_row_vector(trend_raw[i]);
#> }
#>
#> // derived group-level VAR covariance matrices
#> array[n_groups] cholesky_factor_corr[n_subgroups] L_Omega_group;
#> array[n_groups] matrix[n_subgroups, n_subgroups] L_Sigma_group;
#> for (g in 1 : n_groups) {
#> L_Omega_group[g] = combine_cholesky(L_Omega_global, L_deviation_group[g],
#> alpha_cor);
#> L_Sigma_group[g] = diag_pre_multiply(sigma[group_inds[g]],
#> L_Omega_group[g]);
#> Sigma_group[g] = multiply_lower_tri_self_transpose(L_Sigma_group[g]);
#> }
#>
#> // stationary VAR reparameterisation
#> {
#> array[1] matrix[n_subgroups, n_subgroups] P;
#> array[2, 1] matrix[n_subgroups, n_subgroups] phiGamma;
#> for (g in 1 : n_groups) {
#> P[1] = P_realtoP(P_real_group[g]);
#> phiGamma = rev_mapping(P, Sigma_group[g]);
#> A_group[g] = phiGamma[1, 1];
#> Gamma_group[g] = phiGamma[2, 1];
#> }
#>
#> // computed (full) VAR matrices
#> Sigma = rep_matrix(0, n_series, n_series);
#> Gamma = rep_matrix(0, n_series, n_series);
#> A = rep_matrix(0, n_series, n_series);
#> for (g in 1 : n_groups) {
#> Sigma[group_inds[g], group_inds[g]] = multiply_lower_tri_self_transpose(L_Sigma_group[g]);
#> A[group_inds[g], group_inds[g]] = A_group[g];
#> Gamma[group_inds[g], group_inds[g]] = Gamma_group[g];
#> }
#> L_Sigma = cholesky_decompose(Sigma);
#> }
#> b[1 : num_basis] = b_raw[1 : num_basis];
#> }
#> model {
#> // latent trend mean parameters
#> array[n - 1] vector[n_series] mu;
#>
#> // prior for (Intercept)...
#> b_raw[1] ~ std_normal();
#>
#> // prior for RoleMother...
#> b_raw[2] ~ std_normal();
#>
#> // prior for CoupleID55...
#> b_raw[3] ~ std_normal();
#>
#> // prior for CoupleID45...
#> b_raw[4] ~ std_normal();
#>
#> // priors for latent trend variance parameters
#> sigma ~ exponential(2.5);
#>
#> // LKJ error correlation prior
#> alpha_cor ~ beta(3, 2);
#> L_Omega_global ~ lkj_corr_cholesky(1);
#> for (g in 1 : n_groups) {
#> L_deviation_group[g] ~ lkj_corr_cholesky(6);
#> }
#>
#> // partial autocorrelation hyperpriors
#> Pmu ~ normal(es, fs);
#> Pomega ~ gamma(gs, hs);
#> for (g in 1 : n_groups) {
#> diagonal(P_real_group[g]) ~ normal(Pmu[1], 1 / sqrt(Pomega[1]));
#> for (i in 1 : n_subgroups) {
#> for (j in 1 : n_subgroups) {
#> if (i != j) {
#> P_real_group[g, i, j] ~ normal(Pmu[2], 1 / sqrt(Pomega[2]));
#> }
#> }
#> }
#> }
#>
#> // trend means
#> for (i in 2 : n) {
#> mu[i - 1] = A * trend_raw[i - 1];
#> }
#>
#> // stochastic latent trends (contemporaneously correlated)
#> trend_raw[1] ~ multi_normal(trend_zeros, Gamma);
#> for (i in 2 : n) {
#> trend_raw[i] ~ multi_normal_cholesky(mu[i - 1], L_Sigma);
#> }
#>
#> // priors for precision parameters
#> phi ~ gamma(0.01, 0.01);
#> {
#> // likelihood functions
#> vector[n_nonmissing] flat_trends;
#> vector[n_nonmissing] flat_phis;
#> flat_trends = to_vector(trend)[obs_ind];
#> flat_phis = rep_each(phi, n)[obs_ind];
#> flat_ys ~ beta(inv_logit(append_col(flat_xs, flat_trends)
#> * append_row(b, 1.0))
#> .* flat_phis,
#> (1
#> - inv_logit(append_col(flat_xs, flat_trends)
#> * append_row(b, 1.0)))
#> .* flat_phis);
#> }
#> }
#> generated quantities {
#> vector[total_obs] eta;
#> matrix[n, n_series] phi_vec;
#> matrix[n, n_series] mus;
#> array[n, n_series] real<lower=0, upper=1> ypred;
#>
#> // posterior predictions
#> eta = X * b;
#> for (s in 1 : n_series) {
#> phi_vec[1 : n, s] = rep_vector(phi[s], n);
#> }
#> for (s in 1 : n_series) {
#> mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#> ypred[1 : n, s] = beta_rng(inv_logit(mus[1 : n, s]) .* phi_vec[1 : n, s],
#> (1 - inv_logit(mus[1 : n, s]))
#> .* phi_vec[1 : n, s]);
#> }
#> }
summary(mod)
#> GAM formula:
#> Valence ~ Role + CoupleID
#>
#> Family:
#> beta
#>
#> Link function:
#> logit
#>
#> Trend model:
#> VAR(gr = CoupleID, subgr = Role)
#>
#> N series:
#> 6
#>
#> N timepoints:
#> 75
#>
#> Status:
#> Fitted using Stan
#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
#> Total post-warmup draws = 2000
#>
#>
#> Observation precision parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
#> phi[1] 3.3 5.3 13 1.05 73
#> phi[2] 4.5 11.0 160 1.11 39
#> phi[3] 4.2 6.8 18 1.03 124
#> phi[4] 8.2 19.0 86 1.03 95
#> phi[5] 5.0 10.0 36 1.08 123
#> phi[6] 4.9 10.0 88 1.01 121
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
#> (Intercept) 0.66 1.10 1.50 1.05 121
#> RoleMother -1.30 -0.86 -0.47 1.01 119
#> CoupleID55 -0.57 -0.21 0.26 1.05 114
#> CoupleID45 -0.60 -0.23 0.25 1.05 114
#>
#> Latent trend VAR parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
#> A[1,1] 0.066 0.690 0.910 1.04 48
#> A[1,2] -0.110 0.095 1.700 1.05 41
#> A[1,3] 0.000 0.000 0.000 NaN NaN
#> A[1,4] 0.000 0.000 0.000 NaN NaN
#> A[1,5] 0.000 0.000 0.000 NaN NaN
#> A[1,6] 0.000 0.000 0.000 NaN NaN
#> A[2,1] -0.280 0.140 1.000 1.01 588
#> A[2,2] 0.230 0.610 0.850 1.03 110
#> A[2,3] 0.000 0.000 0.000 NaN NaN
#> A[2,4] 0.000 0.000 0.000 NaN NaN
#> A[2,5] 0.000 0.000 0.000 NaN NaN
#> A[2,6] 0.000 0.000 0.000 NaN NaN
#> A[3,1] 0.000 0.000 0.000 NaN NaN
#> A[3,2] 0.000 0.000 0.000 NaN NaN
#> A[3,3] 0.460 0.910 1.600 1.04 76
#> A[3,4] -0.160 0.190 0.590 1.02 164
#> A[3,5] 0.000 0.000 0.000 NaN NaN
#> A[3,6] 0.000 0.000 0.000 NaN NaN
#> A[4,1] 0.000 0.000 0.000 NaN NaN
#> A[4,2] 0.000 0.000 0.000 NaN NaN
#> A[4,3] -2.300 -0.810 -0.150 1.02 81
#> A[4,4] -0.600 0.230 0.640 1.02 93
#> A[4,5] 0.000 0.000 0.000 NaN NaN
#> A[4,6] 0.000 0.000 0.000 NaN NaN
#> A[5,1] 0.000 0.000 0.000 NaN NaN
#> A[5,2] 0.000 0.000 0.000 NaN NaN
#> A[5,3] 0.000 0.000 0.000 NaN NaN
#> A[5,4] 0.000 0.000 0.000 NaN NaN
#> A[5,5] -0.150 0.220 0.490 1.02 222
#> A[5,6] -1.700 -0.810 -0.420 1.02 149
#> A[6,1] 0.000 0.000 0.000 NaN NaN
#> A[6,2] 0.000 0.000 0.000 NaN NaN
#> A[6,3] 0.000 0.000 0.000 NaN NaN
#> A[6,4] 0.000 0.000 0.000 NaN NaN
#> A[6,5] -0.310 0.060 0.570 1.04 160
#> A[6,6] 0.180 0.570 1.200 1.04 136
#> Sigma[1,1] 0.022 0.140 0.730 1.05 72
#> Sigma[1,2] -0.150 -0.021 0.090 1.01 364
#> Sigma[1,3] 0.000 0.000 0.000 NaN NaN
#> Sigma[1,4] 0.000 0.000 0.000 NaN NaN
#> Sigma[1,5] 0.000 0.000 0.000 NaN NaN
#> Sigma[1,6] 0.000 0.000 0.000 NaN NaN
#> Sigma[2,1] -0.150 -0.021 0.090 1.01 364
#> Sigma[2,2] 0.043 0.320 1.000 1.05 60
#> Sigma[2,3] 0.000 0.000 0.000 NaN NaN
#> Sigma[2,4] 0.000 0.000 0.000 NaN NaN
#> Sigma[2,5] 0.000 0.000 0.000 NaN NaN
#> Sigma[2,6] 0.000 0.000 0.000 NaN NaN
#> Sigma[3,1] 0.000 0.000 0.000 NaN NaN
#> Sigma[3,2] 0.000 0.000 0.000 NaN NaN
#> Sigma[3,3] 0.061 0.270 0.980 1.05 67
#> Sigma[3,4] -0.420 -0.120 0.014 1.05 77
#> Sigma[3,5] 0.000 0.000 0.000 NaN NaN
#> Sigma[3,6] 0.000 0.000 0.000 NaN NaN
#> Sigma[4,1] 0.000 0.000 0.000 NaN NaN
#> Sigma[4,2] 0.000 0.000 0.000 NaN NaN
#> Sigma[4,3] -0.420 -0.120 0.014 1.05 77
#> Sigma[4,4] 0.094 0.370 0.860 1.04 76
#> Sigma[4,5] 0.000 0.000 0.000 NaN NaN
#> Sigma[4,6] 0.000 0.000 0.000 NaN NaN
#> Sigma[5,1] 0.000 0.000 0.000 NaN NaN
#> Sigma[5,2] 0.000 0.000 0.000 NaN NaN
#> Sigma[5,3] 0.000 0.000 0.000 NaN NaN
#> Sigma[5,4] 0.000 0.000 0.000 NaN NaN
#> Sigma[5,5] 0.059 0.280 0.880 1.05 117
#> Sigma[5,6] -0.260 -0.052 0.110 1.01 576
#> Sigma[6,1] 0.000 0.000 0.000 NaN NaN
#> Sigma[6,2] 0.000 0.000 0.000 NaN NaN
#> Sigma[6,3] 0.000 0.000 0.000 NaN NaN
#> Sigma[6,4] 0.000 0.000 0.000 NaN NaN
#> Sigma[6,5] -0.260 -0.052 0.110 1.01 576
#> Sigma[6,6] 0.100 0.640 1.600 1.02 132
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
#> Rhats above 1.05 found for 12 parameters
#> *Diagnose further to investigate why the chains have not mixed
#> 2 of 2000 iterations ended with a divergence (0.1%)
#> *Try running with larger adapt_delta to remove the divergences
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> Chain 1: E-FMI = 0.1667
#> Chain 2: E-FMI = 0.1699
#> Chain 3: E-FMI = 0.1383
#> Chain 4: E-FMI = 0.1573
#> *E-FMI below 0.2 indicates you may need to reparameterize your model
#>
#> Samples were drawn using NUTS(diag_e) at Tue Oct 08 12:57:30 PM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
conditional_effects(mod, type = 'link')

plot(mod, type = 'forecast')

plot(mod, type = 'forecast', series = 2)

plot(mod, type = 'trend')

plot(mod, type = 'trend', series = 2)

# Inference for autoregressive parameters; these are partially pooled,
# whereby the diagonal partial correlations are drawn from a single distribution and the
# off-diagonals are drawn from a separate distribution, following Heaps
# 2022 (https://doi.org/10.1080/10618600.2022.2079648)
mcmc_plot(mod, variable = 'A', regex = TRUE,
type = 'hist')
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

# The full error covariance matrix
mcmc_plot(mod, variable = 'Sigma', regex = TRUE,
type = 'hist')
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

# How strongly were the error correlations pooled across
# Couples? A higher value here (closer to 1) suggests stronger shrinkage
# toward the 'global' correlation matrix, while a lower value (closer to 0)
# suggests the Couples differ quite a lot in their error dynamics
mcmc_plot(mod, variable = 'alpha_cor', type = 'hist')
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Created on 2024-10-08 with reprex v2.1.0