```{r chunk_setup-issues, include=FALSE, eval=TRUE}
knitr::opts_chunk$set(
# cache
cache.rebuild = F,
cache = T
)
```
# Issues
<div style = 'text-align: center'>
<i class="fas fa-exclamation-triangle fa-5x" style = 'color:#990024'></i>
</div>
<br>
This section discusses common issues, conundrums, and other things that might come up when implementing mixed models.
## Variance Accounted For
People really love the R-squared value that comes from standard regression. Never mind that it is inherently biased, that there is no way to state what a 'good' value would be for a given data situation, that many don't actually know how to interpret it, or that many have no qualms about dropping it from a report at the first sign of trouble.
Suffice it to say that when there are multiple sources of 'variance', talking about *variance accounted for* is not straightforward. Still, many have tried. You might look at the <span class="pack" style = "">r2glmm</span> package and the references noted in the description, or the <span class="func" style = "">r2</span> function in <span class="pack" style = "">performance</span>. See also the [GLMM FAQ](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#how-do-i-compute-a-coefficient-of-determination-r2-or-an-analogue-for-glmms). I would suggest not even bothering beyond the standard linear mixed model.
The same goes for the *intraclass correlation*, which has little meaning outside of a linear mixed model with no random slopes. Again, just because you can calculate something that looks like it doesn't mean it actually means what you want it to mean.
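
If you do want such a number for a standard linear mixed model, a minimal sketch using the <span class="func" style = "">r2</span> and <span class="func" style = "">icc</span> functions from <span class="pack" style = "">performance</span> might look like the following, assuming a random intercept model for the gpa data used elsewhere in this document.

```{r variance-accounted-for-demo, eval=FALSE}
library(lme4)
library(performance)

gpa_ri = lmer(gpa ~ occasion + (1 | student), data = gpa)

r2(gpa_ri)   # marginal (fixed effects only) and conditional (fixed + random) R-squared
icc(gpa_ri)  # intraclass correlation; most interpretable with random intercepts only
```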
## Common Alternatives to Mixed Models
I have a [document](https://m-clark.github.io/docs/clustered/) that goes into more detail about many approaches to dealing with clustered data, but we can briefly talk about some here. Common alternatives used in clustered data situations include:
- Fixed effects models (also panel linear models with fixed, as opposed to random, effects)
- Using cluster-robust standard errors
- Generalized estimating equations (GEE)
The first two are commonly used by those trained from an econometrics perspective, while you might see GEE more among those with a biostatistics or similar background. GEE is in fact a generalization of the cluster-robust approach, and extends generalized least squares (GLS) to nonlinear/GLM settings. GEE approaches allow one to take the dependency in the data into account, but do not actually estimate what might be very interesting, i.e. the random effects and their associated variance. There are also fewer GEE tools for covariance structures more complicated than a single clustering variable.
The nature of fixed effects models allows you to control for, but not actually explore, cluster-level effects. This makes them a non-starter for many investigations, as those effects are typically of prime theoretical interest. In addition, the main concern that leads econometricians to prefer such models is easily accommodated in standard mixed models, so there is very little reason to employ them, as they are special cases of the more flexible mixed model.
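
As a rough sketch only, here is how those alternatives might look for the gpa data used elsewhere in this document, with a mixed model alongside for comparison. The package choices (<span class="pack">plm</span>, <span class="pack">sandwich</span>/<span class="pack">lmtest</span>, <span class="pack">geepack</span>) are common options rather than the only ones.

```{r clustered-alternatives-demo, eval=FALSE}
library(lme4)

# fixed effects ('within') panel model
fe_mod = plm::plm(gpa ~ occasion, data = gpa, index = 'student', model = 'within')

# standard linear model with cluster-robust standard errors
ols_mod = lm(gpa ~ occasion, data = gpa)
lmtest::coeftest(ols_mod, vcov. = sandwich::vcovCL(ols_mod, cluster = gpa$student))

# generalized estimating equations with an exchangeable working correlation
gee_mod = geepack::geeglm(gpa ~ occasion, id = student, data = gpa, corstr = 'exchangeable')

# the corresponding mixed model
mixed_mod = lmer(gpa ~ occasion + (1 | student), data = gpa)
```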
### Growth curve models
Growth curve models are a latent variable approach commonly used with longitudinal data. With appropriate setup, they will duplicate the results of a mixed model. In my opinion, there are few reasons to use a growth curve approach over a mixed model, and many reasons not to, not least of which is that effects that are simple to interpret in the mixed model become a source of confusion for applied researchers in the growth curve model, even though it's the same thing. Furthermore, indirect effects, growth mixture models, and other extensions common in the latent variable approach are more easily implemented in the mixed model approach. In short, only the most complicated models would perhaps require structural equation modeling, and those would bring many other issues with them. See more [here](https://m-clark.github.io/sem/latent-growth-curves.html) and [here](https://m-clark.github.io/mixed-growth-comparison/). The [supplemental][Supplemental] section has a full demonstration of how to equate these models.
## Sample Sizes
### Small number of clusters
Think about how many values of some variable you'd need before you felt comfortable with statistics based on it, especially standard deviation/variance. That's at play with mixed models, in the sense you'd like to have enough groups to adequately assess the variance components. Mixed models will run with very small numbers, though the estimates will generally be biased. I have a demo [here](https://m-clark.github.io/docs/mixedModels/growth_vs_mixed_sim.html) if interested. One way to deal with this is to move to the Bayesian context, which can be used to induce some regularization in parameter estimates, and better deal with possible variance estimates that are near zero.
This also speaks to the issue some will have regarding whether they should treat something as a <span class="emph">fixed vs. random</span> effect. Historical definitions would unnecessarily restrict the use of random effects approaches. For example, random effects were defined as a (random) sample from some population. Under that definition, some might take issue when your levels are not a sample but the whole population, as when your clustering variable is state and you have all 50 states. This doesn't matter. If you have enough levels to consider a mixed model approach, feel free to do so.
### Small number of observations within clusters
Mixed models will work even with as few as two observations per cluster, along with some singletons. Even in the simple case of a pre-post design, mixed models are entirely applicable, though limited (e.g. you can't have random slopes with just pre-post). So whenever you have clustering of some kind, you should consider mixed models.
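
As a small, hypothetical illustration of the pre-post case (the data frame <span class="objclass">prepost</span> and its variables are made up here):

```{r prepost-demo, eval=FALSE}
# long format: two rows per person, time = 'pre' or 'post'
prepost_mod = lmer(y ~ time + (1 | id), data = prepost)

# a random slope for time, i.e. (1 + time | id), would not be identifiable
# with only two observations per person
```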
### Balanced/Missing values
We've primarily been looking at <span class="emph">balanced</span> data, where each cluster has the same number of observations within them. There is no requirement for this, and in many cases we wouldn't even expect it, e.g. people within geographical units.
However, if data is missing only on the outcome, or on a mix of variables, we essentially have the same issue as in typical data situations, and the same considerations for dealing with missingness apply. If you don't lose much data, the practical gain from ignoring the missingness generally outweighs the complexities that can come with, for example, multiple imputation[^mi], even in the best of settings. By default, mixed models assume the data are missing at random (MAR). Longitudinal data, on the other hand, has special considerations, as there is typically increasing dropout over time.
Having dealt with missingness in a variety of contexts with different approaches (FIML, MI, Bayesian), the end result is usually that you spend vastly more time dealing with the missing data than understanding your primary models of interest, and yet don't feel any better about the results. Unless the missingness would make you lose a large chunk of the data, and/or you actually know something about the mechanism behind the missing data, it's probably best just to leave it to the limitations section of your report[^missingreviewer]. If you do deal with it under less than ideal circumstances, I'd perhaps suggest an approach that is essentially a one-off imputation (e.g. with <span class="pack">missForest</span>), to be compared to the results that ignore the missingness, but which still allows you to do everything you want. While you may not incorporate all sources of uncertainty in doing so, it seems to me a viable compromise.
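
If you go that route, a minimal sketch of the one-off imputation idea might look like the following, where <span class="objclass">gpa_miss</span> is a hypothetical version of the gpa data with missing values.

```{r missforest-demo, eval=FALSE}
library(missForest)

# single random-forest-based imputation; exclude the ID, which isn't useful
# for imputation and can choke randomForest if it has many factor levels
to_impute = as.data.frame(gpa_miss[setdiff(names(gpa_miss), 'student')])
gpa_imp   = missForest(to_impute)$ximp
gpa_imp$student = gpa_miss$student

# compare the model that drops incomplete rows (the lmer default)
# to the same model fit to the imputed data
mod_cc  = lmer(gpa ~ occasion + (1 + occasion | student), data = gpa_miss)
mod_imp = lmer(gpa ~ occasion + (1 + occasion | student), data = gpa_imp)
```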
### Big data
Mixed model packages are often not so great with largish data, e.g. thousands of observations coupled with anything beyond random intercepts. However, I've used <span class="pack">lme4</span> with millions of observations and simple structure, and hundreds of thousands with complicated structure, and it does very well (at least for the gaussian case). For truly big data you're not going to have a lot of options, but you'd need a lot before that becomes a problem. [I did some testing of lme4, mgcv, and glmmTMB](https://m-clark.github.io/posts/2019-10-20-big-mixed-models/) for up to a million cases with two random effects, and generally even then you may only have to wait seconds to a couple of minutes. In applied work with Medicare data of hundreds of thousands of observations, and two random effects with thousands of levels each, those packages were still viable.
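
For reference, the three packages just mentioned would specify a simple random intercept model along these lines (using the gpa data as a small stand-in for something larger; <span class="pack">mgcv</span> requires the grouping variable to be a factor).

```{r big-data-packages-demo, eval=FALSE}
lme4::lmer(gpa ~ occasion + (1 | student), data = gpa)

glmmTMB::glmmTMB(gpa ~ occasion + (1 | student), data = gpa)

# bam is mgcv's big-data function; random effects enter as smooth terms
mgcv::bam(gpa ~ occasion + s(student, bs = 're'), data = gpa, discrete = TRUE)
```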
Common techniques in machine learning have no special implementation for the inclusion of something like random effects. There has been some effort with trees/forests [here](https://github.com/patr1ckm/mvtboost) (<span class="func">mebt</span> specifically), [here](http://pages.stern.nyu.edu/~jsimonof/REEMtree/), and also [vcrpart](https://cran.r-project.org/web/packages/vcrpart/). However, most approaches in the ML world will simply throw the clustering variable in along with everything else, possibly even as a lower dimensional word embedding, or have enough data to do by-cluster approaches. While this may be fine predictively, it may not be theoretically interesting to those doing mixed models.
On the plus side, if you're willing to wait, tools like the Stan family will likely do just fine with bigger data as well, eventually. So while massive data may still be problematic, you may be fine with very large data.
## Model Comparison
Model comparison takes place in the usual way, in the sense of potentially using statistical tests and information criteria. Unfortunately, the typical likelihood ratio tests one might use in standard settings are not so straightforward here. For example, in order to compare models with different fixed effects, at a minimum you'd have to change the default estimation from REML to ML, and the models must have the same random effects structure, for the resulting test p-value to be correct. The reverse holds when comparing models with different random effects structures (with the same fixed effects across models), i.e. there you'd have to use REML.
```{r model-compare-lr-ratio, eval=FALSE}
# to compare fixed effects, refit with ML (refit = TRUE)
gpa_1 = lmer(gpa ~ occasion + (1 + occasion | student), data = gpa)
gpa_2 = lmer(gpa ~ occasion + sex + (1 + occasion | student), data = gpa)

# to compare random effects structure, keep the REML fits (refit = FALSE)
gpa_3 = lmer(gpa ~ occasion + (1 | student), data = gpa)
gpa_4 = lmer(gpa ~ occasion + (1 + occasion | student), data = gpa)

anova(gpa_1, gpa_2, refit = TRUE)
anova(gpa_3, gpa_4, refit = FALSE)
```
```{r model-compare-lr-ratio-show, echo=FALSE}
# to compare fixed effects refit with ML
gpa_1 = lmer(gpa ~ occasion + (1 + occasion | student), data = gpa)
gpa_2 = lmer(gpa ~ occasion + sex + (1 + occasion | student), data = gpa)
gpa_3 = lmer(gpa ~ occasion + (1 | student), data = gpa)
gpa_4 = lmer(gpa ~ occasion + (1 + occasion | student), data = gpa)
anova(gpa_1, gpa_2, refit = TRUE) %>%
data.frame() %>%
rename(`p-value` = `Pr..Chisq.`) %>%
kable_df()
anova(gpa_3, gpa_4, refit = FALSE) %>%
data.frame() %>%
rename(`p-value` = `Pr..Chisq.`) %>%
kable_df()
```
One can see that the estimated log likelihood is not the same for `gpa_1` (ML) and `gpa_4` (REML), even though they are otherwise the same model.
In my opinion, model selection involves considerations of theory, parsimony, and prediction, and those tests do not. I'm not a fan of such tests even in the standard setting, and would use AIC here to potentially aid (not make) a model choice if I thought it was necessary, as I would there[^lrtest]. We can just use the <span class="func" style = "">AIC</span> function, but see also the <span class="pack" style = "">cAIC4</span> package for a conditional AIC that works specifically for <span class="objclass" style = "">merMod</span> objects. In the following, `gpa_2` has the lowest (best) AIC value, and given that both it and `gpa_1` are notably superior to `gpa_3`, one could conclude that having the random coefficient for occasion is useful.
```{r model-compare-AIC, echo=FALSE}
model_list = list(
gpa_1 = gpa_1,
gpa_2 = gpa_2,
gpa_3 = gpa_3
)
# map_df(model_list, function(mod)
# data.frame(cAIC4::cAIC(mod)[c('loglikelihood', 'df', 'caic')]),
# .id = 'model') %>%
# arrange(caic) %>%
# mutate(`Δ caic` = caic - min(caic))
map_df(model_list, function(mod) data.frame(AIC = AIC(mod)), .id = 'model') %>%
arrange(AIC) %>%
mutate(`Δ AIC` = AIC - min(AIC)) %>%
kable_df()
```
In general though, trying to determine a 'best' model with one set of data is a problematic endeavor at best, and at worst, completely misguided. I think it's very useful to build models of increasing complexity, and select one to focus on based on the available evidence to simplify exposition. Just don't get hung up on choosing one based solely on the outcome of a single statistic. If you have a lot of data, you should consider some sort of explicit validation approach if you really want to compare competing models, but that is not without complication given the dependency in the data. If your goal is more on the predictive side, consider *model averaging* rather than selecting a single model.
## Convergence
Data is as data does. It is likely that you will eventually have issues in conducting mixed models, such as lack of convergence, estimates of zero for the random effects variance, warnings about scaling variables etc. These are not easy models to estimate (at least outside of the Bayesian context), so don't be surprised if all doesn't go smoothly.
I have a more [detailed overview](https://m-clark.github.io/posts/2020-03-16-convergence/), but we can talk briefly about typical problems and solutions. A few common issues[^richparam] I see are:
- Lack of centering/standardizing will often result in scaling warnings with <span class="pack">lme4</span>. Centering or standardizing makes sense in any regression setting, and mixed models are no exception. A very common convergence problem in my experience comes from time-based variables, such as a yearly trend. At the very least, it's useful to have a meaningful zero value, for example by starting time at zero rather than, say, the year 2008. You may also need to standardize the trend variable to get the model to converge, but you can revert to the raw form for interpretation (a brief sketch follows this list).
- The default optimizer settings may need to be tweaked, for example, to allow more iterations. You may also need to switch to a different optimizer.
- Zero estimates for the random effect variance, or $\pm 1$ estimates for the correlation of intercepts and slopes, can often be attributed to not having enough data, not having enough clusters, an overlooked data issue, or possibly scaling problems. You won't have this issue in the Bayesian context, but elsewhere you may have to deal with the dependency in some other fashion (e.g. cluster-robust standard errors/GEE).
- Any complicated GLMM or similar model is likely to have problems, so be prepared. If you want to go beyond GLM, you'll have fewer tools and likely more issues. Packages like <span class="pack">ordinal</span>, <span class="pack">mgcv</span>, <span class="pack">glmmTMB</span>, and others can *potentially* handle alternate distributions and other complexities; however, I think one might be better off with a Bayesian approach (e.g. <span class="pack">brms</span>/<span class="pack">rstan</span>). In practice, I've found the others to be prohibitively slow, unable to converge, or too limited in post-estimation options.
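
Rescaling a time trend, as in the first point above, might look something like the following, where <span class="objclass">d</span>, `year`, `y`, and `id` are hypothetical.

```{r rescale-time-demo, eval=FALSE}
d = d %>%
  mutate(
    time    = year - min(year),        # meaningful zero at the first observed year
    time_sd = as.numeric(scale(time))  # standardized version, if needed for convergence
  )

mod = lmer(y ~ time_sd + (1 + time_sd | id), data = d)
# interpretation can be converted back to the raw time scale afterward if desired
```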
If you're using <span class="pack">lme4</span> you have a couple resources and tools at your disposal.
- Start with `?convergence`. This comes up so often that there is a help file just for convergence issues.
- Use the <span class="func">allFit</span> function. If the results are very similar across fits, you can feel better about them (a brief sketch follows this list).
- Consult the troubleshooting section of the [FAQ](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#troubleshooting).
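
A minimal sketch of tweaking the optimizer (mentioned earlier) and of <span class="func">allFit</span>, assuming a previously fit but problematic model named <span class="objclass">problem_mod</span> (hypothetical):

```{r convergence-tools-demo, eval=FALSE}
library(lme4)

# allow more iterations and/or switch optimizers via the control argument
refit_mod = update(
  problem_mod,
  control = lmerControl(optimizer = 'bobyqa', optCtrl = list(maxfun = 1e5))
)

# refit with all available optimizers; similar estimates across them are reassuring
all_fits = allFit(problem_mod)
summary(all_fits)$fixef
```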
The main point is that you'll need to acknowledge the warnings and messages that the packages provide, and be prepared to take necessary steps to deal with these issues when they arise.
[^mi]: Multiple imputation is straightforward only in theory. In practice it becomes a major pain to go very far beyond getting the parameter estimates for simple models. Full information maximum likelihood (FIML) is little implemented outside of SEM software/packages, and more problematic in its assumptions.
[^lrtest]: If you really go the statistical test route, see the <span class="pack">lmerTest</span> package for additional functionality. Note also that AIC [does not come with a free lunch](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#can-i-use-aic-for-mixed-models-how-do-i-count-the-number-of-degrees-of-freedom-for-a-random-effect), and as mentioned, see the <span class="pack">cAIC4</span> package and references therein.
[^missingreviewer]: Just note that in some disciplines, reviewers, who will rarely do this themselves for the same reasons, nevertheless will make a big deal about the missing data because it's an easy point for them to make. This is similar to econometrically trained reviewers who shout 'endogeneity!' at every turn, but won't bother to tell you where to get the instrument in the first place, admit that IV analysis is problematic in its own right, or what specifically the approach should be in complex model settings such as mixed models with nonlinear, spatial effects, multiple levels of clustering etc. I've even seen a reviewer say that one 'should conduct a test for whether data is missing at random vs. not, then proceed accordingly'. That was the entirety of their suggestion. Aside from the fact that there technically is no such test, because it would require the very data one doesn't have, declaring a type of missingness doesn't tell you which of dozens of approaches one should take.
[^richparam]: The *Richly Parameterized Linear Models* text discusses convergence and other issues probably more than any other text I've come across. Most don't treat the subject at all.