I know that when there is close to zero variance in either random effect, a perfect correlation can mean that the model isn't able to estimate both effects.
However, I have a good deal of variance in my random intercept and slope. They are, however, perfectly correlated.
I've read some conflicting advice on whether this is a problem.
This (https://stat.ethz.ch/pipermail/r-sig-mixed-models/2012q2/018251.html) says: "You should be trying to select the 'minimal adequate' model; in particular, overfitting the model (including zero and/or perfectly correlated terms) is more likely to lead to numeric problems, so it's better to try to reduce the model until all the terms can be uniquely estimated."
This (https://mailman.ucsd.edu/pipermail/ling-r-lang-l/2015-March/000767.html) says: "In general, however, large positive or negative correlations among random effects can reflect structure in one's data and I would certainly recommend against taking out random correlations on the basis that they're close to ±1 when they're included in the model!"
(Though this seems to be talking about a correlation *close* to ±1, not one that is exactly equal to 1?)
Finally, there is some advice offered here (https://stats.stackexchange.com/questions/116256/high-negative-correlation-between-intercept-and-slope): "This is a standard symptom of an overfitted model. What to do about it (suppress correlation, remove random slope term, penalize using blme package) isn't as clear. glmm.wikidot.com/faq might have some useful information."
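For concreteness, here is how I understand the three remedies from that comment in lme4/blme syntax (a sketch only; the formula, data, and grouping-factor names below are placeholders, not my actual model):

```r
library(lme4)
library(blme)

# Full model: random intercept and slope for x by subject,
# with their correlation estimated freely
m_full <- lmer(y ~ x + (1 + x | subject), data = d)

# Option 1: suppress the correlation with the double-bar syntax,
# estimating independent intercept and slope variances
m_nocorr <- lmer(y ~ x + (1 + x || subject), data = d)

# Option 2: drop the random slope entirely
m_noslope <- lmer(y ~ x + (1 | subject), data = d)

# Option 3: keep the full structure but regularize the random-effects
# covariance matrix away from the boundary (blme's default Wishart prior)
m_blme <- blmer(y ~ x + (1 + x | subject), data = d)
```

If one of these is the right fix in my situation, I'd also like to understand why.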
So I am wondering:
1) Is this a problem? The correlation actually makes sense given my data, but is there an issue with keeping this correlation in my model?
2) What should I do? The last comment is from Ben Bolker in 2014. Is that still the prevailing advice on what to do?