I'm having trouble interpreting the results of some recent Bayesian mixed-effects models I ran. I am trying to predict a continuous outcome variable ('accuracy') from a binary, dummy-coded, within-subjects factor ('intervention').

Model 1 is defined as accuracy ~ intervention + (1 | items) + (1 + intervention | subjects), and here intervention negatively affects accuracy.

Model 2 is defined as accuracy ~ intervention + control variable + (1 | items) + (1 + intervention | subjects) ('control variable' is continuous and between-subjects).
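For reference, a simplified frequentist analogue of these two models can be sketched with statsmodels on simulated data (this is not my actual data; column names are placeholders, the item random intercept is dropped for brevity, and the control variable is simulated as genuinely unrelated to everything else):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 40, 30
subj = np.repeat(np.arange(n_subj), n_trials)
intervention = rng.integers(0, 2, subj.size)          # within-subjects, dummy-coded
control = rng.normal(size=n_subj)[subj]               # between-subjects, unrelated
accuracy = (0.6 - 0.1 * intervention
            + rng.normal(scale=0.05, size=n_subj)[subj]   # subject intercepts
            + rng.normal(scale=0.1, size=subj.size))      # trial noise
df = pd.DataFrame(dict(subject=subj, intervention=intervention,
                       control=control, accuracy=accuracy))

# Model 1 analogue: random intercept + slope for subjects
m1 = smf.mixedlm("accuracy ~ intervention", df, groups="subject",
                 re_formula="~intervention").fit()
# Model 2 analogue: same, plus the between-subjects covariate
m2 = smf.mixedlm("accuracy ~ intervention + control", df, groups="subject",
                 re_formula="~intervention").fit()
print(m1.params["intervention"], m2.params["intervention"])
```

Notably, when the covariate really is unrelated, the intervention coefficient barely moves between the two fits, which is part of why the behaviour of my Model 2 puzzles me.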

Model 2 shows no effects whatsoever: the previous effect of intervention is lost, but the control variable doesn't affect accuracy either.

Model 2 has a barely higher R2 (37 vs. 36), and a model comparison using LOOIC indicates that Model 1 is the better model.

I have looked into suppression effects, but as far as I understand it, none of the conventional definitions of a suppressor effect apply to my results, because the control variable (A) doesn't increase the predictive strength of my model, and (B) isn't correlated with accuracy.
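The zero-order claims behind (B), and the classical suppressor pattern (a variable uncorrelated with the outcome but correlated with the predictor), can be checked directly from the raw correlations. A minimal sketch on simulated stand-in data (all names hypothetical, not my dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
# Simulated stand-in: 'control' is independent of both predictor and outcome
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),
    "control": rng.normal(size=n),
})
df["accuracy"] = 0.5 - 0.3 * df["intervention"] + rng.normal(scale=0.2, size=n)

r_outcome = df["control"].corr(df["accuracy"])        # near zero: matches (B)
r_predictor = df["control"].corr(df["intervention"])  # near zero: rules out
                                                      # classical suppression
print(r_outcome, r_predictor)
```

If both correlations are near zero in the real data too, the classical suppressor story indeed doesn't fit.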

To better understand what is going on, I performed a median split on the control variable and ran two independent mixed models for low and high control-variable values.

Here I see the same intervention effect in both the low and high models (albeit with slightly higher uncertainty around the effect, presumably due to the lower number of observations). This is confirmed when plotting the data: I see the same decrease in accuracy for both the low and high control-variable groups.

From my understanding, all of the above indicates that there is an effect of intervention and that I should trust Model 1 - I would just like to understand what happens in Model 2.

I would be highly grateful if anyone could direct me to some relevant literature or maybe provide an explanation as to what might be going on here!
