Hello everyone,
In my experimental linguistics research, I would like to evaluate how an interaction effect changes when additional predictors are added to the model.
To be precise, suppose there are 10 questions in my experiment. The dependent variable is the number of correct responses (N) a subject produced during either the first or the second half of the response period of each question. In modelling the data, I treat Half (first/second) as a factor, and there are also three demographic variables (age, education, gender) and three cognitive predictors (denoted CP1, CP2, and CP3).
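For context, the data are in long format, with one row per Subject x Question x Half. A hypothetical sketch of the layout (variable names match the models below; the data-frame name "dat" and all values are invented purely for illustration):

## hypothetical long-format layout; values invented for illustration
dat <- data.frame(
  Subject   = factor(rep(c("S01", "S02"), each = 4)),
  Question  = factor(rep(c("Q01", "Q02"), times = 4)),
  Half      = factor(rep(c("first", "second"), each = 2, times = 2)),
  Age       = rep(c(23, 67), each = 4),
  Education = rep(c(16, 12), each = 4),
  Gender    = factor(rep(c("F", "M"), each = 4)),
  CP1       = rep(c(0.4, -1.1), each = 4),
  CP2       = rep(c(1.2, 0.3), each = 4),
  CP3       = rep(c(-0.5, 0.8), each = 4),
  N         = c(3, 2, 4, 1, 2, 1, 3, 0)  # correct responses in that half
)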
Here are the two models involved, in the notation of the lme4 package:
Null: N ~ Age + Education + Gender + Half + Half:Age + (1|Subject) + (1|Question)
Alternative: N ~ Age + Education + Gender + Half + Half:Age + CP1 + CP2 + CP3 + (1|Subject) + (1|Question)
Hypothesis: The Half:Age interaction (i.e., the age-related differences in the number of correct responses across the two halves) can be accounted for by the three cognitive variables.
So I anticipate that, once the three cognitive predictors are added, the Half:Age interaction will be significantly reduced, if not eliminated altogether.
I'm using glmer (rather than lmer) to fit the models, because the dependent variable is a count. For each model, lme4 reports the beta (i.e., the regression coefficient), the standard error, and the p-value for every fixed effect.
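For reference, here is a minimal sketch of how I fit the two models, assuming a Poisson family for the counts (the family choice and the data-frame name "dat" are just for illustration):

library(lme4)

## null model: no cognitive predictors
m_null <- glmer(
  N ~ Age + Education + Gender + Half + Half:Age +
    (1 | Subject) + (1 | Question),
  data = dat, family = poisson
)

## alternative model: CP1-CP3 added
m_alt <- glmer(
  N ~ Age + Education + Gender + Half + Half:Age + CP1 + CP2 + CP3 +
    (1 | Subject) + (1 | Question),
  data = dat, family = poisson
)

summary(m_null)$coefficients  # estimate, SE, z value, p-value per fixed effect
summary(m_alt)$coefficients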
Question: In my analysis, the beta associated with Half:Age dropped from 0.012 to 0.006 after CP1-CP3 were added as predictors. I considered using Welch's t-test (https://en.wikipedia.org/wiki/Welch%27s_t-test) to examine whether this reduction is statistically significant. Plugging the betas and their SEs into the formula (sketched below), I found that the reduction was very close to significant. However, Welch's t-test is meant for independent samples with unequal variances, whereas my two estimates come from nested models fitted to the same data, so they are presumably not independent. Is there a more suitable test for this purpose?
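For concreteness, this is roughly how I plugged the two estimates into the Welch-type formula, pulling them from the fitted models above (the coefficient label "Halfsecond:Age" is only a guess at how lme4 names the term; it depends on the factor coding):

## extract the Half:Age estimate and SE from each model
co_null <- summary(m_null)$coefficients
co_alt  <- summary(m_alt)$coefficients

b1  <- co_null["Halfsecond:Age", "Estimate"]
se1 <- co_null["Halfsecond:Age", "Std. Error"]
b2  <- co_alt["Halfsecond:Age", "Estimate"]
se2 <- co_alt["Halfsecond:Age", "Std. Error"]

## Welch-style statistic, treating the two estimates as if independent --
## which is exactly the assumption I'm unsure about; a normal approximation
## is used instead of the Welch-Satterthwaite df
z <- (b1 - b2) / sqrt(se1^2 + se2^2)
p <- 2 * pnorm(-abs(z))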
Thanks a lot in advance!