Hello, I have run a logistic regression model with a predictor variable that has four categories. One option for comparing groups is to compare each group against every other group (as in a Tukey-style test). However, given the nature of my research questions, I am only actually interested in four of the possible pairwise comparisons, so I lose power when I test all of them (the multiple-comparison adjustment gets harsher with every extra comparison), and some pairs end up non-significant even though they were significant in the original model.

Is it acceptable to re-run the model a few times, changing the reference group each time, to get just the comparisons I am interested in, or is that too liberal and likely to inflate my Type I error rate? Another approach that might fit is a planned-comparisons/contrast matrix that isolates the effects of just the pairs I think are meaningful.

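To make the second idea concrete, here is a rough sketch of what I have in mind in Python/statsmodels; the data, the variable names (y, group), and the particular pairs tested are placeholders, not my actual study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Placeholder data: a binary outcome and a four-level grouping factor.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C", "D"], size=400),
    "y": rng.integers(0, 2, size=400),
})

# Fit the logistic model once; with treatment coding the parameters are
# [Intercept, B vs A, C vs A, D vs A] on the log-odds scale.
res = smf.logit("y ~ C(group)", data=df).fit(disp=0)

# One row per planned comparison (columns match the parameter order above).
contrasts = np.array([
    [0, 1,  0,  0],   # B vs A
    [0, 0,  1,  0],   # C vs A
    [0, 1, -1,  0],   # B vs C
    [0, 0,  1, -1],   # C vs D
])

tests = res.t_test(contrasts)   # Wald z test for each planned contrast
print(tests)

# Adjust only these four p-values (Holm here), not all pairwise ones.
print(multipletests(tests.pvalue, method="holm")[1])
```

The contrasts that share the reference level (B vs A, C vs A) are just the coefficients already in the fitted model; the other rows give the remaining pairs of interest without refitting with a different reference group.
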
Any advice from a stats-savvy person would be helpful. I'm curious to know how you'd approach it!
