The situation is as follows:

I have a dependent variable (Y), measured on a scale from 1 to 11

I have an independent variable (X) with possible values from 0 to 6

I have a dichotomous moderator variable (Z) with the possible outcomes 0 and 1 (Group 0 and Group 1)

I hypothesize that the relationship between X and Y follows a non-linear curve, and I assume that the shape of this curve differs between the two groups of the moderator.

My hypothesis is that in Group 0 the curve rises quickly at first and then reaches a plateau, whereas in Group 1 the curve rises more slowly but keeps growing, crosses the Group 0 curve at some point, and plateaus later and at a higher level.

Put another way: for low values of X I expect higher values of Y in Group 0 than in Group 1, while for high values of X I expect higher values of Y in Group 1.

It should look like the function in the attached picture.
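To make the hypothesized shape concrete, here is a small numerical sketch of two such curves; the coefficients below are invented purely for illustration and are not estimates of anything:

```python
import numpy as np

x = np.linspace(0, 6, 61)
# Group 0: rises quickly at first, then flattens out (square-root shape)
y_group0 = 3 + 2.5 * np.sqrt(x)
# Group 1: rises more slowly at first but keeps growing (linear + square-root)
y_group1 = 3 + 0.9 * x + 0.8 * np.sqrt(x)
# For small x, y_group0 > y_group1; the curves cross near x = 3.6,
# after which the Group 1 curve lies above the Group 0 curve.
```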

How can I test this hypothesis with a regression? My approach would be:

I assume that there is a regression of the type:

Y = β0 + β1*X + β2*Z + β3*X^(1/2) + β4*X*Z + β5*X^(1/2)*Z

Then I would look at the tests for β4 and β5. I would expect β4 to be positive, so that the linear part is steeper in Group 1 and Y is higher there for high values of X, and β5 to be negative, so that the square-root part is stronger in Group 0 and Y is higher there for low values of X.
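In code, my approach would look something like the sketch below, using Python's statsmodels formula interface; the simulated data and coefficient values are only placeholders so the example runs, not real estimates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data just so the sketch runs; in reality Y, X, Z come from my dataset
rng = np.random.default_rng(42)
n = 300
X = rng.uniform(0, 6, n)
Z = rng.integers(0, 2, n)
Y = (3 + 0.9 * X + 0.2 * Z + 0.8 * np.sqrt(X)
     + 0.5 * X * Z - 1.0 * np.sqrt(X) * Z + rng.normal(0, 1, n))

df = pd.DataFrame({"Y": Y, "X": X, "Z": Z, "sqrtX": np.sqrt(X)})

# Y = b0 + b1*X + b2*Z + b3*sqrt(X) + b4*X*Z + b5*sqrt(X)*Z
fit = smf.ols("Y ~ X + sqrtX + Z + X:Z + sqrtX:Z", data=df).fit()
print(fit.summary())  # the X:Z and sqrtX:Z rows give the tests for b4 and b5
```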

Does this make sense? In my field of research there is not much work using higher-order regression terms (apart from X²), let alone interactions with such terms. I'm also not sure whether it is correct to include a square-root term and a linear term in the same equation.

This problem is quite specific and it has been hard to find proper advice, so I am thankful for any thoughts on this topic.

Thanks,

Arne
