You need to center your independent variables (a dichotomous moderator such as gender would be coded, e.g., male = -1, female = 1) and create a new variable by multiplying the independent variable and the moderator. Then you enter the three variables into the regression model and test whether the interaction term is significant.
If your moderator is not categorical, you need to mean-center it by calculating moderator minus mean of moderator for each observation. E.g., if you were testing the impact of height, you would calculate ParticipantsHeight - AvgHeight, and then multiply this new variable by your independent variable.
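Here is a minimal sketch of that procedure, assuming a pandas DataFrame `df` with hypothetical columns DV (outcome), IV (predictor), and Height (continuous moderator):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("data.csv")  # hypothetical data file

# Mean-center the independent variable and the continuous moderator
df["IV_c"] = df["IV"] - df["IV"].mean()
df["M_c"] = df["Height"] - df["Height"].mean()  # ParticipantsHeight - AvgHeight

# The moderation term is the product of the (centered) IV and moderator
df["IVxM"] = df["IV_c"] * df["M_c"]

# Enter all three variables and test whether the interaction is significant
X = sm.add_constant(df[["IV_c", "M_c", "IVxM"]])
res = sm.OLS(df["DV"], X).fit()
print(res.summary())  # look at the p-value on the IVxM coefficient
```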
Let's say you have one dependent variable DV1, three independent variables IV1, IV2, and IV3, and one moderator M1 acting on IV2 and IV3. Now let's construct the following regression with a moderator (in fact, I want to point out that as soon as you introduce a moderator, your model becomes non-linear in the variables, though it remains linear in the parameters) and conduct a comparison between the linear model (1) and the moderated model (2):
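The model equations themselves did not come through in this thread; based on the description, they would presumably take a form such as:

$$\text{(1)}\quad DV1 = \beta_0 + \beta_1\,IV1 + \beta_2\,IV2 + \beta_3\,IV3 + \varepsilon$$
$$\text{(2)}\quad DV1 = \beta_0' + \beta_1'\,IV1 + \beta_2'\,IV2 + \beta_3'\,IV3 + \beta_4'\,M1 + \beta_5'\,(IV2 \times M1) + \beta_6'\,(IV3 \times M1) + \varepsilon'$$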
Assuming you have cross-sectional data, you now run OLS on both model (1) and model (2) and compare the significance (p-values) of the coefficient estimates, for example beta2 vs. beta2' and beta3 vs. beta3'. However, before moving on to compare results, you have to confirm that both models are statistically robust and that none of the classical OLS assumptions are violated, for example normality (chi-square test), heteroskedasticity (White's test, Breusch-Pagan, Koenker), and autocorrelation (Ljung-Box Q), on which you must perform tests accordingly for proper model specification; see the sketch below.
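A sketch of those diagnostic tests with statsmodels, reusing the hypothetical DV1/IV1/IV2/IV3 columns from the setup above (the Jarque-Bera test stands in for the chi-square normality check):

```python
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera
from statsmodels.stats.diagnostic import het_breuschpagan, het_white, acorr_ljungbox

X = sm.add_constant(df[["IV1", "IV2", "IV3"]])  # model (1) regressors
res = sm.OLS(df["DV1"], X).fit()

# Normality: the Jarque-Bera statistic is asymptotically chi-square distributed
jb_stat, jb_pvalue, skew, kurt = jarque_bera(res.resid)

# Heteroskedasticity: Breusch-Pagan and White's test on the residuals
bp_stat, bp_pvalue, _, _ = het_breuschpagan(res.resid, res.model.exog)
w_stat, w_pvalue, _, _ = het_white(res.resid, res.model.exog)

# Autocorrelation: Ljung-Box Q test (the number of lags is a modeling choice)
lb = acorr_ljungbox(res.resid, lags=[10])

print(jb_pvalue, bp_pvalue, w_pvalue)
print(lb)
```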
Additionally, the Newey-West HAC (heteroskedasticity and autocorrelation consistent) estimator can be used to improve OLS inference when there is autocorrelation or heteroskedasticity in the residuals, but it cannot solve the problem on its own.
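A minimal sketch of Newey-West HAC standard errors in statsmodels, reusing the hypothetical design above; the `maxlags` value here is purely illustrative:

```python
import statsmodels.api as sm

res_hac = sm.OLS(df["DV1"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res_hac.summary())  # same point estimates, HAC-robust standard errors
```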
I agree 100% with Rohit's very detailed explanation of the various techniques for addressing and interpreting moderating effects.
I also want to add that besides OLS (Ordinary Least Squares) estimation, you might want to try GLS (Generalized Least Squares), 2SLS (Two-Stage Least Squares), 3SLS (Three-Stage Least Squares), or ML (Maximum Likelihood) to get more robust results. The p-values of the coefficient estimates and hypothesis tests on model specification are the way to conduct such an analysis.
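As one hedged illustration of a GLS variant: statsmodels' GLSAR performs feasible GLS assuming AR(p) errors, again reusing the hypothetical DV1/X design from above:

```python
import statsmodels.api as sm

res_glsar = sm.GLSAR(df["DV1"], X, rho=1).iterative_fit(maxiter=10)
print(res_glsar.summary())  # compare coefficient p-values against plain OLS
```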
Another area where you have to be cautious is using an estimation method and model appropriate for the type of data you have, for example OLS for cross-sectional data and an SES (Simultaneous Equation System) for panel data. Time series data has its own specific models, such as the Box-Jenkins univariate ARIMA, the Tiao-Box multivariate ARIMA, VAR, GARCH and its many variants, and so on.
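For instance, a minimal sketch of a Box-Jenkins univariate ARIMA fit with statsmodels, assuming a hypothetical time series `y` (e.g., a pandas Series with a datetime index); the order (p, d, q) = (1, 1, 1) is chosen purely for illustration:

```python
from statsmodels.tsa.arima.model import ARIMA

arima_res = ARIMA(y, order=(1, 1, 1)).fit()
print(arima_res.summary())
```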
As I understand it, CTA is related to Optimal Discriminant Analysis (ODA); both are applied to class (categorical) dependent variables (often in social, behavioral, and medical studies), whereas regression analysis applies to numerical dependent variables. Therefore, the question is not entirely clear about what type of data she has and intends to use. But nonetheless, thanks for your expertise in CTA and your explanation of how to use it.
Very interesting explanation, especially the impulse power and warp drive part. I am a Star Trek fan as well. You have broadened my horizons again. Thank you very much.
Another big thanks to you, Paul. Fantastic reading! I really like the way you explain things. Since you mentioned that ODA and novometry are at the very beginning of their development, perhaps that is why they have not been widely applied yet, at least in my field?
Hi Paul, I am in the third year of my PhD in Management, with specializations in CEO compensation, corporate governance, outsourcing, econometric methods, and statistical software. I hope that once I learn more about your method, I can conduct a comparative analysis on those topics. It's been a pleasure to learn something new every day. Keep in touch, and thanks for sharing your ideas, which I support. By the way, I am a fan of quantum physics, especially in the areas of time, gravity, and propulsion, and of course a fan of Star Trek. I wish that one day humans could break the time barrier, so that spacetime would be within our reach and we could "go where no one has ever gone before."
It is a good question. You can test a moderation effect with linear regression in many ways. One of the easiest is hierarchical linear regression. For example:
Anxiety (X) may reduce depression (Y) more at high levels of job satisfaction than at low levels, so job satisfaction (M) moderates the relationship between X and Y. In SPSS, you would compute the centered predictors and their product, then enter X and M in the first block and the interaction in the second block of a hierarchical regression; a sketch of the same logic follows below.
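A minimal Python sketch of that hierarchical (blockwise) approach, assuming a pandas DataFrame `df` with hypothetical columns Anxiety (X), JobSat (M), and Depression (Y); SPSS's two regression blocks map onto two nested OLS fits:

```python
import statsmodels.api as sm

# Center the predictor and moderator, then form their product
df["X_c"] = df["Anxiety"] - df["Anxiety"].mean()
df["M_c"] = df["JobSat"] - df["JobSat"].mean()
df["XxM"] = df["X_c"] * df["M_c"]

# Block 1: main effects only; Block 2: add the interaction term
step1 = sm.OLS(df["Depression"], sm.add_constant(df[["X_c", "M_c"]])).fit()
step2 = sm.OLS(df["Depression"], sm.add_constant(df[["X_c", "M_c", "XxM"]])).fit()

# Moderation is indicated by a significant interaction and an R-squared gain
print(step1.rsquared, step2.rsquared)
print(step2.pvalues["XxM"])
```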