I ran an OLS regression on my data (a time series) and found autocorrelation issues due to non-stationarity of the data. I need to conduct a generalized least squares (GLS) regression, as it is robust against the resulting bias in the estimators.
Do you want to treat autocorrelation and/or non-constant error variance, or something else? I am puzzled because you speak of non-stationarity: if a time series is non-stationary, autocorrelation is, strictly speaking, not defined. I think you should give more details on your problem.
What I find on YouTube are videos on weighted least squares (WLS) regression, which in a strict sense is a form of generalized least squares. I understand that when the errors are dependent, one can use the generalized least squares (GLS) approach, whereas when the errors are independent but not identically distributed, weighted least squares (WLS) can be used.
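As a side note on that distinction: WLS is exactly GLS with a diagonal error-covariance matrix, i.e. each observation is weighted by the inverse of its error variance. A minimal sketch in plain Python with one predictor (the data and weights below are made up purely for illustration):

```python
# Hedged sketch: WLS as GLS with a diagonal weight matrix.
# Data and weights are invented for illustration only.

def wls(x, y, w):
    """Weighted least squares: minimizes sum of w_i * (y_i - a - b*x_i)^2."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw  # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw  # weighted mean of y
    b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) \
        / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return my - b * mx, b  # (intercept, slope)

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x plus noise
w = [1.0, 1.0, 0.5, 0.25, 0.25]  # down-weight the noisier later observations
print(wls(x, y, w))
```

With weights all equal to 1, this reduces to ordinary OLS, which is the sense in which WLS is a special case of GLS.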
My test for independence of errors, one of the assumptions of OLS regression, yielded a Durbin-Watson statistic of 0.71, suggesting correlated errors. I tried first-order differencing; although the Durbin-Watson statistic improved to 2.01, a normality check using the normal P-P plot of the regression standardized residuals showed marked skewness, which was absent in the original model.
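For reference, the Durbin-Watson statistic is simply DW = Σ(e_t − e_{t−1})² / Σ e_t² over the residuals: values near 2 indicate no first-order autocorrelation, and values toward 0 (like the 0.71 above) indicate positive autocorrelation. A minimal sketch in plain Python (the residuals below are made-up numbers that mimic positive autocorrelation):

```python
# Hedged sketch: computing the Durbin-Watson statistic from residuals.
# The residual series is invented for illustration only.

def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); ~2 means no lag-1 autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Smooth, slowly drifting residuals push DW well below 2.
resid = [0.9, 0.8, 0.85, 0.7, 0.6, 0.65, 0.5, -0.4, -0.5, -0.45, -0.6, -0.7]
print(round(durbin_watson(resid), 2))
```

Rapidly alternating residuals (negative autocorrelation) instead push DW above 2, toward its maximum of 4.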
My case is one of correlated errors, hence my desire to implement GLS regression, but I don't know how to do this in SPSS.
It seems to me that it is all right. Non-normality is generally not a big problem, except if you want to compute forecast intervals. Autocorrelation is a much more serious problem. Apparently (https://www.ibm.com/support/pages/does-spss-offer-estimated-weighted-least-squares-or-estimated-generalized-least-squares-regression-options), SPSS does not offer GLS, but you can use the two time-series procedures ARIMA and TSMODEL to build a model with one or several explanatory variables. The problem is that they are more difficult to use than regression. I don't know whether these features are available through the window interface, but they surely are via the command syntax. Unfortunately, I don't have SPSS installed, so I cannot check; look under forecasting or time series. In your description, you don't mention the number of observations or the number of variables. If you want help, you should be more specific about your problem.
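To make the target concrete, independently of any particular software: feasible GLS under AR(1) errors amounts to estimating the autocorrelation coefficient ρ from the OLS residuals and refitting OLS on the quasi-differenced series (the Cochrane-Orcutt transformation). This is not an SPSS recipe, just a generic sketch in plain Python with one predictor and simulated data:

```python
# Hedged sketch: one Cochrane-Orcutt step (feasible GLS for AR(1) errors).
# The data are simulated purely for illustration.
import random

def ols(x, y):
    """Simple OLS with one predictor: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def cochrane_orcutt(x, y):
    """Estimate rho from OLS residuals, then refit OLS on the
    quasi-differenced series y_t - rho*y_{t-1}, x_t - rho*x_{t-1}."""
    a, b = ols(x, y)
    e = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    rho = sum(e[t] * e[t - 1] for t in range(1, len(e))) \
        / sum(et ** 2 for et in e[:-1])
    ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    a_star, b_star = ols(xs, ys)
    # The transformed model's intercept is a*(1 - rho); undo the scaling.
    return a_star / (1 - rho), b_star, rho

# Simulate y = 2 + 0.5*x with AR(1) errors (rho = 0.7) -- made-up data.
random.seed(1)
x = list(range(60))
u = [random.gauss(0, 0.2)]
for _ in range(59):
    u.append(0.7 * u[-1] + random.gauss(0, 0.2))
y = [2 + 0.5 * xi + ui for xi, ui in zip(x, u)]
print(cochrane_orcutt(x, y))
```

Quasi-differencing with the estimated ρ (rather than full differencing with ρ = 1) is what lets the procedure whiten the errors without throwing away the levels information, which is relevant to the differencing trade-off discussed in this thread.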
Thank you for your answer, Guy Mélard. I have one dependent variable and three independent variables. All variables are measured on a continuous ratio scale, and I have 72 observations. There are no missing data points, though the G*Power software indicates a minimum sample size of 77 for the study. My assumption is that this difference is not enough to impact the validity of the study.
Of course, it should be all right. Requirements on the number of observations (e.g. 30 for testing a mean) are rarely well founded (10 may be enough most of the time, while 1000 is not enough in some situations). The only problem with the time-series procedures is that their support for explanatory variables is weak, as far as I remember: there are no tests for multicollinearity, leverage, etc. But you may be happy with the results on the differenced series if the lack of normality (which is always overemphasized) is acceptable.
You are right that OLS is, to some extent, robust against lack of normality, but the issue is that differencing a variable discards its long-run effects; what is left are its short-run effects. The reason is that differencing reduces or eliminates the effect of time-invariant variables in a model. GLS avoids these issues.