The result is always the same, in the sense that you are always solving the same equation, and the predicted values do not depend on the order of the predictors in the equation. Differences appear, however, when you run a so-called sequential analysis of variance, i.e., adding one term after the other and computing the residual df and sum of squares at each step. That is why you should not use such a sequential ANOVA to test the statistical significance of the model coefficients.
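Here is a minimal sketch of that point, in Python with statsmodels (my choice for illustration only; the same check can be done in R or SAS, and the variables x1, x2, y are made up). The same model is fitted with the predictors entered in two different orders: the coefficients and fitted values agree, but the sequential (Type I) ANOVA tables do not.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated data with correlated predictors (made-up example),
# so the sequential sums of squares depend on the order of entry.
rng = np.random.default_rng(0)
n = 50
df = pd.DataFrame({"x1": rng.normal(size=n)})
df["x2"] = 0.7 * df["x1"] + rng.normal(size=n)
df["y"] = 1.0 + 2.0 * df["x1"] - 1.5 * df["x2"] + rng.normal(scale=0.5, size=n)

m12 = smf.ols("y ~ x1 + x2", data=df).fit()   # x1 entered first
m21 = smf.ols("y ~ x2 + x1", data=df).fit()   # x2 entered first

# Coefficients and predicted values are identical (up to rounding)
print(np.allclose(m12.params[["Intercept", "x1", "x2"]],
                  m21.params[["Intercept", "x1", "x2"]]))    # True
print(np.allclose(m12.fittedvalues, m21.fittedvalues))       # True

# The sequential (Type I) ANOVA tables differ, because each term is
# assessed only against the terms entered before it.
print(anova_lm(m12, typ=1))
print(anova_lm(m21, typ=1))
```

For term-by-term significance it is the coefficient t-tests (or partial tests) that you want, not the sequential table.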
The predicted value, y*, cannot differ across different orderings of the predictors, as long as you use all of them every time. Otherwise you are either using your software incorrectly, or there is a software glitch.
I noticed that Maddala used an asterisk rather than a hat when he meant weighted least squares (WLS) rather than OLS, and I do the same in imitation. Is that what you are doing? If so, what size measure do you use to determine the regression weights? If you change that with each case, that would cause some differences in y*.
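To illustrate, here is a hedged sketch (again Python/statsmodels, with a made-up 'size' variable standing in for whatever size measure is used to build the weights): the same data and the same predictors, but two different weight choices, give different fitted values y*.

```python
import numpy as np
import statsmodels.api as sm

# Made-up data; 'size' plays the role of the size measure behind the weights.
rng = np.random.default_rng(1)
n = 40
size = rng.uniform(1.0, 10.0, size=n)
x = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x, size]))
y = 3.0 + 1.5 * x + 0.5 * size + rng.normal(scale=np.sqrt(size))

# Two hypothetical choices of regression weights:
w1 = 1.0 / size        # error variance assumed proportional to size
w2 = 1.0 / size**2     # error variance assumed proportional to size squared

fit1 = sm.WLS(y, X, weights=w1).fit()
fit2 = sm.WLS(y, X, weights=w2).fit()

# Same predictors, same data -- but changing the weights changes y*
print(np.allclose(fit1.fittedvalues, fit2.fittedvalues))   # False in general
```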
Theoretically, there should not be any difference if you change the order of variables in regression.
In my experience, if you change the order of variables in SAS software, there is a difference in the values of the coefficients, and the predictions are affected.
However, there is no difference at all if you implement that in R.
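A quick way to convince yourself of the theory (pure linear algebra, no particular statistics package; the simulated design below is arbitrary) is to permute the design-matrix columns and refit: the coefficients simply permute along with the columns, and the fitted values are unchanged up to floating-point noise. Any discrepancy larger than that points to the setup or the software, not the mathematics.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + 3 predictors
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Fit with the original column order and with the predictor columns reversed
perm = [0, 3, 2, 1]
b_orig, *_ = np.linalg.lstsq(X, y, rcond=None)
b_perm, *_ = np.linalg.lstsq(X[:, perm], y, rcond=None)

# Coefficients match once mapped back to their columns...
print(np.allclose(b_orig[perm], b_perm))             # True
# ...and the fitted values are identical either way.
print(np.allclose(X @ b_orig, X[:, perm] @ b_perm))  # True
```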
Yikes. Maybe you could notify SAS. I did that a number of years ago when I found 'fishy' results for r-square (or maybe R-square), and I asked them to look into it. Though I expected some instability, a small change made a huge change in the statistic, which I think is unusual. To the best of my knowledge, SAS is very reliable, but it sounds like you might have found a problem.
Roumen -
As I noted above, it could be a software glitch, or you may not have used the software correctly, as in the case of using different regression weights each time.