My question is based on the fact that I have a sample of 580 observations whose distribution is very far from multivariate normality (kurtosis of about 200), so maximum likelihood and generalized least squares do not seem like good options. I would appreciate your suggestions.
Advantage: If the 580 observations really are independent and identically distributed (e.g., have the same variance, though this seems unlikely given the departure from multivariate normality you mention), then, even though the data are non-normal, the Central Limit Theorem suggests that the least squares estimators will be approximately normally distributed (see the simulation sketch after these two points).
Disadvantage: Least squares provides "best linear unbiased estimators" ("best" = minimum variance) only if the response really does have a linear relationship with the predictors. If it does not, then the least squares estimators are biased (i.e., inappropriate), regardless of how much data you have.
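To illustrate the Advantage point, here is a small simulation sketch (in Python rather than Amos; the single-predictor model, the t-distributed errors with 3 degrees of freedom, and the 2000 replications are my own illustrative assumptions, not anything from the question). With n = 580, the sampling distribution of the OLS slope comes out close to normal even though the errors are heavy-tailed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_reps, true_beta = 580, 2000, 2.0   # n matches the sample size in the question
slopes = np.empty(n_reps)

for r in range(n_reps):
    x = rng.normal(size=n)
    e = rng.standard_t(df=3, size=n)     # heavy-tailed, non-normal errors
    y = 1.0 + true_beta * x + e
    X = np.column_stack([np.ones(n), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes[r] = beta_hat[1]

# Standardize and check shape: skewness and excess kurtosis near 0
# indicate the slope estimates are approximately normally distributed.
z = (slopes - slopes.mean()) / slopes.std()
print("skewness:", np.mean(z**3))
print("excess kurtosis:", np.mean(z**4) - 3)
```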
Such a large data set appears amenable to non-parametric bootstrapping, which SPSS Amos apparently supports; a sketch of the idea is given below. Another option is to determine the correct (non-linear and/or non-normal) likelihood, though it is not clear whether Amos supports this. Whatever model you fit, use residual plots to assess how well it describes your data (whether you use least squares on some transform of the data, or maximum (non-normal) likelihood estimation).
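For what it's worth, here is a minimal sketch of what case-resampling (non-parametric) bootstrapping does. It is written in Python with an OLS slope as a placeholder statistic, so the function names and the percentile interval are illustrative assumptions on my part, not the procedure Amos runs internally:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_slope(x, y):
    """OLS slope, standing in for whatever parameter your model estimates."""
    X = np.column_stack([np.ones(len(x)), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta_hat[1]

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05):
    """Percentile confidence interval from case resampling (no normality assumed)."""
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample whole cases with replacement
        stats[b] = fit_slope(x[idx], y[idx])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

The key point is that whole cases (rows) are resampled, so the resulting interval reflects the joint distribution actually present in your data rather than a normality assumption.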
I completely agree with Parker's answer. But it could be interesting to fit the same model with different estimation methods, to test how sensitive the results are to the deviation from multivariate normality. I also suggest using the robust maximum likelihood (MLR) estimator in the Mplus software.