There is no ADF in Mplus, but you can use WLS instead. This is easily done by specifying the items as categorical. Have a look at the software user's guide. Since you have the software, you can also ask Linda Muthén for software support.
If your data are not normal, WLSMV is an option; the same applies to discrete (categorical) variables. For continuous variables where there is evidence of violations of normality, you can use robust maximum likelihood (MLM).
Chi-square is very sensitive to sample size, so with a large sample and skewed data the chi-square value will be highly significant. If your data are skewed and all variables are continuous, estimate with MLM and use the Satorra-Bentler scaled chi-square, which corrects for non-normality and takes sample size into account. If the chi-square still comes out significant, report CFI along with the other fit indices. In SEM, a sample size above 200 is considered large; chi-square is therefore considered a "badness-of-fit" test.
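A quick, illustrative sketch of that sensitivity: under ML estimation the test statistic is approximately T = (N − 1) · F_min, so the very same amount of misfit (a fixed minimized fit-function value) becomes ever more "significant" as N grows. All numbers below are hypothetical.

```python
from scipy.stats import chi2

# Fixed, hypothetical misfit and degrees of freedom; only N changes.
F_min = 0.05   # minimized fit-function value (the "real" misfit)
df = 24        # model degrees of freedom

for n in (100, 200, 500, 1000):
    t = (n - 1) * F_min          # chi-square test statistic
    p = chi2.sf(t, df)           # upper-tail p-value
    print(f"N={n:5d}  chi2={t:6.2f}  p={p:.4f}")
```

With identical misfit, the model "passes" the chi-square test at N = 100 but is firmly rejected at N = 1000, which is exactly why large-sample chi-squares are read alongside CFI, RMSEA, and the other indices.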
Another option is WRMR if you're using an ADF estimator. For cutoff criteria, a standard reference is:
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.
Here's also one paper that discusses the previous one:
Marsh, H. W., Hau, K. T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling, 11, 320-341.
I have a model with N = 813. The data are not normal, so I decided to use the Bollen-Stine bootstrap in AMOS with the GLS discrepancy function. When I put in the number of bootstrap samples
I believe there is a problem somewhere! However, I have used SmartPLS with a similar sample size (N = 802) and it gave me good results for non-parametric data. But wait for a more specific answer to your case from the other active colleagues.
I prefer partial least squares because the assumption of a normal distribution can be violated. The number of constructs is limited in CB-SEM software such as AMOS; with a certain number of items, AMOS will complain that the maximum number of iterations has been reached, which I have not seen in PLS.
An RMSEA of 0.041 is not bad. Hair et al. do note that absolute fit index cutoffs are debatable; as an example, they cite an RMSEA of 0.03 to 0.08 with a 95 per cent confidence interval. So what you have is not bad.
As for convergent validity, Fornell and Larcker (1981) posit that if AVE is less than .50 but CR is higher than .60, the convergent validity of the construct is still acceptable.
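For readers computing these by hand, a minimal sketch of the standard AVE and CR formulas from standardized loadings (the loadings below are hypothetical): AVE is the mean squared loading, and CR is the squared sum of loadings over itself plus the summed error variances.

```python
# Hypothetical standardized loadings for one construct's items.
loadings = [0.62, 0.68, 0.71, 0.65]

# Average Variance Extracted: mean of squared standardized loadings.
ave = sum(l ** 2 for l in loadings) / len(loadings)

# Composite Reliability: (sum of loadings)^2 over itself plus
# the summed error variances (1 - loading^2 for each item).
s = sum(loadings)
cr = s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))

print(f"AVE = {ave:.3f}")   # 0.443 -> below the .50 benchmark
print(f"CR  = {cr:.3f}")    # 0.761 -> above the .60 benchmark
```

Here AVE falls below .50 while CR exceeds .60, which is precisely the case Fornell and Larcker (1981) describe as still acceptable.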
The most commonly used fit indices (absolute and comparative) include the chi-square ratio, RMR, GFI, CFI, TLI, and RMSEA. The cut-off points are χ²/df less than 3 and a small RMR (values below .05 are commonly cited) (Kline, 2005). If a model shows these psychometric properties, it fits the sample data adequately. GFI, CFI, or TLI values less than 0.8 indicate poor fit of the model to the sample data; in the SEM literature, most prominent scholars note that GFI, CFI, and TLI should be at least 0.9.
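These indices can be computed directly from the model and baseline (independence) chi-squares using their textbook formulas; the chi-square values, degrees of freedom, and sample size below are all hypothetical.

```python
import math

# Hypothetical inputs: fitted model, baseline (null) model, sample size.
chi2_m, df_m = 110.0, 48
chi2_b, df_b = 1200.0, 66
n = 300

ratio = chi2_m / df_m                                        # chi2/df
rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))  # RMSEA
cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b,
                                      chi2_m - df_m, 0)      # CFI
tli = ((chi2_b / df_b) - (chi2_m / df_m)) / \
      ((chi2_b / df_b) - 1)                                  # TLI

print(f"chi2/df = {ratio:.2f}")   # cutoff: < 3
print(f"RMSEA   = {rmsea:.3f}")   # cutoff: < .08
print(f"CFI     = {cfi:.3f}")     # cutoff: >= .90
print(f"TLI     = {tli:.3f}")     # cutoff: >= .90
```

With these hypothetical numbers all four indices clear their usual cutoffs, even though the raw chi-square (110 on 48 df) would itself be significant.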
Glenn Althor, we can refer to the most widely cited books in the Structural Equation Modeling literature (Byrne, 2016; Kline, 2005, 2011, 2016; Hair et al., 2010). For me, these books are the best references for understanding SEM.
Hello guys, I have a problem with my results. I'm using SmartPLS 3, it's for my thesis, and my model fit is not satisfactory: SRMR is less than 0.08, but NFI is between 0.7 and 0.85. Can any of you help me find a way to improve these results? Or would it be OK to keep them, and in that case, how can I justify it?
Matsunaga, Masaki. 2008. "Item Parceling in Structural Equation Modeling: A Primer." Communication Methods and Measures 2 (4):260-93. doi: 10.1080/19312450802458935.
Ibtissem Hamouda: Try other methods or other software, especially AMOS. I don't know if SmartPLS has modification indices, but I'm sure AMOS does, and it is preferred for exploratory analysis or scale validation.
Please look at the value of "rms Theta" in SmartPLS 3. For theory testing, consider using SRMR, RMS_theta, or the exact fit test. Apart from conceptual concerns, these measures' behavior has not been researched in depth in a PLS-SEM context, and threshold values have not been derived yet. Following a conservative approach, an SRMR (RMS_theta) value of less than 0.08 (0.12) indicates good fit.
Source: Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM) (2nd ed.). Thousand Oaks: Sage.
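For intuition about what SRMR measures, here is a minimal sketch: it is the root mean square of the residuals between the observed and model-implied correlation matrices, averaged over the unique (lower-triangular) elements. The matrices below are hypothetical, and exact averaging conventions vary across programs.

```python
import numpy as np

# Hypothetical observed and model-implied correlation matrices.
obs = np.array([[1.00, 0.45, 0.38],
                [0.45, 1.00, 0.52],
                [0.38, 0.52, 1.00]])
imp = np.array([[1.00, 0.40, 0.42],
                [0.40, 1.00, 0.48],
                [0.42, 0.48, 1.00]])

# Unique elements: lower triangle including the diagonal.
idx = np.tril_indices_from(obs)
resid = obs[idx] - imp[idx]

srmr = np.sqrt(np.mean(resid ** 2))
print(f"SRMR = {srmr:.3f}")   # < 0.08 suggests good fit
```

Small residual correlations produce a small SRMR here, well under the conservative 0.08 threshold the Hair et al. primer cites.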
Ibtissem Hamouda, an NFI of .85 is not bad. A threshold of .90 to .95 is usually cited, but Hair et al. (2010), in their book Multivariate Data Analysis, argue against rigid absolute cutoffs. You need to look at other fit indices to make a decision.
Now I had to modify my model and I have other results. I think (subject to your validation) that I have to use SmartPLS because my sample is small (N = 72), even though I tried AMOS, which was easier and actually gave me more positive results.
Now I have SRMR = 0.1 and NFI = 0.500, but a good GoF = 0.5.
Titus Chukwuemezie Okeke, Assefa Tsegay Tensay, and Anupam Kumar Das, do you think that's OK for an exploratory test in my thesis? I'm so anxious! Thanks for your advice.
Even though absolute fit indices are debatable, I don't think it is OK. Other issues, like item loadings in the measurement model, need to be looked at. The sample is really small.
Titus Chukwuemezie Okeke, the thing is, I think it's because of the number of variables: given the size of my sample I should have at most 7 variables (10 questionnaires), which is not the case in this model, where I have 9 variables. The thing is, I could not reduce the number because of the conceptual framework and the research needs...
Ken Bollen, of Structural Equations with Latent Variables fame, once said that model fit is itself a latent variable: it is unobservable and as such requires multiple indicators (i.e., fit statistics). There is a degree of consensus on what these indicators/fit statistics should be, and we know the strengths and limitations associated with many of them. This is well summarized in the Jackson et al. paper. The chapter on model fit in Bollen's book is also outstanding.
Best wishes
mark
Jackson, D.L., Gillaspy, J.A., Jr., & Purc-Stephenson, R. (2009). Reporting practices in confirmatory factor analysis: An overview and some recommendations. Psychological Methods, 14, 6–23.