As long as you acknowledge that your model building is now exploratory, there are a few things you can do:
1) review the model and assess whether you have left out any theoretically meaningful paths/relationships;
2) look at the standardized residual covariance matrix for signs of relationships that were not well explained by your model;
3) look at the model modification indices to see if there are paths you could add that would improve the fit of the model (and that make theoretical sense);
4) check your assumptions to make sure those have been satisfied.
You can integrate Imtiaz Ahmad's and Robert A. Cribbie's suggestions.
First, make sure you have strong reliability for each construct (latent variable), as Imtiaz Ahmad suggested.
Check the reliability for each latent variable (deleting some items, if necessary). Alternatively, you can use an item-parceling approach if you have too many items relative to your sample size (usually, you need at least 10 observations per item).
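As an illustration of the reliability check above, here is a minimal sketch of Cronbach's alpha computed with NumPy; the `items` matrix below is hypothetical Likert-scale data, not from the original question:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
items = np.array([
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(items), 3))  # 0.934
```

Values around .70-.80 or higher are usually taken as acceptable; if alpha is low, inspect item-total correlations before dropping items.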
Second, check the relationships among the latent variables, following Cribbie's specific suggestions above.
One more thing: sometimes indicators (items) for different latent variables can be related, for example when you measure the same items with different referents. In that case, you can correlate the residuals of those indicators across the latent variables (though such correlated residuals can be difficult to interpret).
You can rely on modification indices (e.g., in AMOS), but do not depend on them completely, because they know nothing about your theoretical framework. The best approach is to remove indicators with low factor loadings, which you can see on each path between a latent (circle) and an observed (square) variable.
Can somebody help me? How can I improve my model fit? Reliability is acceptable for all constructs (above .80), KMO is also acceptable, and all factor loadings are above .50. I tried adding some covariances suggested by the modification indices, but the model still does not fit. Can somebody advise me on what to do? These are some fit indices of my model: RMSEA = .095, TLI = .777, CFI = .791, PCLOSE = .000.
χ²: "Chi-square", compares the observed variance-covariance matrix to the predicted variance-covariance matrix, theoretically ranges from 0 (perfect fit) to +∞ (poor fit), considered satisfactory when non-significant (p > .05); problems: highly dependent on N (meaningless with large samples), difficult to accept the null hypothesis
χ²/df: is considered satisfactory when < 3 in large samples (N > 200), < 2.5 in medium-sized samples (100 < N < 200), and < 2 in small samples (N < 100)
AIC: "Akaike Information Criterion", like χ² but adjusts for model complexity, theoretically ranges from −∞ (perfect fit) to +∞ (poor fit), is generally used to compare competing models (the one with the lowest AIC is preferred)
NFI: "Normed Fit Index" (EQS), proportion in the improvement of the overall fit of the hypothesized model (h) compared to the independence model (i), theoretically ranges from 0 (poor fit) to 1 (perfect fit), considered satisfactory when > .90
NNFI: "Non-Normed Fit Index" (EQS), like NFI but adjusts for model complexity, theoretically ranges from 0 (poor fit) to 1 (perfect fit), considered satisfactory when > .90
GFI: "Goodness of Fit Index" (LISREL), like multiple r-squared, theoretically ranges from 0 (poor fit) to 1 (perfect fit), considered satisfactory when > .90
AGFI: "Adjusted Goodness of Fit Index" (LISREL), like GFI but adjusts for model complexity (like adjusted multiple r-squared), theoretically ranges from 0 (poor fit) to 1 (perfect fit), considered satisfactory when > .90
RMSEA: "Root Mean Square Error of Approximation", calculates the size of the standardized residual correlations, theoretically ranges from 0 (perfect fit) to 1 (poor fit), considered satisfactory when < .05
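To make the χ²-based indices above concrete, here is a minimal sketch computing χ²/df, RMSEA, NFI, and NNFI from a model's chi-square, its degrees of freedom, the independence (null) model's chi-square and df, and the sample size. All numeric inputs below are hypothetical, chosen only to illustrate the formulas:

```python
from math import sqrt

def chi2_df_ratio(chi2: float, df: int) -> float:
    """Chi-square divided by degrees of freedom."""
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))); 0 when chi2 <= df."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def nfi(chi2_model: float, chi2_indep: float) -> float:
    """NFI: proportional improvement over the independence model."""
    return (chi2_indep - chi2_model) / chi2_indep

def nnfi(chi2_model: float, df_model: int,
         chi2_indep: float, df_indep: int) -> float:
    """NNFI (also called TLI): NFI adjusted for model complexity."""
    r_i = chi2_indep / df_indep
    r_m = chi2_model / df_model
    return (r_i - r_m) / (r_i - 1.0)

# Hypothetical model: chi2 = 180 on df = 120, N = 250;
# independence model: chi2 = 1500 on df = 153
print(chi2_df_ratio(180, 120))           # 1.5 -> satisfactory for 100 < N < 200
print(round(rmsea(180, 120, 250), 3))    # 0.045 -> satisfactory (< .05)
print(round(nfi(180, 1500), 3))          # 0.88 -> just below the .90 cutoff
print(round(nnfi(180, 120, 1500, 153), 3))  # 0.943 -> satisfactory (> .90)
```

Note how NFI and NNFI can disagree near the cutoff: the complexity adjustment in NNFI rewards the more parsimonious model even when the raw improvement proportion falls slightly short of .90.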