1. HTMT (Heterotrait-Monotrait Ratio of Correlations)
HTMT assesses discriminant validity, i.e., whether two constructs are empirically distinct.
A high HTMT (above 0.85 by the conservative criterion, or 0.90 by the more liberal one) signals that the two constructs may overlap too much to be treated as distinct.
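For concreteness, here is a minimal Python sketch of the HTMT calculation from raw item scores, following the ratio's definition: the average between-construct item correlation divided by the geometric mean of the average within-construct item correlations. The DataFrame and column names are hypothetical; dedicated SEM software reports HTMT directly.

```python
import numpy as np
import pandas as pd

def htmt(df: pd.DataFrame, items_a: list[str], items_b: list[str]) -> float:
    """HTMT for two constructs, computed from raw item responses."""
    corr = df[items_a + items_b].corr().abs()

    # Heterotrait-heteromethod: mean correlation across the two item sets.
    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    # Monotrait-heteromethod: mean off-diagonal correlation within one item set.
    def mono(items: list[str]) -> float:
        c = corr.loc[items, items].to_numpy()
        off_diag = ~np.eye(len(items), dtype=bool)
        return c[off_diag].mean()

    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Hypothetical usage with assumed column names:
# ratio = htmt(survey, ["sat1", "sat2", "sat3"], ["com1", "com2", "com3"])
# print(f"HTMT = {ratio:.3f}")  # compare against the 0.85 / 0.90 cutoffs
```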
Justifications for high HTMT:
Conceptual overlap is theoretically justified: If two constructs are very closely related by theory, some overlap is expected (e.g., job satisfaction and organizational commitment might be tightly linked). You’d need to argue that despite overlap, they capture subtly distinct aspects important to the model.
Formative model context: If constructs are part of a higher-order formative construct, overlap is natural because they jointly define a broader concept.
Sample-specific effects: Cultural, contextual, or industry-specific factors may make two constructs more tightly correlated in your sample (this is common in international or niche samples). You can justify this empirically by showing that prior studies in similar contexts also report high correlations.
You should back this up by:
Referring to prior literature that acknowledges the conceptual closeness.
Running additional tests (e.g., cross-loading analysis, or confirmatory factor analysis comparing alternative models) to show that model fit does not collapse despite the overlap; see the model-comparison sketch below.
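As an illustration of the alternative-models check, the sketch below compares the theorized two-factor model against a rival one-factor model in which the overlapping constructs are merged. It assumes the semopy package and hypothetical indicator names (js1..js3, oc1..oc3); lavaan in R follows the same logic.

```python
# A sketch of the alternative-models comparison, assuming the semopy package.
import semopy

# Theorized model: two correlated but distinct factors.
TWO_FACTOR = """
JobSat =~ js1 + js2 + js3
OrgCom =~ oc1 + oc2 + oc3
"""

# Rival model: the overlapping constructs collapsed into a single factor.
ONE_FACTOR = """
General =~ js1 + js2 + js3 + oc1 + oc2 + oc3
"""

def fit_and_describe(description, data):
    model = semopy.Model(description)
    model.fit(data)
    return semopy.calc_stats(model)  # chi-square, CFI, AIC, etc.

# Hypothetical usage; 'survey' is an assumed DataFrame of item responses.
# print(fit_and_describe(TWO_FACTOR, survey))
# print(fit_and_describe(ONE_FACTOR, survey))
```

If the two-factor model fits clearly better (e.g., lower AIC, higher CFI), that supports keeping the constructs separate despite the high HTMT; if the one-factor model fits just as well, the discriminant-validity concern is harder to argue away.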
2. VIF (Variance Inflation Factor)
VIF assesses multicollinearity, i.e., how strongly a predictor is linearly determined by the other predictors in the model. For predictor j, VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing predictor j on all the other predictors.
High VIF indicates problematic multicollinearity; common cutoffs are 3.3 (conservative, often applied in PLS-SEM), 5, and 10 (the most lenient), depending on the guideline. The sketch below shows one way to compute VIF in practice.
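A quick way to obtain VIFs is statsmodels' variance_inflation_factor helper; this minimal sketch assumes a DataFrame of firm-level data with hypothetical predictor names.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame, predictors: list[str]) -> pd.Series:
    """VIF for each predictor, computed on the design matrix with an intercept."""
    X = sm.add_constant(df[predictors])
    # Index 0 is the constant, so start at 1 to report predictors only.
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=predictors,
        name="VIF",
    )

# Hypothetical usage with assumed column names:
# print(vif_table(firms, ["marketing_intensity", "rnd_intensity", "firm_size"]))
```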
Justifications for high VIF:
Theoretical collinearity is expected and meaningful: Some variables are complementary or interdependent (e.g., "marketing intensity" and "R&D intensity" as components of innovation capability). You argue that high collinearity reflects real-world dependencies rather than statistical artifacts.
Formative constructs: In formative measurement models, high VIF is sometimes natural because indicators together shape the construct. As long as redundancy is avoided (i.e., variables aren’t measuring exactly the same thing), moderate collinearity is tolerable.
Sample characteristics: In specialized industries or small samples, naturally high correlations arise (e.g., in tech startups, "innovation" and "market orientation" may be strongly interlinked).
Limited practical consequences: If VIF is high but the regression coefficients remain stable and significant, and the variance explained (R²) is strong, you can argue that the model is still informative despite the multicollinearity. One simple stability check is sketched below.
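One way to substantiate that claim is to bootstrap the regression and check how much the coefficients wander across resamples; this sketch uses statsmodels OLS with hypothetical variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_coefs(df, outcome, predictors, n_boot=1000, seed=0):
    """Refit the regression on bootstrap resamples and summarize the coefficients."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))  # resample rows with replacement
        sample = df.iloc[idx]
        X = sm.add_constant(sample[predictors])
        draws.append(sm.OLS(sample[outcome], X).fit().params)
    # Small standard deviations relative to the means indicate stable estimates.
    return pd.DataFrame(draws).agg(["mean", "std"])

# Hypothetical usage; 'firms' and the variable names are assumed:
# print(bootstrap_coefs(firms, "performance", ["marketing_intensity", "rnd_intensity"]))
```

Tight bootstrap spreads, together with a strong R², make a credible case that high VIFs are not destabilizing the model.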