Hello,

I am working in a PLS-SEM environment, testing the predictive relevance of two nested models estimated on the same dataset. The second model differs from the first only by one added predictor.

The variables are not normally distributed.

I would like to test whether the difference between the two models' R2 values is significant. I obtained bias-corrected and accelerated (BCa) confidence intervals for the R2 of each model through a bootstrapping procedure (5,000 samples).

Is it safe to say that if those confidence intervals do not overlap, then the two R2 values are different? Or should I account for the number of predictors added before comparing the R2 values? If so, how?
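(For context, the adjustment I had in mind is the adjusted R2, which penalizes each added predictor: R2_adj = 1 - (1 - R2) * (n - 1) / (n - k - 1), where n is the sample size and k is the number of predictors. I am not sure whether this is the appropriate correction in a PLS-SEM context, which is part of my question.)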

Alternatively, I was thinking of computing the difference in R2 for each sample produced by the bootstrap procedure, and then constructing the BCa confidence interval on the distribution of those differences (a sketch of what I mean follows).
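To make that concrete, here is a minimal sketch in Python of the paired-resampling logic I have in mind. It is only an illustration: the toy data, the variable names, and the OLS-based r2 helper are my own stand-ins for the actual PLS-SEM estimation (which I run in dedicated software), and I lean on scipy.stats.bootstrap for the BCa interval.

import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)

# Toy data standing in for my real dataset (purely illustrative).
n = 200
x1 = rng.normal(size=n)                      # predictor present in both models
x2 = rng.normal(size=n)                      # predictor added in model 2
y = 0.5 * x1 + 0.2 * x2 + rng.normal(size=n)

def r2(y_s, X_s):
    # OLS R2 as a stand-in for the R2 my PLS software would report.
    X1 = np.column_stack([np.ones(len(y_s)), X_s])
    beta, *_ = np.linalg.lstsq(X1, y_s, rcond=None)
    resid = y_s - X1 @ beta
    tss = np.sum((y_s - y_s.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / tss

def delta_r2(idx):
    # Refit BOTH nested models on the same resampled cases and
    # return the paired difference of their R2 values.
    i = np.asarray(idx, dtype=int)
    full = r2(y[i], np.column_stack([x1[i], x2[i]]))
    reduced = r2(y[i], x1[i][:, None])
    return full - reduced

# Resample case indices so both models always see the same rows;
# 'BCa' asks scipy for the bias-corrected and accelerated interval.
res = bootstrap(
    (np.arange(n),),
    delta_r2,
    n_resamples=5000,
    vectorized=False,
    method="BCa",
    random_state=rng,
)
print(res.confidence_interval)  # if 0 lies outside, the R2 difference looks significant

The point of the design is that both models are refit on the same resampled cases, so the interval describes the paired difference in R2 rather than two independent R2 values.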

Do you have any suggestions?

Thank you!

E.
