Hello,
I am working on a regression model of restaurant patrons' satisfaction. An established model from the literature predicts overall satisfaction from satisfaction with food quality, service, perceived value, and ambience. When I fit it to my large dataset, I get an R² of about .81.
I am interested in whether other variables (such as opening hours, availability of specialty cuisines, establishment ownership structure, etc.) improve the model further and, more importantly, how they affect overall satisfaction. So I built additional models in which I add the new variables of interest to the established model and compare the fit.
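For concreteness, a minimal sketch of the kind of nested-model comparison I mean, using statsmodels on simulated data (the column names, coefficients, and data here are illustrative placeholders, not my actual dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; names mirror the factors in my question.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "food": rng.normal(size=n),
    "service": rng.normal(size=n),
    "value": rng.normal(size=n),
    "ambience": rng.normal(size=n),
    "opening_hours": rng.normal(size=n),  # one hypothetical new IV
})
df["overall"] = (0.5 * df["food"] + 0.3 * df["service"]
                 + 0.2 * df["value"] + 0.1 * df["ambience"]
                 + 0.15 * df["opening_hours"]
                 + rng.normal(scale=0.5, size=n))

# Established model vs. the same model plus the new variable.
base = smf.ols("overall ~ food + service + value + ambience",
               data=df).fit()
extended = smf.ols("overall ~ food + service + value + ambience"
                   " + opening_hours", data=df).fit()

# F-test on the nested models: does the added variable improve fit?
f_stat, p_value, df_diff = extended.compare_f_test(base)
print(f"R² base = {base.rsquared:.3f}, "
      f"R² extended = {extended.rsquared:.3f}, p = {p_value:.4f}")
```

The nested F-test (rather than eyeballing R², which can only go up when a predictor is added) is what tells you whether the extension is worthwhile.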
When I tested these extended models for violations of OLS assumptions (especially severe multicollinearity), I found severe multicollinearity among some of the four established factors (pairwise r marginally above .8), but not among any of the new variables of interest I am looking to add.
Can I simply ignore the multicollinearity among the four IVs of the established model, since I am much more interested in the new IVs that don't have this issue, or should I look for remedies?
Thank you!