Do you think model diagnostics (e.g., deviance, residuals, or influential covariate patterns) are common or necessary in epidemiological studies, especially for studies examining the exposure-disease association?
Some epidemiological data are simply analyzed in 2 × k tables, and thus there is no issue of model fit. Of course, the problem with these tables is exactly what you are alluding to: there are no covariates in the model, and thus we make the assumption that there is no bias or confounding.
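To make the 2 × k table analysis concrete, here is a small sketch with entirely invented counts: a Pearson chi-square test of association between disease status and three exposure levels, computed in pure Python (the 5.99 critical value for 2 degrees of freedom at alpha = 0.05 is a standard table value).

```python
# Pearson chi-square test of association for a 2 x k table
# (hypothetical counts: disease status by three exposure levels).

def chi_square_2xk(table):
    """Return the Pearson chi-square statistic for a 2 x k table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n  # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Cases and non-cases across three exposure levels (invented numbers).
observed = [[30, 60, 90],   # diseased
            [70, 40, 10]]   # not diseased

stat = chi_square_2xk(observed)
df = (2 - 1) * (3 - 1)  # degrees of freedom = (rows-1)(cols-1) = 2
# Critical value of chi-square with 2 df at alpha = 0.05 is about 5.99,
# so a statistic of 75 strongly rejects independence of exposure and disease.
print(f"chi2 = {stat:.1f}, df = {df}")  # chi2 = 75.0, df = 2
```

Note that this test speaks only to association in the table; as said above, it does nothing to address bias or confounding.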
As soon as you include covariates in a regression model, you have to satisfy both assumptions about the covariates controlling for bias and assumptions about model fit. Depending on which type of data you have, different fit statistics may be used. The important thing is to demonstrate that the appropriate model was used to fit the data, and that the appropriate data were used in the model, before making causal inferences about the relationship between disease and exposure.
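One classic, pre-regression way to adjust an exposure-disease table for a confounder is the Mantel-Haenszel summary odds ratio over strata of that confounder. A minimal sketch with invented numbers (the stratifying variable labeled "age group" is purely hypothetical) shows how the crude and adjusted estimates can diverge when the stratifier confounds:

```python
# Mantel-Haenszel summary odds ratio across strata of a confounder.
# Each stratum is (a, b, c, d): exposed cases, exposed non-cases,
# unexposed cases, unexposed non-cases.  All numbers are invented.

strata = [
    (90, 10, 80, 20),   # e.g. older age group (hypothetical)
    (10, 90, 5, 95),    # e.g. younger age group (hypothetical)
]

def mantel_haenszel_or(strata):
    """Confounder-adjusted summary odds ratio (Mantel-Haenszel)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def crude_or(strata):
    """Odds ratio from the collapsed table, ignoring the confounder."""
    a = sum(s[0] for s in strata)
    b = sum(s[1] for s in strata)
    c = sum(s[2] for s in strata)
    d = sum(s[3] for s in strata)
    return (a * d) / (b * c)

print(f"crude OR    = {crude_or(strata):.2f}")            # about 1.35
print(f"adjusted OR = {mantel_haenszel_or(strata):.2f}")  # 2.20
```

The gap between the crude and adjusted odds ratios is exactly the confounding that an unadjusted 2 × k table cannot reveal.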
Diagnostics for model adequacy are very important. They tell you how well the model fulfils its assumptions. The hypothesis test for model fit is: H0: the model fits adequately; Ha: the model does not fit well. If you fail to reject the null hypothesis, the model fits.
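That decision rule can be sketched with a Pearson goodness-of-fit statistic, assuming grouped data with model-predicted expected counts (all numbers invented; the 7.81 critical value for 3 df at alpha = 0.05 is a standard table value):

```python
# Pearson goodness-of-fit test: H0 is "the model fits adequately".
# Observed counts vs. counts predicted by some fitted model
# (invented numbers purely for illustration).

observed = [18, 22, 30, 30]
expected = [20, 20, 30, 30]  # model-predicted counts

stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1  # subtract one more df for each estimated parameter

# Critical value of chi-square with 3 df at alpha = 0.05 is about 7.81.
critical = 7.81
if stat < critical:
    print(f"chi2 = {stat:.2f} < {critical}: fail to reject H0; model fits")
else:
    print(f"chi2 = {stat:.2f} >= {critical}: reject H0; model does not fit")
```

In practice one would use a deviance or Hosmer-Lemeshow test from a statistics package, but the logic is the same: a small statistic means no evidence of lack of fit, which is not proof the model is correct.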
Definitely, model adequacy / goodness of fit is a major issue in all quantitative models. Sofia and Ariel have already addressed some important aspects.
Further considerations are necessary to improve the validity of studies. One of the most neglected aspects is sampling, and the identification of gaps that may severely impact the conclusions. If you claim that your results allow conclusions that generalize to a population with defined properties, then you should cross-check the internal sample structure of your dataset against other, external sources. In epidemiological research specifically, this is a necessary step. It may help you understand where there might be "defects" in your sample that restrict the possibility of generalization. A careful inspection of model deviations (such as distinct subclusters or extreme values) may help to identify the problems. Other statistical sources are necessary for comparison, and need to be weighed against the dataset you are focusing on. Such external verification may lead to a much more cautious interpretation of results.
The ability to generalize results is the exception, not the rule. You may conclude this from the ongoing discussion in the scientific community on the repeatability of studies.
A word of caution is in order if you want to claim causality!
This is too extensive and complicated to discuss here in detail.
I would recommend studying the excellent book
"Causality: Models, Reasoning and Inference" by Judea Pearl.