I am doing my PG thesis on a vulnerability mapping methodology using the Water Associated Disease Index (WADI). I would like to know about the sensitivity analysis mentioned in an article on this methodology.
In the Dictionary of Epidemiology the definition is "A method to determine the robustness of an assessment by examining the extent to which results are affected by changes in methods, models, values of unmeasured variables, or assumptions" (Porta, 2008: 226). So it is a general approach to uncertainty. The example you provide refers to the calculation of an index. You could identify a variable X and ask, 'What if X is over-estimated by 10%?' Then you re-calculate your index with X - 10%. Then you ask, 'What if X is under-estimated by 10%?' and re-run your model with X + 10%. This allows you to assess the overall sensitivity of your model by systematically asking what happens if you are wrong by 5%, 10%, 25%, etc.
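The perturbation procedure above can be sketched in a few lines of Python. This is a minimal illustration only: the two-component weighted index, its components, and the weights are hypothetical stand-ins, not the actual WADI formula.

```python
# Hypothetical sketch of one-at-a-time sensitivity analysis.
# The index, its components, and the weights are illustrative,
# not taken from the WADI methodology.

def index(exposure, susceptibility, w_exp=0.5, w_sus=0.5):
    """A simple weighted index of two components (illustrative only)."""
    return w_exp * exposure + w_sus * susceptibility

baseline = index(exposure=0.6, susceptibility=0.4)

# Re-calculate the index with the exposure component perturbed
# by +/- 5%, 10%, and 25%, holding everything else fixed.
for pct in (0.05, 0.10, 0.25):
    low = index(exposure=0.6 * (1 - pct), susceptibility=0.4)
    high = index(exposure=0.6 * (1 + pct), susceptibility=0.4)
    print(f"+/-{pct:.0%}: index ranges from {low:.3f} to {high:.3f} "
          f"(baseline {baseline:.3f})")
```

If the index barely moves under a 25% perturbation, it is robust to error in that variable; if it swings widely under a 5% perturbation, that variable dominates the result and deserves the most careful measurement.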
Another example: if you have, say, a 75% response rate in a study, you can ask, 'What if all of the non-responders had the outcome of interest? What if none of them did?' This gives you the maximum range of error that could be caused by non-response.
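The extreme-case bounds described above can be computed directly. The counts below are made-up numbers chosen to match the 75% response rate in the example.

```python
# Hypothetical sketch of extreme-case bounds for non-response.
# All counts are illustrative: 1000 people invited, 750 responded
# (a 75% response rate), and 150 responders had the outcome.

invited = 1000
responders = 750
cases_among_responders = 150
non_responders = invited - responders  # 250

# Lower bound: assume no non-responder had the outcome.
prevalence_min = cases_among_responders / invited
# Upper bound: assume every non-responder had the outcome.
prevalence_max = (cases_among_responders + non_responders) / invited

print(f"Observed (responders only): {cases_among_responders / responders:.1%}")
print(f"Bounds under non-response:  {prevalence_min:.1%} to {prevalence_max:.1%}")
```

Here the observed prevalence among responders is 20%, but non-response alone could move the true figure anywhere between 15% and 40%, which is the maximum range of error the answer refers to.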
Thank you very much, sir, for the valuable information. I would like to know about resources for learning about sensitivity analysis in more detail. Please forward me any materials on the topic.
Depending on how key variables are defined, e.g. the outcome variable, one may find different results. Sensitivity analysis here means checking whether the results stay similar when, say, a broad versus a narrow definition is used. One may run the analysis on the full sample and then repeat it excluding the cases that qualify only under the broad definition of the outcome variable, to see whether the patterns are similar.
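The broad-versus-narrow comparison above can be sketched as a simple filtering step. The records, the `confirmed` flag, and the symptom-score cut-off are all hypothetical; the point is only that the same analysis is repeated under two case definitions.

```python
# Hypothetical sketch: counting cases under a narrow vs a broad
# outcome definition. Records and the cut-off are illustrative.

records = [
    {"score": 3, "confirmed": True},
    {"score": 2, "confirmed": False},
    {"score": 5, "confirmed": True},
    {"score": 1, "confirmed": False},
    {"score": 4, "confirmed": False},
]

# Narrow definition: laboratory-confirmed cases only.
narrow_cases = [r for r in records if r["confirmed"]]
# Broad definition: confirmed OR symptom score at or above a cut-off.
broad_cases = [r for r in records if r["confirmed"] or r["score"] >= 4]

print(f"Narrow definition: {len(narrow_cases)}/{len(records)} cases")
print(f"Broad definition:  {len(broad_cases)}/{len(records)} cases")
```

If downstream results (prevalence, associations, index values) look similar under both definitions, the findings are robust to the choice of definition; a large divergence means the conclusions hinge on it.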
See below for another good starting point from an epidemiology perspective. The first paper is a good review of global uncertainty and sensitivity analysis methods, and code for the sensitivity analyses is available. You can go to the last author's site to get the paper and/or more information on the code implementing the sensitivity analysis. I hope this helps.
Marino S, Hogue IB, Ray CJ, Kirschner DE. A methodology for performing global uncertainty and sensitivity analysis in systems biology. Journal of Theoretical Biology 2008; 254: 178–196.
Greenland S. Basic methods for sensitivity analysis of biases. International Journal of Epidemiology 1996; 25: 1107–1116.
Background Most discussions of statistical methods focus on accounting for measured confounders and random errors in the data-generating process. In observational epidemiology, however, controllable confounding and random error are sometimes only a fraction of the total error, and are rarely if ever the only important source of uncertainty. Potential biases due to unmeasured confounders, classification errors, and selection bias need to be addressed in any thorough discussion of study results.
Methods This paper reviews basic methods for examining the sensitivity of study results to biases, with a focus on methods that can be implemented without computer programming.
Conclusion Sensitivity analysis is helpful in obtaining a realistic picture of the potential impact of biases.
Perhaps you can write to Professor Alberto Osella in Italy: [email protected] at the Istituto Saverio de Bellis. He has been working with sensitivity analyses in diet-cancer case-control studies.