In my master's thesis, I proposed a set of Human-Computer Interaction (HCI) guidelines for inclusive design focused on users with autism. A challenging aspect of the research was evaluating the guidelines' effectiveness, since I couldn't find a well-established method, tool, technique, or framework for this task. I decided to run a pilot evaluation through a qualitative survey, and then performed a second qualitative evaluation by adapting the Level of Evidence and Strength of Recommendations (or Strength of Evidence) methods used in healthcare papers [1-5].

Is there a robust, well-established method in HCI for evaluating a proposed set of guidelines? How can one ensure the effectiveness of new recommendations?

References:

[1] BRODERICK, J. P. et al. Guidelines for the management of spontaneous intracerebral hemorrhage: a statement for healthcare professionals from a special writing group of the Stroke Council, American Heart Association. Stroke, v. 30, p. 905-915, 1999.

[2] GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ: British Medical Journal, v. 328, n. 7454, p. 1490, 2004.

[3] LOBIONDO-WOOD, G.; HABER, J. Nursing research: methods and critical appraisal for evidence-based practice. 7th ed. St. Louis, MO: Mosby Elsevier, 2010.

[4] NKF, National Kidney Foundation. KDOQI Clinical Practice Guidelines and Clinical Practice Recommendations for Diabetes and Chronic Kidney Disease. 2007. https://www2.kidney.org/professionals/KDOQI/guideline_diabetes/appendix2.htm

[5] SIMON, S. Special guidelines for overviews and meta-analyses. 2010. http://www.pmean.com/12a/journal/meta-analysis.asp
