07 September 2016

I want to select the best measure of agreement among three raters. 

My team's research outline is as follows.

  • We want to assess the quality of about 100 health websites using DISCERN, a 16-item assessment instrument.
  • Three raters will apply DISCERN independently to the websites.
  • For each item, the raters choose a category on an ordinal 4-point Likert-type scale.
  • I would like agreement to be reported by website or by DISCERN item.
  • What is the best measure of agreement for this study?

    Some recommended Fleiss' kappa, others the ICC; both are available in Stata. But I have not found a reference I can rely on for selecting the best measure.
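    For what it's worth, Fleiss' kappa is simple enough to compute directly, which can help in understanding what it measures. Below is a minimal Python sketch (the ratings are hypothetical example data, not from the study); `counts[i][j]` holds how many of the three raters assigned website `i` to category `j`.

    ```python
    # Fleiss' kappa for N subjects rated by m raters into k categories.
    # counts[i][j] = number of raters assigning subject i to category j.

    def fleiss_kappa(counts):
        N = len(counts)            # number of subjects (websites)
        m = sum(counts[0])         # raters per subject
        k = len(counts[0])         # number of categories
        # overall proportion of assignments falling in each category
        p = [sum(row[j] for row in counts) / (N * m) for j in range(k)]
        # per-subject observed agreement among the m raters
        P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
        P_bar = sum(P_i) / N               # mean observed agreement
        P_e = sum(pj * pj for pj in p)     # expected agreement by chance
        return (P_bar - P_e) / (1 - P_e)

    # Hypothetical example: 3 raters, 5 websites, 4-point scale
    counts = [
        [0, 0, 3, 0],   # all three raters chose category 3
        [0, 1, 2, 0],
        [0, 0, 1, 2],
        [3, 0, 0, 0],
        [0, 2, 1, 0],
    ]
    print(round(fleiss_kappa(counts), 3))  # → 0.416
    ```

    Note that Fleiss' kappa treats the 4 categories as nominal (any disagreement counts equally), whereas an ICC treats the scale as interval-like and penalizes distant disagreements more, which is one practical criterion for choosing between them on ordinal data.
    
    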

    Thank you in advance.
