Hi!

I hope someone can help me with these questions.

In my research, I evaluated a screening instrument for language disorder in children. 105 children were both screened with the instrument and clinically examined by me and a colleague, in order to calculate sensitivity, specificity, PPV and NPV.
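
For concreteness, here is a minimal Python sketch of how these four measures follow from the 2x2 table of screening result against clinical diagnosis. The counts are made up for illustration only (they are not my actual results):

```python
# Hypothetical counts for illustration (not the real study results),
# chosen only so that they sum to 105 children.
tp, fp, fn, tn = 18, 7, 4, 76

sensitivity = tp / (tp + fn)   # screen-positive among children with a disorder
specificity = tn / (tn + fp)   # screen-negative among age-appropriate children
ppv = tp / (tp + fp)           # disorder among the screen-positives
npv = tn / (tn + fn)           # age-appropriate among the screen-negatives

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```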

We were two speech therapists who tested the children (roughly half of the children each). To estimate agreement (interrater reliability), we tested 10 children together at the same time and then compared our final assessments (4 outcome/severity categories that lead to the conclusion of whether the child is assessed as having a language disorder or as age appropriate).

We calculated both Cohen's kappa and the ICC, which give us kappa = .71 (good agreement) vs. ICC = .90 (excellent).
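
For reference, here is a minimal Python sketch of one way to compute both statistics on the same ratings (the ratings below are hypothetical, not our real data; the helpers assumed are sklearn's cohen_kappa_score and pingouin's intraclass_corr):

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score
import pingouin as pg

# Hypothetical severity ratings (1-4) for 10 children by the two raters.
rater_a = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4]
rater_b = [1, 2, 3, 3, 4, 1, 2, 3, 4, 4]

# Unweighted kappa treats the 4 categories as purely nominal:
# every disagreement counts the same, however far apart the categories are.
kappa = cohen_kappa_score(rater_a, rater_b)

# Quadratically weighted kappa penalises near-misses less.
kappa_w = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

# The ICC treats the codes 1-4 as scores on a continuous scale.
long = pd.DataFrame({
    "child": list(range(10)) * 2,
    "rater": ["A"] * 10 + ["B"] * 10,
    "score": rater_a + rater_b,
})
icc = pg.intraclass_corr(data=long, targets="child",
                         raters="rater", ratings="score")

print(f"kappa = {kappa:.2f}, weighted kappa = {kappa_w:.2f}")
print(icc[["Type", "ICC"]])
```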

My questions are:

- Why are the values so different depending on whether kappa or the ICC is used?

- Is there a minimum number of cases for estimating interrater reliability? In my case it is 10.

- Which one is appropriate for my data, and for what reason/argument?
