Hi,

I have four coders who coded some classroom observation data (all of the codes are categorical), and I need to run an inter-rater reliability check across all four coders.

It seems that Cohen's kappa can only be used with two raters, and that the ICC (intraclass correlation coefficient) is only appropriate for numerical data. Is that right?
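For concreteness, here is a minimal sketch (Python, with hypothetical observation codes) of what Cohen's kappa looks like for a single pair of coders; scikit-learn's cohen_kappa_score takes exactly two sets of labels at a time, which is what makes it a two-rater statistic:

```python
# Minimal sketch: Cohen's kappa for one pair of coders (coder 1 vs. coder 2),
# using hypothetical category labels. The function compares exactly two raters.
from sklearn.metrics import cohen_kappa_score

coder1 = ["on-task", "off-task", "on-task", "disruptive", "on-task"]   # hypothetical codes
coder2 = ["on-task", "on-task",  "on-task", "disruptive", "off-task"]  # hypothetical codes

kappa_12 = cohen_kappa_score(coder1, coder2)
print(f"Cohen's kappa (coders 1 & 2): {kappa_12:.2f}")
```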

It is possible to calculate percent agreement using the method described here (https://www.statisticshowto.com/inter-rater-reliability/), but that seems less rigorous than the other methods, since simple percent agreement does not correct for chance agreement?
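In case it helps to see the calculation spelled out, here is a minimal sketch of that percent-agreement idea for four coders, assuming the pairwise version (for each observation, count how many of the six coder pairs assigned the same code, then average); the ratings below are hypothetical:

```python
# Minimal sketch: mean pairwise percent agreement across four coders.
# Assumption: for each observation, agreement = (agreeing rater pairs) / (all rater pairs).
from itertools import combinations

# Hypothetical ratings: one row per observation, one column per coder.
ratings = [
    ["on-task",    "on-task",  "on-task",    "off-task"],
    ["off-task",   "off-task", "off-task",   "off-task"],
    ["disruptive", "on-task",  "disruptive", "disruptive"],
]

per_item = []
for row in ratings:
    pairs = list(combinations(row, 2))          # 6 pairs for 4 coders
    agree = sum(a == b for a, b in pairs)       # pairs that assigned the same code
    per_item.append(agree / len(pairs))

print(f"Mean pairwise percent agreement: {sum(per_item) / len(per_item):.1%}")
```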

Thank you!

Shengnan
