Hello,

McHugh (2012) highlights some guidelines for Cohen's kappa (quoted below), but I wanted to pose a question to others. What level of kappa do you usually see in published work? Or, if you were a reviewer, what level of kappa would be your cut-off? I have some graduate students coding data, and the kappa statistics range from .63 to .80. If you were reviewing, what would you think? Your thoughts and feedback would be appreciated. Thanks!

McHugh (2012)

"Cohen suggested the Kappa result be interpreted as follows: values ≤ 0 as indicating no agreement and 0.01–0.20 as none to slight, 0.21–0.40 as fair, 0.41– 0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1.00 as almost perfect agreement."
