I am currently working on my thesis, part of which concerns validating the assessment of learners' speaking proficiency using CTCM and MFRM.

I have studied papers on this topic and understand their designs fairly well. The issue is that my study involves 15 raters, each of whom assesses only 10 to 20 learners. To address this, I decided to have 2 additional raters rescore all the performances and to use these two raters' scores as the data for the model. Is there any way I can still include the primary raters in the analysis and examine their rating behavior and effects, such as bias or severity?
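As background to the design problem above: MFRM can only estimate all raters' severities on a common scale when the rating design is connected, i.e. every rater is linked to every other rater through shared learners. The sketch below (hypothetical counts, not data from the actual study) checks that property and illustrates how two raters who rescore everyone link an otherwise disconnected design, so the primary raters could in principle be kept in the model.

```python
from collections import defaultdict

def is_connected(ratings):
    """Check whether a rater-by-learner design forms one connected
    network, a precondition for placing all rater severities on a
    common scale in MFRM. `ratings` is a list of (rater, learner)
    pairs indicating who scored whom."""
    graph = defaultdict(set)
    for rater, learner in ratings:
        # Bipartite graph: raters on one side, learners on the other.
        graph[("r", rater)].add(("l", learner))
        graph[("l", learner)].add(("r", rater))
    if not graph:
        return True
    nodes = list(graph)
    seen = {nodes[0]}
    stack = [nodes[0]]
    while stack:  # depth-first traversal from an arbitrary node
        for nbr in graph[stack.pop()]:
            if nbr not in seen:
                seen.add(nbr)
                stack.append(nbr)
    return len(seen) == len(graph)

# Hypothetical design: 15 primary raters each score a disjoint block
# of 10 learners -> 15 separate subnetworks, so severities are not
# comparable across raters.
primary = [(r, r * 10 + k) for r in range(15) for k in range(10)]
print(is_connected(primary))  # False

# Two extra raters (ids 15 and 16) rescore all 150 learners,
# linking every block into one connected network.
linked = primary + [(x, n) for x in (15, 16) for n in range(150)]
print(is_connected(linked))  # True
```

In other words, because the two rescoring raters overlap with every primary rater's learners, the full 17-rater data set forms one connected network, which is what an MFRM analysis needs in order to report each primary rater's severity and bias on the same scale.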
