I am doing research and need some advice; I would appreciate any help. We (two coders) read two articles: the first is the original article, and the second challenges it. In the first article we identify claims (arguments), and in the second we identify counter-arguments; we then examine how the counter-arguments challenge the original arguments (for example, a counter-argument may say that a claim does not hold under certain conditions). I do not know how to compute inter-rater reliability for the following scenario:

Coder 1 finds arguments A1, B1, and C1 in the first article and counter-arguments A2, B2, and C2 in the second article, and she explains how the counter-arguments challenge the original arguments.

Coder 2 finds arguments D1, E1, and F1 in the first article and counter-arguments D2, E2, and F2 in the second article, and she explains how the counter-arguments challenge the original arguments.

Coders 1 and 2 agree on the challenges, but they did not find the same claims in the articles.

Or we may have cases where the coders agree on the arguments but not on the challenges, or cases where they agree on the challenges but not on the arguments, and so on.

I am confused about how to calculate inter-rater reliability in these cases. For example, I could calculate inter-rater reliability based only on agreement on the challenges, but that does not make sense, since the coders' original arguments and counter-arguments could be different.
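For the argument-identification step itself, the only measure I can think of is a simple positive-agreement (Dice) index over the units, computed after we manually decide which of the two coders' arguments refer to the same claim. A minimal sketch of what I mean (the argument labels and the matching below are invented for illustration):

```python
# Positive agreement (Dice index) on which arguments were identified.
# All labels and matches below are invented for illustration.

coder1_args = {"A1", "B1", "C1"}   # units Coder 1 marked in the first article
coder2_args = {"D1", "E1", "F1"}   # units Coder 2 marked in the first article

# Manually judged: which of Coder 1's units refers to "the same claim"
# as which of Coder 2's units (hypothetical mapping).
matches = {("A1", "D1"), ("B1", "E1")}

# Dice / positive agreement: 2 * matched / (total units marked by both coders)
dice = 2 * len(matches) / (len(coder1_args) + len(coder2_args))
print(f"positive agreement on argument identification: {dice:.2f}")  # 0.67
```

My worry is that this kind of positive agreement is not chance-corrected, so I am not sure it counts as a proper reliability coefficient.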

If I calculate inter-rater reliability for the arguments, then what if the challenges are different? Or do you recommend that the coders keep only the arguments and counter-arguments that match, and then calculate inter-rater reliability for the challenges they coded for each matched argument/counter-argument pair? Or can we report three separate inter-rater reliability values in the paper?
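To make the matched-pairs option concrete, this is roughly the calculation I have in mind, assuming we first keep only the pairs both coders found and that each challenge is coded into a small fixed set of types (the type names and codes below are invented; the kappa comes from scikit-learn's cohen_kappa_score):

```python
from sklearn.metrics import cohen_kappa_score

# Challenge type assigned by each coder to the same matched
# argument/counter-argument pair (hypothetical codes).
coder1_codes = ["boundary-condition", "counter-evidence", "boundary-condition"]
coder2_codes = ["boundary-condition", "counter-evidence", "methodology"]

kappa = cohen_kappa_score(coder1_codes, coder2_codes)
print(f"Cohen's kappa on challenge types: {kappa:.2f}")  # 0.50
```

Of course, this only measures agreement conditional on the pairs both coders found, so it would presumably have to be reported alongside the agreement on identifying the arguments in the first place.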

Do you know of any published paper that addresses the same problem?

I appreciate any help.
