I plan to screen titles and abstracts for over 400 articles, and I have a team of research assistants (about five, including myself) who can help with the screening. My plan is to act as reviewer 1 myself and have the remaining four divide the articles among themselves, collectively acting as reviewer 2, so that I can conduct an inter-rater reliability assessment (kappa and percent agreement) between myself and the four of them treated as one reviewer. Does this make sense given the purpose of Cohen's/Fleiss' kappa and IRR, since these statistics are typically designed to evaluate agreement between two independent reviewers?
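To make the planned two-rater comparison concrete, here is a minimal Python sketch of the statistics mentioned above (Cohen's kappa and percent agreement), treating the four assistants' pooled decisions as a single "reviewer 2". The decision arrays are hypothetical placeholders, not real screening data; in practice the same calculation could also be run per assistant to check whether pooling hides differences between them.

```python
# Minimal sketch: Cohen's kappa and percent agreement between reviewer 1
# and the pooled "reviewer 2" decisions. Data below are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 1 = include, 0 = exclude; one entry per screened article, same order for both raters
reviewer_1 = np.array([1, 0, 0, 1, 1, 0, 0, 1])  # my decisions (placeholder values)
reviewer_2 = np.array([1, 0, 1, 1, 0, 0, 0, 1])  # pooled decisions of the 4 RAs (placeholder values)

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
percent_agreement = np.mean(reviewer_1 == reviewer_2)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Percent agreement: {percent_agreement:.1%}")
```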
