There were 8 interviews, each with 9 questions, and each transcript was coded independently and concurrently by two coders (same coding scheme, 9 themes). Should I demonstrate inter-coder reliability for each interview, each theme, or each question?

The basis of the research is that I developed a questionnaire to assess disaster preparedness in a population with chronic medical disease. I then performed 8 cognitive interviews that examined each question in the questionnaire, essentially assessing whether what I think I'm asking is the same thing the participant thinks I'm asking. These interviews were transcribed, and a coding scheme with 9 codes/themes was developed.

Coding Scheme :

A) Would like question to be elaborated on, and/or more information to make questions easier to answer

B) Asks for clarification

C) Gives improper, incorrect or inadequate answer

D) Has a different understanding of the question than the one intended

etc. 

Then two coders coded the interviews using the above coding scheme. I'm trying to demonstrate inter-coder agreement, but I'm not sure at what level I should assess it: by question, by interview, or by theme. A sketch of one option I've considered is below.
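To make one of the options concrete, here is a minimal sketch (Python) of what assessing agreement by theme might look like. It assumes each coder recorded a binary presence/absence judgment for each theme on each interview-question unit; the sklearn kappa function is real, but the variable names and the random data below are hypothetical placeholders standing in for the coders' actual codebooks.

# Sketch: per-theme Cohen's kappa, treating each interview-question
# pair as the coding unit (8 interviews x 9 questions = 72 units).
# The random matrices are placeholders for each coder's 0/1 decisions.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_units = 8 * 9              # 8 interviews x 9 questions
themes = list("ABCDEFGHI")   # the 9 codes in the scheme

# Rows = units, columns = themes; 1 = theme applied, 0 = not applied.
coder1 = rng.integers(0, 2, size=(n_units, len(themes)))
coder2 = rng.integers(0, 2, size=(n_units, len(themes)))

for j, theme in enumerate(themes):
    kappa = cohen_kappa_score(coder1[:, j], coder2[:, j])
    print(f"Theme {theme}: kappa = {kappa:.2f}")

This treats each interview-question pair as the unit of analysis, so each theme's kappa is based on 72 paired judgments; but I could instead pool units by interview or by question, which is exactly what I'm unsure about.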

Thank you for any help. 
