We interviewed couples and I would like to determine "how much" they agree or disagree on the themes that emerged from the data. Does anyone have an idea how this can be done?
The purpose of quantifying things in qualitative data isn't necessarily to do statistical testing or claim generalizability. I have argued for it as a systematic way to find patterns that might not be as easily detected just from reading and similar forms of "immersion." For this purpose, descriptive statistics like percentages are basically all you need.
In the articles below, my approach is basically two steps. First, use systematic counts of codes to find patterns. Second, return to the original qualitative data to explore and interpret the sources of those patterns.
For example, let's imagine that Catherine's counts show systematically higher levels of agreement about some topics than others. That pattern tells us something about "what" is in the data, but a next step would be to try to understand more about "why" that pattern occurs. Another example might be that some kinds of couples show higher levels of agreement than others. Etc.
FYI the first of these papers describes the method in general, while the second paper applies it to a specific case.
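The counting step described above can be sketched very simply. This is a minimal illustration, not the method from the papers: it assumes a hypothetical flat representation in which each coded segment of a transcript is reduced to its theme label, and then reports descriptive percentages per theme.

```python
from collections import Counter

def theme_percentages(coded_segments):
    """Count how often each theme code appears across coded segments
    and convert the counts to percentages of all coding.

    coded_segments: a list of theme labels, one per coded segment
    (a hypothetical flat export of a coded transcript).
    """
    counts = Counter(coded_segments)
    total = sum(counts.values())
    return {theme: 100.0 * n / total for theme, n in counts.items()}

# Example: segments from one couple's transcripts, coded by theme
segments = ["finances", "finances", "childcare", "finances", "childcare", "intimacy"]
pcts = theme_percentages(segments)  # e.g. "finances" accounts for 50% of coding
```

Comparing these per-theme percentages across couples, or between the two members of a couple, is one systematic way to surface patterns worth returning to the raw data for.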
Depending on your data, you might try cultural consensus analysis or multi-dimensional scaling. You may also want to look at techniques offered by Dr. Russ Bernard, Dr. Steve Borgatti, Dr. Susan Weller, and Dr. Jeff Johnson. They offer a number of mixed method approaches for quantifying qualitative data.
I believe NVivo can also give you a proportion of coding equivalence if, for instance, you add new codes for each theme that are specifically aimed at comparing two separate coders.
Really good question for anyone who uses software for qualitative analysis. Tom and Darren, could you please be more specific? Your answers are interesting. I think Atlas.ti gives you the opportunity to add special codes that are understood by SPSS; they start with $, but the number after it is still a matter of the researcher's interpretation.
I must confess I aim to do this but haven't yet. I thought there should be a way of using the function where NVivo gives the amount of co-occurrence of coding between two separate codes, and applying the same principle to the co-occurrence of coding of the same theme by two coders. I would take their coding of a theme and my coding of the same theme, copy each to a new node (for example, "code 1 coder 1" and "code 1 coder 2"), and then run a query to look at the proportion of co-occurrence between these two nodes. There may be a much easier way that I haven't explored yet, and I'm not sure whether this plan will work, but that is how I was planning to do it.
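The co-occurrence proportion described above can also be approximated outside any particular software package. This is a rough sketch under one assumption: that each coder's application of a theme can be exported as character ranges of the transcript, so agreement becomes the overlap between the two coders' coded character sets.

```python
def coding_overlap(spans_a, spans_b):
    """Proportion of coded text that both coders tagged with the same theme.

    spans_a, spans_b: lists of (start, end) character ranges coded by each
    coder (a hypothetical representation of exported coding stripes).
    Returns intersection / union of the two coded character sets.
    """
    chars_a = set()
    for start, end in spans_a:
        chars_a.update(range(start, end))
    chars_b = set()
    for start, end in spans_b:
        chars_b.update(range(start, end))
    if not chars_a and not chars_b:
        return 1.0  # neither coder applied the theme; treat as full agreement
    return len(chars_a & chars_b) / len(chars_a | chars_b)

# Coder 1 coded characters 0-100 and 200-250; coder 2 coded 50-150
overlap = coding_overlap([(0, 100), (200, 250)], [(50, 150)])  # 0.25
```

An intersection-over-union of 1.0 means the two coders highlighted exactly the same text for the theme; values near 0 mean they rarely coded the same passages.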
Of course this depends on the type of data, but, for instance, you can code the interview dichotomously (agree/disagree) and conduct a cultural consensus analysis, and/or an MDS plot to visualize agreement (or variation) within the group (i.e., examining intra-cultural variation). If you frame the interview informants as a network, you could compare dichotomous responses in an n-by-n matrix (i.e., informants x informants, with the same informants in the columns as in the rows), which would allow you to use an MDS plot function to show comparisons of agreement. The scholars I mentioned previously are far better versed in this than I, but conceptually, I believe I am expressing a summation of the method. In the informant x informant matrix, you would be comparing each informant to all other individual informants in a binary way (0 = disagree, 1 = agree). This would mathematically and visually depict who agrees with whom on a given question/topic.
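The informant-by-informant matrix just described can be sketched as follows. This is an illustrative construction only (the informant names and answers are invented): each cell holds the proportion of questions on which two informants gave the same binary answer, and the resulting matrix is what you would then pass to an MDS routine for plotting.

```python
def agreement_matrix(responses):
    """Build an informant-by-informant agreement matrix from binary answers.

    responses: dict mapping informant name -> list of 0/1 answers
    (0 = disagree, 1 = agree) to the same ordered questions.
    Each cell is the proportion of questions on which the two informants
    match; the matrix can then be fed to an MDS routine for visualization.
    """
    names = sorted(responses)
    matrix = {}
    for a in names:
        for b in names:
            matches = sum(x == y for x, y in zip(responses[a], responses[b]))
            matrix[(a, b)] = matches / len(responses[a])
    return matrix

# Hypothetical informants answering four agree/disagree questions
answers = {
    "wife_1": [1, 1, 0, 1],
    "husband_1": [1, 0, 0, 1],
    "wife_2": [0, 0, 1, 1],
}
m = agreement_matrix(answers)  # m[("wife_1", "husband_1")] is 0.75
```

The diagonal is always 1.0 (everyone agrees with themselves), and high off-diagonal values flag pairs, such as partners in a couple, who tend to answer alike.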
There is also a very good book, "Analyzing Qualitative Data" (Bernard & Ryan, 2009), that may be of assistance in offering various ways to quantify qualitative data. Additionally, Bernard's book "Research Methods in Anthropology: Qualitative and Quantitative Approaches" may also offer some data analysis options.
Additionally, using one of the many qualitative data analysis software offerings (e.g., NVivo, MAXQDA) would also provide a quantitative output. I think MAXQDA would be useful in that you can code the responses as you wish and conduct analysis on codes of your choosing. Folks like Dr. Amber Wutich may be able to give you some insights into MAXQDA.
First off, I'm assuming that you interviewed the two people in the couple separately. If you did interview them together, then you are probably getting into the realm of Discourse Analysis.
For separate interviews, you might try to develop an approach based on inter-rater reliability, where you would treat each member of the couple as a "coder" of the same data and then assess the agreement in their coding. Of course, this assumes that the data can be "unitized" so that you have a direct comparison of what each person said about a specific codable topic.
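One common inter-rater statistic that fits this framing is Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch, assuming the unitizing has already been done so that each partner's stance on each shared topic is reduced to a category label (the labels below are invented for illustration):

```python
def percent_agreement_and_kappa(coder1, coder2):
    """Treat each partner as a 'coder' of the same units and compute
    raw percent agreement plus Cohen's kappa (chance-corrected agreement).

    coder1, coder2: equal-length lists of category labels, one per
    unitized topic both partners commented on.
    """
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    categories = set(coder1) | set(coder2)
    # Expected chance agreement from each coder's marginal label frequencies
    expected = sum(
        (coder1.count(c) / n) * (coder2.count(c) / n) for c in categories
    )
    if expected == 1.0:
        return observed, 1.0  # degenerate case: both used a single category
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical stances of one couple on six unitized topics
wife = ["agree", "agree", "disagree", "agree", "disagree", "agree"]
husband = ["agree", "disagree", "disagree", "agree", "agree", "agree"]
po, k = percent_agreement_and_kappa(wife, husband)  # po = 4/6, kappa = 0.25
```

Kappa per couple then gives a single agreement score that could be compared across couples or related to other variables.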
NVivo would help in this regard. Manually, you can read over the transcribed interview text iteratively to determine the number of interviewees who give a common answer to a particular question, and then draw a judgement on the level of agreement between the interviews. For instance, if you interview 20 people and 15 of the participants say essentially the same thing in response to a question, then the level of agreement is 75%, compared to 25% disagreement. You continue this approach for the rest of the questions.
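The 15-out-of-20 arithmetic above generalizes to a one-line tally: for each question, the level of agreement is the share of respondents giving the most common (modal) answer. A tiny sketch, with invented answers:

```python
from collections import Counter

def modal_agreement(answers):
    """Share of respondents who gave the most common answer to one question."""
    counts = Counter(answers)
    return counts.most_common(1)[0][1] / len(answers)

# 15 of 20 participants give the same answer -> 75% agreement, 25% disagreement
answers = ["yes"] * 15 + ["no"] * 5
level = modal_agreement(answers)  # 0.75
```

Running this per question gives the agreement/disagreement split the post describes, without any manual recounting.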
Some questions you might need to address before the one posed here are:
Why do you want to quantify levels of agreement when comparing qualitative data? The answer to this might point researchers to other methodologies.
and
If you manage to quantify this qualitative data, what conclusions can you draw? The answer to this is affected by what method of sampling has been used for selecting the people for interviewing. When the method of selection is purposive or convenience, as it often is, the weight given to, and the generalisations that can be made from, numerical summaries are very limited. This needs to be understood before embarking on the task.
I don't know why you want to opt for quantifying your qualitative data. However, you can look for the common themes that emerge from your analysis, categorise the themes, and present them in a graph (bar chart, pie chart, or any graph of your choice). Better still, find a good qualitative researcher and seek their opinion.
If you read Creswell and Plano Clark (2011) or Saunders et al. (2009), you will understand that quantification of qualitative data is permitted, and that the results can be compared with quantitative results for reliability and generalisation.
Hi, I agree with Sally and Nancy that if you are employing a qualitative approach to analyze data, it is better to compare different patterns or meanings between different interviews than to quantify their information. The question is: why not get the best out of qualitative methodology instead of trying to find solutions in a quantitative approach? I believe the guidance on triangulation and cross-checking found in Silverman (2011), Miles and Huberman (1994), and Patton (2002) can help.
I may be being naive, but wouldn't it be better to just go back to the interviewees and ask them how much they agree with the emerging themes? Instead of quantifying, member checking might give you better insight into the data. You can probably ask each person in the couple separately how much they agree with the emerging themes and compare the answers across all the couples in the data. I think this builds better credibility and validity for the data. I hope my answer offers something, but I am glad to read the responses of the other authors who are experts in qualitative methodology. It gives me something to read and explore, as I am also in the process of designing my own study that involves dyads.
Quantifying data from a purposive or convenience sample of interviews gives very limited information. All you can say from such data is that a certain percentage of the sample thought a particular thing. You cannot generalise numerically from such a sample. To gather meaningful quantitative data, a new representative random sample of the population of interest is required, so that numerical generalisations can then be made from the data.
I do not believe that you can say anything about what people were *thinking*. Can you read their minds? If not - all you have is *descriptors* not *thoughts*. And this includes brain scans. You may be able to determine that people *are thinking* but not *what* they are thinking. For the question at hand: Yes, it is possible to quantify any kind of data and the relevance of the research results is no different from any other quantitative data analysis. The data can be more or less rigorously processed, what it is useful for, or in what way it is relevant to any particular question will need to be justified in the same way as any other quantitative analysis. If the research is non-positivist the justification of the relevance of the findings for any specific purpose will have to be justified from within the scientific and philosophical paradigm that you are pursuing.
To say that quantifying data from a purposive or convenience sample of interviews gives very limited information is rather meaningless from my point of view. Any and all quantification of data could be described in exactly the same way - so what exactly would be the difference that a researcher would have to take into consideration? If none - what is the point of the comment? No serious academic today should promote a belief that size of dataset determines relevance. Or that rigour of inquiry determines relevance all by itself. That would be to have missed out on approximately one hundred years of Philosophy of Science.
The never-ending problem... We can quantify anything; the issue is whether that quantification relates adequately to what is being studied.
Agreement is about topics of meaning and their descriptive amplitude. Why not describe those topics and amplitudes, perhaps relating them to the ways of seeing of those who agree or disagree? Perhaps you will find common ways or tendencies. These will describe, with a better degree of generalisation, the phenomenon of agreeing. This is also comparable with other agreements that might be described in other places with other people.
Many thanks to everyone for your answers! I must say that I concur with Mr. Morgan's comment, as my goal is not to do statistical testing on our data but to help determine how we should gear subsequent research.
The following might help contextualize my question. We interviewed both members of each couple (each was interviewed separately) on what they perceived to have caused the woman's postpartum depression. I would first like to see if there is a correlation between the women's EPDS scores (used to screen for PPD) and the level of agreement between partners' perceptions (the hypothesis being that men who share their partner's perception may tend to provide more effective help, or at least help that is perceived as more in line with the woman's needs). I would then like to see if there is a correlation between the length of paternal leave and the level of agreement between partners' perceptions (the hypothesis being that men staying at home longer might tend to perceive the situation more as their partners do).
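Once each couple has a single agreement score (however it is computed), the two correlations described above are straightforward to check. A minimal sketch using Pearson's r with the standard library only; all the per-couple numbers below are invented placeholders, not data from the study:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-couple values: agreement proportion vs. EPDS score
agreement = [0.9, 0.7, 0.5, 0.3, 0.2]
epds = [5, 8, 12, 15, 18]
r = pearson_r(agreement, epds)  # strongly negative in this invented example
```

With the small, purposive samples typical of interview studies, a rank-based alternative such as Spearman's rho may be more defensible than Pearson's r, and either should be treated as descriptive rather than inferential here.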
Quantifying qualitative data will always be questioned, as the sample size would not be adequate to answer the question, convenience sampling would introduce sampling bias, and the study would not have adequate power to answer the question. Some level of quantification could only take the form of percentages.