The method used to collect data will depend very much on your choice of methodology. If you are a phenomenologist, then it has to be unstructured in-depth interviews. If you use a generic qualitative approach (the most common in health research), semi-structured interviews are normally the method of choice.
Using software to analyse qualitative data is not always advisable, while on other occasions it may be a lifesaver. Remember, you need to learn how to use the software, and this takes time and effort beyond what is needed to learn about the analysis itself. The software will not do the analysis for you; the thinking is still your job.
Whether it is worth learning to use qualitative analysis software depends on the presence of one of two factors. If you need to analyse a lot of data, say more than 30 substantial interviews, then the software can be helpful. Likewise, if you will need to use the software again after completing your PhD, then it may be worth the investment. In nearly all other cases I normally advise my students against it.
A very broad question that is difficult to answer without specific context. Yes - I am very much a partisan of qualitative analysis - and experienced in a wide range of techniques and qualitative methodologies. I can answer one of your questions quite easily. NVivo is my software tool of choice - and is probably the most utilised out there. That said, I also use 'manual' techniques where I think that software may 'get in the way' of what I would like to uncover.
As to 'what is best practice', that depends very much on methodology. 'General' content/thematic analysis works for most qualitative designs, but other methodologies usually require something more specific, e.g. grounded theory and the constant comparative method, phenomenology and Giorgi's step-wise technique (where it can be applied), Delphi techniques for creating statements, etc.
My preference for collecting qualitative data (although it still depends on the methodology) is audio-taped, semi-structured, individual, face-to-face interviews in as 'natural' an environment as possible, using field notes, funnelling and memo-ing where I can. That said, I also like focus groups where I can employ them.
You might consider using Consensual Qualitative Research (Hill et al., 2011). It analyzes semi-structured interview data using a group consensual approach. CQR is not software based; rather, the goal is to methodically capture the nuances of social science phenomena through a group of 2-4 judges working in a review-and-discussion format. It works well if you are attempting to understand complex phenomena, such as psychotherapy. Generally, the optimal sample size for CQR analyses is about 15; however, it could be adapted to working with an N of 30.
I think it is really interesting that most qualitative researchers nowadays seem to be almost afraid of their data. Everyone tries to be as positivist as possible: writing down or recording everything, trusting one computer program or another, and using categories that come out of someone else's theory.
Actually, I used some interviews for my PhD, and was asked in my defense how it was that I had only interview protocols and not complete transcriptions (because I didn't have access to the equipment). As if a transcription could not be completely faked. And the big minds of our disciplines did not do anything like that either; does anyone know of transcriptions or ATLAS.ti codings in Foucault, Bourdieu, Wallerstein and so on?
My advice: trust your data. Try to stick to it and derive your theories or hypotheses from the data. Well, something like a post-positivist grounded theory, I guess...
Foucault, Bourdieu, Wallerstein are all basically theorists -- although Bourdieu did begin by doing some anthropological work. So, if you want to collect and analyze data, that is rather different from doing whatever stimulates the process of thinking "big" thoughts.
With 30 interviews, I would recommend using some kind of QDA software for the simple reason of keeping track of your data. In particular, the two basic functions that these packages provide are marking and retrieving your data. Yes, you could mark up transcripts by hand, or listen to the recordings repeatedly, but it becomes more and more difficult to retrieve specific elements of the data once you get beyond a handful of interviews.
So, I don't think that using QDA software particularly enhances the data analysis process in terms of the results it produces, but it definitely can make that process more manageable.
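To make the "mark and retrieve" point concrete, here is a toy Python sketch of those two basic functions. The class, codes and excerpts are all invented for illustration; real packages such as NVivo or ATLAS.ti of course do far more (queries, memos, audit trails), but the underlying data structure is essentially this:

```python
# Toy illustration of the two basic QDA functions: "mark" and "retrieve".
# Purely hypothetical -- not a replacement for a real QDA package.
from collections import defaultdict


class CodeBook:
    def __init__(self):
        # code -> list of (interview_id, excerpt) pairs
        self._segments = defaultdict(list)

    def mark(self, code, interview_id, excerpt):
        """Attach a code to a segment of transcript text."""
        self._segments[code].append((interview_id, excerpt))

    def retrieve(self, code):
        """Pull back every segment marked with a given code."""
        return self._segments.get(code, [])


book = CodeBook()
book.mark("trust", "P01", "I only felt safe once the nurse explained the procedure.")
book.mark("trust", "P17", "It took months before I believed what the doctors said.")
book.mark("access", "P03", "The clinic is two bus rides away.")

# Retrieving every segment coded "trust" across all interviews:
for pid, text in book.retrieve("trust"):
    print(pid, "-", text)
```

The point of the sketch is simply that, past a handful of interviews, this kind of indexed retrieval is what makes the analysis manageable, whatever tool provides it.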
Dorota, yes, indeed, it is possible to publish CQR results! A recent review showed that more than 100 studies have been published that used CQR to analyze qualitative data.
For analysis and teaching qualitative analysis you might want to see my article in Qualitative Inquiry, "A Simple Card Trick..." (Waite, 2012). Let me know what you think. Duncan
Evaluating the trustworthiness of qualitative research requires four factors: credibility, transferability, dependability and confirmability (see Lincoln & Guba, 1986, and Sandelowski, 1993). A clear audit trail for transcriptions, or whatever format the data take, should have satisfied Philip Altman's dilemma at his defense. Shame on the profs for not helping you specify this issue in your proposal at the outset.
Most qualitative research, with its small sample sizes and focused sampling, will find it difficult to achieve the requirement for transferability.
Clarity about the theoretical perspective will ensure that you are honest about your assumptions and should relate clearly to the research question(s) that you ask. Different kinds of questions demand different kinds of methodologies. So, for example, if you want to know about a process, grounded theory methodology is appropriate, although there are many flavors and you must justify your selection.
A superb discussion of how to evaluate and synthesize the extant qualitative literature on a particular phenomenon of interest is Meta-Study of Qualitative Health Research (the examples are from nursing, but I presume the strategy is the same in other disciplines), edited by Paterson, Thorne, Canam and Jillings (2001). The objectives of analyzing and synthesizing qualitative research are to develop theory, model processes and achieve transferability. These objectives are quite different from those of quantitative research, which aims to achieve a larger effect size and identify cause-and-effect relationships.
Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986(30), 73-84. doi:10.1002/ev.1427
Sandelowski, M. (1993). Rigor or rigor mortis: The problem of rigor in qualitative research revisited. Advances in Nursing Science, 16(2), 1-8.
Have a look at 'systematic analysis of qualitative data', pp. 2-3 in the attached document, for an example of how analysis of qualitative data was approached in a multi-disciplinary longitudinal study of childhood poverty. Thanks for prompting a useful discussion.
Well, the data collection method will depend on the type of data you want to collect. I use NVivo to analyze my transcribed in-depth interviews and focus group discussions. The software only aids in data organization for in-depth analysis. If your aim is not to quantify the data, then the software simply offers a systematic and focused way to carry out the analysis.
I am a medical anthropologist and I rely on qualitative methodologies in most of my studies.
I would suggest a detailed transcript analysis using MS Word, coding the transcripts and identifying themes as they emerge from the interviews. Presentation of cases can be done based on common themes or, more comprehensively, based on all themes. I do not support the use of software, considering the possible alienation from the essence of the original data; however, if needed one can always use it.
I agree with many of the contributors above - your analysis style should be created to best fit your methodology, philosophy and data: meaning, it should help to answer YOUR question.
I know of people who use NVivo in its pure form, who use it to organize their data and codes and then analyze manually from there, and others (like myself) who use regular old printed paper to conduct thematic analysis (because it fits MY project).
You must also remember that in many forms of qualitative research (constructivist or critical, mainly), the data comes from the participant, but the analysis is usually a co-construction of the participant's data, the available literature, the theoretical and philosophical background, and the expert's (your) understanding of where those all intersect. Do not be afraid to interpret, but be sure to be transparent about when and how you do so (via reflexive journaling, memos, and an audit trail of all decisions).
The "right" answer, of course, is "it depends"! Miles and Huberman's classic text on Qualitative Data Analysis has been updated recently (2014) by Johnny Saldana, who is the leading authority right now (in my opinion) on coding of data. The 2nd edition of his book on coding qualitative data (2013) is truly excellent, and I would recommend both texts. Again, the method of coding depends upon your research methods, which depend upon your methodology, which depends upon your epistemology, which depends upon your ontology! Do yourself a favour, though, and read Miles and Huberman and, if you possibly can, the most recent edition. Best of luck!
I'll be provocative and ask: What does philosophy have to do with it? In particular, I just checked the indexes for the most recent versions of both Saldana and Miles, Huberman & Saldana, and neither book has an entry for "epistemology." In other words, it appears that these experts don't believe that qualitative analysis requires you to have a set of metaphysical beliefs about the nature of reality and truth. Similarly, I have never read a research article that said anything even vaguely similar to: "I used a realist ontology as the basis for my analysis."
Instead, I would emphasize what many others have said: Your research goals should determine your analysis strategy.
And if you still feel some need for a philosophy, then I would recommend pragmatism -- since the core of pragmatism is the argument that ideas (such as analysis strategies) only have meaning in terms of their relationship to actions (such as pursuing research goals).
I'll be provocative back and say that philosophy underpins everything! If you perceive there to be only a single objective reality, and that the only reality is that which can be observed, smelled, touched and so on, then asking people's opinions on why they do things, why they think and act the way they do, and investigating the role of culture, of social structures etc., is hardly relevant, is it? There are indeed a number of pragmatic middle grounds that lie on the continuum between empiricism and constructivism, such as the critical realism approach most commonly associated with Roy Bhaskar. If you are interested in working out "why" and "how" questions, if you figure that there is more to reality than that perceived solely by the senses, if you believe in the stratified nature of reality, in uncovering "chains of causality", that social structures (for instance) might play a role in such questions, and you believe that this cannot be investigated very well using tools such as surveys (more suited to a positivist paradigm), then philosophy plays a key role, every time, in determining the methods to employ, does it not?
I found Blaikie's Approaches to Social Enquiry (2007) most useful for grappling with such issues. But then, I'm a sociologist. I would say that, wouldn't I! It does, of course, very much depend upon which discipline you come from.
Further to the point about ontology not being mentioned in papers: Porter and Ryan (1996), "Breaking the boundaries between nursing and sociology: a critical realist ethnography of the theory-practice gap", uses a realist ontology as the basis for its research, David, and indeed advocates the use of realism in health research. I totally agree with your point that ontology is rarely spoken of in papers, but this is not because it is not present, just unspoken. It is implicit in the methods employed, it could be argued.
Agree with many of the above... it depends completely on what you are seeking. I am currently using Leximancer for analysis alongside lots of preliminary good old-fashioned open coding. I do recommend Leximancer, though, as it identifies themes, related concepts and semantic relationships in the data. Fascinating and exciting.
This might appear a bit out of line, but as a researcher collecting data from communities whose language is not English, which is the main language of journals in my field, I would suggest adding a step to the analysis: defining meaning-making rather than just translating the data. Whatever your methodology, attention should be paid to your position as an insider/outsider with respect to the population of the study. Attention needs to be paid to explanations of what makes sense to you as an 'insider', who shares a common culture and language with participants, versus those from the 'outside' world. It is important to transform the data into something understandable for English-speaking readers while making sure the meaning is kept as it was in the original language.
There are no shortcuts (and this frustrates granting agencies!). Such analysis is labour intensive, requires the researcher to be intimately familiar with what research participants have said, and requires the researcher initially to stay away from ANY literature on the topic until the end of the research and analysis.
Will's answer above, however, concerns Grounded Theory Method (associated with the work of Strauss and Glaser), in which you should be completely naive and open to any and all explanations, without the baggage of pre-existing knowledge or assumptions. As I said before, though, it depends on your epistemological standpoint which methods you adopt. GT assumes a purely interpretivist/constructivist standpoint. In the real world, who actually goes into the field without being at least reasonably familiar with the topic in question? Constructivist Grounded Theory Method (associated with Kathy Charmaz's work) attempts to address such criticisms, or at least goes part of the way to doing so. The method of coding and method of analysis (which was your original question) will naturally come out of your discussion of methodology. The bottom line is that any method will have strengths and weaknesses and, pragmatically, it is up to you to justify what you decide to use.
Good and accurate points made by Michael, earlier and more recently. Philosophical positions are often 'covert' within studies, although the researchers should have left enough of a methodological audit trail that those positions can be assumed or interpreted.