In terms of the scale of the quantitative and qualitative 'divide', are the epistemological arguments the most serious, in that they see the two approaches as philosophically irreconcilable? Has the growth in mixed methods research designs made this epistemological divide irrelevant? What philosophical position should mixed methods designs take?
David, this is a really interesting question you pose and an important one. I am not sure I have the answer, or that there is even 'an answer'. My inclination is to say that all research, regardless of whether it is quantitative, qualitative or mixed methods, should assume a philosophical position dependent upon the question(s) the research is asking, the sample, the exploratory or confirmatory nature of the research, the area of enquiry and what you wish to achieve with your results. I do not see the two approaches of quantitative and qualitative as being philosophically irreconcilable, but I often encounter researchers who are trained and competent professionals in one or the other of the approaches, which may lead to a comfort zone that is restricted to only one of the philosophies that underpin the approaches. I think that flexibility in philosophical orientation, and being willing to adapt our philosophies appropriately, is important.
The notion of scale seems important to this question. It probably also applies to the diversity of possibilities involved in "mixed methods", so it seems that any prescriptive answer would not be sufficient. I'm also thinking that epistemological considerations reach beyond issues concerning qualitative or quantitative research methods. Because we live in a binary world, opposite or divergent ways of thinking proliferate -- & it's interesting to observe that where something appears to be irreconcilable for one it is not for another. And one's position depends on whether there is a preference for fundamentalist (disciplinary) categories or relativist constructs, or a mix of them.
Well, I think it is quite the opposite. Epistemological choices and ethical research issues become clearer and more contested precisely because multiple methods reveal different aspects not only of the field studied, but also of the limits of every methodology. Mixed methods designs then become deeply philosophical spaces where the politics of research and the positionality of each researcher involved (but also of each bibliography used) need to be examined, exposed and taken into account. To me, this is the main difficulty of research-doing nowadays, but on the other hand, this is what makes research a deeply political project.
Irene, these are great points you make and I agree with you completely. The different techniques, approaches and philosophies highlight their own and other approaches' strengths and weaknesses - if they are used correctly. I also agree that mixed methods research (and to a large extent qualitative research alone) is a deeply political process in the sense you suggest.
Jon, I am not sure I agree with your notion of a binary world. Could you explain this a little more please? To me it seems as if we live in a world of degrees of emotion and thought and not a black-and-white, cut-and-dried world, but maybe I'm misunderstanding you.
I think in some aspects I agree with Paul Hacket.
Quantitative and qualitative approaches are only perspectives / categories. Mathematical formulas, for example, are rather qualitative. Historically, qualitative research was founded to criticize the overuse of mathematics in social science. But which kind of method can be used depends only on the question or hypothesis, and this is an epistemological decision; which should be used is an ethical and practical decision. Research seems to me to depend on more than Aristotelian or Kantian categories of quality or quantity, or on using / preferring formulas, language and / or measures.
Arguing that there are two tomatoes involves both qualitative and quantitative aspects; likewise, the 'single' case study method contains quantifications. Additionally, the use of a randomized controlled trial, for example, is an epistemological and methodological decision and involves a lot of qualitative aspects.
In applying research as a strategy of science (German: Wissenschaft, literally 'making knowledge'), logical aspects are relevant, as is the aspect of truth, which for example evaluates whether a conclusion corresponds with anything observable. My conclusion in this discussion is that without epistemology research is not possible, so epistemology has priority.
From an epistemological point of view, the discussion about a gap between qualitative and quantitative perspectives is in my opinion overdrawn. It is a little like discussing a philosophical irreconcilability of space and time. Therefore the concept of mixed methods as an addition of so-called quantitative and qualitative methods seems to me to be redundant, because no empirical research is conceivable which is either purely qualitative or purely quantitative. It all depends on deciding to use the best available methods to answer the question asked, not on prescribing something that is nowadays called Mixed Methods.
Sorry for the language use, but philosophical arguments are hard for me in English.
Thomas
I find this an interesting but challenging question. As a biologist, I have been involved with quantitative and descriptive research. As an environmental scientist, I have been involved with quantitative and qualitative (i.e. mixed) research at the boundary of the natural and social sciences. I don't think there's an intrinsic qualitative-quantitative divide; I think it is made by the participants (which is perhaps what Irene and Paul are driving at). Even in the 'pure' natural sciences there are intense 'political' battles among researchers. I have found, though, that as one approaches the applied and social sciences (from the position of a 'pure' scientist), the political context becomes more complex, but also more rewarding, in that the cooperating parties tend to like the novelty of the interaction.
I had to look up the meaning of the word 'epistemological' before embarking on this answer. I think of nature as the great teacher who answers every question truthfully and without prejudice; and so the qualitative observer is as respected as the quantitative estimator by the teacher.
Paul, I'm using "binary" in the sense of "dualistic" - whether it is in the domain of categories pertaining to morality or value (good and bad), cosmology (yin & yang), conventional law (right lane / left lane), politics (liberal & conservative), gender (male & female) etc. In a way, qualitative versus quantitative can be binary in this same kind of way. That doesn't mean there is not a middle way, just that concepts often generate conceptual opposites. An example is your comment on David's suggestion of the "irreconcilable" dimension - for you it is "reconcilable". For others, it is perhaps both ... & on it goes, as we conceptualise again & again, taking up "positions". Of course, in the domain of digital technology "binary" has a very precise meaning. Philosophically, there's more conceptual license, but for me the real tension is between fundamentalist & relativist conceptions. Which now brings me to another question: why is it that declaring epistemological positions is important in research? There would exist many robust arguments for doing so. But there also exists a counterpoint: if an instrument of research is "open inquiry" then how is it possible to be truly open & also to know what the new knowledge will be that will be discerned? New knowledge could have enormous epistemological impact.
Thanks to everyone for some very interesting contributions so far. I'm giving a lecture tonight on epistemology in research. I'd like to use a phrase expressed by Thomas: 'Without epistemology research is not possible'. One of the issues debated so far has been that of the 'binary divide' between quantitative and qualitative research. Of course, it is true to say that each contains some of the seeds of the other. But there are still 'purists' in each camp who would deny the relevance of the other approach. So, coming back to one of my original questions: if we are moving to mixed methods approaches, is there an epistemology of mixed methods?
Very interesting discussion. With some colleagues I looked at the issue in the context of future-oriented research or FTA (see below), and there I do not see an epistemology of mixed methods at all; on the contrary. The reasons for that are probably many and complex, but one of the main arguments might be the deeply ingrained differences starting from a divide in education. In the paper we also suggest the existence of an 'epistemological barriers-skills-trust' cycle: "Epistemological barriers are reflected in, and reinforced by, the lack of researchers, practitioners and evaluators skilled in both quantitative and qualitative FTA approaches. It is for instance neither common, nor easy, for those who come from a strongly quantitative perspective to communicate the core of their work to anyone outside their community. This lack of skills to understand and interpret both approaches leads to lack of trust amongst practitioners of each approach. Such lack of trust again reinforces existing epistemological barriers."
Article Quantitative and qualitative approaches in Future-oriented T...
There really seems to be a strong competition between adepts of both approaches. I think it is rather a Tower of Babel problem of using different languages, because both approaches have to deal with quite similar challenges. Both need to produce meaning and validity, both use abstractions (language versus language and formulas) and both have to clarify the link between experience/observation and the scribble on paper (words, symbols, sentences), etc.
As I stated above, I believe that, looking closer into both approaches, the overlap dominates the differences. Mathematical models, for example, can be understood as short versions of language, otherwise it would be impossible to translate, for example, E = mc² into spoken or written language. The really big gap to bridge lies in the transference of empirical relations into language and mathematical symbolizations, not between so-called qualitative and quantitative methods.
One question is perhaps easy to answer: the epistemology of mixed methods is epistemology in general, with philosophy of science as the theoretical framework or discipline. Following this suggestion means that the discussion we are in is an epistemological discussion in the field of philosophy of science.
What do you think?
My recently completed PhD addresses this issue! I adopt qualitative and quantitative research both to express and to deconstruct binary concepts.
My own research concerns cross-dressers, which in itself deconstructs the gender binary. There have been feminist writers who have identified the philosophical divide in the perspectives of gender and research - quantitative research (masculine) and qualitative research (feminine).
My work is shaped by postmodern concepts, aware that much social science research is becoming increasingly unstable, as are the areas being researched.
The modernist structures of academic research and epistemological perspectives are being deconstructed. It was hard to fully write my work, aware that my concepts go beyond the restrictions of language.
We should not attempt to adhere to that 'divide'. To do so is both restrictive and perhaps conforms to sexist standpoints.
I agree on the need for an epistemological perspective in research. The choice of epistemological perspective is based on the nature of the area (problem) investigated. I often use a lens as an analogy for epistemological perspective to explain it to my students in class.
In research, we can use different lenses (perspectives). If we want to see from a distance but have a wide range of coverage of a given area (problem), we have to adjust our lens to cover this wide area. On the other hand, if we want to focus on a relatively small area but want to dig deep, another adjustment of the lens is needed. In addition, if we want to study a relatively wide area and at the same time dig deep, we need a double lens. We can use this double lens either simultaneously or one after the other, based on our design of the research from the outset.
As some mixed methods scholars say, I agree with the terminology (pragmatism) for the perspective used in mixed methods, because the main objective of a mixed methods study is the practicability of the study, not the epistemological war.
We should not support the false categorization of research methods into qualitative versus quantitative. There is only the dimension of adequate methods. Mixed methods seem to me a kind of stipulated mixture. Probably it is meant pragmatically, as Bekele stated, but does that mean appropriate? To speak about mixed methods is rather a research fashion, probably because disputes (here between supporters of either qualitative or quantitative research) are currently seen as the enemy of real research. But avoiding the dispute and therefore speaking about "mixed methods" does not really help either. From a practical researcher's perspective there are only methods, not "mixed methods" (as a noun). And different methods can always be mixed when answering a research question requires it.
The methods used and their concrete contribution to the concrete research program have to be well founded and justified in every case, and should not be seen as an order from above. Therefore, using text analyses, randomized controlled trials, experimental strategies, single-case research, interviews, questionnaires, participant observation, measurement, efficacy or effectiveness strategies, group discussion ... must be adequate (and all of them consist of quantitative and qualitative aspects and much more). And, it seems to me, what is adequate is always a rational decision consisting of methodological issues and epistemological foundations.
Depending on the context of your study (or your research questions), one method may be used to inform the other; moreover, if this is not necessary, then go ahead and use whatever single method is appropriate. I agree with Thomas: there is a bit of both in either method anyway.
I think that for any research, reflection on the matter should be considered essential. It doesn't just inform the reader that the author is aware of the difference; most importantly, it can help the researcher clarify their own limitations.
Everyone here has made excellent points; perhaps it must be added that methods that were developed within an epistemological tradition sometimes escape those traditions. Their outcomes may be read in the light of other epistemologies. The rule for doing this should be not to stretch the scope of the data beyond its capacities. For instance, survey data could be read from a constructivist perspective, but with the understanding that these are descriptions that have a 'limited' second-order reach. When triangulated with qualitative data, quantitative data will help with the discussion of the observations that aided the construction of the instruments in the first place, but can hardly be used to discuss emergent perspectives.
I think there are two mistaken assumptions behind much of this debate. The first is a misreading of Thomas Kuhn's concept of the "incommensurability" of paradigms. What Kuhn meant by this is that there isn't a neutral language into which two paradigms could be translated, so that they could be "objectively" compared. He did NOT imply that paradigms were incompatible or couldn't be used jointly. I like Dega's "lens" metaphor; different "mental models" provide us with different lenses for making sense of the things we study, and can be used together for this purpose. The best argument for this approach to mixed methods research is Jennifer Greene's "dialogical" stance (Greene, Mixed Methods in Social Inquiry, Jossey-Bass, 2007). See also my attached paper, "Paradigms or toolkits? Philosophical positions as heuristics for mixed methods research."
The second mistaken assumption is that qualitative and quantitative research are each based on a single set of ontological and epistemological assumptions that form the necessary "foundation" for each approach. I think this is empirically false for both approaches (see, for example, my book A Realist Approach for Qualitative Research, Sage, 2011). It leads to the idea that mixed methods researchers need to adopt a single foundational "paradigm" (usually assumed to be pragmatism).
However, this doesn't mean that qualitative and quantitative researchers share the same basic philosophical assumptions. The "mental models" of the two differ in many respects, a source of some of the tensions between the two. In particular, quantitative researchers tend to conceptualize the world in terms of variables; qualitative researchers do not. See my attached paper "Using numbers in qualitative research."
Sorry, I apparently can only add one paper at a time. Here's the first one.
Thank you very much Joseph, although I did not begin this debate (thank you David), your papers will be very useful in a discussion I'm working on.
I believe that the largest part of this problem originates in the assumption (largely from Guba and Lincoln) that there are only two basic forms of epistemology: constructivism (usually associated with qualitative research) and realism (usually associated with quantitative research).
With the notable exception of Jennifer Greene, most subsequent researchers have agreed that these two broad philosophical positions are indeed incommensurable. But Greene's own example of her research that attempted to bridge this divide, as related in her book, Mixed Methods in Social Inquiry, was an admitted failure. (The project ended in an unresolvable disagreement between two opposing sets of stakeholders who advocated very different goals and assumptions.)
My own answer is to advocate pragmatism, as a serious philosophical stance, that gets past the realism/constructivism impasse. This approach goes beyond the everyday use of pragmatism as mere practicality or "what works," to look at the work of philosophers such as William James and John Dewey. The key emphasis there is on the actions we take, and our interpretation of the consequences those actions have, as the basis for our further actions.
I have a paper on this forthcoming in the journal Qualitative Inquiry, and I have attached the "available first online" version.
Finally, I know that Prof. Maxwell would rather offer "critical realism" as a way out of the binary opposition between constructivism and realism, but my argument would be that pragmatism has the distinct advantage of being based on action as the basis of all belief (AKA knowledge). Otherwise, we are left with the 3,000-year-old problem that epistemology is all about thinking without regard to doing.
David Morgan makes some important points about pragmatism as a valuable stance for mixed methods research. Our disagreements, I think, stem from his view of the constructivist-realist difference as an "impasse" that requires a third philosophical stance to "get past." (See my previous comment on why incommensurability does not imply incompatibility.) The view of these two paradigms as unified, incompatible positions that must be accepted or rejected as wholes has repeatedly been criticized by researchers (e.g., Martyn Hammersley, Deconstructing the Qualitative-Quantitative Divide, in his book What's Wrong With Ethnography?), and the philosopher Ian Hacking (in his book The Social Construction of What?) has analyzed how a wide range of phenomena (rocks, mental illness, nuclear weapons, child abuse) can be seen as both "real" and as "social constructions." Critical realism is in fact an example of how the two can be joined; it combines ontological realism (the world exists) with epistemological constructivism (our understanding of the world is necessarily our own construction, and multiple valid constructions of any phenomenon are possible).
I can't find Greene's discussion of the "failure" of her dialogical approach in her book; David, can you give us a full reference? In any case, the fact that two groups of stakeholders could not agree says nothing about whether a researcher can productively use multiple philosophical "lenses."
I think Maxwell and I agree on more than we disagree. In particular, most of the problem about epistemology and social science research can be traced back to the claim that there are two basic approaches to epistemology in the social sciences, realism and constructivism, and that these are entirely incommensurable in a strong Kuhnian sense.
My reading is that critical realism offers a reconciliation of the original divide while maintaining an emphasis on ontology and epistemology. In contrast, pragmatism would do much the same, but with a rejection of classical philosophy-of-knowledge concerns as a starting point.
FYI, Ian Hacking's The Social Construction of What? is also one of my favorite treatments of these issues. I see his contribution as pointing out that philosophers themselves have little problem with maintaining a realist ontology and a constructivist epistemology. This differs sharply from Guba and Lincoln, who assume that a realist ontology automatically requires a realist epistemology, and vice versa.
As for Greene, my copy is at my other office, but it is an extended example of a program evaluation, and really the only example that she gives in the entire book. I believe it is in Chapter 2, but as I say, I'd need to check up on that.
A common mistake is to assume that the epistemological issues (only) relate to QUAL vs QUANT, but it is important to note that different qualitative methods may be based on quite different epistemological foundations. It may therefore be easiest to mix/combine quantitative methods with realist qualitative methods.
A good point, Richard. However, epistemological assumptions are properties of researchers, not of methods per se. The same methods (e.g., open-ended interviews, inductive coding, statistical significance tests) can be used by researchers with very different epistemological views, although these views would influence how the methods are used. I'm not sure I could identify anything that I would call "realist qualitative methods," specifically.
I totally agree with you, Joseph and Richard, and feel that the point Joseph makes about epistemological assumptions being characteristics of researchers is a point I made earlier. Researchers tend to feel epistemologically comfortable in Quant OR Qual.
I like the ongoing discussion here. Thank you for sophisticated contributions.
Some questions are coming up in my mind.
1. Do we really need realism to build research on? As I stated above, there is a mix of the concepts of truth and realism in some answers. Empirical points of view can, for example, be phenomenological as well as realistic, or both.
2. I like the idea that constructivism belongs to the researcher and realism to the outside world, but one can criticize that a researcher / observer can also be seen as a part of the world.
3. I believe that the distinction between qualitative and quantitative research is artificial. As yet, no argument has convinced me of the opposite. It really is a social-historical construction. It seems to me that it has grown into a tradition. A genealogy of this tradition could probably help.
At this point in the discussion, definitions of what qualitative research / methods and quantitative research / methods are would be helpful. Otherwise it remains unclear what is mixed in mixed methods.
What I'm challenging, in my "Paradigms or Toolkits?" paper, is the idea that research is "built on" philosophical "foundations." I see philosophical assumptions as tools (or lenses) that can be used jointly to provide a deeper understanding of the things we study. In this, I see researchers as very much part of the "real world," and their assumptions as real phenomena that influence their research. See my book A Realist Approach for Qualitative Research.
I also think that the "mental models" of the majority of qualitative and quantitative research are quite different; this is not an artificial distinction. This is particularly apparent in their differing approaches to causation; see the attached paper, as well as the previously attached paper on Using Numbers in Qualitative Research. I don't think there is a single "defining" difference between the two, but the distinction is profoundly important and has major practical implications for applied research.
I think the question of whether the distinction between qualitative and quantitative research is real or artificial is a nice illustration of the fundamental points underlying much of our discussion here. In particular, I don't believe that this distinction is unreal simply because it is socially created and sustained. At a minimum, it exists as a concept that we can discuss. More to the point, where you stand on it can affect your professional relationships and even your acceptance within a field.
For a pragmatist, this intersection between beliefs, actions, and consequences is what matters most, rather than abstract discussion about the nature of "reality" and "truth".
I think the quantitative and qualitative divide is seriously misunderstood. First, it is sometimes taken to coincide with the divide between the investigation of the objective vs. the subjective, when 'objective' and 'subjective' are understood as a division between natural/physical/physiological (or mind-independent) vs. social/psychological, mind-dependent phenomena. So understood, quantitative research is thought to be much like natural science, using objective methods to measure objective variables, while qualitative research is conceived as the investigation of the merely subjective, of social and psychological constructs of the mind that defy quantification. But quantitative research in the social sciences is not like natural science, and it is predominantly interested in the same kind of things as one would want to investigate in qualitative research, e.g. attitudes. What is the difference between asking someone in a survey what they think of the foreign policy of the liberal democrats, on a scale from 1 to 7, and asking them the same thing in an interview? The difference is mainly in the way we interpret the data, and then mainly in the possibility of excluding various interpretive biases. The choice between the two methods is, or should be, the result of a cost/benefit analysis with respect to one's aim: do we want rudimentary, superficial answers to uncomplicated questions in a way that allows unbiased analysis, or do we want in-depth, nuanced answers that are difficult/impossible to analyse in a way that excludes interpretive bias? So, do not confuse the quantitative vs. qualitative divide with the divide between natural vs. human science.
Second, the divide is sometimes taken to coincide with the divide between the use or non-use of methods of measurement that allow the assignment of a numeric value to outcomes, and often it is assumed that this divide coincides with the above-mentioned divide between an investigation of the natural vs. the merely subjective. The assumption is that if you use a method that allows the assignment of numeric values to outcomes without the interference of the investigator (the respondent puts the value in him/herself), you are automatically investigating objective variables. This misunderstanding is particularly widespread among students. Obviously, the assignment of a numeric value to some degree of belief or strength of attitude (pro or con) does not automatically turn an attitude into a physical or physiological variable (or even an objectively measurable variable). Still, the divide between the use or non-use of numeric assignment comes closest to what the divide between quantitative vs. qualitative research is ultimately all about, and this divide, by association with the other ways of dividing the two, creates all sorts of misconceptions about the nature of the research and the validity of the findings. Quantitative researchers tend to think they are superior and not engaged in wishy-washy subjective interpretation of any kind (even when they are just measuring attitudes in the population, say, towards organic produce), while qualitative researchers tend to think they are so immersed in subjective interpretation that no quantification or generalisation is ever going to be meaningful. What is wrong with conducting 1000 interviews, assigning numeric values to different kinds of answers and running all the statistics? It has been done, you know.
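As an aside, the quantifying move just described is mechanically simple. Here is a minimal, purely hypothetical sketch (invented answers and an invented coding scheme, not data from any study mentioned here) of coding interview answers and then running "all the statistics" on the resulting values:

```python
# Minimal, hypothetical sketch: quantifying open-ended interview answers.
# The answers and the coding scheme below are invented for illustration only.
from collections import Counter
from statistics import mean, stdev

# Step 1: a researcher hand-codes each free-text answer into a category.
coded_answers = ["pro", "con", "neutral", "pro", "pro", "con", "neutral", "pro"]

# Step 2: the quantifying move -- each category is assigned a numeric value.
numeric_value = {"con": 1, "neutral": 2, "pro": 3}
scores = [numeric_value[a] for a in coded_answers]

# Step 3: "run all the statistics" on the coded values.
print(Counter(coded_answers))        # frequency of each coded category
print(mean(scores), stdev(scores))   # summary statistics of the numeric codes
```

The interpretive work (and the potential bias) sits in the coding scheme itself; the arithmetic that follows is trivial.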
The real epistemic choices that we face have to do with an analysis of the subject matter that we are investigating. Choice of method is always secondary to the subject. If we want to investigate illegal drug use in sports, we use chemical analysis; if we want to investigate attitudes towards use of illegal drugs in sports, we use interviews or questionnaires (depending on the detail and depth of what we want to find—interviews for more depth and detail—and the detail and depth of our previous knowledge of the issue—questionnaires the more we know beforehand). It is possible to charitably interpret the growth in mixed-methods research as the result of a growing awareness of exactly this fact: to investigate a subject matter fully we have to first analyse what the relevant variables of the subject matter are and how best to measure them, invariably realising that an adequate elucidation of the subject matter requires a combination of methods. In philosophical terms, this means that ontology always comes before methodology. Some further thoughts on this are found in my paper 'The Natural vs. Human Sciences: Myth, Methodology, and Ontology': https://www.academia.edu/3553833/The_Natural_vs._Human_Sciences_Myth_Methodology_and_Ontology
Anything that is mixed methods, in that it uses quantitative methods, is by definition and by default ontologically and epistemologically objectivist and positivist. Any methods that are qualitative alone can, if they (their creator) so choose, be ontologically and epistemologically social constructionist, subjectivist and interpretivist.
I disagree, Andrew. The authors of a major quantitative/experimental work (Shadish, Cook, and Campbell, Experimental and Quasi-experimental Designs for Generalized Causal Inference) stated that “all scientists are epistemological constructivists and relativists” in the sense that they believe that both the ontological world and the worlds of ideology, values, etc. play a role in the construction of scientific knowledge (p. 29). Also, it is possible to be both an interpretivist and an ontological realist; see my book A Realist Approach for Qualitative Research. Ontology doesn't determine epistemology, and neither determines methodology.
Hi there, I disagree too; I think the key words are 'play a role'. It depends on the role they play, and all of that determines methodology. I agree that one can be both an interpretivist and an ontological realist.
Andrew and Joseph
Andrew, it is true that quantitative methods are 'by definition' ontologically and epistemologically objectivist and positivist, if 'by definition' you mean whatever a dictionary will tell you about quantitative methods (or some of them, if Joseph is right). However, in this case, dictionaries arguably only report popular belief and not what is actually the case. One clue to unmasking the mistake can be found in the definitions themselves. For instance, the Wikipedia entry on 'Quantitative Research' begins like this: "In sociology, quantitative research refers to the systematic empirical investigation of social phenomena via statistical, mathematical or numerical data or computational techniques". The clue is in the term 'social phenomena'. If, as is often recognised, social phenomena are in some significant sense socially construed, then quantitative research into social phenomena is ontologically social constructivist, and hence to a degree subjectivist as well as interpretivist. The realisation that they are social constructions (in the ontological sense) will affect our ideas about how they can be investigated, to what degree we can rely on the data acquired, and how it is to be interpreted.
Suppose you ask, in a questionnaire, “are you male or female?” and you assign values 1 and 2 respectively, to the answers. Does the data acquired represent objective features of the world, or only represent how the individuals in the sample socially construe their gender? If you take the latter view, then you have assumed that you are investigating a socially constructed reality, and you interpret the values 1 and 2 in light of that. This is all a matter of some controversy, of course, but it will be very difficult to understand all the variables measured in quantitative research as natural objective properties.
As a consequence, I disagree with Joseph about epistemology and methodology not being determined by ontology, although, by ‘determine’ I really mean ‘influence’. Our research into the nature of the world can of course alter our ideas about the ontology, so there is a give and take relationship at play.
No social science research is objective. It doesn't matter if it's qualitative or quantitative research. Interactions with interviewees/respondents can cause changes in how the interviewees/respondents see themselves and others.
Hi Rognvaldur, I think I was a bit loose with my use of the word 'definition', thanks for this! I am also new to adding to a debate in this forum and feel I was a bit candid. I really appreciate Joseph's post (and will read up) and yours. I agree with you on:
The realisation that they are social constructions (in the ontological sense) will affect our ideas about how they can be investigated and to what degree we can rely on the data acquired and how it is to be interpreted.
I spend quite a bit of time doing field research in very remote Australian Aboriginal communities. The research is aimed at social change. I am an ex statistical consultant, but an interpretivist or interpretative researcher. From time to time I am put under pressure by funding agents (the Australian Commonwealth) to use 'questionnaires' in my research. Their desire in suggesting this is that they want something objective. Even when I do use questionnaires, as I do sometimes (to keep the funds coming in), I interpret the responses as not being objective data; they see it differently, so I think the divides and roles maybe are real, somewhere.
Rognvaldur, I think we agree, but I'm making a sharp distinction between "determine" and "influence." Obviously philosophical assumptions (as real properties of researchers) influence their methodological decisions, but so do lots of other things, including training, goals, the context of the study, stakeholders, and simply what kind of work they most enjoy doing. You can't make methodological decisions based simply on your ontology and epistemology.
Joseph, I agree with that, and can admit to strategically exaggerating the importance of ontological issues because they are too often neglected and/or misunderstood. And of course, because with my background in philosophy, that is where I think I have something to contribute : )
I have a question for Lee Middlehurst: Does he think that his statement, "No social science research is objective" is true, or is it just his subjective construction?
No offense intended, but, philosophers love to play those kinds of games, and I've never understood why we as social scientists let ourselves be drawn into such debates.
Andrew, you are in a predicament I recognise well from my experience of field researchers I have supervised (I am not an empirical researcher myself, although I have co-authored qualitative research publications). One major problem is the discrepancy in the literature on practical research, the discrepancy between the literature and the popular ideas which often influence the funding agents (e.g. decision makers at national institutes of health), as well as the discrepancy between research ideals and the practical reality facing the field researchers. It is a minefield, as I think Joseph's answers suggest too. Yes, we are influenced by training, goals, contexts of studies, stakeholders and personal interests, but they often conflict with each other in ways that can make you despair. One thing that interests me is that research methodology is an area in which questions in the philosophy of science, hermeneutics and phenomenology collide with the research reality of a range of disciplines, their theoretical frameworks as well as the particular data they accept as valid. However, while nobody is an expert on this battleground, everybody defends the authority of their particular discipline in the mish-mash and nobody gets any wiser, except for the occasional exchange of confronting viewpoints in forums such as this. Brilliant, isn't it?
I really like this discussion.
Some aspects I want to add. I agree with Joseph's comment that a lot of things have an influence on methodological decisions. But the reverse is also true: concrete research (decisions) influence a lot of things.
In particular, thinking of research on an ontological basis of social construction (constructivism), as is often done here, leads in the end to the consequence that research itself is an ongoing process of social construction (especially in social science, psychology, clinical medicine, etc.). Therefore I am a little more careful with the word realism. Why should all researchers "be" realists? Perhaps in a very weak way, in assuming that there is a reality. Following Kant, for example, it is something like the Ding an sich (the thing-in-itself), which carries the danger of hypostatizing.
In consequence I think that epistemology is necessary, especially as a critical force. In many research fields, it seems to me, acquisition of scientific knowledge is rare and the construction of realities is manifold, no matter whether the research is qualitative, quantitative or mixed.
There are in this discussion thread some concerns about the distinction between realism and something that is supposed to be contrary to realism. I think we only need to worry about one popular assumption about this distinction, and reject it. It is the idea that realism has to do with the ideal of an objective study (in the epistemological sense of 'study unbiased by subjectivity') of an objective reality (in the ontological sense of 'mind-independent reality'), while the contrasting non-realism is the idea of a subjective study (or, what Braithwaite contemptuously described as "a deep sigh followed by free association") of a merely subjective 'reality' (somebody's particular world-view). This is a contrast nobody should be worried about today.

To begin with, a perfectly objective study of the first kind does not exist. Even quantum physics is to some extent biased by subjectivity, notably that of the quantum physicists. And even qualitative research strives to minimise subjective bias. Secondly, the contrasting non-realist alternative has only ever been seriously considered by people who should now be understood as exploring how far one can take the position, ending up 'too far' away from any sensible position: ending up in the idea that there is nothing beyond the content of our minds, that everything is subjectivity, or at least in the idea that the very idea of an objective reality is unintelligible. This is now better known as the postmodernist movement, and I think we should now be grateful that they did what they did, although it did create ripples that are still muddying the waters a little bit. They pushed the boundaries of objectivist ideas that were too much taken for granted, and that resulted in what is widely accepted as a perfectly plausible idea: the idea that social phenomena are, in a particular sense, social constructions and yet perfectly real phenomena.

The legal system of any particular country is something that has been socially constructed. It exists and has certain features (for the time being) because a collective of humans decided that it should be like that. This makes it a social construction and yet perfectly real: it exists, it has certain features, it can be studied. However, very plausibly, social constructions like that evolve and change in a different way than the solar system or any purely material system changes, simply because they develop according to developments in human thinking and behaviour, and we do think that human thinking works differently than inanimate material systems. However, for those who still think of social constructions in the old postmodernist way, notably as 'only in the head', the idea of social constructions still carries the association with free fantasy and nonsense.

Bottom line: even if we still have to worry about epistemological choices and ontology, we don't have to worry about realism except in so far as some people associate realism with a rejection of social constructions. The kind of realism I have in mind is very close to the 'weak' kind of realism Thomas mentions, i.e. just a belief in a reality that exists and is a certain way independently of how we happen to believe it is. Just be careful to include social constructions in that kind of reality.
In many fields of research there is strong justification pressure, for example in clinical medicine and therapy research. In these fields an "objective, realistic effect" of treatments is demanded (by health care systems). But probably because of the allegiance problem and some other severe sources of error, the outcomes of these studies are often positively biased. So the whole research program is doubtful. In 20 years of research, a lot of researchers found out what they put in, and a lot of normative institutions and careers have been constructed on these doubtful results (I am aware that I exaggerate and see it too darkly, but it is a tool to focus on some problems). Some of our colleagues believe that there is a way out of the misery by using qualitative research or mixed methods. I do not. Only by the strict use of epistemology and critical philosophy, as well as epistemologically informed methodology, can we cope with such problems. Therefore epistemological choices are relevant in all kinds of research.
I doubt everyone has been reading the new literature these days; people like Annie Oakley have discarded almost all educational research as invalid, and the new researchers have claimed some of my favorite techniques (follow me, not whom I might follow) and really may not use them appropriately. The sociologists I have worked with were, in their heyday, not trained in quantitative techniques, and it was a different way, and different ways, of thinking (e.g., the phenomenologists); many other people were trained as I was, in three separate statistical courses (mental health/clinical psychology; rehabilitation; business management) with no qualitative content at all other than commentary. I then, of course, took three advanced qualitative courses, and finally felt balanced and like I could speak with "everybody" (which is practically no one). So, since I am not a sociologist or anthropologist, I can only teach research methods in areas that are required by those departments to be largely statistical ("will not do both" is the academic management's hiring criterion).
Brilliant Julie. And every time they declare that the old stuff is bad stuff, they reinvent some 'new' method which is usually just a novel mixture of old stuff under a new label (but you must do three advanced courses to really understand it), and they pray no one comes along to declare their new miracle method to be bad stuff, at least not until they have made a career out of it.
Rognvaldur: Coming from the "hard sciences" of chemistry, physics and biology, I was quite prepared to take in the qualitative and sociology with me as the "underdog" of disability research. Believe it or not, I tried back in 1991 to stand by one national qualitative study that way, and could not ever get even a subcontractor to do the study in their research review lists (Taylor, Bogdan & Racino, 1991; Racino, 1991). With my last review in 2013, I still have research professors of 20 plus years who haven't even seemed to meet their women professors yet on gendered research in largely women workforces.
I think it depends on what type of mixed methods research design you are using and what the specific research questions are. One approach is to do the qualitative study first, as a means to develop definitions and then means of measuring the relevant variables quantitatively. The epistemology of this approach is essentially positivist, as the purpose of the qualitative study is to help select what to measure. I think this can greatly improve the subsequent quantitative study because too often researchers just choose variables that other researchers have measured before because there is a consensus in that particular field that these are the important concepts to measure.
Another approach is to do a quantitative study first and then the qualitative study as a means of understanding the quantitative results more fully. This is also basically positivist as the assumption is that the quantitative study produced "real" findings which can then be explained from the qualitative data.
Thirdly, both studies can be done independently, as two different ways of investigating the same phenomenon. This is more of a constructivist approach as neither type of study is regarded as providing "the answer" and the product of the research is often a set of constructed concepts or, more rarely, a new theory.
I'll give an example of why qualitative research is necessary; in areas such as family support, I may design and implement programs across a state or in the US, but I do need to see "what they look like" and "what they do" and "what the outcomes are" and "who benefits", "who are the staff" and so forth. Sometimes, I only need to know what researchers might call a surface level (believe me, in some places, a program might not even exist without monitoring); in other places, I need to know if an agency will deliver what I ask (e.g., cash subsidy programs).
On the "real research side", sometimes I do not want more scrutiny than with other programs (e.g., have I changed everyone's lives)....right direction. I also do need a summary of the literature, and across different areas; be smart, in this field, the conclusions are compared to this field, and compared to this discipline. I still have big, big gaps in areas such as child abuse and neglect, and sexual offenders (where I know I have outdated laws and no one feeling responsible; and police transfers going on).
Other times, do not tell me that clients have choices when they can't have a key to their place, choose a sexual relationship, can't watch tv after 7 pm, why do they have to be out of their own home by 8 and not return to 3 pm, or decide on the kind of program (truth in lending) that they want or not or that there is only one provider or nothing. I also don't need millions of dollars to say that clients choose to be homeless....The question is why would someone make that choice around the services, providers, and people who are there in their world. I'll start sending the researchers the bill.
Victoria, it is true that qualitative research is often (even mostly) described as essentially positivist, because the popular idea is that it consists in the observation (or documentation) of social phenomena in their natural settings and in a way that (at least initially) is not guided by theory. This is a deeply problematic idea, even bordering on the self-contradictory, at least when it is supposed to be reconciled with the idea that qualitative research involves the hermeneutic interpretation of social phenomena. Positivism and hermeneutics are pretty much complete opposites, and it is to a large extent thanks to hermeneutic criticism that there occurred a shift away from positivist to falsificationist ideas, even in the natural sciences.
To get clear on the sense in which this idea about qualitative research threatens to be self-contradictory, we need to understand first that the idea that qualitative research proceeds (initially) without hypothesis and then generates hypotheses inductively, has close affinities to the positivist idea that to avoid bias we must rid ourselves of preconceived ideas (hypotheses) and see things as they really are, and then we withhold judgement until we have an adequate number of observations on the basis of which we can generalise about regularities. In qualitative research this is described as inferring or generating theory from data. Then we have to reconcile this idea with the basic principle of hermeneutic interpretation, notably that interpretation essentially requires what is called pre-understanding, i.e. a kind of prejudice about what it is we observe. Essentially this means that theory, of some sort, always comes first, and that whatever we pretend to infer or generate from the data is going to be biased by that pre-theory we pretend to be without. At first blush, at least, it is difficult to reconcile the idea that qualitative research should proceed without pre-understanding, and yet cannot proceed without pre-understanding.
There is near enough total consensus in the philosophy of science generally that the positivist ideal of research without hypothesis is fatally flawed; it is called naive inductivism. There simply is no such thing as a theory-independent observation, not even in natural science. So why should qualitative research want to emulate an ideal that is largely abandoned? Now, all you might have wanted to say is that the research is based on observation, but that is not really the essence of positivism. The essence of positivism is that we should strive for observations that do not contain any element of interpretation, and that simply cannot be done in social science (nor in natural science, for that matter).
Victoria, I think you've left out an important option: to do quantitative and qualitative research concurrently, but NOT independently--to really integrate the two approaches in both the conduct and the reporting of the study. There are many examples of this, from the work of Jane Goodall, Frans de Waal, and others on chimpanzees, and classic sociological studies such as Middletown (Robert and Helen Merrill Lynd, 1929), Marienthal: The Sociography of an Unemployed Community (Marie Jahoda et al, 1933), and the Yankee City studies by W. Lloyd Warner and associates, to the more recent research described in Thomas Weisner (Ed.), Discovering successful pathways in children's development: Mixed methods in the study of childhood and family life (2005). For an extended discussion of this approach, see the attached draft paper.
Rognvaldur,
One thing you should know is that too many North American researchers (especially qualitative researchers) have little or no knowledge of classical positivism as actually practiced in philosophy. Instead, it is taken as a sort of extreme scientism based solely on deductive hypothesis testing (without any recognition that "theory choice" based on hypothesis testing is inherently inductive).
Thus, when you describe this "positivist ideal of research without hypothesis" this will confuse many North American researchers who have been trained in a rather naive view of what positivism as a philosophy actually was.
(For those who would like an approachable treatment of these issues, I recommend William Shadish, 1995, "Philosophy of science and the quantitative-qualitative divide: Thirteen common errors," Evaluation and Program Planning, 18(1), 63-75.)
David,
Yes, I suspected as much, although the predicament is not unique to North America, in my experience. I think philosophy has an equal share of the blame for this (and I am a philosopher). Philosophers simply haven't paid social science much attention, in particular not qualitative research, and so, exactly as Shadish says (great paper by the way), the social scientists who have taken it upon themselves to at least try to construct a philosophically informed basis for research methodology have been left to their own devices, not always with good results (but much better than none). Unfortunately the philosophers couldn't really have done a better job, because they in turn are more ignorant of the research practices and aims of the social sciences than they like to admit. So, it is difficult to push things forward.
I like the turn in the discussion, especially through the contributions of Rognvaldur. Here is one statement by Jaspers, which I found translated into English in a contribution by Slife (Slife, B.D. (2004): Theoretical Challenges to Therapy Practice and Research: The Constraint of Naturalism. In Lambert, M.J. (ed.): Bergin and Garfield's Handbook of Psychotherapy and Behavior Change. 5th edition. John Wiley & Sons, New York, pp. 44-83.):
‘There is no escape from philosophy. The question is only whether [a philosophy] is good or bad, muddled or clear. Anyone who rejects philosophy is himself unconsciously practicing a philosophy.’
I agree with Rognvaldur that an observation without pre-understanding or pre-theory is impossible, and want to add that many positivists were not really realists, as is also stated in the Shadish publication. But in qualitative research one is often confronted with the assumption that qualitative approaches are closer to the observed 'reality' than quantitative approaches. Yet transforming phenomena or observations into language or numeric relatives is always problematic, because paper is very flat compared to the observation. Language as well as mathematics is abstract (symbols, semantics, grammar, axioms).
In psychotherapy research, which may be an example of a field where the debate about qualitative and quantitative research plays a relevant role, observations depend a lot on approach-oriented assumptions. The question is not qualitative or quantitative. It is: 'what aspect is relevant?' Otherwise results of observations are fully arbitrary. Focusing on one aspect always means focusing on a specified quality which is defined by our pre-assumptions (pre-theory ...). Whether one prefers language or mathematics in formalizing the observation (transforming observations and empirical relations into language or numerical relations) depends on decisions and on the questions the researcher is trying to answer.
I agree that there is no possibility of pre-conceptual observation (or experience); all observations are the result of the interaction of the impact of the external world on our sense organs and the "mental models" that we use to make sense of these. For a particularly sophisticated argument for this, see physicist Karen Barad's development of Niels Bohr's theories, in Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (2007).
However, I would argue that there is one sense in which qualitative approaches are closer to "reality" than quantitative ones, in that quantitative research inherently involves converting initial perceptions into measurements of "variables," an additional transformation that (despite its undoubted advantages) takes you further away from the initial perceptions. A nice example is statistician David Freedman's analysis of John Snow's discovery that cholera was a water-borne disease, in his paper Statistical Models and Shoe Leather (reprinted in his book Statistical Models and Causal Inference, 2010), and the contrast between Snow's approach and the statistical analysis that Farr used to support the competing "miasma" theory (Sarah Irwin, Data Analysis and Interpretation, in Hesse-Biber and Leavy, Handbook of Emergent Methods, 2008).
Failure to recognize this additional step in creating quantitative data leads to what I consider the misleading idea in mixed methods research that both qualitative and quantitative data can be "transformed" into the other form. Transforming qualitative data into quantitative measurements of variables involves a REDUCTION of information about each unit or case, in order to make aggregation of the data possible. Converting quantitative data into qualitative thus requires ADDING information, which is not a "transformation" in anything like the same sense, because it is essentially arbitrary what information the researcher chooses to add.
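To make this asymmetry concrete, here is a minimal sketch with invented data (not drawn from any study mentioned in this thread): reducing each case's account to a coded variable makes aggregation easy, but the coded value carries no route back to the original account.

```python
# Sketch of the reduction described above (invented example data).
# Each full interview answer is reduced to a single coded value so cases can
# be aggregated; reversing the move would mean adding detail the numbers lack.
interviews = {
    "case_01": "I switched to bottled water after my neighbour fell ill ...",
    "case_02": "The pump water tastes fine to me; I have never been sick ...",
}

def reduce_to_code(answer: str) -> int:
    # Quantifying step: 1 = reports avoiding pump water, 0 = reports using it.
    return 1 if "bottled" in answer else 0

codes = {case: reduce_to_code(text) for case, text in interviews.items()}
print(codes)  # {'case_01': 1, 'case_02': 0} -- easy to aggregate, detail gone
```

Going from the code 1 back to a narrative is not a transformation in the same sense: whatever account one attaches to it has been added, not recovered.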
Thomas, Jaspers was spot on. The failure to realise this can lead to very funny self-refutations, e.g. when Stephen Hawking argues that metaphysics is dead, invoking arguments that are themselves metaphysical. It is, however, your comment on approach-oriented assumptions that caught my interest. I completely agree that what matters is the question 'what aspects are relevant', and the answer is always dictated by our pre-understanding. There is, however, a popular assumption in today's qualitative research that we shouldn't have preconceived ideas about what is relevant, because then we project our preconceptions onto the reality we observe, and that we should therefore try to have no hypotheses. In terms of methodology this manifests itself in the recommendation that interviews should be unstructured, or at most semi-structured rather than structured, and that we shouldn't have predefined categories with which to analyse the data. The end result, especially for younger researchers and students doing fieldwork, is that they don't properly research the issue before they go into the interview, and then don't know how to manage the data they get. When they finally have their categories, they think they have 'discovered' them, even though they very rarely end up with anything they didn't already anticipate finding. They simply 'discover' the categories that were already part of their pre-understanding.

A much better way to be open to the results is to have as many hypotheses as possible. It is often overlooked that a hypothesis doesn't mean 'what we think we will find'; it is only a guess about what we might find, but to which we haven't committed. So the best way to proceed is always to accept as possible both the hypothesis and its negation, and to ask in a way that allows you to discriminate, at least in principle, between the hypotheses. Let me explain. If you want to understand why people get hooked on internet poker, you might interview some of them. But instead of asking 'tell me about internet poker' and just accepting whatever they tell you, you first investigate the range of things to which people can get addicted and why. So before you even ask, you might suspect that some people have a need to be 'where the action is' and so get attracted to playing if they think it is in fashion. Some might be attracted to the risk-taking, others to the rush of being in the flow, etc. Then you design questions that could give you answers that allow you to discern between the possibilities. You ask if they like taking risks, like the feeling of adrenaline, and if they say "no, I hate it", then the chances are they are not attracted to internet poker by the risk-taking. So interviews can very much be designed to test hypotheses. And if you interview 5000 people and none of them like the risk, you have a statistical basis for concluding that people are not drawn to internet poker by the thrill of risk-taking.
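To make the statistical side of that last point concrete, here is a minimal sketch in Python (purely illustrative, using the hypothetical figure of 5000 interviews from the example above, not anyone's actual data). With zero 'risk-likers' out of n interviewees, an exact binomial upper bound (the so-called rule of three) caps how common risk-motivated players could plausibly be:

# Illustrative only: exact binomial upper bound on a proportion when
# 0 of n interviewees endorse a hypothesis (here, "attracted by the risk").
n = 5000        # number of structured interviews (hypothetical figure from the example)
alpha = 0.05    # for a one-sided 95% confidence bound

# With zero successes, the exact upper bound p_u solves (1 - p_u)^n = alpha,
# which is well approximated by 3/n (the "rule of three").
upper_bound = 1 - alpha ** (1 / n)
print(f"Exact 95% upper bound on prevalence: {upper_bound:.4%}")  # about 0.06%
print(f"Rule-of-three approximation:         {3 / n:.4%}")        # 0.06%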
I think Joseph and Thomas have captured some of the strengths and limits of both quantitative and qualitative research.
On the quantitative side, selecting a particular theory to operationalize into measurable variables does indeed limit what you are able to observe. But at least the researcher makes those limitations more explicit.
On the qualitative side, it is easy to slip into the trap that one will observe the data "directly" without prior theory. But at least the research is open to observing a wider range of phenomena.
My own approach to mixed methods is to link the qualitative and quantitative studies sequentially, so that each can maximize its own strengths within its portion of the overall project. The key issue then is integration, which requires a careful specification of how that sequence of results will create a contribution from the strengths of one method to the needs of the other.
From my pragmatist point of view, this sequencing addresses the connection between beliefs and actions. That is, we start with existing beliefs that lead us to take a set of actions for the first study in the sequence. Based on our interpretation of the consequences from those actions, we update our beliefs in ways that guide the second study in the sequence.
(And on a different part of the thread, I personally don't treat the "conversion" of qualitative data into quantitative variables as a meaningful form of mixed methods. Instead, I think it relies so heavily on the principles of quantitative research that it falls entirely within that realm. In the end, it is just an alternative way of collecting quantitative data.)
David, I think you may be overlooking that quantitative research is to a large degree the collection of qualitative data converted into numerical form. In fact, it can be argued that the most interesting variables measured by quantitative methods are in fact qualitative. The numerical form of the data misleads people into treating it as a quantitative variable, forgetting the basic statistical distinction between qualitative (or categorical) and quantitative variables. Now, it is true that some qualitative/categorical variables represent natural/objective distinctions, such as male/female (when considered as purely biological distinctions), but the most interesting variables of that kind represent socially constructed categories, such as goth, vegan, conservative, etc.

So, arguably, quantitative research already is an activity in which we measure qualitative variables and convert the data into numbers that are meaningless without the code that tells you how to convert the numeric data back into qualitative data (and in the process ADDING information, as Joseph puts it). For instance 1=goth, 2=punk, 3=vegan, 4=conservative, etc. To make quantitative research thoroughly quantitative, we would have to restrict questions to variables that have well-defined quantities, such as how much a person weighs (kg), how tall they are (cm) and how many times they have been to the doctor. We couldn't ask about their likes/dislikes, pro or con attitudes, or anything else that we typically regard as interesting regarding the social/psychological aspect of things, because then we always have to reconvert the number to some qualitative description of exactly the kind that we typically get in an interview. In the end, then, the only difference between qualitative and quantitative research is that we standardise the form of the qualitative answer before we ask, instead of after (well, some think we cannot even paraphrase people's answers in qualitative research, even though that is exactly what quantitative research does).
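As a minimal illustration of the point about the codebook (my own sketch, with hypothetical category labels and responses), the numbers mean nothing without the mapping that converts them back into qualitative categories:

# Illustrative sketch: coding a qualitative (categorical) variable numerically.
# The numeric codes carry no quantitative meaning; the codebook is indispensable.
codebook = {1: "goth", 2: "punk", 3: "vegan", 4: "conservative"}
reverse = {label: code for code, label in codebook.items()}

responses = ["vegan", "goth", "goth", "conservative"]  # hypothetical interview answers
coded = [reverse[r] for r in responses]                # -> [3, 1, 1, 4]

# Arithmetic on these codes (e.g. their mean, 2.25) is meaningless;
# to interpret the data we must convert the numbers back into labels.
decoded = [codebook[c] for c in coded]
print(coded, decoded)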
Well, for a conversation today, Rognvaldur, let's start with the number of times a person goes to the doctor. Who determines that figure? The reason it was collected is that there is an assumption that a person goes to the doctor because they have a need, which might be a regular checkup. We never figure out what the required number is, or who decides it (for example, when I "went on Medicare" in the US, I was told that I could not see my gynecologist for two years, even though, due to serious problems, I had always seen him every 6 months), or how many of those visits are what the government will pay for versus what the person needs or can afford (in the US, we have deductibles and premium payments, which result in people not going to the physician if they have limited funds). And take the dentist: 6 workers paid at health care, and no dental coverage (assumed, with an additional premium over $600 at no income) for those on Medicare, but I can go to the gym!! So, yes, researchers need to be clear, and not too overly complicated. [My community framework for disability is on Pinterest, Community and Policy Studies]
Rognvaldur, I think we fundamentally disagree about the qualitative-quantitative distinction and what is "qualitative" about qualitative research. In my view, it is NOT the traditional natural-science distinction between categorical (nominal-scale) and numerical measurement; this is a widespread misunderstanding of qualitative research, particularly by quantitatively-oriented researchers. A forced-choice survey in which the response categories are words is just as "quantitative" in basic conception as one in which the choices are numbers on a scale. This "words vs numbers" definition of "qualitative" leads to the conclusion that "descriptive," nominal-level categorizing is simply the most basic and primitive form of measurement, with no fundamental philosophical or methodological differences between this and quantitative measurement. (E.g., King, Keohane, and Verba's influential book Designing Social Inquiry: Scientific Inference in Qualitative Research.) To me, any form of measurement is essentially a quantitative activity, because it involves thinking of the world in terms of variables. See my previously-posted paper Using Numbers in Qualitative Research for a fuller explanation.
Most qualitative researchers understand what they do very differently from this. Qualitative research involves description, in a broad sense, RATHER THAN measurement. A description of the activity in a school classroom is very different from the categorization of that activity as one or another "type." This description involves the use of categories, as does all cognitive activity, but this categorization is simply a tool, rather than the goal of the activity. The primary goal is understanding this particular event, in terms of its meaning for participants, the influence of the specific context in which it takes place, and the processes that are involved in what happens there, rather than the comparison of this event with others. See my previously-posted paper on Scientific Inquiry for a more detailed explanation.
Joseph, it is possible that we disagree in some way, just not in the way you have just described. I do NOT think that the qualitative-quantitative distinction rides on the distinction between categorical and quantitative measurement. I think you have interpreted my criticism of the way the distinction is drawn today as an attempt to draw the distinction in a different way. That is not the case. I am only pointing out things that challenge the received view, e.g. by pointing out that even so-called quantitative measurement has a qualitative element. I wasn't seriously suggesting that proper quantitative social science must focus only on quantifiable variables; that was meant to illustrate an absurdity.
I am more inclined to think that quantitative and qualitative research are not so different after all, or at least need not be. That is, the distinction, if it exists at all, is a question of degree. I think misconceptions about the nature of research have exaggerated the differences and resulted in two very different practices. For instance, the reason why qualitative research often limits the number of subjects is that researchers think one cannot in principle generalise from the findings of qualitative research, and therefore they don't even try to collect enough data to do that. They are also hampered by the idea that the collection of qualitative data must be done with as little prior structure in mind as possible, and as a result they often end up with an amorphous lump of text that they have little or no idea how to analyse. So they restrict the number of subjects simply to make the data manageable. As far as I can see there is no principled reason for doing things this way, except the misguided idea that qualitative research proceeds without hypotheses to see things as they really are, only then to infer from the collected data some patterns and regularities that miraculously emerge from the data (but which typically end up conforming to already existing social theories/categories). This idea could be taken verbatim from the logical positivist manual for natural science in the 1930s, and it is just as false today as it was then.
I do think that the distinction between the NATURAL and HUMAN sciences rides on the distinction between the STUDY of (a) the purely physical and (b) that which in some sense owes its nature to thinking, sentient beings (e.g. social interactions, or social systems). The study of the purely physical involves only the study of physical properties, and they all appear to be easily quantified (well, they allow quantification). So natural science is in a sense quantitative, but this is not controversial. What is controversial is whether social science is, or should be regarded as, quantitative in the same way. I don't think so, and I struggle to find a good basis for drawing a sharp distinction between quantitative and qualitative research at all. To be sure, some social studies measure physical properties, but that is never the final aim of the study. They only collect such data to shed some light on other, more interesting social data. They sometimes measure opinions or attitudes using questionnaires, and sometimes using interviews. In the former case the opinions are pre-formulated and the subject's choice of formulation is translated into numbers; in the latter they are not pre-formulated and not translated into numbers (although they could be). The numbers are then statistically treated, while the qualitative data is not (although it could have been translated into numbers and statistically analysed). But in the end we all want to ask the question: what is the best explanation of this data?
You suggest that the distinction is one of means vs. ends. I am not entirely convinced. For instance, I don’t think you would find consensus among qualitative researchers on the idea that qualitative research is descriptive. Indeed, I find that your claim that we strive for understanding contradicts the claim that it is descriptive. We don’t come to understand a situation by simply describing it. We come to understand it by interpreting it. Interpretation always involves going beyond the level of description. It involves asking about the data: “what does it mean?”
My own impression is that it is quantitative research that is more often described as descriptive, e.g. in the way a questionnaire may be considered a tool to collect data that describes a population. Whether the quantitative study that deploys the questionnaire then turns out to be descriptive or explanatory depends on what the researchers do with the data: do they stay on the level of description, or do they attempt to argue that the data supports one or other hypothetical explanation of why the population is the way the data describes it as being? In qualitative research, such ideas about why the population says the things it says (which inevitably involve interpretation) are described as 'understanding' rather than as 'theory' or 'hypothesis', because the idea is that we can never know whether the way we 'understand' things is true, either because we cannot statistically justify it, and/or because we cannot generalise to the wider population.
Sure, qualitative research can be descriptive in the same way, depending on whether the researchers rest content with merely reporting the data, either more or less as they receive it (more or less reporting transcripts), or paraphrasing it on the level of individuals or groups, or with respect to some theme. However, research of that kind is usually very unsatisfying and trivial. The interesting question is what the data can tell us about the social phenomena that we are interested in, and in this respect research that attempts to go beyond the level of description, whether quantitative or qualitative, is always more interesting.
Rognvaldur, I still think you misunderstand qualitative research. Qualitative researchers don't "limit" their number of subjects because they think they can't generalize; they usually have a smaller number of participants (the preferred term in qualitative research) because qualitative research is labor-intensive, and qualitative researchers usually don't have the time and resources to include large numbers of participants. (There are exceptions, such as Michael Huberman's The Lives of Teachers.) Although generalization isn't the primary goal of qualitative research, qualitative researchers often do generalize their findings (although the usual term is "transferability"); see the attached paper on Generalizing In and From Qualitative Data. Also, many qualitative researchers make explicit use of prior theory; they just don't restrict their data collection and analysis to what that theory suggests.
Of course qualitative researchers don't limit what they do to description; they interpret and theorize about what they describe. My point is that in doing so, they aren't engaged in demonstrating relationships between variables, but in explicating the complex meanings and processes involved in the individuals and activities they study, and the complex ways in which these are influenced by the contexts in which these occur. This doesn't rule out causal explanation; see the attached paper on The Importance of Qualitative Research for Causal Explanation.
I also don't think the natural science/social science distinction is helpful in understanding the differences between qualitative and quantitative research. Some natural sciences can be substantially qualitative (field geology, ethology); see my previously-attached paper on Designing Integration for examples.
Joseph, I should add that I think that in the end we are going to agree much more than disagree about qualitative research. I say this because some years ago I selected your book Qualitative Research Design: An Interactive Approach as course literature for a postgraduate module in qualitative research methods for which I was responsible. It was by far the best I could find among many choices, and you know there is quite a selection out there. I haven't read your book A Realist Approach for Qualitative Research, but I suspect it would agree with me.
Joseph, this reply should come before the one above. For some reason it didn't register when I added it:
Joseph, yes, there is clearly a disagreement between us regarding qualitative research, although of course I don’t think I am mistaken (not yet anyway). Your view is closer to the received view, the one I find to be philosophically problematic (in certain respects).
You say qualitative researchers do not limit the number of participants because they think they can't generalise, but rather because qualitative research is time-consuming. First, this is an inaccurate description of what is actually the case, although we may perhaps agree that it is a bad reason: lots of papers excuse the number of participants by saying that the point of qualitative research is not to generalise. Second, qualitative research, in the way it is currently conducted, is labour-intensive and therefore limiting regarding participants mainly because it goes about collecting and analysing the data under the guidance of the idea that one shouldn't have any preconceptions about what one is going to find, and that categories and themes will emerge from the data if you read it often enough. Of course there are other practical details that make qualitative research time-consuming, e.g. interviews take longer than filling out a questionnaire, but this is nothing compared to the time spent on the so-called inductive analysis of the qualitative data.
The idea that it is possible to ‘generate’ hypotheses from the data without theory is completely contrary to all ideas about hermeneutic inquiry, and it derives from the positivist idea of science as inductive inference from theory-free observations. The basic fallacy is to think that to have preconceptions is to be biased. Having preconceptions is unavoidable, and the trick is to have the right attitude towards them, notably to realise they may be wrong and that you therefore don't commit yourself to their truth. The best approach is not to have no preconceptions, but to have as many preconceptions as possible, and always to explicitly recognise the possibility of the opposite of whatever preconception you may have. That puts you in a position to ask in a much more detailed and yet neutral way, to get much richer data, and still to find the analysis much easier. And it still allows you to be open to the possibility that you might lack some relevant preconception, something you didn't ask about.
Joseph Maxwell: Per Webster's, measurement: the act of measuring or the process of being measured. I contend I am in the process of measuring ResearchGate. Webster's contends there are three major systems of measurement, which are the US Customary System, the British Imperial System, and the International Metric System. Which do you contend your measurements "fall under"? Or are neither my measurements nor your measurements described in the section on measurement? Julie Ann Racino P.S. I am an admirer of your work, by the way.
I agree that there are no "bright line" boundaries between qualitative and quantitative research. So far, we have done away with the assumptions that qualitative research can be purely inductive, that causality has no place in qualitative research, and that generalizability is a genuinely achievable goal in quantitative research.
I would add my own objection to the idea that qualitative researchers can't start with hypothesis testing as a goal.
That said, I think there are very meaningful and useful cultural differences between qualitative and quantitative research. And (getting back to a portion of the original question), I don't think that mixed methods research resolves those differences. Instead, current practices in that area are primarily oriented to research designs that borrow from the different strengths of different methods. I basically follow the position that mixed methods is a "third way" of doing things, rather than an epistemological bridge between qualitative and quantitative research.
FYI, I too prefer the term transferability over generalizability. For example, if I do a statistically generalizable study of the US, it will have little relevance for the same topic in Japan, Yemen, etc., and thus have limited transferability.
David, nice summary of the points dealt with so far (although everyone may not have accepted them as 'refuted'; I definitely have). However, I am still a little unsure of how exactly you think of mixed methods. In particular I am wondering whether you think, like I sometimes do (though I am still unsure whether this is representative of mixed methods today), that instead of being a fusion, mixed methods research often proceeds by really doing two separate studies on the same subject matter (one qualitative, one quantitative) whose results or conclusions are brought to bear on the common issue very late in the process. One can compare this to treating a schizophrenic patient with drugs and with therapy simultaneously, hoping that the drugs and therapy somehow interact to cure the patient, although we don't really know how. So I am really wondering whether you think mixed methods should be like this, or whether we should be looking for a much more integrated fusion of the two. I am myself sympathetic to the latter idea, although I don't see clearly how exactly it is supposed to go, and haven't come across any examples of it. I think perhaps that a first step in the exploration of such a fusion would be to develop a qualitative research programme that is more realist and more hypothesis-driven, and then hope that in the process it will become clearer how the two could fuse, or at least be brought to complement each other more closely than before. Maybe Joseph has already sketched the outlines of something like this in his book on realist approaches in qualitative research? I realise I must get round to reading it.
Rognvaldur: first, thanks for your nice comments about my Qualitative Research Design book. Second, I'm not sure how to decide when a mixed methods study is "two separate studies" or a "fusion" of the two. This seems to me to be a continuum (or better, a complex n-dimensional space). Take a classic mixed methods study (although one rarely recognized as such), Stanley Milgram's Obedience to Authority: An Experimental View. Milgram not only measured the extent to which participants were willing to inflict pain on a feigned "subject" under different experimental conditions, but also provided detailed qualitative descriptions of the experimental procedures and the participants' behavior and reactions to the researcher's orders, and interviewed participants afterward about their thinking (to better understand the process of obedience). His arguments for the validity of his conclusions drew on both types of data. Is this two separate studies, or a fusion? I don't think either term adequately captures what he did. For a more detailed discussion of integrating qualitative and quantitative approaches, see my previously-attached paper on Designing Integration in Mixed Method and Multi-method Research.
Joseph. Thanks for the nice discussion.
My question is: why should it be a qualitative method to provide detailed descriptions of the experimental procedures and the participants' behavior? All researchers should do so. I think the Milgram studies are good research, and it is not necessary to categorize them post hoc.
I am sorry, but in the end I cannot see the 'common idea' underlying a category called qualitative research (or quantitative). There are different methods that can be mixed or not, but what is the justification for putting some of them in the category of qualitative and others in the category of quantitative research? I ask myself: what is the real difference? And I want to ask Rognvaldur how to realize a purely qualitative or quantitative study. It is not the data, it is not the method, it is ... .
Probably we are talking about a dimension, or many dimensions, of whether we prefer language or mathematics, whether we aim to produce laws or single-case descriptions (nomothetic or idiographic; Windelband), whether we generalize more or less, whether we use questionnaires or open questions, or whether we try to understand single persons, small groups or universals ...
Sorry for the provoking statement.
Regards Thomas
Thomas, I don't think there is a single "common idea" that distinguishes quantitative from qualitative research. I DO think that there are important differences between the two, and one of these is the difference between thinking in terms of variables and thinking in terms of events, meanings, and processes. Milgram's detailed account of what happened during his experiments wasn't simply a description of the "experimental procedures"; it also included narrative, idiographic accounts of how individual participants responded to the researcher's instructions and the "subject's" increasingly agitated reactions to the "shocks" being administered. To me, these (and the interview data) are clearly qualitative, and Milgram used these to develop and support his conclusions. I don't think Milgram SAW these as "qualitative"; if he had, he could have used more systematic methods of qualitative data analysis. But remember that this study was done in the early 1960s, before Glaser and Strauss's The Discovery of Grounded Theory put qualitative analysis on the map, and even before the term "qualitative" became widely used for this approach.
Joseph, I think you are perfectly right about Milgram's experiment. I hadn't thought of that. It is a good example of a fusion rather than the parallel application of two distinct procedures. As an answer perhaps more to Thomas' query, I would suggest that what makes Milgram's study qualitative is perhaps not the description of the experimental conditions (everybody describes the experimental conditions) but the description of the behaviour of the participants in response to the test conditions. This leads to the question: is it then the practice of describing the behaviour/replies of participants in qualitative terms (non-numerical), rather than in numerical values of predefined variables, that distinguishes qualitative from quantitative research (which is, I think, Joseph's thesis)? I think I can agree that this is a good way to describe current practices. But I am not yet persuaded that we should think of this difference as a difference between two categories, rather than two extremes of a continuum. I think one can argue that even Milgram, in his qualitative description, uses terms that really serve as variables, although they are not expressed in numeric values. We can't really avoid using typifying terms in our qualitative descriptions (conformist, rebellious, etc.), although of course these terms don't always have a clear definition (they lack construct validity).
Thank you for answering my provocative statement. I agree with Rognvaldur that we should imagine it as a continuum; I would add, as a multidimensional continuum. I also agree with Joseph that historically the conceptualization goes back to the sixties... I fear a little bit a growing gap between qualitative and quantitative research, or researchers. Therefore I prefer not to split the method canon into two categories.
Let us have a look at a simplified case study: in psychotherapy research, for example, there was first, if we want to evaluate it post hoc, a phase of rather qualitative research (case examples, Freud, Jung etc.), but today these case studies, especially through historical research, have been identified as often arbitrary, sometimes close to pure fiction. To cope with that problem, group-statistical methods were developed, but they also came to their limits. Afterwards, criticism came from researchers who called themselves qualitative researchers. They identified a huge number of critical aspects and helped a lot to enhance validity, credibility, humanity and fidelity. Both groups, the traditionalists who from then on were called the quantitative researchers and the new group, the qualitative researchers, learned from each other, but they also built up an institutionalized distinction that, as I stated (following Rognvaldur), is rather a multidimensional continuum.
My conclusion would be that both approaches should be (re)united to produce better studies and better research, by respecting sophisticated arguments from both points of view. I think multiperspectivity helps a lot on the way to validity (combining methods is often successful in other disciplines, like physics or biology). In my opinion every individual study can be improved by criticism from other points of view and by additional expertise.
(I have to admit that I am surprised that the very small discipline of music therapy seems to be repeating the development described for psychotherapy, only years later.)
Regards Thomas
I think there is a difference between what mixed methods as a field might be capable of doing and what is currently the dominant approach there, which I see as a technical emphasis on the practice-oriented aspects of research design. I have to admit that I do quite a bit of the latter myself, and think it is a real improvement over the more limited options that we used to have. Still...
Some of us are indeed pushing for new paths such as pragmatism or critical realism, but I think that a lot of people would prefer that all the philosophical discussions would just go away. That is the sense I get from most of the folks who want to concentrate on technical aspects of research design instead of the bigger issues.
Joseph, I searched for your paper on "Designing Integration in Mixed Method and Multi-method Research" and didn't find it. Maybe you posted it earlier and I just missed it?
David, I posted the paper earlier in this discussion, but have attached it again to save everyone time finding it. It is a near-final draft of a chapter to appear in the Oxford Handbook of Mixed and Multi-method Research.
Rognvaldur, my thesis is that there is no single, "basic" difference between quantitative and qualitative research. They both involve complex "mental models" through which their practitioners typically view the world. An important aspect of these models is that quantitative researchers typically conceptualize what they are doing in terms of variables (which can be measured either with verbal categories or numbers), with a goal of aggregating data on these variables to produce general findings, while qualitative researchers typically do NOT think this way, preferring to develop particularistic, in-depth descriptions and interpretations of the phenomena they study.
This is a really interesting discussion, dealing with issues I faced whilst researching for my PhD! There are some researchers who are die-hards at either the qualitative or quantitative end of the continuum; their views may never be reconciled. However, I think we should draw a line under all the arguments about the differences between the two and embrace both for what they can offer to research. Teach students to consider which would be the best method to answer their research question, and to use more than one if possible to gain a better understanding of the research topic. Why do we need philosophical debates? Surely it is more important to use the best method(s) ethically rather than become embroiled in the stance we are taking?
Stephanie, I perfectly agree that what we should be teaching students is that they must always begin by formulating a research question and only then consider what is the best method to use, and, furthermore, that in making that choice they should be well informed about the strengths and weaknesses of each method. And let us also agree that both qualitative and quantitative research, even as conducted now, despite all their potential flaws, are good research well worth doing. There is only the question of whether it can be done better, not whether it should be done at all.
However, there is no getting round the philosophical debate, simply because the questions 'what is the best method to use' and 'what is its ethical application' are both philosophical questions, or at least involve questions that are essentially philosophical. The special sciences are focused on various domains of research questions, say, 'how do we keep/regain our health' (health sciences), and in pursuing that question they are not investigating the nature of valid inferences, various types of bias, what knowledge is, objectivity, subjectivity, etc. These notions are the subject matter of the philosophy of science, and are all relevant to the question 'what is the best way to conduct research'.
Of course, we may have different ideas about what it is for a discussion to be 'philosophical'. To my mind, whenever we turn our attention from what it is we are doing and ask ourselves 'what am I doing, and should I be doing it this way?', we are engaging in philosophy. It is when we reflect on the things we take for granted and ask ourselves whether we should think of them in the way we actually do that we become philosophers. This is what is meant by the 'emancipative power' of philosophy, notably its power to liberate us from false ideas about how things must be that have simply been taken for granted because they were never challenged. For instance, that homosexuality is 'unnatural', that women are subordinate to men, and that qualitative research proceeds without hypotheses to see things as they really are, in order then to generate hypotheses through some inductive process. In this sense, 'philosophy' is not restricted to the activity of the academic discipline that we call philosophy, but simply is whatever we do when we engage in self-reflective and critical thought. It is just that philosophers are particularly trained to engage in that kind of self-reflective and critical thought. Unfortunately, the fact that philosophers sometimes take self-reflective and critical thought to previously unheard-of levels of absurdity gives philosophy a bad reputation : )
Stephanie, I agree with Rognvaldur that we can't do away with discussion of philosophical issues. An important reason for this is that philosophical ideas and assumptions are real properties of researchers, ones that strongly (and often unconsciously) shape their theoretical and methodological decisions. I do agree that the prevalent view that the main philosophical issue is the conflict between two monolithic and irreconcilable "paradigms" is seriously misguided; see my "Paradigms or Toolkits?" paper, posted earlier in this discussion.
However, I disagree with the widespread assumption that the initial and determining decision should be formulating your research question, and that your methods decisions should follow from this. The issue is more complex than simple linear determination; research questions are often (and should be) influenced by what methods are feasible and appropriate for the study, and what validity threats these raise. Neither the question(s) nor the methods should be the determinant of the design; each should inform the other. See the attached paper; there's a more detailed explanation of this perspective in my book Qualitative Research Design: An Interactive Approach (Sage, 3rd ed., 2012).
Thomas, the explicit and systematic use of mixed methods goes back a lot farther than the 1960s; the usual view of this history in the mixed methods community is shockingly myopic and ahistorical. See my previously-posted paper on Designing Integration in Mixed and Multi-Method Studies.
Joseph, I think we may be back again to a disagreement turning on nuances of 'determine' (i.e. as a single sufficient determinant vs. an influence). I admit that when we are deliberating about our research questions, background knowledge of feasible research methods makes up the pre-understanding with which we interpret our situation and thus influences (determines, in the weaker sense) what we perceive as an interesting research question (which is just a paraphrase of what Kuhn said about research paradigms). When we have a rough idea of what we want to investigate, methodological concerns will enter more explicitly into the further refinement of the design of the particular study, including the refinement (or further specification) of the research question (which I think is a rough paraphrase of how you think concerns about method and research question interact in the design of a study).

I take this to be an unavoidable consequence of the hermeneutic thought process, rather than a normative statement about how we ought to proceed, because it can be a double-edged sword. In the ideal case our background assumptions about methods are correct and complete, and we make a good call about what kind of method to use for a particular research question (and vice versa). However, if our methodological knowledge is askew, there may be problems. If we think quantitative methods are THE methods to use, we may be misled into using them for problems for which they really aren't suitable (and similarly for those who are overconfident about qualitative methods). In the former case, the interaction is good; in the latter it is not. In any case, interaction of this kind between methodological questions and research questions cannot be avoided. And yet, I think it is right to say that when we are initiating a research project, questions about the subject matter are at the focus of our attention while questions about method are what we attend from. And that is perhaps the only sense in which I would insist that we first focus on the research questions, on what it is we think is an interesting research problem.
Let me end by saying that I am surprised to hear you say that it is a widespread assumption that research questions come first and the methods later. I would have thought that social and health science in general are better characterised as an activity where researchers make an initial choice (even already as undergraduates) between joining the cadre of quantitative researchers, or that of qualitative researchers, and then they stick to their guns. The cases where people start out with one (or both) and then flirt with the other seemed to me to be the exceptions rather than the rule. I would however better trust your judgement than mine when generalising about these things.
Joseph,
I do not want to be misunderstood. I am aware that qualitative methods are older, but that discussion (e.g. the German Positivismusstreit, the 'positivism dispute') was of less relevance to the topic I described (psychotherapy research). Furthermore, I only wanted to make the statement that the split of methods into two categories is a problem that, in my opinion, should be solved in order to produce better methods. I think we do not need two methods with different methodologies for the production of research on the same contents.
And I agree with you that methods also determine the questions (paradigms) and that research questions do not follow a linear determination. But on the other hand we should not use methods for the sole reason that we have them.
Stephanie,
Not to reflect on methods and methodology would mean, to me, that it is better to act than to think. Both aspects are relevant.
Regards
Thomas
Rognvaldur, I was referring to the standard view presented in research methods texts, that the research questions should determine the methods used, rather than vice versa. It is certainly true that many researchers have already made (consciously or unconsciously) methodological commitments prior to formulating their research questions, but methodologists typically inveigh against this, going to the other extreme. I think both extremes are dangerous to developing the most effective design; this is why, in my research design model, I place the research questions at the center of an interactive system, rather than as the determinant of other design components.
Joseph, I should have been more precise. It has seemed to me that, despite what many textbooks on methodology say (by no means all of them do), the reality is that researchers commit to a method first and ask questions later, or, which may be fairer, that they limit their choice of questions to those that allow investigation by the method they have adopted. It also has seemed to me that many textbooks take for granted that the choice of method is already made, which is why we have specialist books on quantitative vs. qualitative methods, as opposed to books that cover research methods in a discipline generally. Finally, I do agree that once you explicitly start to consider an effective design (and I take it that we are discussing the design of a particular study that aims to investigate something in particular) there needs to be a balance between the research question and the methods used, the same kind of balance we need between considerations of aims and of means.

I think my position is really that, although they are difficult to separate from each other, aims surely have some sort of priority over means. You seem to disagree. Having said that, I get the feeling that we have now narrowed our disagreement more or less down to the level of 'which came first, the hen or the egg'. I am happy to concede that research is a continuous, ongoing process where research questions and methods are intertwined in such a way as to make it at least pragmatically sound to consider them to be on an equal footing. If they still happen NOT to be on an equal footing, I can't see that we would have risked making very serious mistakes, or even that we would have to change our pragmatic thinking to any significant degree. I may have been headstrong in this debate because I am used to discussing with people who do favour one method over the other, let their choice of research questions be governed by it, and indeed can't understand the appeal of the other.
One problem in our debate is that there seems to be no advocate of the quantitative approach engaged in our discussion. Therefore I sometimes try to take over their part. In my opinion, your Table 9.1 and the explanation in the paper you kindly provided exaggerate the differences and are in some respects not really valid.
Purposes: So-called quantitative researchers would not agree that their purpose is pure measurement, or establishing relationships between variables, or inference from sample to population. They would also tell you that meaning is what they are searching for, but that they often measure, correlate and use inference methods (and they also build models, such as causal models). Quantitative researchers are also interested in context (they call this interfering or confounding variables, etc.) and in process (process can also be quantified, if you will). Unanticipated events, influences, and conditions are also relevant to the quantitative branch. And it is an ethical value in research that the participants should be understood; that this is often forgotten is not a fault of the method.
And a lot of work has been invested in understanding single cases by "quantifying case studies", for example as measures of quality management.
As for the inductive development of theory, we debated that a lot in the block above. I am with Rognvaldur that it is not possible to start without pre-assumptions.
Conceptual framework: So-called quantitative researchers would deny that all quantitative research belongs to the category of variance theory; there is also Bayes' theorem and other quantifying possibilities (non-parametric methods). Process theories exist in abundance in 'quantitative research'; for example, most process-outcome studies in psychotherapy research imply process theories.
Research questions: So-called quantitative researchers would not believe in variance questions; they sometimes use variance to operationalize research questions. It is nearly the same with correlation. And the 'truth of any proposition' should be relevant to both the quantitative and the qualitative perspective. What do presence and absence mean? Both aspects can play a role in many research approaches. Hypothesis testing is what quantitative researchers sometimes do, but they also describe datasets or conduct exploratory studies.
Process questions are often an object of interest in quantitative research; they do not belong only to qualitative approaches. The questions 'how' and 'why' are relevant for both approaches, not only for qualitative research; they belong to theories rather than to research procedures. Questions about the context are also relevant for quantitative researchers. Using hypotheses as part of conceptual frameworks is explicitly what a lot of researchers in psychology do, for example when they are validating a psychological test. The causality problem is too complex to discuss here.
The questions ‘What is happening here?’ and ‘What are the characteristics of this phenomenon?’ are relevant in every research.
Research methods:
Relationships: The argument that quantitative researchers look for objectivity and see the researcher as an extraneous variable, whereas in qualitative research the researcher's influence is used as a tool and researchers are part of the process, is an interesting part of the critique of reductionism, but it should play a role in all research (Rosenhan, Rosenfield). And, qualitative or not, observations in ethnology can be emic or etic …
Sampling: I can't see why quantitative researchers should not sample data purposefully. What is meant by the concept of 'probability sampling'? And establishing valid comparisons is not only an aspect of sampling.
Data collection: Here the main point seems to me to be that adapting to the particular situation is qualitative in your view. But if, for example, an RCT is conducted, a lot of work is always invested in adapting to the particular situation the RCT is constructed for. Standardization is really often done in order to compare different aspects. And here we are again where we were earlier in this discussion: probably the only real difference is the data that are used (textual, visual, or measurements). And is it really true that quantitative data collection is typically pre-planned, structured, and designed to ensure comparability across subjects and sites? A lot of studies using quantitative data are exploratory, and the use of quantitative data in single-case course studies is widespread.
Data analysis: Usually in quantitative methods numerical analyses are preferred. Yes, there is a difference.
Validity:
Internal validity: I do not know what statistical conclusion validity means. But if you understand it as the use of mathematical/statistical models, you may be right in identifying a difference. However, what you have put in the category of qualitative research is also relevant for quantitative research(ers). Interpretive validity, in particular, is a huge enterprise in all research. Every journal attaches importance to the discussion of a presented study; whether it uses qualitative or quantitative data does not play a role. Also, the idea that causal validity implies the identification and assessment of alternative explanations does not belong only to the analysis of qualitative data; it is also relevant in the analysis of quantitative data (see Popper).
Generalizability: For so-called quantitative researchers, external validity is not only comparability; it is also transferability and the question of the fit of a theory to the data.
To me it seems that I can reduce the difference between qualitative and quantitative research to the use of numerical relations, statistical methods and models on the one hand, and text, text analysis, group discussion … on the other. Every other aspect seems to me to be relevant to every use of research methods. The integration of both institutionalized categories seems to me easy and possible. Therefore I vote for the integration of these artificial categories and not for mixing them, especially because I think that researchers can learn a lot from the debates and arguments that were brought up by qualitative researchers. But there is no need (in my opinion) to build up a split in research methodology.
Additionally, there is the danger of being a reductionist in both approaches. Text, words and sentences, as well as numbers and mathematical formulas, are abstractions. And all research done in the social sciences aims to understand humans and human events, and in the end deals with human affairs.
Often in your table and argumentation you reduce the work of your quantitative researchers to some statistical methods or concepts (variance), whereas in the qualitative category you argue more abstractly. But research is never only quantitative. The consequence is that the comparison often does not really fit.
Sorry, for being provocative again, but I think provocation fosters discussion.
(Additionally, I want to be understood the right way: I am not a supporter of the naïve use of quantitative methods and do not define myself as a quantitative researcher. I sometimes took that position to underline that the categorization into qualitative and quantitative methods is limited, because it splits the world of research.)
Regards Thomas
With regard to the extent to which questions influence methods or the reverse, I always use the example of how likely it is that a lifelong qualitative researcher will suddenly decide to use structural equation models because she or he realizes that would be the "best" method to address a given question. Alternatively, would a committed quantitative researcher decide to become an ethnographer because that was what a question "required"? (I suppose one could find an example of one or the other of these extreme cases, but the point remains that virtually no one would even consider such a switch.)
As a sociologist and social psychologist, I also go beyond the question driving the method or vice versa to include some other important factors. These include: personal interests, available resources, desired audiences, and power relations. My feeling is that we too often "decontextualize" the factors that influence researchers' choices, and instead concentrate on a strictly methodological decision making process.
With regard to the other thread, I definitely agree with Thomas that quantitative research is often more free-form. I have published any number of papers using regression (and even one where I did structural equation models), and I pursued a lot of "inductive" exploration in my data to come up with the most meaningful "narrative" that I could construct. My publications then used "reconstructed logic" in the problem statement and literature review to show why the work I did was what my research area needed. Of course this is nothing new -- Latour and Woolgar's "Laboratory Life" made similar points through a detailed ethnographic study of prize-winning biological research.
And to bring my two lines of thought together, when I do both my qualitative and my quantitative work, I unavoidably tap into my personal interests, as constrained by my available resources, and shaped by the audiences I want to influence. As for the influence of power, probably best to leave that to speak for itself (sigh).
Thomas, I really like your most recent contribution. I think you pretty much hit all the spots. Which is of course not to say that we have got it sorted now, rather I think you've identified the questions we should be asking, the doubts we need to express. You are exactly right about one thing, which is that provocation fosters discussion; at least when the provocation comes in the form of a good argument in favour of a contrary position, or elucidation of a position taken for granted.
Might you all agree to analyse our discussion as one where one side explores the basis for making a distinction (Joseph?) while the other explores the possibility of collapsing the distinction (me, Thomas, David?), perhaps ending up agreeing in the end that we have a continuum of possible research orientations where individual studies end up more on the quantitative or qualitative side of things? I am prepared to accept Joseph's table as outlining possible extremes (at least as mental models), but due to the kind of concerns that Thomas expressed so well, I am perhaps not prepared to think of them as ideals to be copied; I even doubt that the extremes can be realised in practice, although reflecting on them may help us understand research better. I am not sure Joseph's table is meant to do anything more than guide understanding of the various dimensions in which various studies can differ in degree.
There is of course, as a matter of historical fact, a rough distinction into two research traditions. However, the question is if that distinction should prevail and to what extent. I am certain that existing research traditions within the sciences, and the structure of the way the sciences foster new researchers, have tended to push people towards a choice between a career in one or the other tradition, which in turn has served to push people towards the extremes; that is the downside of specialisation trends. How feasible it is to actively work against it, I don't know. But, if each tradition comes to a better understanding of itself through self-reflection they might fuse anyway.
Thanks to Rognvaldur and to David for some supportive statements.
In my experience, and I have done (or been part of) both qualitative studies and well-published quantitative studies, in the end it was always a blend (if you will). My experience is that all methods are limited. I am really critical of typical, common or mainstream methods (e.g. RCTs, effectiveness studies, experimental studies) too, despite having been part of them. These approaches are often put into the quantitative category.
To point out some of the critical aspects of, for example, RCTs:
• RCTs often do not reflect real practice in psychotherapy (research).
• Homogeneous groups are often constructions that cannot be reproduced in clinical reality.
• With the choice of specific control groups one can modulate the expected effect. Therefore a smart researcher is able to reap what he has sown.
• Only a few researchers treat p-values and effect sizes as estimates, or think about the meaning of these mathematical models (see the sketch after this list).
• …
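On that point about p-values and effect sizes being estimates, here is a minimal sketch (entirely made-up data, not from any real study) showing an effect size reported with its approximate uncertainty rather than as a single fixed value:

# Illustrative only: Cohen's d reported as an estimate with an approximate 95% interval.
import math, statistics

treatment = [5.2, 6.1, 4.8, 7.0, 5.9, 6.4, 5.5, 6.8]  # hypothetical outcome scores
control   = [4.9, 5.0, 4.2, 5.6, 4.7, 5.3, 4.4, 5.1]

n1, n2 = len(treatment), len(control)
m1, m2 = statistics.mean(treatment), statistics.mean(control)
s1, s2 = statistics.stdev(treatment), statistics.stdev(control)

# Cohen's d with pooled standard deviation
pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd

# Large-sample approximation to the standard error of d
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, approx. 95% CI: [{d - 1.96 * se_d:.2f}, {d + 1.96 * se_d:.2f}]")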
Added to what's going on since very critical studies (e.g. Fanelli 2010) were published (something that is currently often called the crisis of confidence), the only possibility seems to be to build up new and better reflected approaches. It has, for example, been shown that the number of positive results is exaggerated and that replication studies are few. But there are many more problems. Just to name some of them: the allegiance/stakeholder problem, systematic errors, failed self-correcting mechanisms in science, the production of justificatory research instead of research that wants to produce knowledge (German: Wissenschaft), and research as a control mechanism despite scarce knowledge (e.g. the typical use of meta-analyses).
On the other hand, there is the tradition of qualitative research, which could now be seen as an alternative approach to that disaster (but that would be cynical). In my opinion, a couple of these problems could be solved by integrating some of the critical discussions that were contributed by qualitative researchers.
Some literature about current crisis:
• D. Fanelli: "Positive" Results Increase Down the Hierarchy of the Sciences. PLOS ONE. 2010, 5 (4): e10068.
• J.P.A. Ioannidis: Why Most Published Research Findings Are False. PLOS Medicine. 2005, 2 (8): e124.
• N.S. Young, J.P.A. Ioannidis, O. Al-Ubaydli: Why Current Publication Practices May Distort Science. PLOS Medicine. 2008, 5 (10): e201.
• H. Pashler, C.R. Harris: Is the Replicability Crisis Overblown? Three Arguments Examined. Perspectives on Psychological Science. 2012, 7: 531. DOI: 10.1177/1745691612463401.
• J.P. Simmons, L.D. Nelson, U. Simonsohn: False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science. 2011, 22: 1359. DOI: 10.1177/0956797611417632.
• H.M. Fuchs, M. Jenny, S. Fiedler: Psychologists Are Open to Change, yet Wary of Rules. Perspectives on Psychological Science. 2012, 7: 639. DOI: 10.1177/1745691612459521.
• J.P.A. Ioannidis: Why Science Is Not Necessarily Self-Correcting. Perspectives on Psychological Science. 2012, 7 (6). DOI: 10.1177/174569161246056.
Rognvaldur, you are right with your proposal that a systematic discussion of the advantages of 'both approaches' could be an interesting enterprise.
Regards Thomas
At the risk of being an outlier, here goes. When I was younger, this appeared to me the way to answer the question: engage in an epistemological discussion. Now I am not so sure. For the most part knowing is a tricky affair, and error and muddle are much more common than we allow. One reading of Kuhn is that ways of knowing are so inherently flawed that only a community within which debate occurs around a shared set of rules as to what constitutes knowing can come to a consensus, and that consensus is provisional. What we know for the most part, and what we study, are what Searle might term institutional facts. These facts are primarily outcomes of intersubjectivity. I prefer a different approach in which the object of study is tightly constituted, as suggested by Bourdieu. Method then becomes a practical activity, a set of material practices, and not a surrogate for how we go about knowing how we know. What is important is not the knowledge generated by method, whether multi-method or not, but rather whether the methods can be replicated successfully and the findings either challenged or confirmed. What the findings mean is an issue separate from how they were drawn.
Hello everyone! It took me about a month to find the time to read the discussion with the attention it deserved. I would like to thank you all for the discussion, the references, the papers and the ideas [it felt like an intensive course in epistemology to me]. What I feel I need to add here is the practical problem that people like me, who share all your worries and quests, face a tremendous barrier while conducting research. I mean, in economics, which has been my research field, even discussing the relationship between qualitative and quantitative methods seemed beside the point. Qualitative methods are seen as being of lower status, to the point that my own supervisor used to tell me "this is not economics, what you are doing". Even if the researched topic is something we know nothing about, and there is no literature worldwide to give any hint of "what is this?", the need to choose a mix of methods according to the topic, to the peculiarities of the research questions, to the ethics towards the research participants and their communities, etc., is not taken into account. The funny thing is that the "looking for hard facts" economists have no other way to approach such a topic, so they simply do not conduct research on it, instead of at least proposing their own methods and proving the appropriateness of those methods for a specific field. Once someone wants to do this work, then "the methods are never enough", because it should be something like state agencies' statistics. No matter how much literature I provided people with on this epistemological topic, it was not enough to shake their bold assumption that economics is hard statistical data and hard statistical data only.
Irene, I've encountered this problem as well, although not specifically with respect to economics. I don't know how you've argued for the value of qualitative approaches, so I can't give specific advice. I've attached a paper in which I addressed this issue more generally, but I'm not sure that any of the strategies I describe will solve your problem. I also think that the idea of an epistemological "divide" between quantitative and qualitative researchers is neither accurate nor helpful.
I apparently can only add one paper at a time; here's a second, more recent paper that discusses the "mental models" of quantitative and qualitative researchers, and implicitly argues for the joint use of both models, what's generally called "mixed methods" research, an approach which has been used in economics.
Here's a paper that cites some mixed methods studies in economics.
Irene,
You should consider reading several of John Kenneth Galbraith's books, notably "The Affluent Society" and "The New Industrial State". He has many useful things to say about the state of economic theory and methods.
For a different perspective entirely, you might want to read "Is a Disinterested Act Possible?", a paper by Pierre Bourdieu which I believe is reprinted in his book "Practical Reason". Bourdieu was not against qualitative or quantitative methods and was a quintessential multi-methods sociologist/anthropologist. As you probably know, he was not a big fan of regression models, as they did not deal well with the relational aspects of ideas, people and fields. It will be quite hard for you to go against the conventional wisdom of your discipline. As a new entrant in the field, trying to create capital for yourself and trading on the capital of your supervisor, you have the low-risk strategy, which is to conform, or the high-risk strategy, which is to subvert, if you want to maximize your capital. The latter is very tricky in a five-year Ph.D. program.
Things to consider. Good luck dealing with the dynamics of your field.
Irene,
I sympathize with your situation, and I wish that I had some helpful advice. To my mind, you are up against a strong set of cultural assumptions, where the professors you describe cannot even conceive of what you are proposing.
Sadly for you, there are elsewhere only a few departments of economics that have a greater acceptance of work oriented toward discovery and exploration. In the U.S. this is known as political economics, and it was once a thriving sub-discipline, as practiced by people such as J. K. Galbraith. Now, most of the work here is limited to intensely quantitative analysis of secondary data sets.
Of course, at the other extreme, there are still some anthropologists who are suspicious of anything quantitative, so it can go both ways.
I guess the larger lesson is that methods do not have some free-floating reality; instead, they depend on context and culture.
Thank you so much for the advice and the papers. I finally opted for doing what my subject matter was "shouting" for, which obviously did not make most economists very happy. I have finished my PhD and I insist on remaining aware of the multiplicity of methods available to any social scientist. What some anthropologists say about quantitative methods is also problematic, because quantity is an aspect of human societies anyway; we cannot avoid it, and even if we do, our participants will remind us of it.
Then, I appreciate the advice very much (I had also downloaded the papers offered previously in the discussion), because I want to work more on this, i.e. on methodological issues in economic research. Moreover, having gathered a lot of quantitative data, I realised once I had them at hand that the statistical methods and tests we have in economics are completely useless for that specific data. I mean, I had data concerning non-monetary transactions, or multiple valuings of the same items performed not in official currency, so I had to deal with the data within the context in which they exist (non-capitalist structures) and not transfer them into some official currency value, which would not have been possible anyway. In the end, I was thinking that the statistics we have in economics are probably too embedded in capitalist structures, and economists would need other quantitative tools to deal with other types of economies. I had no time and no maths background to research in that direction during my PhD at all.
However, given that I continue to work on topics quite far from the mainstream economy, I face these problems every day. Thank you so much!
Great question, and something I've been thinking about a lot recently. The epistemological divide between quantitative/qualitative, inductive/deductive, etc. becomes even more confusing when you consider mixed methods within the qualitative world... E.g. does narrative analysis assume constructivism while grounded theory assumes a positivist view? It is my belief that we can mix a variety of data collection and analysis tools as long as there is consistency in the philosophical perspective, and a flexibility and willingness to step outside our paradigm in order to critically evaluate that which we represent. I do not think we can change our epistemology easily, chameleon-like. There is no such thing as objectivity; we bring our Selves to this field. Some interesting reading: http://hpq.sagepub.com/content/5/3/285.short