During a discussion with a fellow researcher on using qualitative research to enrich and add variety to undergraduate theses, we debated descriptive, explanatory, and exploratory methods. Some said these form an order of rigor, with descriptive ranked below explanatory; others said that qualitative research, by its nature, mixes and combines methods according to the purpose of the research.
One side argues from a quantitative standpoint and asks for clarification of qualitative work, which by its nature seeks to uncover and describe phenomena.
I need your expert opinion on this.
Yes, we do quantify quality to give it a benchmark, but the quantity is frequently not a relevant benchmark to the conversation. At its core, qualitative research addresses questions that are NOT quantifiable (if they were, they would be addressed using quantitative methods), not because they are "immature," but because the individual experience needs to be examined in itself, not as a reflection of a larger truth. There is an epistemological assumption about what research is attempting to learn when one assumes that a case represents a group, and that quantity identifies some specific characteristic of that group. If you are not operating under that epistemology, the process used in quantitative research is nonsensical and by definition will lead to incorrect inferences.
And I am as quantitative as they come when it comes to my own research. But that is because I am typically asking questions about large populations, about which small scale studies are limited in their ability to inform.
We need to quantify qualitative data (into statistical or mathematical models) in order to understand it: to draw conclusions, fit models, generalize the results, and so on.
Besides, qualitative research is not as mature as quantitative research, and is therefore considered less reliable by some decision makers.
For example, if someone says that "product A is good", we cannot draw a significant conclusion that product A is good. However, if we count the positive words mentioned by all the interviewees against the negative words about the same product, we might be able to draw a significant conclusion.
In computer science we call this natural language processing (NLP). A large number of qualitative analysis software packages depend mainly on NLP, and the understanding part of NLP is not yet mature. However, the future is promising.
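The counting idea above can be sketched in a few lines of Python. The word lists and sample responses below are invented for illustration only; they are no substitute for a validated sentiment lexicon or a real NLP pipeline:

```python
# Tally positive vs. negative words across interview responses.
# POSITIVE/NEGATIVE are illustrative placeholder word lists.
POSITIVE = {"good", "great", "reliable", "useful"}
NEGATIVE = {"bad", "poor", "unreliable", "useless"}

def sentiment_counts(responses):
    """Count positive and negative word occurrences across all responses."""
    pos = neg = 0
    for text in responses:
        for word in text.lower().split():
            word = word.strip(".,!?\"'")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    return pos, neg

interviews = [
    "Product A is good and very reliable.",
    "I found it useful, though the manual is poor.",
    "Good value overall.",
]
pos, neg = sentiment_counts(interviews)
print(pos, neg)  # 4 positive vs. 1 negative mention
```

This is exactly the "count positive against negative words" step; a real package would add tokenisation, negation handling, and a curated lexicon, which is where the immaturity of the "understanding" part shows up.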
This is a good question with many possible answers.
One approach to answering this question is to come to terms with the notion of quality itself. This is done in
M. Pedersen, Image quality metrics for the evaluation of printing workflows, Ph.D. thesis, University of Oslo, 2011:
See Section 2.1, starting on page 13, for the definition of quality. Basically, the quality of an entity is the conformance of the entity to requirements. It is as @Ahmad Hassanat has suggested: we quantify qualitative data to better understand them. Taking this a step further, it is natural to ask the question: To what extent does a particular quality of an entity conform to a requirement? Quantifying quality gives a benchmark by which we can compare the quality of an entity with the quality of other entities.
To describe the whole of a qualitative research outcome objectively, one must quantify. This is not numerical analysis as such, but only a way of describing the extent to which the specific phenomena appear as outcomes. It is all about forming further hypotheses.
I think it is this stage of quantifying qualitative work where most of the effort in a scientific venture is spent. The mark of an excellent researcher is that the measures he uses are "fair" and are not aimed at misleading readers, but are a means of quantifying actual performance, as you have stated. Such quantification helps a researcher rate different methods and select approaches for attacking new and more diverse problems.
However, this does not undermine the task of qualitative research, since a sound quantitative measure must be built on top of it. Nothing can be good without both the foundation and the visible structure being immaculate. The quantitative analysis helps you present your work effectively, whereas the qualitative part ensures that what you present will have a resounding effect on the community and stand the test of time.
Dear Julia,
I did not mean that qualitative research is "immature". I meant that the number of quantitative studies is much larger than that of qualitative studies, and that the "understanding" part of the NLP used by many qualitative analysis software packages is definitely immature.
Quantitative analysis of qualities is possible, but even when measured exactly, the outcome is not always satisfactory. It is fine if we can unequivocally answer a binary question: is event A important or not? Then the answer "Yes" or "No" is clear and perhaps useful. An answer of "it is 49% important" is exact and can be understood as event A being important in 49 cases out of 100. But this answer may not be as useful unless we know exactly when and where, along the continuum to 100%, event A becomes important. In some such cases, perhaps a Bayesian approach can help?
Both methods, i.e. qualitative and quantitative techniques, are important. Sometimes, in the case of a very new study, qualitative research becomes the basis of quantitative research.
Hi Dony,
Could it be because scientists with a mathematical background have been educated in the spirit that quantification leads to a more objective view and more possibilities for interpreting (qualitative) research results? By the way, why do sociologists apply statistics? And why do physicians collect statistics on the occurrence of disease and causes of death in what is called epidemiology? For the fun of it? Like collecting stamps?
I don't think so. They try to learn more about human social behaviour as well as the origin and frequency of certain diseases and, even more, the causes of death. You could describe all this qualitatively as well. But isn't that a waste of time, money and, even more, lives?
And what is the problem with maths and statistics anyway? They are efficient tools in the natural sciences. Why not apply them to the qualitative sciences? Not possible? Think again!
Cheers,
Frank
There are a number of different issues here, and there are often good reasons to quantify qualitative data (see attached paper). However, it is simply not true that quantitative research is more "mature" or "objective" than qualitative, although I'm not sure that all of us mean the same thing by "qualitative." It is also not true that qualitative research is necessarily "exploratory," and only quantitative is "explanatory." Often, qualitative research is necessary in order to explain quantitative findings--to uncover the mechanisms and processes that result in a correlation. This is why Creswell and Plano Clark, in their textbook on mixed methods research (2009), label research designs in which an initial quantitative study is followed by a qualitative study as "explanatory" designs.
Creswell, J., and Plano Clark, V. (2009). Designing and Conducting Mixed Methods Research. Los Angeles: Sage.
Here's another paper that addresses the explanatory value of qualitative research.
Quantitative research may involve the following steps:
(1) Obtain experimental data
(2) Propose a mathematical model
(3) Justify the experimental data using the mathematical model
It may happen that the experimental data do not match the mathematical model. Here the importance of qualitative understanding comes in: unless we understand the system qualitatively, we will not be able to find any error introduced into the experimental procedure. Abstractions are also necessary in mathematical modeling; depending on the domain of interest, we choose a particular modeling level in the abstraction hierarchy, and here too qualitative understanding plays a very important role. A maturity comparison therefore probably does not exist.
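The three steps can be illustrated with a toy sketch (assuming NumPy is available; the "experimental" data below are fabricated for the example):

```python
# (1) obtain data, (2) propose a model, (3) check the data against the model.
import numpy as np

# (1) "Experimental" data: a roughly linear process with a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# (2) Proposed model: y = a*x + b, fitted by least squares.
a, b = np.polyfit(x, y, 1)

# (3) Justify: inspect residuals. Large residuals signal either a
# measurement error or a wrong model, and qualitative understanding
# of the system is what guides that diagnosis.
residuals = y - (a * x + b)
print(round(float(a), 2), round(float(b), 2))
print(bool(np.max(np.abs(residuals)) < 0.5))  # True: the linear model fits
```

If the residual check failed, step (3) would send us back either to the laboratory (suspect the data) or to step (2) (suspect the model), which is the point being made above.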
In the competition for substantial and reproducible 'proof', and although mathematics can be made to deceive, most people accept that "numbers do not lie". Thus 'truth-asserting' numbers can also corner and oppress. From this, I think, arises the sense that description is immature when it lacks 'substantialness', the sense of stability and permanency found in physical measurements: conceptual energy divided from a humanly determined status quo, confined and frustrated in its expression.
To add: in anthropology it is the consensus that determining forces originate externally, beyond the witness of the populations and occupied spaces studied.
It is to increase the external validity of results and make them more generalizable.
Several unique aspects of qualitative research contribute to rich, insightful results:
--Synergy among respondents, as they build on each other’s comments and ideas.
--The dynamic nature of the interview or group discussion process, which engages respondents more actively than is possible in a more structured survey.
--The opportunity to probe ("Help me understand why you feel that way") enabling the researcher to reach beyond initial responses and rationales.
--The opportunity to observe, record and interpret non-verbal communication (e.g., body language, voice intonation) as part of a respondent’s feedback, which is valuable during interviews or discussions, and during analysis.
--The opportunity to engage respondents in "play" such as projective techniques and exercises, overcoming the self-consciousness that can inhibit spontaneous reactions and comments.
http://www.qrca.org/?page=whatisqualresearch
I think both quantitative and qualitative enquiry hold much ground. In the end it depends on the question you are trying to answer through your research. It is not a matter of justifying or falsifying one method or the other, but rather of utilising the appropriate method to provide the best results that ultimately support the research findings.
Re: "in anthropology it is the consensus that DETERMINING forces originate externally" (my own post)
I think it is important to emphasize that "change occurs from within". DETERMINING is perhaps not the best term to use; it was meant to refer from the REACTION at the first perspective to the external reaction, and on to further external reactions, ad infinitum. The direction and parameters of change are facets only of possibility.
Very good question and some interesting responses. I would however like to add that the quality of the researcher also matters.
Quite simply, many researchers get the balance between the two approaches wrong, or choose the wrong approach to begin with. In such a case, the fact that a need to quantify your research subject has arisen in the course of doing qualitative research suggests that a change of approach might be required, and vice versa.
Many of the replies appear to assume you are asking about social research methods. Is that the case?
Qualitative data is a key aspect of many scientific endeavours, e.g. historically the colour of a litmus strip. The quantification of the colour is a heuristic device to help us think about the extent of acidification (or is it alkalisation? my chemistry is a little out of date) in a particular instance. Quantifying a qualitative result is not necessarily useful, especially if one wants to understand the dynamics of a particular case. The exception, of course, is if one can develop a formula that explains all the dynamics and can be applied to other cases.
What matters is whether the method chosen is useful (answers the questions being asked) and is accurate. If one wants to know how many people are likely to vote in a particular way, then quantitative methods are very appropriate and reasonably accurate. If one wants to know why people won't buy a particular product, quantification is less useful, though if you understand the factors in such decisions and can measure them, one can do factor analysis.
In relation to social research I just came across an old and relevant article by J. Clyde Mitchell: 'Case and Situation Analysis', Sociological Review, 1983.
Qualitative research methods are often employed to answer the whys and hows of human behavior, opinion, and experience: information that is difficult to obtain through more quantitatively oriented methods of data collection. Researchers and practitioners in fields as diverse as anthropology, education, nursing, psychology, sociology, and marketing regularly use qualitative methods to address questions about people’s ways of organizing, relating to, and interacting with the world.
https://www.sagepub.com/sites/default/files/upm-binaries/48453_ch_1.pdf
Since the mid-1800s, the two ways of understanding reality, positivism and interpretivism, have survived, with ups and downs, within the social sciences along two parallel tracks, contending in each field of investigation. What survived, at least until the 1990s, was the idea of the profound diversity, and therefore distinctiveness, of the two approaches and, in some cases, their incommensurability.
On this basis there are two lines of research, each corresponding to a particular paradigm: the quantitative approach (positivism) and the qualitative approach (interpretivism).
At present, however, the differences between qualitative and quantitative research are identifiable only in theory. If the comparison is moved from theory to the concrete practice of research, it becomes apparent not only that the criteria of distinctness are weak, but also that "there is not a single act, a single decision of research, that is not an inextricable mix of quality and quantity" (Campelli, 1996, 25).
Market research can seem complicated when you are about to undertake it for the first time – what may help is that it broadly falls into two distinctive areas that each have their own strengths: Qualitative and Quantitative research.
A good piece of market research will need to include both of these areas, but certain industries or marketing objectives may end up only needing one depending on the specific information you want to find.
http://www.bl.uk/bipc/resmark/qualquantresearch/qualquantresearch.html
When people talk about quantitative and qualitative data collection they often see them as diametrically opposed to each other but this is plainly wrong. The difference is not at the stage of data collection but at the stage of data processing. To give an example: a common method to get quantitative results is to use multiple judges to assess a situation. Each of the assessments on its own is typically a qualitative judgement, but once we take a summary measure combining the judgement of multiple judges the resulting measure is typically seen as quantitative: a single judge calling something “beautiful” is qualitative, but 6 out of ten judges giving the judgement of “beautiful” allows us to establish the quantitative score of “0.6”.
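That combination step can be written out in a minimal sketch; the judgements below are invented for illustration:

```python
# Ten individual qualitative judgements ("beautiful" or not) combined
# into a single quantitative score, as in the example above.
judgements = ["beautiful", "beautiful", "plain", "beautiful",
              "plain", "beautiful", "beautiful", "plain",
              "beautiful", "plain"]

score = judgements.count("beautiful") / len(judgements)
print(score)  # 0.6: six of ten judges said "beautiful"
```

Each element of the list is a qualitative judgement; only the aggregation over judges turns it into the quantitative score 0.6.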
So when it comes to data collection the difference is not really important. In fact, when applied judiciously, typical qualitative research findings stemming from, for instance, anthropological fieldwork methods can be used to obtain quantitative estimates even at the national level (such as the proportion of households below the poverty line).
If we come to that, many national “numbers”, and what is more quantitative then the national income of a country, are at least partly based on rough qualitative estimates.
So the best reply to doubts of the value of qualitative research is to look at the quality of the numbers used in quantitative research.
By this answer I do not imply that qualitative research is superior! I believe that quantitative analysis is very important and leading to great insights.
For proper understanding, analysis and comparison of qualitative characteristics, we need a quantitative measure. For example, academic performance is analysed using a quantitative metric called marks or grades. So, in higher-level qualitative research problems, there may be many analyses, estimations, tests of validity, etc.
Yes, one can reliably quantify some qualitative data: pH values used to be quantifications of colour, and the colour itself was an indicator rather than a direct measure of acidity. (Nowadays we directly measure the concentration of H+ in the solution, so we don't need to use a qualitative measure.) Similarly, if the phenomenon we are testing is relatively simple, such as understanding of a particular subject, then we can assign numerical values relatively reliably.
However, we need to recognise that in qualitative social research we are often doing something different. We are often trying to understand complex dynamic interactions or negotiations: and we collect very rich data on individual cases. What stands out when we do so, is the complexity of the phenomenon being investigated and the wide variability across cases, as well as some common features.
Grounded theory shows that one can reduce the rich data and compare cases to see if one can generate hypotheses or understandings that may explain the individual cases or generate more general 'rules' (see http://www.groundedtheory.com/ or Anselm Strauss 'Negotiations'). One can also assign values (Yes/No, True/False) to the qualitative data and use something like Qualitative Comparative Analysis (see http://www.socwkp.sinica.edu.tw/CharlesRagin/Ragin_NTU-day1.pdf or get the book Rihoux and Ragin (2009) 'Configurational Comparative Methods' Sage) using boolean logic to compare cases. There is even software one can use (I use Kirq). (BTW a word of caution, the logic of causality in some explanations of QCA is problematic, do NOT assume that necessary causes are enough to explain a phenomenon.)
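As a toy sketch of the Boolean comparison idea behind QCA (not Ragin's full procedure; the condition names and True/False codings below are invented):

```python
# Each case is coded True/False on some conditions and an outcome.
# We then look for conditions present in every positive-outcome case,
# i.e. candidate "necessary" conditions in QCA terms.
cases = {
    "case1": {"funded": True,  "experienced": True,  "success": True},
    "case2": {"funded": True,  "experienced": False, "success": False},
    "case3": {"funded": True,  "experienced": True,  "success": True},
    "case4": {"funded": False, "experienced": True,  "success": False},
}

conditions = ["funded", "experienced"]
positives = [c for c in cases.values() if c["success"]]

# A condition is a candidate necessary condition if it holds in every
# case where the outcome occurred.
necessary = [k for k in conditions if all(c[k] for c in positives)]
print(necessary)  # ['funded', 'experienced']
```

Note the caution raised above: both conditions also appear in some failure cases, so neither is sufficient on its own, and the True/False codings themselves are already the researcher's interpretation of rich data.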
However, if one does assign values to qualitative data, one needs to recognise that one has reduced the rich data and lost much of its value. Moreover, the process of assigning values can be highly problematic. Again, it depends on the complexity of the subject being investigated, the range of variables involved in everyday experiences of that subject, and so on. The premises one uses to assign values may therefore mask very important aspects of the data. The process of assigning values also modifies the data: the coded values are no longer the things that were collected but something else, which may or may not bear much relation to the issue being investigated. The process of assigning values, even with rubrics etc., is itself a qualitative and largely subjective process and, if unmoderated, a highly individual assessment.
The issues increase if one assigns numerical values, particularly if one then treats such numbers as scalar values. Can one assume that the difference between 'Highly satisfied' and 'Satisfied' is the same as the difference between 'Satisfied' and 'Neither satisfied nor dissatisfied'? I would suggest you cannot. Quantification by itself does not make the data 'better', and one has to be careful to test the validity of such quantification before making comparisons between cases and before conducting analysis.
So, Yes it is possible to quantify qualitative data but there are a range of issues that need to be considered before doing so.
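The scalar-values caution above can be illustrated with a small sketch; the labels, numeric codes, and responses are invented for the example:

```python
# Coding Likert labels as 1..5 and averaging silently assumes the gaps
# between labels are equal (an interval scale), which ordinal data do
# not guarantee. An order-based summary makes a weaker assumption.
CODES = {
    "Highly dissatisfied": 1,
    "Dissatisfied": 2,
    "Neither satisfied nor dissatisfied": 3,
    "Satisfied": 4,
    "Highly satisfied": 5,
}

responses = ["Satisfied", "Highly satisfied", "Dissatisfied", "Satisfied"]

codes = [CODES[r] for r in responses]
mean = sum(codes) / len(codes)               # 3.75 -- an interval-scale claim
median_code = sorted(codes)[len(codes) // 2]  # 4 (upper median) -- ordinal-safe
print(mean, median_code)
```

The mean of 3.75 is only meaningful if the 1..5 coding really is equal-interval; the median relies only on the ordering of the labels, which is all the qualitative data actually supports.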
I think a special consideration in the use of computer analysis is the very goal of scholarly pursuit: to edify the scholar and thus his community. As stated, complex analysis can skip over real meaning related to the knowledge the researcher acquired along the path of study. Professors can be advised to 'skip it', as it is impossible to trace the rationale from beginning to end. I think infinitely more important is the experienced path from observation, which ends up deleted. Even when the analysis seems commensurate with expectation, reasonable and explanatory, it accomplishes nothing towards producing the seasoned and wise.
Yes, it is essential for refining, generalizing, and making results more acceptable.
Dear friends, the usual argument is qualitative versus quantitative, but in my humble opinion both have limitations depending on the number of cases at hand. Whenever possible, though, you can see that in almost all research departments everywhere quantitative research is usually encouraged or preferred, as it has what is called "an illusion of precision".
A third option is qualitative comparative research, which may be better to use when the number of cases is too large for traditional qualitative research to work well and too small for quantitative research to work well. Ideas about this can be found in:
Non-Traditional Research Methods and Regional Planning Needs in Developing Countries: Is there an Ideal Methodology?
https://www.researchgate.net/publication/26422725_Non-Traditional_Research_Methods_and_Regional_Planning_Needs_in_Developing_Countries_Is_there_an_Ideal_Methodology
Good day to everyone!
Lucio, I'll read your article and get back to you but I offer the following as an immediate response.
I am quite a fan of QCA but I really don't see it as a bridge between qualitative and quantitative. Nor is it, as some have suggested, mixed methods. It is a useful analytical tool for people doing comparative case studies.
There is nothing new in the case-study elements; qualitative researchers have been doing comparative case studies for years, as Ragin's references to Mill recognise, at least implicitly.
What is useful about QCA is the systematisation of the analysis using Boolean logic. However, there are logical issues with some of the arguments put by Ragin et al. I won't go into the statistical arguments (See Simon Hug; Braumoeller; and others).
The biggest issue I have is that some practitioners appear to forget that the data being used for the analysis is an interpretation by the practitioner. It is not the raw data that is being compared but the categorisations made by the researcher. This gives rise to potential errors, particularly if the categorisation is not moderated or tested.
Another issue is the contention that observation of a common feature in different cases means that feature is part of the cause of the outcome phenomena observed. It is a typical Type 1 error of logic. One may reasonably establish a null hypothesis if one can see a common feature in both success cases and failure cases. In which case one has ground for concluding that feature is NOT sufficient to cause either condition, though it may be necessary.
Another aspect one has to consider is that one has simply not observed the features that contribute to different outcomes.
Good day David, and thank you for your comment. While you read, keep in mind that QCA is only one of the possible qualitative comparative tools, or perhaps the only one people may be familiar with; and in the conditions of my research area when I wrote the article, it was not the appropriate approach.
Have a nice day;
Lucio, Point taken. When I have read your paper I'll get back to you but perhaps not in this thread. I think we are moving away from the original question.
Considering that both methods have positive points, it is best to use both and play to their strengths. Quantitative research may be used when a lot is known and can be captured in data with confidence, while qualitative research can be used in situations where there is not much knowledge and theories need to be formed. The theories can then be tested quantitatively. In this way a reduction or digestion of reality occurs. Some other researcher could then argue that the theory is not complete in all situations or needs refinement (which is in itself qualitative). A qualitative study could find out what needs to be refined or which situations are not covered, and this can then be put into theories, and so on. Of course, all research is theory-forming and reduces or digests reality, which by definition is limiting. A theory of everything (as in physics) is not possible, much like a perpetuum mobile.
Maxim, I see the strengths of qualitative and quantitative approaches in case study research rather differently. Quantitative research is most useful in showing THAT a relationship exists between two variables; qualitative research is most useful in explaining the processes through which, and the conditions under which, that relationship exists.
This distinction is not the same as that between numbers and words. Qualitative research often uses numbers, for example in stating how many participants mentioned a particular theme, or how often something was observed.
See the attached papers for further explanation of these points.
Let me start by saying that there are different kinds of qualitative and quantitative research. For example, some quantitative research uses experimental conditions to test how people respond in the situation of interest. Other quant research counts actual behaviours. Some qualitative research observes what happens in situ and will often entail counting events, people, relationships etc..
The three examples mentioned above tend to have different foci (hypothesis testing for the experimental approach, exploratory data collection for the second two). However, all three of the examples involve different forms of direct observation and as such are more similar to each other than to either survey questionnaires or focus groups/interviews which rely on observation of self-reported behaviours, intentions, etc.
The problem is that reported numerical ratings, and indeed reported behaviours, intentions, etc., are not direct observations of behaviour, and relying on such reports makes assumptions that are not supported by cognitive science. The assumptions relate to (1) whether what people say reflects what they do, and (2) whether respondents can, and do, rationally assess numerical ratings.
On the first, there are numerous examples where observation shows that what people say is different from what they do. Further, cognitive science shows that there are two distinct kinds of memory (and probably two types of processing) procedural memory and declaratory memory, again meaning that what we say is not the same as what we do. Not only that, both what people do and what they say varies with the situation. Individuals can and do say different things in different contexts and the things they say often do not match what they do; which also changes with context.
So the first point to note is that there is no single 'true' value (numerical or otherwise) that might be elicited by interview or questionnaire. We should be searching for the range of behaviours and things that individuals say. Also importantly, the differences between what people say and what they do is not 'bias' (because there is no single 'True' value). Indeed, what we get are generally (there are some who try to hide their responses) 'accurate' responses to the questions asked, in the context in which they are asked; but they do not necessarily reflect what the person might do, or what they might think, in the situation being investigated. Attempting then to quantify responses to questions just adds another layer of complexity and abstraction away from the issue being investigated.
The second is complex, but I will try to keep it brief. Yes, some people can, and do, rationally assess a numerical rating for a particular phenomenon. Examples include examiners rating essays, chemists rating pH using litmus paper, etc. They do so using quite strict guidelines for how to rate the qualitative material presented, but even then different examiners give different ratings, and such results are often moderated and discussed until a consensus is reached about how to rate the result based on the criteria used. Even the pre-moderated rating is a very intensive and demanding intellectual task that takes a significant amount of time and energy.
As a rule, the questions asked by social researcher in interviews and questionnaires do not afford their respondents the criteria, the time or the space to undertake analyses such as those above (some techniques do, but even they often rely on implicit processes - see below). It is not surprising then that most respondents' answers to questions appear to use what is sometimes called 'satisficing' (near enough is good enough) as a technique to develop answers. Underlying the 'satisficing' phenomena are some cognitive processes:
Schwarz and Hippler showed many years ago that responses to a scale are actually shaped by the scale itself.
The actual numbers reported are largely an artefact of the scale used in the question (along with the influence of previous questions). Respondents are largely unaware of the process they used to develop their answers (and in any case their answers are firmly in the realm of declaratory knowledge and often bear little relation to actual behaviour). The criteria used to assess the rating are individual and not shared with others. The problematic assumptions are numerous.
Cognitive science gives us plenty of information to help us understand such phenomena: including limits on explicit processing and short-term memory; use of schema, etc. But the bottom line is that attempts to quantify qualitative data are inherently problematic and require extensive processing before they can be treated as meaningful data. They may, or may not, reflect the situation being investigated and therefore need to be tested using other means.
Unfortunately, much social research fails to critically assess its assumptions about how answers are developed.
Because quantitative researchers work under a positivist approach and think that all events are measurable, they try to quantify any phenomenon, even a qualitative one.
Because the qualitative method alone would not be enough to make generalizations. That said, I prefer to read qualitative results, as they usually show deeper, more comprehensive details when understanding and exploring a certain phenomenon. Quantitative researchers rely heavily on quantitative results to be convinced that a study is more reliable, more valid and more empirical. It is a matter of personal satisfaction.
Basically, science and knowledge are not meaningful without the ability to generalize. Qualitative methods are limited only in statistical generalizability; other types of generalization, such as analytic and interpretive generalization, can still be drawn.
Norman Raotraot Galabo I agree with the general tenor of your comments with two caveats. First, there is no technical reason why one cannot conduct qualitative research with random samples and generalise from them. It is just expensive and raises other practical challenges around coding, etc.
I also suggest that faith in quantitative results is misplaced. The recent elections in the US, UK and Australia all show that quantitative research can get it wrong. There has been a lot of discussion suggesting that technical issues were to blame and I suspect that is true but there are other issues.
One such issue is that a major quantitative tool, the survey, asks people what they will do, have done, or think, etc. The response is inherently declaratory knowledge: what people SAY. What people SAY is strongly affected by implicit self-categorisation (and sometimes by explicit self-presentation). That can be reasonably accurate when we are dealing with something that uses declaratory knowledge (e.g. self-categorisation in elections), though even then there are issues (Krosnick, for example, reports that in the US more people say they voted than actually did).
However, when the research question is about procedural knowledge, what people DO, what they SAY can be quite different (see Phillips, LaPiere, Briggs, etc.). This does not mean that one cannot use surveys for such issues, but it does mean that we can't just treat the data as a direct reflection of what people will DO. We need to understand the connections between declaratory knowledge and procedural knowledge; and those links are different for every issue, every individual and possibly every situation.
I agree with David Roberts's posts, but there is a more fundamental reason that quantitative research isn't inherently more generalizable than qualitative. There are two separate issues here. One is sample size; quantitative methods can address much larger samples than qualitative methods. However, generalization beyond the population sampled is an entirely different matter from inferences from a sample to a sampled population. The latter requires understanding the processes by which, and the context in which, the results occurred. This is an issue for which qualitative methods are far more useful than quantitative ones.
A particularly clear statement of this is by Shadish, Cook, and Campbell, in their highly regarded book Experimental and Quasi-experimental Designs for Generalized Causal Inference:
"the unique strength of experimentation is in describing the consequences attributable to deliberately varying a treatment. We call this causal description. In contrast, experiments do less well in clarifying the mechanisms through which and the conditions under which that causal relationship holds—what we call causal explanation" (2002, p. 9; emphasis in original).
A devastating critique of the idea that randomized controlled trials (RCTs) are inherently generalizable is Nancy Cartwright and Jeremy Hardie's book Evidence-based Policy: A Practical Guide to Doing it Better (2012). Their argument, similar to that of Shadish et al., is that successfully implementing a policy in a new setting requires a qualitative understanding of how that context will affect the processes and outcomes of the policy.
First, I don't think that generating a small random sample will do very much to improve the generalizability of a qualitative study because the confidence interval for the results (if they could indeed be measured) would be far too broad to be useful.
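Morgan's point about confidence intervals can be made concrete with a quick sketch. A minimal illustration (assuming a simple Wald interval for a proportion; the participant counts here are hypothetical):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% Wald confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# A qualitative-scale sample: 6 of 10 participants report a theme.
lo, hi = proportion_ci(6, 10)
print(f"n=10:   {lo:.2f} to {hi:.2f}")   # roughly 0.30 to 0.90

# A survey-scale sample with the same observed rate.
lo, hi = proportion_ci(600, 1000)
print(f"n=1000: {lo:.2f} to {hi:.2f}")   # roughly 0.57 to 0.63
```

Even with the same observed rate, the ten-person interval spans most of the possible range, which is why a small random sample buys little in the way of statistical generalizability.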
Second, the idea that quantitative studies (especially election polls) are not perfect is hardly an argument for using qualitative research instead. And the same could be said for the lack of perfection in randomized controlled trials.
Overall, if we are going to argue for the value of qualitative research, then we should recognize that generalizability to larger populations is simply not going to be one of our strong points.
I think Dr Hamid has put the issue on the table: the positivists are inclined to quantify (or simplify?) everything, as they believe that the existence (or being?) of a "thing" must be numerically measurable. As a likely follower of the qualitative perspective, I choose to regard the quantification of qualitative research as a new contingency for the emergence of a new research paradigm, unless the effort to quantify 'qualitative research' an sich becomes a separate research approach, one that could deconstruct both the essence of qualitative research and the nature of quantitative research per se.
By the way, many thanks to Prof Maxwell, who raised this interesting topic.
I mean, thanks to Mr Dony Saputra, the one who has successfully provoked us into this interesting realm of scholarly discussion.
I completely agree with David Morgan that small random samples do little to improve the generalizability of qualitative studies, both for the reason he gave, and also because random samples say nothing about the generalizability of the results beyond the population sampled. However, the latter type of generalization IS a major strong point for using qualitative methods, as the quote I gave previously from Shadish et al identifies. Quantitative results, from random samples or randomized experiments, do nothing by themselves to support such generalization, which requires understanding the processes by which, and the conditions under which, that result might apply more broadly. Such understanding typically requires qualitative methods.
Cartwright and Hardie, in their book Evidence-based Policy: A Practical Guide to Doing it Better (corrected title), state at the outset that "You are told: use policies that work. And you are told: RCTs—randomized controlled trials—will show you what these are. That's not so. RCTs are great, but they will not do that for you. They cannot alone support the expectation that a policy will work for you. . . . For that, you will need to know a lot more. That's what this book is about." They provide a detailed account of the sorts of qualitative understanding that are needed for you to have confidence that a given policy will produce similar results in a different setting.
See my paper "Validity and Reliability of Research: A Realist Perspective"
What an interesting discussion! Assuming that the qualitative researchers are not using a random sampling strategy (as that would be pretty uncommon), I argue that the main reason to indicate the number of participants who contributed text to a finding is to convey to readers something about the "salience" of the responses (Levitt, Hill, Butler, 2006; Levitt, 2016; Levitt, Pomerville, Surace & Grabowski, 2017, etc.). Basically, these numbers tell you something about how often people are accessing and presenting a description, and so about how pressing an explanation was within the group interviewed. The number can give you a sense of the commonality of an experience, but also of the dominant narratives related to a specific topic and the culturally available explanations, which may lead to very different interpretations. This number can still be helpful to consider, especially in certain cases.
When interviewing is semi-structured, however, it does not indicate anything about the frequency with which something was actually experienced, even within your participant group, unless the interviewers asked the same question directly of all participants and in an unvarying manner. Many participants may have had experiences that they do not mention in interviews: because they do not have the language for them, because they assume them to be understood, because they assume that the interviewer will not understand them, or because they feel too vulnerable. It may be the rare, highly verbally gifted or relationally confident participant who is able to label a dynamic that others describe more obliquely but that all experience. The researcher would have to argue for the interpretation to be made, given the research topic, participant characteristics, study aims, interview protocol, and the relationship between the researcher and participants.
Basically, in a research study we can use either purposeful or random sampling. Each of these two general approaches pursues a specific goal, but both are looking for generalizability! Without generalization, new knowledge does not emerge. But there is a significant difference between generalization in quantitative research and in qualitative research. Quantitative research results can be generalized to the statistical population through random sampling and sample size. I surely agree with Maxwell, Morgan and others that large numbers and random sampling are two key factors for quantitative research.
On the other hand, although qualitative research uses purposeful (non-random) sampling and sample sizes are limited, generalization is in the nature of research; research without generalization is fifty without five! I emphasize that there is a fundamental difference between generalization in quantitative and qualitative research. Statistical generalizability is only one type of generalization and is different from logical/rational generalizability.
I think that Joseph Maxwell and I might reduce our disagreement if we distinguished replicability from generalizability in experimental design. In particular, the increased ability to replicate a result based on random assignment does indeed ignore any claims about the underlying source of the results. But the ability to replicate is essential if you are going to make any claims about the results one has obtained, even if the validity of those claims is a separate issue.
Further, I think this kind of replicability needs to be distinguished from statistical generalizability, which involves questions about the ability to infer population values from survey responses. Again, this depends on the validity of the observed responses, but the formulas for translating sample values into population estimates are not in doubt.
Of course, blind faith in either RCT or random samples is unjustified. But I still object to somehow using claims about the weaknesses of these procedures as an argument in favor of qualitative methods. If we state things in comparative rather than absolute terms, quantitative methods are typically better at replication and generalization than qualitative methods are -- even if that is not the case in absolutely every single instance.
We, myself included, seem to have moved away from the initial question about why some people attempt to quantify qualitative data. It may have something to do with paradigmatic or psychological preferences for particular types of data; but I am not sure anyone has done a study that would elucidate the question. In any case, I am not sure it is necessary or helpful in the long run to do so. What is important is that each piece of research be useful and valid in its own terms. If a piece of research uses data inappropriately, that is a matter to be addressed individually.
As for the new direction taken in this thread: I have already discussed one issue, which applies to self-reported data in both qualitative and quantitative collection approaches, so I won't go into that again here.
While I did not mention it before, I also have a problem coming to grips with the argument that we can generalise about a population based on a random sample. Cartwright and Hardie's work is useful (and thanks for the reference, Joseph Alex Maxwell) in that it illustrates the problem that what works for one group may not work for all. Similar results can be found in many evaluation studies, especially some of the realist studies, even when the cultural differences are not as great as those described by Cartwright and Hardie. In my view, this may be more than just an issue of replicability.
The fundamental assumption underlying statistical generalisability is that the differences between people in a population are controlled by randomly selecting a sample. I would suggest there is a second, usually unstated, assumption that the range of 'controlled' variables is relatively small and/or that such variables have little impact on behaviours and decisions, or reports of such. If there were a large range of variables that have more than a very minor impact on individual decisions, the sample size would have to increase to take account of the range of such variables. Indeed, much research looks at different 'cells', or sub-samples, to take account of what are anticipated to be different sub-populations.
A key assumption of mine, is that human beings are complex and exposed to many different situational, cultural and environmental factors. I also assume that the range of such factors and of different experiences is wide. Experience (and qualitative research) suggests that there are many different influences and differences between people even in a relatively homogenous culture. I find it difficult to accept that the range and extent of such variation can be controlled by random sampling. So I can't help thinking that the statistical assumption can only apply if we have a very narrow view of the differences between people; and/or that the differences between people are somehow irrelevant, or have a minor impact, on most of the issues we attempt to research using surveys.
I am NOT arguing that random sampling cannot produce reliable generalisations about specific populations. There are many issues where random sampling has been tested against reality with positive correlations. In most such cases the differences between people appear to have only a limited impact on the generalisability of the survey. However, there are also examples in which some of the differences between identified sub-populations lead to quite significant differences in results. A very crude example is a very, large-scale health survey in the UK in which men reported they had an average of 2.5 heterosexual partners in the preceding 12 months and women reported 1.5 partners.
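The reason that survey result is so telling can be spelled out arithmetically: in a closed heterosexual population, each partnership is counted once by a man and once by a woman, so with roughly equal-sized groups the two reported averages should agree. A toy check, with hypothetical group sizes:

```python
# Each heterosexual partnership in a closed population contributes one
# count to the men's total and one to the women's total, so the two
# totals must be equal if the reports are accurate.
n_men = n_women = 1000            # hypothetical equal-sized groups

total_from_men = 2.5 * n_men      # men's reported average partners
total_from_women = 1.5 * n_women  # women's reported average partners

# The totals should coincide; the gap measures reporting bias.
print(total_from_men - total_from_women)  # 1000.0 "excess" partnerships
```

Since both reports cannot be right, the discrepancy itself is evidence that the two sub-populations respond differently, which is the point of the example.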
Now there are lots of conclusions that can be drawn from those findings, but for this argument, the point is that the differences between people produced different responses. We happen to know about the different response because the researchers anticipated that there would be differences between the two sub-populations.
However, in many instances we simply do not know what variables might be significant. But we act as if any variables that we have not thought of will be controlled by randomisation, while at the same time recognising that there are some variables we 'believe' will cause different results, and controlling for those. This methodological contradiction is something that really needs to be worked through; and we need a better basis for making assumptions about when, and whether, randomisation will control the variables we have not considered.
There are a lot of important issues in David's posts, from basic assumptions about quantitative research and sampling to the generalization process. It is true that I had strayed from the initial question, so let me return to it.
There are many reasons why some people attempt to quantify their qualitative data. Two reasons, in my view, are important:
First, it must be said that I see it as a function of the researcher's philosophical perspective. Positivism was dominant across various fields of science for many years, the humanities in particular, and getting rid of it does not happen easily.
The second may be a lack of familiarity with the presuppositions of qualitative research.
We documented that the use of numbers in qualitative research is indeed increasing over time (at least in psychotherapy research, which was an early area to adopt qualitative research in psychology). Interestingly, we found that the researchers' epistemological beliefs (when explicated) were overwhelmingly constructivist and much more rarely post-positivist/realist. This suggested to us that there is a disconnect between researchers' philosophical perspectives and their reporting practices. We thought that the use of numbers is often mandated by journal article reviewers and journals (see Levitt, Pomerville, Surace & Grabowski, 2017). It was surprising to us.
Thank you Heidi Levitt for the informative research. It is fascinating that there is an apparent disconnect between epistemology and research practice. It may not be simply a disconnect between epistemology and practice, though there may well be one: there is clear evidence in cognitive science for a difference between declaratory (SAY) and procedural (DO) knowledge.
The other possibility is that our characterisations of research and/or epistemology are 'ideal' descriptions (i.e. pertaining to ideas) not borne out by 'reality.'
There may also be a sort of paradigm bleed. If the researchers are psychotherapists, their psychotherapy practice is inherently qualitative and constructivist. They may, and many do, use quantitative tools to help with diagnosis, but even so there is usually an element of qualitative analysis in the diagnosis, and much more so in the therapeutic interventions.
I am reminded of several studies (I'm afraid I can't recall the references offhand) into the use of diagnostic tools. The findings were generally that the use of such tools (even some of the qualitative ones, such as projective techniques) improved the reliability of diagnosis. However, mostly the results were compared to other diagnostic tools and/or group-conference diagnoses. Importantly, the issues being dealt with are essentially qualitative, so the numbers were, and were clearly understood to be, a proxy for qualitative differences between individuals.
To my mind, what the studies actually showed was that a qualitative diagnosis by an individual could differ, rightly or wrongly, from the consensus of other professionals. I am aware of some therapists who resolutely refuse to use such tools. However, unless one is very confident of one's diagnostic ability, and/or if one wants to avoid professional opprobrium and potential liability issues, it would seem to be smart practice to use the tools. Unless and until the tools (or a tool) can be shown to be misdiagnosing, they are likely to continue to be preferred.
It seems to me that sometimes it is believed that if something is not quantified, it has no value, and this is a mistake. Whether the approach is qualitative or quantitative will depend on the object of the research. In any case, and with respect to the qualitative, we must defend the capacity for argumentation, analysis, and the testing of hypotheses as elements for building a set of results that are perfectly valid without necessarily having to be quantified.
Such visualization mostly aims to improve the comprehension of qualitative studies, especially when contrasting two elements, for example, and can provide a clear and simple illustration of the dominant themes and ideas for lay readers. This is a good article for more background on the topic: Data Display in Qualitative Research
And this is another relevant reference on the topic: Visualizing Qualitative Data in Evaluation Research
Very interesting. A small contribution.
I understand that the aim is always the comprehension of a reality which, in turn, seeks some kind of applicability (although not necessarily).
In that case, the first step should be a general understanding of the object of study and of its purpose, which justifies the type of approach. This is known as "phenomenological description." It has the great advantage of favouring disciplinary openness and consistency with other disciplines, as well as the possibility of terminological and semantic growth and convergence. In any case, science, as Aristotle said, seeks etiology, and quantification does not reach it; and when it comes close, it is only under very, very weak conditions.
(I will write this in English later.)