In a toolbox there are usually many different types of tool, each designed for a different purpose. No one tool is better than any other tool. The same is true for research paradigms. A colleague of mine recently explored ways of improving meal time experience in a residential aged care home. Action research was the ideal paradigm for this particular piece of pragmatic research. In her case, attributes such as generalizability or reliability weren't particularly important. When we undertake drug trials, or pristine interventions that we want to generalize, then reliability, validity, and generalizability are important, and we would be unwise to undertake action research.
Validity in qualitative research differs from validity in quantitative research. Some qualitative researchers even reject the basic assumption that there is a reality beyond our perception of it, in which case it makes no sense to be concerned with the “truth” or “falsity” of an observation with respect to an external reality (the primary concern of validity). While it is commonly accepted that certainty in scientific inquiry is unattainable (Maxwell, 1990), validity standards in qualitative research are even more challenging to define, because rigor, subjectivity, and creativity must all be incorporated into the scientific process (Johnson, 1999). In addition, disparate qualitative methods espouse different evaluative criteria.
Three indicators serve as validation strategies: internal validity, external validity, and reliability. According to Creswell (2003), reliability and external validity are not as crucial in qualitative research as they are in quantitative empirical research. Reliability in qualitative research depends on consistency in responses (Creswell, 2003), whereas in quantitative research it depends on the measuring instrument, which should capture the same thing when used several times. Reliability in qualitative research lies instead in veraciously reporting the dynamism and evolution of a given phenomenon; the qualitative research paradigm holds that an emerging phenomenon is unlikely to remain static and unchanging (Trochim, 2002). Keeping in mind this philosophical particularity of qualitative research, we formulated our interview questions to capture the dynamism of offshore outsourcing by manufacturing SMEs and how they develop dynamic capabilities in collaboration with their supplier firms.
There is broad consensus among researchers that, with respect to external validity, generalizability in the sense of producing definite rules that can be applied universally is not a useful standard or goal for qualitative research. Generalizability in qualitative research is best thought of as a matter of the “fit” between the situation studied and others to which one might wish to apply the concepts and conclusions of that study. In our study, we explored how offshore outsourcing creates value by enhancing competitiveness and how these advantages can be sustained through developing organizational dynamic capabilities; these findings fit the theoretical understanding that when firms cooperate they can create synergistic advantages and value for all partners. According to Creswell (2003), external validity consists of truthfully and thoroughly specifying the mechanisms by which the results were generated, so that future researchers can judge to what extent those mechanisms apply in a different setting. Rich description is therefore crucial in qualitative research: detailed descriptions of both the site where a study is conducted and the sites to which it may generalize highlight the similarities and differences between the situations, and analysis of those similarities and differences makes it possible to reach a reasoned judgment about the extent to which the findings from one study can serve as a working hypothesis about what might occur in another. Multi-site studies can also enhance the generalizability of findings. In our study, the sample firms come from different industrial sectors, geographical locations, and clusters. Different sources of data, such as semi-structured interviews, document analysis, and other publicly available sources, further enhance the breadth of the database, and triangulation is likely to reinforce external validity (Kan & Parry, 2002).
However, we need to be careful “when evaluating conclusions drawn from small samples of qualitative studies and the difficulties inherent to any attempt to make generalizations about populations from small samples” (Bock & Sergent, 2002, p. 240). Miles and Huberman (1994) even mentioned that the results generated from qualitative research cannot be generalized.
Reliability concerns the ability of different researchers to make the same observations of a given phenomenon when the observation is carried out using the same method(s) and procedures. The reliability of a qualitative study can be enhanced by standardizing data collection techniques and protocols, documenting in detail all steps, times, places, instruments, and procedures, and showing that categories have been used consistently. It can also be improved by properly tabulating findings so that they are open to supplementary examination by both the researcher and readers, enabling them to articulate their views about the position of the researched in relation to the research and the researcher. That is to say, the better the data fit the conclusion, the better the validity. In this study, we used several strategies to assure validity and reliability. We ensured triangulation in our data collection; triangulation between data sources represents a refutation strategy, and Creswell (2003) noted that the researcher should play the devil’s advocate by gathering counter-evidence to assess the robustness of the research outcomes and determine their scope. Theoretical and methodological coherence (Morse et al., 2002) was achieved by submitting the method statements and interview guide/questionnaire for this study to two methodology experts and obtaining valuable feedback and recommendations. Multiple iterations of data analysis also favor internal validity (Morse et al., 2002); in this regard, it is useful to spend more time at the research site and with participants to gain an in-depth comprehension of the underlying dimensions and reality of the studied phenomenon. In this study, we spent considerable time in discussion with the participants, not only at the firm sites but also informally at their professional meetings, seminars, and workshops, which enhanced our understanding of the underlying research issues.
Submitting the case reports to the participants further enhanced the validity of the study. Guba and Lincoln (1994) proposed four criteria for judging the soundness of qualitative research, explicitly offered as an alternative to the more traditional, quantitatively oriented criteria: credibility, transferability, dependability, and confirmability. Credibility was ensured by understanding the phenomena of interest through the participants’ eyes; the participants are the only ones who can legitimately judge the credibility of the results. Transferability was ensured by describing thoroughly the research context and the assumptions that were central to the research; ultimately, it is the person who would transfer the results to a different context who must judge the extent to which the study is transferable. Dependability was ensured by describing the ever-changing context within which the research occurred; the researcher is responsible for describing the changes that occur in the setting and how these changes affected the way the study was approached. Confirmability refers to the degree to which the results can be confirmed or corroborated by others; it was ensured by documenting the procedures for checking and rechecking the data throughout the study, and the researcher paid particular attention to cases and instances where the data contradicted prior observations.
Agree with Adrian. Action Research (AR) is one of my 'favourite' research approaches. When applied and reported well, it demonstrates 'research in action' that actually changes things - rather than 'navel-gazing research' or 'research for research's sake'. To me, that 'trumps' generalisability etc. You identify AR as 'non-traditional' research - which is part of the problem for AR. There are those who do not know of it, those who think it is a new approach (despite Lewin's seminal early work), and those (i.e. funders) who cannot get their heads around the 'leap of faith' required by most AR projects, which are longitudinal and whose outcomes cannot be predicted at the outset.
Many thanks for your insightful answers. In my study, I am opting for AR with teachers to test whether AR-grounded knowledge will help to improve the understanding, ownership, and practices of a teaching and learning pedagogy. I agree with you Dean, in terms of contextual generalisation. The ultimate aim is to use the AR in parallel with other methods to create narratives that can inspire and encourage other teachers.
Action Research is not a form of research like Qualitative or Quantitative. Rather, it is an approach to research. In AR, we do research while engaging with practitioners to solve one of their problems. McKay and Marshall (2001) provide direction on how to integrate the demands of research with the demands of practice.
As part of setting up an AR engagement, we negotiate with the practitioners that, in exchange for helping them solve their problem, we get to collect data for our research. The first part of the study, problematization, often looks like a traditional research study in which the researchers attempt to identify the causal factors behind a particular situation. It can be qualitative or quantitative, and it will have the same generalization, reliability, and validity properties as any other study. In the next phase, solution development, the researcher designs an intervention in collaboration with the practitioners. Following implementation of the intervention, the researcher collects data on the effects of the intervention. This can again be a qualitative or quantitative study that explains the results of the intervention.
So action research is nothing so mysterious that it needs any special qualification; rather, it is just a very practical and relevant way of doing research.
McKay, J., and Marshall, P. 2001. "The Dual Imperatives of Action Research," Information Technology & People (14:1), pp. 46-59.
Thanks Doreen - I like your spin on 'narrative creation'. Michael - I agree with your take on AR being 'practical'. Action research 'enjoys' the unenviable task of being 'difficult to locate' in the research world. Many people mistakenly cluster it under the qualitative paradigm - even though most of a given study may be of a qualitative design. This, to me, is purely out of ignorance, or comes from those with rigid, positivist 'blinkers' on - who cannot move beyond the logic that 'if it is not entirely quantitative - then it must be qualitative'. I personally look to the 'neutral ground' and classify it as a 'mixed-methods' and/or critical 'emancipatory' approach. For those who view it this way, AR may be seen as a philosophical approach, and some even refer to it as the 'third paradigm'.
I'm not sure I like the idea of thinking of qualitative and quantitative research methods as different (i.e. opposing) paradigms. I prefer to see them purely as methods (or tools, as someone wrote earlier) that unfold their respective strengths and weaknesses depending on the epistemological backdrop the researcher chooses to apply. Doreen, I would recommend reading up on "pragmatism" - as was said before, validity, reliability and generalizability were criteria developed for a certain type of research and certain epistemological assumptions. "Quality" is relative to the goal of your research and to your beliefs on how knowledge can be obtained. Maybe this could be a helpful starting point:
I agree with some of the above responses - AR is an approach to research, not a method. Within an AR project you can utilise many different methods, qualitative and quantitative; which ones you choose depends on the issue you are addressing. In relation to Participatory Action Research, in my opinion the philosophy of democratic, collaborative working - with people traditionally viewed as participants instead viewed as co-researchers - is a major issue when looking at project outcomes. If you are concerned with a more traditional view of validity, reliability, and generalizability, just refer to the literature related to the actual method you use within the AR approach.