Happy new year to all, I will be glad if anyone can share with me the steps used in a Realist theory-driven evaluation in a scholarly work especially a doctoral thesis.
I am not sure I would bother. I think educational testing is similar and, if Finland is any example (and it should be), testing should be minimized. The reasons for this are fairly clear as well: the expense is not balanced by the results, and educational and social programs deal with humans, who are all adaptive systems. As Kahneman and others argue, the perception of threat (as when someone is evaluating you) drives adaptation into the fast track of thinking that bypasses the cortex and results in variations of fight, flight, or freeze (see school violence as an example). At Common Sense Medicine we argue (see our book, The Boids and the Bees: Guiding Adaptation to Improve our Health, Healthcare, Schools and Society) that programs that enrich the environment and make that environment safe drive adaptation in better ways. Test the environment, as they do in Finnish schools, and leave the people alone to play.
Theory-driven research (and realist research more specifically) is increasingly being used in health and social research.
The 'bibles' of the field include Realistic Evaluation by Pawson and Tilley and The Science of Evaluation by Pawson. If you google their names, you'll find plenty of papers by these authors. You may want to check www.itg.be/tdi - not fully updated but it contains links to a number of relevant publications. Also check Realist evaluation on BetterEvaluation website.
I personally have found the system described by Pawson and Tilley to be quite abstract, especially as described in their book. Since Benedict is asking for specific steps he should follow, it would be helpful if someone could supply references that provide that information.
I think there are no "recipes" for realist evaluation research, as much will depend on the aim of the research, the policy or program being evaluated, and, perhaps most importantly, the resources available (time, field access/data, etc.). So beware of "cookbook methods".
Pawson provides a hands-on example in his systematic review on mentoring relationships. Although he describes the steps to follow for a systematic review (not for single research studies), there are many hints on how to identify the context-mechanism-outcome constellations for single research studies. This may, as David already stated, be of limited utility for providing specific steps, but it will give you an idea of what realist evaluation research might actually look like. Pawson and Tilley (2001) also give a range of examples that are really useful for understanding "what to look for".
On the realist evaluation of social programs there are other useful resources, for instance Kazi (2003), who provides a range of different examples using both qualitative and quantitative research. In addition, some social work scholars have published on this issue. An article I found useful is Blom and Morén (2011), which describes the concept of "generative mechanisms" - the latter are, in my opinion, what one would try to describe in critical realist research practice.
A cautionary note: I would interpret critical realism as a philosophy of science with specific ontological and epistemological claims, rather than a narrow evaluation approach prescribing a specific way of doing research. In my opinion, one would have to start from there and then customize a methodological setup and a research strategy. Eventually it might be useful to have a glance at books like Sayer (1992).
Hope you found this useful, and good luck with your research!
Pawson, R. (2004). Mentoring relationships: an explanatory review. ESRC UK Centre for Evidence Based Policy and Practice (https://www.kcl.ac.uk/sspp/departments/politicaleconomy/research/cep/pubs/papers/assets/wp21.pdf).
Pawson, R., & Tilley, N. (2001). Realistic evaluation bloodlines. The American Journal of Evaluation, 22(3), 317-324.
Kazi, M. A. (2003). Realist evaluation in practice: Health and social work. Sage.
Blom, B., & Morén, S. (2011). Analysis of generative mechanisms. Journal of Critical Realism, 10(1), 60-79.
Sayer, A. (1992). Method in social science: A realist approach. Psychology Press.
I add the references for my previous post below:
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, California and New Delhi: Sage.
I agree with Stephan that there's no "recipe" for realist evaluation, and that the Pawson and Tilley book provides what is probably the best overview of this approach. Another source is Henry, Julnes, and Mark, Realist Evaluation: An Emerging Theory in Support of Practice, New Directions for Evaluation 78 (Jossey-Bass, 1998). Depending on the methods you're planning to use, my book A Realist Approach for Qualitative Research (Sage, 2012) may also be useful.
Using Pawson (2004) and Pawson and Tilley (1997) as references, my understanding (and initial experience) is that realist research as a theory-driven approach usually starts with developing the initial program theory behind the program/intervention by asking how the program/intervention is supposed to work. Researchers usually do this by asking the program designers/implementers or by reviewing policy documents. The initial program theory is then refined by populating its components from the literature (if you are doing a realist synthesis). If you are doing a realist evaluation, researchers usually conduct primary data collection - interviews, FGDs, document analysis. This process of refinement follows the context-mechanism-outcome configuration (CMOc). Realist research usually ends with a refined program theory.