I'm interested in how people often claim that external evaluators are unbiased and internal evaluators are biased. What are your thoughts for and/or against this common orthodoxy? Do you know any good papers that explore this?
This is interesting and of course speaks to the heart of how we may view critical practice and the voice of participants in research. You may not find the answer in the literature on evaluation but you may find it in the literature on PARs and social action research approaches.
Yes! Thanks Robyn. There is plenty of literature in development, indigenous research methods, PAR, action research, everyday research, etc., but then it all seems to be forgotten and swept away in the evaluation literature (with some exceptions such as Fetterman and Wandersman). But then Fetterman's Empowerment Evaluation approach is often criticised by 'real evaluators' as not being real evaluation (e.g. Stufflebeam and Coryn 2014). It's a bit interesting hey. So while saying we must listen to the voices of the people, and that people are the experts on their own situation, the overarching evaluation paradigm seems to be saying the opposite...
Yes I agree, and that is why we need to stand for evaluation approaches that incorporate participants as co-evaluators. In the disability field, for instance, disabled people insist (and rightly so) that they be on the teams that evaluate their programmes. I think a lot of this is in the grey literature and does not get written up. When I see evaluators make noises about bias, I am often of the view that, because they have not had the lived and everyday experience of the consumers/users, they may make the wrong assumptions. So let's turn the debate around and put the spotlight in a different place.
In the absence of principles that give context and coherence, for example to frame the generation and appraisal of alternative options, there can be no guarantee that the (presumed) lack of bias of external experts will conduce to the main functions of accountability and learning that evaluations are commonly expected to provide. Depending of course on the nature of the "host" organization, such principles might include:
Evaluations should contribute to the accomplishment of an organization’s mission.
The decision to evaluate should be strategic.
Evaluations should enlist the participation of users.
Evaluations should be an asset to users.
The process of evaluation should develop capacity in evaluative thinking and evaluation use.
Evaluative thinking should add value from the outset of operations.
Evaluations should test the validity of conventional wisdom about practice.
Thank you so much for all your answers. Robyn, I couldn't agree with you more! Dibakar, I hadn't seen that article yet and it looks very topical; I've just downloaded it and will include it, so many thanks. Olivier, I think the principles you list are very important, and many of them are not incorporated into evaluation currently despite the rhetoric (or sometimes they are incorporated, but in a tokenistic way). With some I think we need a lot more meaning-making discussion. E.g. in your last point...what is quality and who determines what it is?
@Kelly, it goes without saying that more meaning-making should (must) accompany the identification, selection, and definition of principles, including what parties are privy to the exercise; the list I gave is indicative. On quality, requirements might relate to what is needed to ensure validity of findings (from evaluation) and reasonableness of recommendations (and what accepted social science research methods and procedures could be followed toward that). At a higher level, quality might be assessed against four internationally accepted standards: utility, feasibility, propriety, and accuracy.
Those are great points Olivier. I'm just thinking about how standards of quality are very context dependent, e.g. an RCT in a medical trial may meet those standards, but an RCT in a social program run by a small NGO is unlikely to meet them. Has there been much exploration specifically of context sensitivity in standards of quality? I know there has been a lot of emphasis on the use of appropriate methodologies, but RCTs and quantitative methods still prevail as 'gold standards' despite their inapplicability (and consequent failure to meet quality standards) in many contexts.
@Kelly, I agree: propriety, mentioned earlier, refers to the state or quality of conforming to conventionally accepted standards of behavior or morals but also to the condition of being right, appropriate, or fitting. In short, there is little point in buying a Lexus when a bicycle is needed.