There are some questions on RG about EBM (e.g. Rachel E Patzer: "What proportion of medicine is evidence-based?"). In many of these discussions one can find critics. My motivation is to develop a differentiated picture of the advantages and disadvantages, limitations and perspectives.
To trigger the discussion, here are some possible questions: Can EBM be used in the same way in every medical or therapeutic discipline, or are there crucial differences? Does the institutionalization of EBM lead to a conservative attitude? What are the limits of the research methods preferred by EBM (e.g. randomized controlled trials, meta-analysis)? What is the relation of expertise to evidence? Is verification the right way, or should the research strategies of EBM rather be based on falsificationism? Are "evidence-informed practice" or "empirically supported treatments" better concepts/terms to enhance the interaction of research and treatment?
If we are able to figure out the problems, we are perhaps able to optimize the situation.
Dear colleagues
let me try to add a systematic overview:
1. Problems of the EBM idea
• It is a genuinely complex challenge to unite the clinician's expertise with the best available evidence. For example, it is epistemologically not easy to combine the first-person perspective (clinician) with the third-person perspective (research results).
• Clinicians have to be both experienced practitioners and researchers who understand the methods, methodology and limits of EBM research.
• Naturalistic fallacy: even complete knowledge of the facts would not by itself imply an answer to the question "what should be done?".
2. Problems of EBM research methods
• There are many possible biases associated with RCTs (e.g. representativeness of samples, control-group problems).
• There are many possible biases associated with meta-analysis (e.g. garbage in, garbage out; apples and oranges; abstraction).
• There is a reductionism problem associated with group research methods.
• The applicability of the currently preferred methods in different clinical fields is an unsolved problem (e.g. what works in pharmacotherapy does not necessarily work in psychiatry).
3. Methodological / epistemological problems
• There is an allegiance or stakeholder problem that leads to a tendency to produce positive results.
• Producing meta-analyses takes time, which leads to an up-to-dateness problem.
• Additionally, there is a conservative drift (older intervention methods, for which meta-analyses already exist, are preferred).
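The garbage-in-garbage-out worry about meta-analysis can be made concrete with a toy calculation. The sketch below (Python; every number is invented purely for illustration) pools study effects with standard inverse-variance fixed-effect weighting, and shows how one large, biased trial can dominate several small unbiased ones:

```python
# Sketch of fixed-effect (inverse-variance) meta-analytic pooling.
# All effects and standard errors below are hypothetical.

def pooled_effect(effects, std_errors):
    """Fixed-effect pooled estimate: each study is weighted by 1/SE^2."""
    weights = [1 / se ** 2 for se in std_errors]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Four small, unbiased trials that found no effect...
effects = [0.0, 0.0, 0.0, 0.0]
ses = [0.2, 0.2, 0.2, 0.2]

# ...plus one large trial with a biased estimate.
effects.append(0.5)
ses.append(0.05)

# The large trial's weight (1/0.05^2 = 400) is four times the combined
# weight of the four small trials (4 * 25 = 100), so it dominates.
print(pooled_effect(effects, ses))  # ≈ 0.4, far from the unbiased 0.0
```

If the input studies are biased, the pooled result inherits and even amplifies that bias, no matter how careful the pooling itself is.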
What do you think? Do you have ideas to add, or alternative statements? I do not want to deprecate EBM; the idea is to improve it or to develop alternative approaches.
Regards Thomas
As a quick addition to the discussion, here is a link to Eric Mykhalovskiy and Lorna Weir's 2004 paper, which explores some of the critique from a social science perspective. The paper is in Soc Sci Med. 2004 Sep;59(5):1059-69, entitled:
The problem of evidence-based medicine: directions for social science.
http://www.ncbi.nlm.nih.gov/pubmed/15186905
With more time, I will contribute more reflections and current references for this critical and important discussion as it relates to knowledge making and actual practice problems...
Excellent discussion to hold in this venue, thanks Thomas.
Another starting point is Helen Lambert's summary of critiques and limitations of EBM and its notion of evidence (also Soc Sci Med, 2006). http://www.sciencedirect.com/science/article/pii/S0277953605006131
It's a rather large topic, which can be (and has been) approached from many different angles. I think Mykhalovskiy's call for more empirical social science research into the actual functioning and impact of EBM (rather than the development of more abstract and normative critiques) is still current.
I think this sentence of Hume reveals the biggest problem of EBM:
“When we run over libraries, persuaded of these principles, what havoc must we make? If we take in hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.”
Hume, An Enquiry Concerning Human Understanding
This is a very big topic on which much has been written and discussed. I wrote a paper with Jackie Persons on one aspect of this topic, "Are randomized controlled trials useful to psychotherapists?". The paper is somewhat unusual in that it's done in the form of a debate in which Persons argues in favor of RCTs while I argue that they are not helpful to therapists (though other types of research studies can be very useful). The paper can be found on my profile and here is a link:
https://www.researchgate.net/publication/13742944_Are_results_of_randomized_controlled_trials_useful_to_psychotherapists?ev=prf_pub
EBM is controlled by those who control the evidence. Peter Parry and I documented problems relating to various biases in EBM in a paper linked here: http://i.bnet.com/blogs/spielmans-parry-ebm-to-mbm-jbioethicinqu-2010.pdf.
In our paper, we describe evidence from internal pharmaceutical company documents, mainly pertaining to psychiatry. After delving into this line of research for several years, I can't even say the phrase "evidence-based medicine" without rolling my eyes. A substantial literature base has emerged documenting clear biases in the "evidence base" underlying EBM. The idea of EBM is sound but its implementation leaves much room for improvement.
Sometimes (and this could be one such case) etymology, the study of the origin of words, is useful. What does "evidence-based medicine" mean? "Evidence" is not just any observation which is "evident". In fact, if by "evident" we mean what is logically evident, then only mathematics could be a science. If instead we mean any observation that appears unequivocal to our senses (empirical evidence) because it is salient against a background, then we should still believe that the Earth is the center of the Universe (it is "evident" that the Sun rides around the Earth), and we could never have arrived at heliocentrism.
Science is both empirical AND logical. It is the experimental method which allows us to reduce the ever-present errors in observation and judgement, by controlling the possible sources of error. So in the scientific field, "evidence" is short for "experimental proof". But has Medicine not relied on experimental proofs all along, even before EBM? No: it was only in the mid-19th century (1865) that Claude Bernard published his book "An Introduction to the Study of Experimental Medicine". So EBM certainly does not have as long a history as Medicine itself, but neither is it new, being about 150 years old!
To me, the issue lies in making a distinction between the philosophy of evidence-based medicine at a high level and the epistemology/practices of evidence-based medicine at a "lower" level. The idea that medical decisions are made on the basis of sound evidence, clinical experience and patient preference is not one that most people would argue with. Problems arise when these three elements are separated out in research and any one of them is given dominance over the others. Basing decisions on systematic reviews that are essentially reviews of published RCTs is problematic, given that it is now generally accepted that 1) not everything that is published is created with rigour and 2) much of interest does not manage to get published.
In addition, the RCT format only asks one kind of question and generally pushes people (given funding restrictions etc.) to conduct their work in a very simplistic way, ignoring difficult populations, complex areas, etc. So a question of social justice arises within the EBM setting, given our current research environment, in which the hierarchy of evidence is dominant.
Some authors who spring to mind as having written great pieces on this are Harald Walach and Wendy Rogers; from a pharma perspective, part of Ben Goldacre's recent book on the pharmaceutical industry is useful.
Perhaps the issue is whether anything about EBM/P is useful at all? Here is my published attempt to show its deep problems.
The levels of evidence (and so EBM) are usually defined by the presence or absence of RCTs. The practice of EBM, as its founder Sackett saw it, means integrating individual clinical expertise with the best external evidence from systematic research. This is more than RCTs or not.
My 30 years' experience in health services research, psychotherapy research and program evaluation can well be summarized by Nick L. Smith's 1980 work "The feasibility and desirability of experimental methods in evaluation" (published in Evaluation and Program Planning, 3, 251-55; see https://www.researchgate.net/publication/4949505_The_feasibility_and_desirability_of_experimental_methods_in_evaluation?ev=prf_pub). Applied research requires methods which do not control reality but try to map it as it appears in daily practice, I think.
Psychotherapy research is one of the fields of applied science in which the overestimation of RCTs is a big problem. Leichsenring and Rabung have written a very readable paper on this: https://www.researchgate.net/publication/8446167_Randomized_controlled_versus_naturalistic_studies_a_new_research_agenda?ev=auth_pub. The scientific advisory board for psychotherapy in Germany (see http://www.wbpsychotherapie.de/) has also argued for a more balanced methodological view when evaluating research (see Nübling 2009, unfortunately only in German: https://www.researchgate.net/publication/235249255_Das_Methodenpapier_des_Wissenschaftlichen_Beirats_Psychotherapie_-_Definierte_Hrden_fr_die_Zulassung_von_Psychotherapieverfahren_fr_Ausbildung_und_Berufsausbung?ev=prf_pub).
A very good paper in this sense was also written by Franz Porzsolt: https://www.researchgate.net/publication/236921737_Form_follows_function_pragmatic_controlled_trials_(PCTs)_have_to_answer_different_questions_and_require_different_designs_than_randomized_controlled_trials_(RCTs)?ev=prf_pub
Thanks for interesting answers and literature hints.
Some aspects can be summarized so far:
1. There are obviously some critical arguments that derive from the social sciences (Sonya Jakubec).
2. More research on the actual functioning of EBM is necessary (Loes Knaapen).
3. There seems to be an epistemological danger of sophistry and illusion in EBM (if I understand Giacinto Buscaglia's Hume citation correctly).
4. Research methods other than the currently preferred ones should be taken into account, because RCTs are limited (George Silberschatz).
5. There seems to be a difference between the probably sound idea of EBM and its implementation (Glen Spielmans).
6. EBM is not really new; it has a history of at least 150 years (Lucio Sibilia).
7. There is a social justice problem associated with EBM that derives from publication bias and from the limited complexity of RCTs (Emma Tumilty).
8. The three-component process model of EBM is a pseudoscientific tool (Tomi Gomory).
9. Through EBM, "the development of best practices" is converted "into a political process" (Tomi Gomory, p. 18).
10. There is an overestimation of RCTs in EBM, especially in psychotherapy research (Rüdiger Nübling).
I hope I did not forget any relevant aspect and understood your statements the right way (please correct me if necessary).
Regards Thomas
I recently read an article
McLeod, J., & Elliott, R. (2011). Systematic case study research: A practice-oriented introduction to building an evidence base for counselling and psychotherapy. Counselling and Psychotherapy Research, 11, 1-10.
These well-known authors thoughtfully recommend case-study methodology, in its different variants, as a complement to RCT research.
Michael B. Buchholz
I was recently recruited to work in New Zealand and seriously entertained moving there, until I noticed that only cognitive behavior therapy was allowed as treatment on a national basis, since it was the only evidence-based treatment, despite the fact that there are no two- to five-year follow-ups on these CBT patients. This approach struck me as political rather than scientific.
During my graduate studies in the USA, evidence-based practice was emphasized. It gives specifics for many circumstances, but not all. However, if someone treating a patient uses a therapy that does not meet an evidence-based criterion, and it is successful, then research should be done to add it to the list of evidence-based procedures.
Alternative medicine is in this category in the US. Most physicians refuse even to look at another therapy that they don't find in print and under evidence-based practice. Progress can be achieved for therapies that are currently not on the evidence-based lists if more people do simple research, so that more therapies are added.
In the USA there is a HUGE tendency to prescribe drugs by pharmacy prescription, as that is faster and easier and helps keep each patient visit down to 5 minutes so the business model is maintained. In the USA there is medical care but not health care, as prevention of illness would reduce the bottom line of the business model. BUT with more viable research there could be better evidence-based options, and then more practitioners would use them. Unfortunately, many practitioners are stuck in a rut: "we have always done it this way, and who are you to say there is any other way". HENCE evidence-based research can improve outcomes by making it easier for practitioners to update and improve their practice.
NOTE: It is important to keep any research for evidence-based practice simple and easy to understand for any busy practitioner. Or, to use an old US Army saying, use the KISS principle: keep it simple, stupid.
It may be helpful to remember that some years ago The Lancet published interesting contributions discussing the value of RCT methodology in medicine:
Thompson, S. G., & Higgins, J. P. T. (2005). Can meta-analyses help target interventions at individuals most likely to benefit? Lancet, 365, 341-346.
RCTs and psychotherapeutic practice cannot become completely integrated into one another, as what professional practitioners (PP) do is relatively autonomous. No client in psychotherapy likes to be treated as "a case of..."; everyone wishes to be viewed as an individualized person with intimate needs and special interactive demands. This triad of three I's (individuality, intimate relationship, interactive style) cannot be captured by RCT methodology; the latter serves other purposes.
Michael B. Buchholz
Inter. Psychoanal. University (IPU), Berlin (Germany)
This discussion is too theoretical and abstract for me to get my head round. Could someone who thinks EBM is a bad idea give a very specific and concrete example of this, and mention what the superior method of assessment of evidence was?
An interesting discussion. As an advocate/recipient of services, I find it most interesting that, in these days of patient-centered/driven care and shared decision making, those responding to this question compare and contrast the perspective of the first person (clinician) with that of the third person (research results). Of course, I would suggest that in these days of the recovery model, the first person should be the patient, and that patient/consumer views, both individual and collective, should be considered, or at least acknowledged, in any discussion of what kind of services we should be receiving. Three other doctoral-level persons-in-recovery and I published an article attempting to make this point. For those who might be interested in seeing our views on this issue, see: Frese, F. J., Stanley, J., Kress, K., & Vogel-Scibilia, S. (2001). Integrating evidence-based practices and the recovery model. Psychiatric Services, 52, 1462-1468. As we say in the movement, "Nothing about us without us!"
The objective of conventional medical treatment should surely be to induce complete recovery, or restitution to the pre-illness state. Anyone who disagrees with this principle needs to find a name other than Recovery Model for the alternative, as otherwise endless confusion and miscommunication will ensue.
Unfortunately the psychiatric recovery model requires recipients to confess to being "mentally ill" and to receive "medical" treatment which, if we review the empirical evidence, has not ameliorated a single meaningful outcome affecting the individual in the last 100 years or so. Rather, as with for example the atypical antipsychotics, it has instead created a whole set of iatrogenic diseases in the recipients. Two of my colleagues and I argue this in our 2013 book Mad Science: Psychiatric Coercion, Diagnosis and Drugs.
A true recovery approach, at least in mental health, requires rethinking the entire infrastructure. This last year NIMH has of course begun to admit that the DSM approach, for example, is useless for real scientific advance. That should stimulate all of us "experts" to get busy re-envisioning the way forward, together with those who, I would argue, have been victims rather than consumers/clients of the system.
"the psychiatric recovery model requires the recipients to confess to being "mentally ill" and receive "medical" treatment"
So, if persons do not accept that they are ill, the whole concept of recovery becomes even more meaningless. How can someone recover from something they don't have?
"the empirical evidence has not ameliorated a single meaningful outcome affecting the individual in the last 100 years or so. Rather, as per for example the atypical anti-psychotics has instead created a whole bunch of iatrogenic diseases in the recipients"
But see evidence based on 6 million Swedish adults (Am J Psych 2013;170:324):
Men with schizophrenia died 15 years earlier, and women 12 years earlier, than the rest of the population. A major factor was "substantial underdiagnosis and/or undertreatment" of common medical conditions. Those who did NOT take antipsychotics were 45% more likely to die.
These results were clear and consistent, and looked at unambiguous endpoints.
How is a wholesale rejection of the Medical Model going to help anyone with schizophrenia?
Anthony;
Perhaps, as a way to refute or at least complicate the information from this one study, you might review psychiatrist David Healy's discussion of the premature death of those diagnosed as schizophrenic here: http://davidhealy.org/the-madness-of-psychiatry/
Best.
Tomi
Anthony:
When I reviewed that article, it in fact discussed the premature death of those diagnosed with schizophrenia, not that they were better off. The most likely reason is the adverse effects of the antipsychotics, as argued by Healy. I attach the article for you and others to review.
To be precise, antipsychotic use in this one study had mixed outcomes and was not clear and unambiguous.
"The most likely reason is the adverse effects of the antipsychotics"
But antipsychotics saved lives!
Here is a relevant extract from the Swedish article:
"Antipsychotic Treatment
The association between specific antipsychotic medications and mortality was examined among persons who received any outpatient or inpatient diagnosis of schizophrenia between 2001 and 2009 (N=23,971), using 'sole use of perphenazine' as the reference group (Table 4). After adjusting for age, other sociodemographic variables, and substance use disorders, lack of antipsychotic treatment was associated with a greater all-cause mortality (adjusted hazard ratio=1.45, 95% CI=1.20–1.76), and specifically a greater mortality from cancer (adjusted hazard ratio=1.94, 95% CI=1.13–3.32) and a nonsignificantly greater mortality from suicide (adjusted hazard ratio=2.07, 95% CI=0.73–5.87)."
A doubled risk of cancer and suicide is not something that can be ignored. I agree that results for individual antipsychotics were less clear, but also less interpretable.
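For readers less used to hazard ratios: whether an adjusted hazard ratio is "significant" at the 5% level can be read straight off its 95% CI (does it contain 1?), and the underlying Wald z statistic can be approximately recovered from the CI width, assuming the CI was built on the log scale in the usual way (an assumption about the paper's methods, not something stated in the extract). A quick Python check on two of the figures quoted above:

```python
import math

def wald_z_from_ci(hr, lo, hi, level_z=1.96):
    """Approximate the Wald z statistic for a hazard ratio from its 95% CI,
    assuming the CI was computed as exp(log(hr) +/- 1.96 * SE) on the
    log scale (the standard construction; an assumption here)."""
    se = (math.log(hi) - math.log(lo)) / (2 * level_z)
    return math.log(hr) / se

# Figures quoted from the Swedish cohort paper (Am J Psych 2013;170:324):
z_all_cause = wald_z_from_ci(1.45, 1.20, 1.76)  # |z| > 1.96: significant
z_suicide = wald_z_from_ci(2.07, 0.73, 5.87)    # |z| < 1.96: not significant
print(z_all_cause, z_suicide)
```

This is why a "doubled risk" point estimate for suicide can still be nonsignificant: its CI (0.73–5.87) comfortably contains 1, reflecting very wide uncertainty rather than no effect.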
I will try to contribute to this debate by giving an example from developments in the psychotherapy area.
Music therapists (MT) think what "works or not" is making music with their patients. Cognitive behavioral therapists (CBT) think what "works or not" is "the intervention" (desensitizing, assertiveness training etc.). Psychoanalysts (PA) think what "works or not" is "the interpretation".
Simple observation shows, however, that in music therapy hardly more than 10 percent of a session is music; the rest is "talking about...". The situation is similar for CBT and PA: the essence is "talking". Of course, it is "talk-in-interaction" and it is "talking as doing", as treatment. It is effective, as very many studies have convincingly shown, for MT, CBT and PA in the same vein.
For more than half a century we tested hypotheses drawn from self-descriptions of therapists mixed with scientific models. Self-descriptions came with an overload of theoretical prejudice (case studies did nothing but confirm what the author already thought worked), and the scientific model went unreflected, for example as a "disorder - intervention - outcome" model (DIO model). This is not only from medicine; it is broadly applied in many variants of technological approaches. It led to further consequences: we followed the idea that there are special interventions for particular disorders, and we had a time with many research papers showing the superiority of one method (or the other) in treating e.g. monosymptomatic anxiety disorders (snakes, spiders, mice etc.). However, clinicians knew that there is no such thing as a monosymptomatic anxiety disorder. There are hardly any depressions without (from time to time) suicidal thoughts (but these people were excluded in the pre-treatment screening).
Gradually, the concept of comorbidity emerged. This was a construct saving the DIO model in an additive manner: for each symptom a new "intervention" was proposed, so the model could be saved, again supported by many RCT studies. Critics like Drew Westen said this is like treating fever and fatigue while ignoring the meningitis. Gradually it became clear that this kind of DIO model cannot do what good clinicians can: view an integrated gestalt.
Meanwhile the concept of comorbidity is criticized even by its former advocates. Some think it is not "the intervention" but the special atmosphere in the consulting room, where principles like security, attachment and empathic listening apply; principles that cannot be thought of apart from "talk-in-interaction". What psychotherapists do is "talking", but not in a trivial manner! We are slowly coming to understand the details of this kind of talking, but we are far from having a general theory. One thing, however, stands out very clearly: "(psychotherapeutic) talking" is not something that can be "applied" like an "intervention" in a technical system. If you tried to handle your talk in this manner, you would deeply destroy the whole endeavor. What you would have then is not a therapeutic session but something like an instruction or, worse, orders to be fulfilled: mission accomplished!
The best overview, "The Great Psychotherapy Debate" (Wampold 2001), which included all meta-analyses available at the time, confirmed this point definitively. Psychotherapeutic conversation cannot be thought of as an "intervention", and this is why RCT studies in this special field have meanwhile lost their priority, and why many researchers feel that such studies urgently need to be complemented by studies of other kinds.
Best
Michael B. Buchholz
Thanks for raising this(these) questions.
This is such a complex and fascinating topic with so many issues,
I'll just mention one quick problem with EBM, which is how we should take patients' preferences into account. At some level this is trivial (there is no point in giving an SSRI to a pregnant depressed woman who is not willing to take the medication). The problem, however, is that we don't have sufficient data regarding the impact of preference for various treatments on their outcome.
For example, it could be that dynamic therapy is an effective treatment for African American males (in fact, we have actual data supporting that dynamic therapy is more effective than an SSRI or placebo in this group; see Barber, Barrett et al., 2012). However, it would be difficult to conduct a study that includes preference as a stratifying variable in the randomization process.
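For what it's worth, "preference as a stratifying variable" is mechanically simple even if recruiting and powering such a study is not. A minimal sketch (Python; the subject IDs, stratum labels and fixed seed are all made up for illustration) of permuted-block randomization within preference strata:

```python
import random

def stratified_block_randomize(subjects, strata, block_size=4, seed=1):
    """Permuted-block randomization within each stratum (here, stated
    treatment preference). A design sketch, not a production tool."""
    rng = random.Random(seed)
    assignment = {}
    for stratum in set(strata.values()):
        members = [s for s in subjects if strata[s] == stratum]
        arms = []
        while len(arms) < len(members):
            # Each block contains an equal number of A and B, shuffled.
            block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
            rng.shuffle(block)
            arms.extend(block)
        for subject, arm in zip(members, arms):
            assignment[subject] = arm
    return assignment

subjects = [f"s{i}" for i in range(16)]
strata = {s: ("prefers_therapy" if i < 8 else "prefers_medication")
          for i, s in enumerate(subjects)}
alloc = stratified_block_randomize(subjects, strata)
# Each preference stratum ends up split 4/4 between arms A and B.
```

The hard part, as the post says, is not the allocation mechanics but getting enough patients in each preference-by-treatment cell to say anything about how preference moderates outcome.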
If you are interested in reading a bit more about the advantages and weaknesses of RCTs, beyond the trivial problems, please see my SPR presidential address. It can be downloaded for free at
http://www.tandfonline.com/doi/pdf/10.1080/10503300802609680
Have a great weekend
It is a very complex and interesting area in the clinical field. We need to use knowledge, experience and patient preference. It could start as simply as sitting with a healthy young patient with acute bronchitis who expects to get an antibiotic no matter what evidence you show or how much time you spend showing him/her all the high-quality studies/evidence.
I think the limitations mentioned about EBM are real when the perspective is focused on the evidence part rather than the practice part of the phrase. The best available evidence needs to be combined with patient preferences and assessment of the applicability of the evidence etc. This is a high order skill for clinicians.
Helping clinicians by teaching them how to appraise the evidence and how to apply the evidence is an essential part of good training. Making up-to-date evidence available is part of this. So overall the evolution of EBM is a good thing.
I agree that "teaching them how to appraise the evidence and how to apply the evidence is an essential part of good training. Making up-to-date evidence available is part of this", but this is nothing new, nor does it require the alleged EBM protocols, which, as admitted by their inventors, require subjective choices, as making such decisions always has. And such decisions were always, well prior to EBM, based on the best evidence available, if you were an ethical professional.
The issue of patient choice is an interesting one. A truly informed patient able to make reasoned treatment decisions at the time of a diagnosis is, I suspect, a rarity. The relationship with the physician is critical at this point and may well be influenced by the physician's own understanding of the research evidence. In this relationship one can observe the culture of the medical model meeting the public understanding of that model.
The legitimate questions physicians are asking of the foundations of the research they use as their starting point in decision making tend to be translated in the popular media as questioning the validity of evidence at the core of modern medicine. The patient does not usually read the medical press so their primary source of opinion forming information and understanding is often derived from a popular journalistic opinion based and biased perspective. How many patients have been influenced by headlines such as "Health fascists get it wrong again" (Daily Express in the UK on salt)? How does the physician then advise the patient that the basis for the headline was untrue and it was the research on the effectiveness of health education programmes that is weak, not the evidence for the risks of high salt intake in the diet?
The physician is in a difficult position, caught between the inherent problems of a less than robust evidence base / pressures to minimize costs from healthcare funders and the need to include an often poorly informed patient / public in the decision making process. The Cochrane Collaboration and others have done much to improve our understanding of the research base, the outcomes a particular research method will address, the promotion of higher publication standards and release of data. Medical education programmes are increasingly placing emphasis in the curriculum on informed decision making. The third element is how we now improve the public understanding of healthcare decision making, particularly where there is doubt around the effectiveness of an intervention, potential risks and the sought outcomes. This last is probably the most difficult to address.
The problem is simple in Psychology/therapy. "Evidence" is often treated as a noun, when it should be considered a verb.
The evidence is in the treatment outcome, not necessarily a cookie-cutter way of treating symptomology.
The risk of following EBM strictly is that of transforming a medical doctor into a cold veterinarian. There is actually a subjective, transcendent dimension in the medical act, related to the complex relationship between doctor and patient. For sure EBM is a very important tool if adequately used, taking into consideration that Medicine is far from being an exact science.
What is evidence? That should be the first question. Knowledge, as Polanyi pointed out so eloquently, is to the largest extent tacit and personal; hence we always say (rightly): I know. The randomized controlled trial as the basis for the highest level of evidence is as flawed a conception as that put forward for any other trial. The basic underlying assumption is that all potential variables in the two trial populations are evenly distributed and contribute equally, and in the same causative fashion, to the observed differences. Thus the perception is one of causality, whereas in reality it is one of correlation. This point should not be surprising, as all living things are "complex adaptive systems" that behave in non-deterministic ways: minimal differences between inputs invariably result in major differences in outputs.
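The claim that randomization balances covariates only in expectation, not in any single trial, is easy to quantify. Assuming a simple 1:1 split, the exact hypergeometric calculation below (Python; the numbers are hypothetical) gives the chance that a prognostic trait carried by half of 20 subjects ends up split at least 7-3 between the arms:

```python
from math import comb

def imbalance_probability(n_per_arm, n_with_trait, total, min_gap=2):
    """Probability that simple 1:1 randomization of `total` subjects,
    `n_with_trait` of whom share some prognostic trait, puts the count
    of trait carriers in arm A at least `min_gap` away from the expected
    count (hypergeometric distribution, computed exactly)."""
    denom = comb(total, n_per_arm)
    expected = n_with_trait * n_per_arm // total
    p = 0.0
    lo = max(0, n_per_arm + n_with_trait - total)
    hi = min(n_with_trait, n_per_arm)
    for k in range(lo, hi + 1):
        if abs(k - expected) >= min_gap:
            p += comb(n_with_trait, k) * comb(total - n_with_trait, n_per_arm - k) / denom
    return p

# 20 subjects, 10 of whom carry the trait, split 10 vs 10 at random:
print(imbalance_probability(10, 10, 20))  # ≈ 0.18: a 7-3 or worse split is common
```

So in a small trial there is roughly an 18% chance that a single binary prognostic factor is noticeably imbalanced between arms, before even considering the many factors that go unmeasured.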
The evidence base of medicine, I would argue, is really observational and experiential, as has been known for millennia; recent proponents of this proposition would include, among others, Osler, McWhinney and Feinstein. Our knowledge is based on individual observations that create distinct, mutually agreed patterns. The clinical task is to compare the features and behaviors of THIS patient to the known patterns, to guide participatory clinical decision making.
Doctors as much as patients need to understand that one can never predict the benefit of treatment or non-treatment, even if the odds are 99%. Hiding behind a p-value of a trial as the reason to make a decision is neither logical nor ethical (even if lawyers think it provides legal justification). The p-value is a property of the sample size, and every big trial with even the smallest difference will have a significant one. As David Healy pointed out, the more people are required to achieve a p-value, the more one can be sure that the intervention doesn't work in a pragmatic sense (or: a parachute doesn't need a trial).
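The claim that the p-value tracks sample size rather than clinical importance can be sketched with a quick simulation. The numbers here are my own illustrative choices (a tiny true effect of 0.1 standard deviations, tested with a simple known-variance two-sample z-test), not taken from any trial:

```python
import math
import random

def two_sample_p(n, delta=0.1, sd=1.0, seed=0):
    """Two-sided p-value of a two-sample z-test on simulated groups
    whose true means differ by a tiny, clinically trivial `delta`."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, sd) for _ in range(n)]
    b = [rng.gauss(delta, sd) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2.0 * sd * sd / n)          # standard error, known variance
    z = abs(diff) / se
    return math.erfc(z / math.sqrt(2.0))       # two-sided normal tail probability

for n in (50, 500, 50_000):
    print(f"n per arm = {n:6d}  p = {two_sample_p(n):.2e}")
```

With the true effect held fixed and trivially small, merely enlarging the trial drives the p-value toward zero; statistical significance by itself says nothing about whether the effect matters clinically.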
Medicine will always operate in the realm of uncertainty; the sooner we come to terms with this and psychologically learn to cope with it, the sooner we will practice more honest and humane medicine.
I support what Joachim has just stated. There is an extensive and thoughtful literature on these issues. Notable are the annual thematic issues devoted to EBM in the Journal of Evaluation in Clinical Practice, published since the 1990s. Two dedicated issues of Perspectives in Biology and Medicine also address the epistemological issues.
I think psychological therapy has jumped far too quickly to evaluation based on pharmaceutical, hospital-based medicine models. Medicine served its time in case study and observation for a very long time before RCTs became standard, based on a better-developed understanding of the field (cf. Pickstone). As for the pharmaceutical research model, people are far more complex than mice or rats, and they do not live in controlled environments; it would certainly help if commercial interests didn't motivate the manipulation of results (cf. Goldacre).
Much of what passes for "evidence" in psychological therapies is rooted in simplistic studies with highly untypical samples (cf. Moloney and Kelley). The medical paradigm in clinical psychology blinds many researchers (and some practitioners) to the centrality of the therapeutic relationship as a healing factor.
The requirement to fragment/isolate/manualise in order to achieve evidence of effectiveness renders complex and/or irreducible phenomena such as the therapeutic relationship or responsive integration of therapies impossible to study. Hence the proliferation of pseudo-scientific versions of therapy in the literature on EB in psychological therapy.
On the other hand, therapists are notorious for overly trusting their own experience of what works in therapy (cf. Rennie).
"On the other hand, therapists are notorious for overly trusting their own experience of what works in therapy"
This is actually the core problem, so therapists can't really complain when forced to embrace the more ruthless and inflexible methods of hard science.
Thanks a lot for the many additional answers.
The ongoing debate on EBM is an interesting field. I would again agree that the idea is a useful tool, without doubt. But the realization is often the problem. If normative decisions are added to (or based on) scientific results, the resulting problems take on different connotations. The question then is whether the norms are correct, helpful, ethical, adaptive enough, and supported by which political interests.
Regards Thomas
Yes, there is: a shift from reductionist thinking to systems thinking and, for the health professions, especially thinking in terms of complex adaptive systems.
The difference between the two is summarized as follows:
Simple scientific world view (Gauss distribution)
• linear; output is proportional to input
• additive
• simple rules yield simple results
• stable
• predictable
• quantitative
• normal distribution (Bell curve)
Complex scientific world view (Pareto distribution)
• non-linear; small changes may diverge
• multiplicative
• simple rules yield complex results
• unstable
• limited predictability
• qualitative plus quantitative
• inverse power-law distributions (linear on a log-log plot)
Systems thinking focuses on (1) understanding the agents and their behaviour, and (2) the relationships between agents and their interactions. A very simple introduction can be found here:
Joachim P. Sturmberg & Carmel M Martin. Complexity and health – yesterday’s traditions, tomorrow’s future. doi:10.1111/j.1365-2753.2009.01163.x
Bruce J. West . Homeostasis and Gauss statistics: barriers to understanding natural variability. doi:10.1111/j.1365-2753.2010.01459.x
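A toy simulation (my own illustrative parameters, not taken from the papers cited above) contrasts the tail behaviour of the two world views: under the Gaussian, extreme outcomes are vanishingly rare, while under a power law they are routine:

```python
import random

rng = random.Random(42)
N = 100_000

# Gaussian world: typical value 1.0, thin tails (sd 0.25)
normal = [rng.gauss(1.0, 0.25) for _ in range(N)]

# Pareto world: P(X > t) = t**(-alpha) for t >= 1, heavy power-law tail,
# sampled by inverse-transform from uniform draws
alpha = 1.5
pareto = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(N)]

def frac_above(xs, t):
    """Fraction of samples exceeding threshold t."""
    return sum(x > t for x in xs) / len(xs)

for t in (2, 5, 10):
    print(f"t={t:2d}  normal: {frac_above(normal, t):.5f}  "
          f"pareto: {frac_above(pareto, t):.5f}")
```

At a threshold of 10 the Gaussian sample contains essentially no events at all, while the Pareto sample still produces them at a rate of a few percent; this is the practical meaning of "limited predictability" in the complex world view.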
Thank you Joachim - in EEG research we begin with modular approaches but more illuminating information arises from synergy.
Thanks to all of you for this informative discussion. I would like to recommend another critique that I found of great value: Hickey & Roberts (2011): Tarnished Gold: The Sickness of Evidence-Based Medicine.
Carolyn
I have worked as a clinician for some twenty years and another twenty as a hospital-based clinical epidemiologist, and I find that confining EBM to treatment, and therefore to RCTs, is very reductive.
Clinical medicine starts with diagnosis, a neglected field for evidence.
Moreover, true RCTs are simply impossible, or carry serious limitations, in many fields: surgery, psychotherapy, rehabilitation, rare diseases, etc.
A lot of effective medicine is practiced with evidence from observational studies, which seem to be neglected too.
The RCT seems conceptually simple, and nowadays clinical researchers "jump" at once to the RCT without taking observational evidence into consideration: evidence that is more or less sound, but evidence nonetheless.
That is an excellent point. I think EBM started out with a much broader set of concerns and initiatives. Essential components of clinical practice such as diagnosis and prognosis have received short shrift. EBM has also shied away from engaging the more relational dimensions of clinical practice. I have explored this in terms of patients who are "beyond" evidence: either all "evidence-based" therapies have failed, or there is no evidence of effective therapy in these cases, yet there is still much a clinician can do.
Quick response, hopefully more later. As a clinician (psychotherapy) and a researcher, my continual struggle with RCT approaches is that they assume treatment protocols can be administered with equal effectiveness by any clinician. The focus is almost always entirely upon the protocol. In my research I continue to find a common denominator that impacts outcome outside of any given protocol: the state of mind and capacity of the treatment provider. Until we can adequately take this into account, much of what an evidence-based protocol can actually deliver will be profoundly limited.
Thank you Ken: you confirm the suspicion I put forward about psychotherapy and EBM.
I want to underline that it is not something against psychotherapy at all.
As I understand it, you say that you, the therapist, are the "pill"; but this "pill" is variable, unlike a standard pill, and more like a surgical procedure (which needs an entire team). Neither of these interventions can be blinded, just to add another problem.
EBM in any case calls for the "best" evidence you can obtain, together with your experience (possibly biased by over-self-esteem) and the patient's preference.
So don't leave the whole matter to trialists: I respect them and their tools very much, but RCTs are only a part, large and respectable, not the whole matter.
If I remember rightly, good papers have been published on antidepressant drugs with versus without psychotherapy: that is "the best evidence", I think.
Some of this is confusing, too pedantic. EBM has been around for a while and has been proven in the treatment of a variety of medical conditions. Are you talking about the difference between basic research and applied research? Medicine uses experimental research on cause and effect, e.g., the effectiveness of an intervention. RCT studies are characterized by randomization: subjects are selected or assigned to groups according to how well they represent the population of interest. The researcher manipulates some aspect of the treatment in a highly controlled setting, with the outcome results compared to a control group that usually receives the standard treatment or no treatment. If statistical analysis shows a difference, the researcher can conclude the treatment is effective, because other variables have been controlled. Experimental designs are highly structured protocols for sample selection and assignment, intervention, measurement, and analysis. This design is aimed at eliminating bias and controlling for rival explanations of the outcome. Where do you think medicine would be today if EBM hadn't been implemented some years ago?
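The design described above (random allocation, a manipulated treatment arm, comparison against a control) can be sketched in a few lines; all numbers below are hypothetical:

```python
import random
import statistics

def randomize(patients, seed=7):
    """Randomly allocate patients into equal treatment and control arms."""
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

patients = list(range(100))               # hypothetical patient IDs
treat, control = randomize(patients)

# Simulated outcome scores: the treatment arm gets a fixed benefit of 1.0
# on top of the same noisy baseline as the controls.
rng = random.Random(1)
out_treat = [rng.gauss(6.0, 2.0) for _ in treat]
out_control = [rng.gauss(5.0, 2.0) for _ in control]

effect = statistics.mean(out_treat) - statistics.mean(out_control)
print(f"estimated treatment effect: {effect:.2f}")
```

The point of the random allocation is exactly the one made in the post: confounding variables are, on average, balanced between the arms, so a difference in outcomes can be attributed to the intervention rather than to a rival explanation.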
Just some additional ideas: 1. EBM has also clarified that some psychological and pharmacological treatments are not effective and have (serious) negative side effects. 2. I do not agree with the Gaussian argument against RCTs; effect sizes are much more subtle and need not rely on parametric statistics (see Cohen, 1994, "The Earth Is Round (p < .05)").
I think it important to make a distinction between methods (RCTs, meta-analysis, etc.) and a normative approach to deciding what counts as evidential in a variety of practice situations. There are many important neglected questions to be resolved about the first (effect size, outcomes, uncertainty bounds), and these will help to determine whether the latter is actually "effective" as a form of practice.
1. Meta-analysis: how heterogeneous can the data be? This is often a matter of opinion. The selection of what goes into a meta-analysis always involves bias, i.e., you look at the data and you do a meta-analysis when you think there is something there. Conclusions of a meta-analysis may prematurely prevent a well-controlled RCT from being run to answer the question.
2. EBM requires research quality and feasibility. If certain specialties/areas of study do not use validated outcomes, RCTs, etc., EBM may not help the individual health care provider.
I just published an overview of the (critical) sociological literature on evidence-based medicine, focusing on its notion of evidence and the use of guidelines to regulate practice. That may also be a helpful starting point.
Article Evidence-Based Medicine or Cookbook Medicine? Addressing Con...
... a very interesting paper for this discussion was published by Trisha Greenhalgh, BMJ 2014, see: http://www.bmj.com/content/348/bmj.g3725
Best regards, Rüdiger Nübling
An excellent study of this question in regard to psychotherapies in psychiatry and clinical psychology:
Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). Empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631–663.
Additional note - although this study is specifically about EBM in psychiatry/psychology, it has much broader implications for use of EBM/EBTs throughout medicine and healthcare. It has much to say about the logic, and lack thereof, of wholesale application of EBTs in ways that are destructive to patient care because they are not individualized.
Recently I read a very helpful paper:
Carey, Timothy A.; Stiles, William B. (2015): Some Problems with Randomized Controlled Trials and Some Viable Alternatives. Clinical Psychology & Psychotherapy.
If you contact Bill Stiles here in ResearchGate I assume he will send it to you.
Best
Michael
An early and extremely useful commentary:
Empirically Supported Complexity: Rethinking Evidence-Based Practice in Psychotherapy
Drew Westen and Rebekah Bradley, Current Directions in Psychological Science, Vol. 14, No. 5 (Oct. 2005), pp. 266–271.
AEHD (Attention Excess Hypoactivity Disorder )
http://www.psychiatryonline.it/node/6920
EBM?
It used to be that evidence meant discovering a cause or causes and finding the pathways that led from cause to effect. Now it means the wisdom of the crowd. What works for most is considered evidence. Those for whom it doesn't work are disregarded, at least on the first pass.
This is very different from what used to be: working with one person at a time and trying to come to some understanding of cause and effect in that one person, which, of course, did not necessarily generalize to others.
The wise practitioner uses both methods together.
What does evidence-based mean? Psychiatric medications inevitably do have biological effects. However, FDA approval only requires evidence for reducing symptoms per a DSM-guided diagnosis. I would welcome any confirmation that any meds correct identified biological impairments in the brain. Thomas Insel had the temerity to assert that the DSM-5 was no more than attaching labels to symptom clusters.