We have built up interesting concepts like the scientist-practitioner model, evidence-based practice, and empirically supported therapy. But the problems persist, especially in psychotherapy, including behavior therapy. Many clinicians express the feeling that research results are often irrelevant to them or too hard to understand. We have debates about manuals, about modalities, about specific versus common factors, about the clinical relevance of RCTs and meta-analyses… In talking with researchers and clinicians, there still seems to be a long-standing, large distance between the two groups. The question seems extremely relevant, and the danger of losing contact has not been averted. I remember very interesting meetings with David Orlinsky at SPR congresses more than 15 years ago, where this problem was discussed. But has the situation changed in the meantime?
I agree that it is a huge issue. I think that progress can be made. Part of it can come from changing the way that research results are presented. Clinicians (and clients) need to make decisions about individual people, not average cases. The traditional statistical analyses (correlation, t-test) describe group trends, and it is hard to connect them to individual cases.
It is a solvable problem -- Bayes' theorem is one solution that helps take group data and connect it back to probabilities for individual cases.
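To make that concrete, here is a minimal sketch (in Python; the sensitivity, specificity, and base-rate figures are hypothetical, not taken from the papers cited below) of how Bayes' theorem turns group-level test statistics into a posterior probability for one client:

# Minimal sketch: Bayes' theorem applied to a single client.
# P(disorder | positive test) =
#   sens * base_rate / [sens * base_rate + (1 - spec) * (1 - base_rate)]

def posterior_probability(base_rate, sensitivity, specificity):
    """Probability of the condition given a positive test result."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical example: a screening scale with 80% sensitivity and 90% specificity,
# used in a clinic where the base rate of the disorder is 15%.
print(round(posterior_probability(0.15, 0.80, 0.90), 2))  # -> 0.59

The same few lines, fed with a different clinic's base rate, give a different answer for that setting, which is exactly the kind of individualization that group-level statistics cannot provide on their own.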
Evidence-Based Medicine (e.g., the Straus/Sackett book) lays out practical ways that clinicians can use research to help make decisions about individual cases. (Straus, S. E., Glasziou, P., Richardson, W. S., & Haynes, R. B. (2011). Evidence-based medicine: How to practice and teach EBM (4th ed.). New York, NY: Churchill Livingstone.)
Reading that has changed the way that I approach doing research, teaching, and clinical work. Much of what I am doing now is trying to make it easier for both researchers and clinicians to connect the results to the people who would benefit from them.
It will still take time, because the current generation of practitioners and researchers were not taught this way of thinking (and it is a change in habits of thought).
I am hopeful that we are nearing a tipping point where this will become more common.
On my list of papers available, this one illustrates how one might approach assessment and treatment selection with a client using the Evidence Based Medicine/Evidence Based Assessment way of trying to link research and practice:
Youngstrom, E. A., Choukas-Bradley, S., Calhoun, C. D., & Jensen-Doss, A. (2014). Clinical guide to the Evidence-Based Assessment approach to diagnosis and treatment. Cognitive and Behavioral Practice. doi: 10.1016/j.cbpra.2013.12.005
For researchers or people with access to the data, here is an article that lays out the alternate statistical methods. They use the same data, but produce much more clinically helpful results.
Youngstrom, E. A. (2014). A primer on Receiver Operating Characteristic analysis and diagnostic efficiency statistics for pediatric psychology: We are ready to ROC. Journal of Pediatric Psychology. doi: 10.1093/jpepsy/jst062
This third paper shares some of my personal journey in changing my own thinking and practice:
Youngstrom, E. A. (2013). Future directions in psychological assessment: Combining Evidence-Based Medicine innovations with psychology's historical strengths to enhance utility. Journal of Clinical Child & Adolescent Psychology, 42(1), 139-159. doi: 10.1080/15374416.2012.736358
Looking forward to hearing others' thoughts, and here's to trying to become the change that we want to see!
-Eric Youngstrom
Frankly, I have no idea if the situation has changed - because I wasn't there before. What I can say is that, at least for CBT, I simply don't see the disconnect. CBT is a theory that is based upon the scientific method. Maybe something has gone wrong with some CBT practitioners, but it seems to me that the relationship between theory, research, and practice is like a virtuous inductive loop in which the bottom - i.e., practice - feeds back and influences the research, which in turn influences the theory. What comes to mind immediately is Beck's idea of "negative distortions of reality" as an example of what I'm talking about. It was developed inductively in therapy sessions and then tested in controlled experiments against drug therapy, behavior therapy, and other types of therapy. The results were very favorable for CBT, and they went back and influenced the theory.
Sounds reasonable or is this simply a naive response to a serious question?
Dear Thomas,
This observation of the gap between research and clinical practice in psychotherapy applies equally to all the social sciences, cognitive sciences, and humanities. The main reason is that, in general, and in these disciplines in particular, there is no single theory waiting to be disproved and replaced by a better one. Instead, in these disciplines we can find 20-25 theories (some of them even contradictory), as, for example, in linguistics, neuroscience, cognitive psychology, etc. Since all these theories remain in force, the clinician cannot adhere to them all. The reason? Because they do not really study the person; that is, the person is not approached from his or her subjective reality. If you are interested, on my RG page there are a couple of books and articles that address this interesting and important topic.
I have worked as a clinician and researcher. It really depends on the settings and the staff in those settings. Many settings do not employ people with strong research backgrounds (i.e., statistics), and so are not able to utilize the research effectively. There is also concern in many settings about sharing/publishing outcome research with others. In general I tried to establish outcomes-based data when possible in the various clinical settings, but rarely would a site allow those data to be shared outside the institution. As the data are often messy (i.e., diagnosis and symptom patterns have substantial variability) and the work takes time away from reimbursable activities, many settings also wonder about the utility. This leads to conversations about outcomes that start with the phrase, "Yes, but ...". However, given this, I fear that if clinicians do not take charge of this type of research, insurance companies (and/or others) who collect large data sets across sites will eventually do this data analysis and end up dictating treatment back to the settings (even more so than now).
My perception: in the sense that on-site research and outcome research are being emphasized more, yes, that appears to have changed since I started in the late 80's. As a result, on-site research is being undertaken more frequently, though only marginally so, as I see it. The importance of doing so at the institutional level, and of integrating/updating skills for clinical staff development, is more important than ever. It is perhaps one of the most critical skill-development areas one might consider at the institutional level. Assuming there is some validity to my perception (arguable), this is not being developed as I would expect.
As you note, this is a longstanding issue. I guess it has been made apparently less solvable by the hegemony of the RCT as the de facto standard. The RCT deals in averages, and most clinicians are not interested in averages; individual clients are certainly not interested. It's made more difficult in our field because we now typically evaluate the efficacy of therapy with an effect size. If the effect size of treatment is 0.8 (a general estimate from Wampold and others), then telling a client that at the end of treatment they are likely to be at the 79th percentile of the untreated group is not particularly useful or understandable (for the client); a quick check of that percentile figure appears in the sketch below. Clients want to know if they will be better. So problem #1 is that we need most RCT outcomes in a binary form, i.e., better/not better. There are signs that this is happening, and more recent trials dichotomise the outcome.
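For anyone who wants to verify that percentile figure, here is a minimal sketch (assuming normally distributed outcomes) showing how a Cohen's d of 0.8 places the average treated client at roughly the 79th percentile of the untreated group:

# Sketch: under normality, Cohen's d maps to the percentile of the untreated
# group that the average treated client reaches (sometimes called Cohen's U3).
from statistics import NormalDist

d = 0.8  # general effect-size estimate cited by Wampold and others
u3 = NormalDist().cdf(d)
print(f"Average treated client exceeds about {u3:.0%} of the untreated group")
# -> about 79%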
The second problem lies, I think, with clinicians. We forget that there are two RCTs - Randomised Clinical Trials and Routine Clinical Treatments. On the whole, the practitioner community hasn't used its enormous power to accumulate basic data about the effects of routine treatments. An agreed set of basic measures and a recording protocol can lead to the accumulation of databases with sample sizes of clients and therapists many times those found in the largest trials. There are now good examples of practice research networks - check out Louis Castonguay and others. They're not easy to set up, but they are worth it. I'm engaged in establishing one for clinical psychologists in training. We want to know how effective our students are during their training.
The availability of the internet and suitable software does give us the opportunity to begin to weave the data from the two RCTs together and get some basic questions answered. So I guess we are further on, but there is a lot to do.
Clinicians feel their wealth of experience is ignored in research paradigms (much as patients feel that their subjective experience is ignored). Researchers address questions that can be studied - these are not always the most pertinent questions and they often seem narrow and irrelevant to patients and clinicians. Clinicians sometimes feel they have to follow protocols that have been shown to work, and to follow them exactly - which interferes with spontaneity and sense of autonomy.
Researchers resent the fact that their findings are often dismissed and not followed in practice. From the research perspective, if something has been "shown" to be true, why is it not immediately adopted?
Many clinicians are more easily persuaded by anecdotal evidence (especially from their own practice) than they are by the results of large studies. The personal is always more influential than the impersonal. Perhaps if study results were presented differently....?
More focus on naturalistic studies, effectiveness over efficacy, more process-outcome research; as someone wrote (decades ago), amphibian clinician-researchers are needed in the clinical field. A research-oriented approach for clinicians and more contact with practice for researchers... Have a nice weekend!
Because of variability in human conditions, specifically as related to psychological problems, the gap between the efficacy of a researched intervention and actual practice effectiveness will always be unpredictable and large. This is unlike medical practice and the biological sciences, where the gap may be much smaller and the probability of effectiveness higher when an intervention based on an efficacy trial is applied to an individual situation. As such, the need for individualized evaluation to measure outcome is much greater. Clinicians may need to improvise their own measures of the effectiveness of a therapeutic intervention they choose to use, which often must be modified or adapted to a specific client or client-group situation, using an inductive approach to effective practice. In the process, a scientist-practitioner trained clinician may modify or add newer dimensions to an existing practice model to make it effective for a client or client group.
The mental health field is wide open for new ways of conceptualizing therapeutic interventions that fit the current knowledge and understanding of human behavior, which will always change with time and with the ever increasing growth of knowledge. It will always remain an exciting venture for clinical practitioners with a research bent of mind.
Mind Stimulation Therapy, which I pioneered over 25+ years of clinical practice, evolved from this kind of practice experience with individuals and groups, and from collaborative efforts with others (see Mind Stimulation Therapy: Cognitive Intervention with Persons with Schizophrenia by Mohiuddin Ahmed and Charles Boisvert, Routledge, 2013). No psychotherapy model ever developed will be definitive and universally applicable to all specific situations; it will always require some adaptation or modification, and a clinician's individualized approach to its use and to the evaluation of outcome. In the process, clinicians may and will find newer ways of conceptualizing a therapeutic approach that makes sense for a particular time period, reflecting the existing knowledge of human behavior, and that finds easy acceptance among consumers and service providers. This will always be an exciting opportunity for scientist-practitioners, but collaborative work and finding administrative and funding support will always be a challenge.
In October I started a research project in Italy to collect single case studies able to support Transactional Analysis (TA) as an Empirically Supported Treatment (EST). I'm using the Hermeneutic Single Case Efficacy Design by Elliott.
The problem is, as always, how to define the variables. It is quite easy to measure depression with a Hamilton or a Beck instrument, but it is a little more difficult to measure change in a whole transferential drama, as the pathological script of a suffering depressed person is defined in TA.
As a researcher, I think that the articles I'm writing will be useful to demonstrate that TA, like many other models of psychotherapy, has its own efficacy and effectiveness.
As a psychodynamic transactional analyst, I think that research risks being "causally empty" (Elliott) when the objects of change (the variables) are defined too narrowly.
Anyway, it is essential for survival that every model, above all the psychodynamic ones, learn from CBT how to publish and become an EST.
To me the big question is how to balance rigor and relevance as two important ideals, as these two ideals may never be fully accomplished.
Many researchers and therapists believe, using Stiles and Shapiro's (1994) metaphor, that psychotherapy is not a kind of 'drug' that therapists administer to clients. Frontline clinical practice does not resemble a controlled efficacy study. Unfortunately - or maybe luckily - therapists cannot control who comes into the room.
Barkham et al. (2010) take a stance that honours the complexities of practice. They respond to such complexities with a provocative claim: evidence-based practice must be informed by practice-based evidence (PBE).
PBE can be seen as an emerging paradigm in psychotherapy research, one concerned with acquiring evidence from routine clinical practice settings. I wrote a full review of the book, which is available on my RG page (see reference below).
I hope that helps.
Joaquin.
Gaete, J. (2011). Developing and delivering practice-based evidence: A guide for the psychological therapies. Psychotherapy Research, 1-3. doi: 10.1080/10503307.2011.611544
High-quality discussion here with, as expected, opinions coming fast and furious. Not surprising, given how important this question is.
I think a lot of "the problem" comes down to the personal inclinations, training, and skills of the individual clinician. Some people (Ph.D. or not) are likely to think deeply about the "mechanisms" of therapeutic change from a scientific perspective (including clear definition of terms, valid measurement strategies, systematic testing of hypotheses, and the development of theories based on lawful premises - or at least the striving for such) and others are not! Some people are intellectually a bit lazier, less curious, or quicker to accept ideas and "theories" on faith. It is unrealistic to expect all clinicians to be scientists, or even to be ready, willing, and able to make use of research findings, let alone to contribute to them.
Perhaps you will agree that this is a realist (rather than an elitist) attitude......
That being said, researchers might have to try harder to have the clinical implications of their findings understood.
I am really glad about the great interest in this topic.
What I can probably do is develop an overview of some statements to characterize the problem in more depth and to summarize some proposals/ideas for solutions:
Clinicians' perspective:
• Their experience is often ignored. (Mary)
• Prefer anecdotal evidence (from own practice) (Mary)
• Feel that the questions researchers address are irrelevant. (Mary)
• Employed clinicians rarely have strong research background. (Dale)
• Clinicians (and clients) are not interested in averages. (Stephen)
Researchers' perspective:
• Resent that their results are often ignored by clinicians and not immediately adopted. (Mary)
• The community of practitioners has wasted the opportunity to collect data on routine treatments. (Stephen)
• Some people think more deeply about therapeutic change (Ph.D. etc.), especially if they are trained as scientists. (Stephen)
Critical and expanding statements:
• In CBT the gap does not exist. (John)
• The problem is more extensive; it can be observed in all cognitive sciences, social sciences, and humanities. (Dante)
• Pressure of insurance companies (use research to put clinicians under pressure) (Dale)
• Hegemony of the RCT as de facto standard and effect size evaluations. (Stephen)
• A large number of theories confuses the situation. (Dante)
• The problem depends on the variability of human conditions. (Mohiuddin)
• It is the nature of psychotherapy models that they are not definitive and not universally applicable to all specific situations. (Mohiuddin)
• Reductionism in outcome measures. (Enrico)
• The problem of balancing rigor (researchers' ambition) and relevance (clinicians' ambition). (Joaquin)
• It seems impossible to expect all clinicians to be scientists (Stephen)
Ideas for solutions:
• Changing the way that research results are presented. (Eric)
• Researchers have to try harder to get their results understood. (Stephen)
• Presentation of RCT results in binary form. (Stephen)
• Using Bayes’ statistics to transfer results to individual cases. (Eric)
• Adopt ideas from Straus, Sackett etc. of EBM. (Eric)
• On-site research and outcome research have been enhanced (the situation has improved over the years). (Dale)
• Subjective reality is often ignored. (Dante)
• Integrating/updating skills for clinical staff is important. (Dale)
• Build up (more) practice research networks. (Stephen)
• More integration of RCT data to get some basic questions answered. (Stephen)
• More naturalistic studies, effectiveness research, process-outcome studies. (Jerzy)
• More scientist-practitioners and more contact between researchers and practitioners. (Jerzy)
• Individualized evaluation to measure outcome. (Mohiuddin)
• Provocative statement: evidence-based practice must be informed by practice-based evidence (Joaquin)
I hope I did not forget a relevant statement, and that this helps the discussion.
The question is still open. What are the best measures to bridge the gap? Are there problems that lie in the nature of psychotherapy itself? Are the methods we currently prefer questionable?
There are some additional issues coming to my mind.
• The concept of time for researchers is different from that of therapists. Researchers think in pre-post terms, whereas therapists prefer the model of past, present, and future, and have to react in the present (see McTaggart).
• Researchers work on the basis of abstraction (case, RCT, meta-analysis), whereas clinicians have to deal with concrete problems (human events).
• For researchers, therapists can also be treated as variables, and therefore sometimes do not feel really respected.
• Research can be used to control practitioners. Therefore the question to researchers is: is the aim of research to produce answers to relevant questions, or to act as an agent of health care systems (insurance companies)?
What do you think?
Regards Thomas
For a long time I have thought that this problem has a conceptual dimension often neglected in discussions of this interesting topic. From my point of view, one conceptual failure is our concept of knowledge as a sort of unitary entity, with scientific knowledge as the standard model or paradigm of knowledge. For me, knowledge is relational behavior in context, and "knowledge" is best conceived as three knowing modes: scientific, technological, and practical. See an old article of mine about this distinction.
With best wishes.
Another solution may be to use psychological science for implementing science (i.e., evidence-based treatment). There is a massive body of knowledge in psychology on motivation. Several theories of motivation, like the theory of planned behaviour and self-determination theory, were developed to predict behavioural change. These theories could also be applied to changing therapist behaviour.
See also the answers to my question on RG : https://www.researchgate.net/post/Which_factors_contribute_to_the_correct_use_of_evidence-based_treatment_protocols_by_therapists_And_what_factors_hinder_the_use_of_these_protocols
Dear Thomas,
Excellent review of our responses. I think the solution to this 'gap', as I said before, concerns the psycho-socio-cultural aspects more than the biological ones, and goes hand in hand, as you rightly point out, with the different times handled by researchers and clinicians.
In my theory of psychical structure and functioning (Salatino, Psyche, 2013; available on RG), time is considered the 'motor' of the operation, development, and evolution of the psyche. It highlights the existence of two different times (which maintain a triple relation of opposition, complementarity, and concurrency): the time of appearance, which coincides with the time of the researchers, i.e., chronological time running from a 'before' to an 'after', linear and irreversible; and the internal or psychic time, which coincides with the time that therapists handle, which runs cyclically and hidden through past, present, and future.
The key to a possible solution is to coordinate these two times in the 'now' of the eternal present, which is found in our psyche, and hence also in the patient's.
Very interesting topic and discussion. It is not easy to contribute anything unsaid. Maybe some aspects deserve more attention: (1) economic constraints and pressures are different in research and clinical practice; (2) the hierarchy of goals can differ: researchers need to get the project finished and published (and to secure good treatment), whereas clinicians need to treat patients effectively under certain organisational conditions; (3) time perspectives differ: researchers follow their projects for a limited number of years, whereas clinicians may have to offer therapy and support for much longer; (4) social integration: most clinicians are part of the greater community where many of their patients and their families live for a lifetime, whereas researchers can move away more easily once a project is finished; (5) possibly more joint ventures are needed: researchers should study the work of clinicians more often in a truly cooperative way.
As an extension of my previous answer, here is the translation of part of chapter 5 of my book Psyche: Structure and Function, pp. 162-165.
Unfortunately psychiatry continues to face this gap between research and clinical practice, for the reasons that Mary Seeman and others have given. To be scientifically sound, drug research must seek out a specificity of targets that is precluded by psychiatry's lack of understanding of the links between the neurochemical targets that drugs can affect and the mechanisms by which effects at those targets alter clinical signs, symptoms, biomarkers, and so forth. Psychotherapy lacks the same mechanistic understanding, leaving researchers and clinicians looking for changes in disorders when psychotherapeutic interventions may affect quite specific mechanisms such as cognitive set, mood, social anxiety and so forth, yet not do all the patient needs in order to recover.
In my experience I found DSM diagnoses, and the indicated drug or psychotherapy effects on those diagnoses, to be only one resource and not an organizing principle of research or treatment. Instead, immediately after residency, I learned from severely mentally ill patients that all the drugs, psychotherapies, and lobotomies they had received had not altered their ability to leave hospitals or resist returns to hospitals. In talking with them I found they were reassured that there could be a future only if they saw a way to live with their illnesses yet become members of society. Fortunately it was the dawn of the community mental health era, which allowed psychiatrists to work in teams with social workers, occupational therapists, vocational counselors, supportive employment, and other resources. This identified for me a patient-centered approach to care in which the patient determined what his or her future was to be (rather than, as today, agreeing to what psychiatry offered), and we in mental health took on the responsibility of providing a road into that future. I learned to identify patient problems and to work out with patients routes to the resolution of those problems. I learned to allow patients to be considered depressed because of personal issues that needed to be resolved: some resolved simply by identifying the mechanism, some needing psychotherapy, some needing ECT, some needing drugs, and many needing different mixes in different orders. Research became relevant as it was relevant for this or that step in a patient's recovery, and not generally as identifying prêt-à-porter solutions to psychiatric disorders.
Why is there a gap between research and clinical practice? I would say because we ask too much of research and clinical practice while not asking enough. I predict psychiatry will suffer this gap so long as individual patients do not identify for psychiatrists the problems that psychiatrists respond to and so long as psychiatry hopes to identify generalized solutions to disorders when all that either drugs or psychotherapies can do is affect specific mechanisms for which we have no consistent links to clinical disorders. Research can identify mechanisms of disease and interventions, links of mechanisms to clinical behaviors, and then a nosology based on mechanisms that interventions could target. Clinicians in my experience can only effectively treat the needs of patients, a constraint not well served by the academic profession’s obsessions with disorders that have homogenized how individuals behave and how they will be treated.
You may find these articles interesting:
https://www.researchgate.net/publication/26726638_Mind_the_gap_Improving_the_dissemination_of_CBT?ev=prf_pub
https://www.researchgate.net/publication/23500167_Evidence-based_treatment_and_therapist_drift?ev=pub_cit
To summarize some of the work in the special issue Jan is referring to: Practice Research Networks integrate the use of some form of measurement alongside clinical practice. They work only if that system can be designed in such a manner as to fit easily within the routine of seeing patients. Thus, most successful systems use brief computer assisted assessment, and automated scoring and feedback, to provide clinicians with what I would conceptualize as "mental health vital signs". They do this without overburdening clinicians with additional work.
I was trained in such a clinic, where data was a routine part of my case formulation, daily practice, and developing relationship with patients. After a time, many of my patients would begin sessions by asking whether I had looked over their most recent vitals. Often they would use the measure to communicate something awkward or difficult.
Anecdotal experience aside, what we find when we examine naturalistic data is that practitioners, by and large, have substantial effect sizes for treatment, regardless of orientation or use of ESTs. Takuya Minami benchmarked treatments in college counseling alongside results from RCTs, and found similar effect sizes (http://dm.education.wisc.edu/tminami/intellcont/Minami_etal_QQ_2008-1.pdf).
As a researcher and practitioner, I sometimes think we spend too much time considering the rollout of empirically supported treatments, and not nearly enough time considering how we train and support effective therapists. After all, research appears to indicate that more variation in outcome is due to therapist variables than the choice of treatment (see Wampold and Brown, 2005, or Kim, Wampold and Bolt, 2006, for example). I suspect that identifying and retraining, say, the bottom 5% of therapists (in terms of producing measured change in clients) would go a lot farther toward improving practice than increasing therapists' awareness of research or ESTs.
However, I appreciate that therapists are concerned that data won't really capture what is going on with their clients, and that they will be unfairly identified as ineffective.
I consult with some large outpatient providers in the US, and systems like the PRN are being considered quite seriously as a means for benchmarking and monitoring performance of therapists. Done right, such a system could be empowering and helpful - validating effective practitioners and identifying "supershrinks" to learn from. Done wrong, and it could alienate therapists and misidentify those in need of retraining.
Apologies if this has gone a bit astray from the original question, but as a younger therapist who has "grown up" in a system where measurement was a routine part of treatment, I see routine data collection as the manner by which research is integrated into practice. The analytics associated with routine measurement - identifying clinical change, expected treatment course, risk-adjusting for severity - are all based in substantial programs of research. Clinicians who use such tools may never be aware of the empirical work behind them, but they can take advantage of the research as part of daily practice.
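As one concrete example of the analytics mentioned above, here is a minimal sketch of the Jacobson-Truax reliable change index, a statistic commonly built into routine outcome-monitoring systems; the scale's standard deviation and reliability below are hypothetical, and a real system would use the published norms for its own instrument:

# Sketch: Jacobson-Truax reliable change index (RCI), one analytic behind
# routine outcome-monitoring feedback. Scale parameters are hypothetical.
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    # RCI = (post - pre) / standard error of the difference between two scores
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post - pre) / se_difference

# Example: a symptom scale with SD = 7 and test-retest reliability = .85;
# lower scores mean fewer symptoms, so RCI <= -1.96 indicates reliable improvement.
rci = reliable_change_index(pre=24, post=14, sd_baseline=7.0, reliability=0.85)
print(round(rci, 2), "reliable improvement" if rci <= -1.96 else "no reliable change")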
I have spent a lot of time thinking about and researching these issues. Much of what I would say has already been said. I want to particularly "amen" George Siefen's comment. It addresses structural and economic incentive issues that are too often overlooked.
Some have mentioned the importance of moving beyond evidence-based treatments to evidence-based therapists. I agree. However, in many cases, what is equally if not more important (as well as perhaps more feasible to operationalize) is to move towards evidence-based clinics or organizations. Inasmuch as outcome measures could be agreed upon by a large enough base of providers, organizations could demonstrate that they are evidence-based by showing outcomes that are equivalent to or better than what has been demonstrated in clinical trials. If organizations do not want to use EBTs, that is perfectly fine as long as they demonstrate that their organization meets certain benchmarks.
This turn to the organization (beyond the individual practitioner) is likely going to be more and more important, as psychological treatment becomes more integrated with managed care delivered by interprofessional teams. This is especially the case for specialty treatment, such as substance abuse treatment.
If the hope is that published research can be used to match individual patients/clients with the most effective psychotherapeutic procedure, then, while the gap may be progressively narrowed, it seems quite unrealistic to expect that it will ever be regarded as anything like closed. While the 'matching objective' of evidence-based practice may seem attainable in many areas of physical medicine and surgery, the context in which psychological treatments are used is very much more complicated. We already know that so many variable domains can make a difference. Some of the key underlying questions are: what specified procedures, for whom, to treat what problems, in what social/cultural environment, with what kind of outcomes, and for how long? Those of us who have struggled to maintain good contact with the best available published research know that in the end the data rarely (if ever) tell us exactly what is best for the patient/client seeking our help. We just do the best we can and remain guided by, but not totally wedded to, the available evidence.
Clinicians should collect data, develop hypotheses, analyze their data, and publish the results. I started collecting EEG and qEEG data in May 2009. I presented my results at an international conference in the U.S. in September 2013 and again last month in Venice, Italy. The study had an N of 386 clinical cases (224 children and adolescents). The paper has been submitted to Clinical EEG and Neuroscience, titled: "EEG/qEEG Technology Identifies Neurobiomarkers Critical to Medication Selection and Treatment: A Preliminary Study."
When I first started collecting data, I had no idea where it would take me...and that is just good unbiased science. The collection and analysis of clinical data is priceless and only costs your time.
I think that the enforcement of the scientist-practitioner model is key, as others have stated, but it's not easy at all. I have read this thread with great interest, and the references, which I haven't read thoroughly yet, seem really promising... it would be great if I could find there some light on the issues that trouble me. Personally, I conduct most of my research in the area of clinical effectiveness, and I face some discouraging difficulties, most of them related to the reliability of the conclusions and/or the chances of communicating the results in peer-reviewed journals:
- The problem of properly assessing diagnosis, evolution, and maintenance of results with valid, reliable, quantitative measures (or, more often, the frequent lack of any measures at all).
- In particular, the lack of structured interviews, rarely used in clinical practice, to support the diagnosis, which is usually a basic requirement of many journals.
- The devaluation and condescending attitude that some basic/lab researchers direct at data gathered in clinical practice.
I believe that we should improve our means of gathering high-quality data with minimum effort: reliability must be a major concern; but changes must also happen in the importance and consideration that the scientific community gives to this kind of naturalistic study. Surely technology is going to play a key role in the gathering and analysis of data (I'm working on the development and validation of EMA tools and assessment protocols using smartphones, for instance), but the scientific culture seems a harder issue to deal with...
Obviously it is difficult to bridge this gap between two groups who generally do not work in the same place and who attend different conferences and are otherwise not likely to encounter each other very often. Both sides do what humans do: they congregate with their own tribe.
Let me suggest one venue to increase interaction. Virtually all state psychological associations are largely practitioner oriented. Most of them hold some sort of annual meeting and many of them produce some sort of publication or newsletter. Both of these outlets are good tools for researchers to reach practitioners to present updated scholarship in user-accessible forums.
For the past 4 years I have edited the Texas Psychologist, which comes out quarterly. Nearly every issue has contained some article from a researcher updating the membership on new research findings about assessment, intervention, patient management, etc. These articles usually run about 3000 words and are solicited from scholars in our many Texas universities---I'd love to get an unsolicited piece from an academic who would provide an overview of some current research topic but thus far that has never happened.
I have collected data on my patients for the past 35 years utilizing a structured neuropsychological screening assessment, to which I have from time to time added research-based procedures, most recently an EEG actuarial. I have now keyed test results to medication responsiveness in collaboration with a number of physicians. I find this procedure allows me to direct therapy and medication management more accurately, without recourse to the phenomenological categories required by insurance companies. I was, like many in my generation, trained in the scientist-practitioner model.
Thank you again for interesting contributions and literature (Jan and Kate). I try to summarize the intermediate results again.
We had a lot of statements completing the picture of what the problem is and which factors are relevant. Some recent statements are optimistic, some rather pessimistic. Robert seems to me rather pessimistic, because there is not enough knowledge (or no organizing principle) or understanding of change mechanisms. One way out, in his opinion, could be to work in a more patient-centered way. Another solution could be to implement research networks (Sam) and/or to train and support therapists in their work.
Both statements imply that research should be conducted less abstractly and in closer relation to therapists and patients. Dennis added the idea of moving towards evidence-based clinics or organizations. (If I may allow myself a comment: I think the idea is good in principle, but benchmarking carries some dangers regarding comparability.) Peter is pessimistic about closing the gap, I think for reasons of complexity, but sees it being narrowed by matching individual patients/clients to the most effective psychotherapeutic procedure. Ronald added the idea that clinicians should collect more data, which I think implies that clinicians can help researchers produce more clinically relevant studies and results.
Francisco stated that the scientist-practitioner model is key, but that he faces problems with the reliability of conclusions and with publication.
Brian brought up that there are two (I would say) cultures, the culture of researchers and that of clinicians, and that each congregates with its own tribe. Therefore, to bridge the gap would mean to increase interaction (e.g., at meetings, in journals). And Edward, if I understand him correctly, tries to realize the scientist-practitioner model by integrating scientific measures (EEG) into clinical practice.
Hence most of you, some more pessimistic, some more optimistic, see solutions to narrow the gap or, to hold on to the metaphor, see some small bridges over the gap. The question now is how to bring order to the solutions: which are realistic, which of them should be prioritized, and why?
(I hope I did not forget a relevant statement.)
Regards Thomas
What is the definition of the gap? It seems to me that most research is focused on how well the application of an evidence-based theory works with the group being studied. The evidence-based theories themselves are very difficult to improve. The question is not so much the gap between research and practice as it is achieving skill in the practice itself, whatever evidence-based theory is being used. Therefore the gap is the difference between the real meaning of the practice and sufficient knowledge to apply it. The level of knowledge can be measured. The level of skill can also be measured by determining therapeutic outcomes. The results would help close this gap.
Another significant factor is the ability to establish an effective working relationship with clients. The gap in these relationships is a very significant reason why therapeutic efforts fail. Often therapists do not have sufficient self-awareness to maintain objectivity in the work and at the same time extend to the client a caring, hopeful, and warm attitude. This could be researched through the existing attachment scales, as the ability to be an attachment figure for a client is, in my opinion, the basic ingredient of successful therapeutic work.
As a clinician, trainer, and researcher in psychotherapy, I found this debate very interesting. Over the decades I have gradually become aware that there is a variety of implicit positions among my clinician colleagues - even those with a cognitive-behavioral orientation - which I called "myths" and described in my paper: Sibilia L. (2009) Efficacia delle psicoterapie: alcuni miti da sfatare [Effectiveness of psychotherapies: some myths to debunk]. Idee in Psicoterapia, Vol. 2, n. 3, pp. 15-31.
(see: https://www.researchgate.net/publication/255723388_EFFICACIA_DELLE_PSICOTERAPIE_ALCUNI_MITI_DA_SFATARE?ev=prf_pub)
These myths can easily be traced back to psychoanalytic (Freudian) thinking, and still today their presence in the clinical culture induces those who did not receive proper methodological training to dismiss or devalue the importance of experimental research and the systematic gathering of data.
What can be done? My suggestion may sound too simple: to strongly debunk those myths!
I suggest you read Wendy Hollway's work as a methodology to bridge the gap between research and clinical practice in psychotherapy:
Hollway, W. (1989). Subjectivity and Method in Psychology: Gender, Meaning and Science. London: Sage.
Hollway, W., & Jefferson, T. (2000). Doing Qualitative Research Differently: Free Association, Narrative and the Interview Method. London: Sage.
Hollway, W., & Jefferson, T. (2008). 'The Free Association Narrative Interview method'. In L. M. Given (Ed.), Sage Encyclopedia of Qualitative Research Methods. Thousand Oaks, CA: Sage.
Lucio, I suspect that many (if not most of us) cannot understand academic papers written in Italian. Perhaps you can tell us more about the content of your papers in English.
Paulo, similar problem. Many of us may not have ready access to these books... and even if we did, it would be helpful to have more information about why you consider their content to be relevant and important.
I am a clinical psychologist who stumbled upon Research Gate while researching schizoaffective disorder and agenesis of the corpus callosum. I wanted information in order to better treat a client. I find this site interesting and signed up for “updates”. This particular discussion has caught my attention, and I think a non-academic clinician’s voice could add perspective. In reading through this dialog of “How to bridge the gap between research and clinical practice in psychotherapy”, I was immediately tempted to add a little phenomenology into the discourse, but decided maybe a straightforward empirical approach would be more appropriate. What exactly is your operational definition(s) of “Gap”?
Speaking only for myself, I can report that I use a "tool box" of evidence-based practices in order to meet the needs of each client. I am a reflective clinician, and I adjust my practice accordingly. I keep informed of current research through my reading of current journals and through conferences and association meetings. I have yet to hear a colleague report that research results are "too hard to understand", although I will admit that relevancy can be an issue. I hope this emic perspective is helpful.
To Peter, here I paste the English abstract of my cited paper, and add some relevant information.
"Abstract: A bulk of experimental research studies has been cumulated in the last six decades on the therapeutic effectiveness of some of the psychotherapies offered in clinical practice, and on the subsequent processes of psychological change. Nonetheless, for many clinicians, such full-grown research field seems to hide more than a threat, albeit it unquestionably allows both to improve the quality of treatments provided to patients and to widen the range of recommended choices. A few critical positions exist in fact in the field, which are apparently open to the use of experimental tests of psychotherapy effectiveness, but basically skeptical about the evidence this research studies have provided, or clearly misrepresenting its conclusions. Here it will be shown how as these critical stances, well noticeable in the writings of otherwise well informed Authors, are not supported by empirical evidence, or are clashing with them, and are grounded instead on common stereotypes. The aim of this work has been to clarify such scientifically untenable attitudes, which appear to be akin to mythologies, owing to some of their features, such as unchanging transmission, independence from empirically controlled results, or their resemblance with other myths rooted in our culture. Unfortunately, the convergent effect of such stereotypes is that of limiting the use by clinicians of knowledge deriving form this research field, and of hampering its dissemination in the teaching and training of psychotherapists. We propose that it can be fruitful to to define such myths to better detect them."
Briefly, here are the described myths (mostly on outcome research):
1. MYTH OF PROSCRIPTION. "Outcome research is targeted at building banishment lists, so to exclude some psychotherapy approaches"
2. MYTH OF ECONOMICISM. "The evidence-based movement serves just the goals to reduce the costs of psychiatric care."
3. MYTH OF DEFINITIVE CURE. "Psychotherapies submitted to empirical scrutiny can produce initially favorable results, but do not achieve a persistent cure".
4. MYTH OF UNIQUENESS. "The experience of psychotherapy is unique, so it cannot be studied by any empirical means." (Patients are unique, so are therapists, methods, clinical problems, settings, interventions, etc.; THEREFORE no generalizations are possible from empirical studies.)
5. MYTH OF LONG DURATION. "Short therapies only achieve short-term and ephemeral results as they do not produce the psychological change needed, which is very slow and requires much longer time to be evident".
6. MYTH OF RESEARCH IRRELEVANCE. "Typical outcome research studies are irrelevant because: a) they exclude most of the patients found in clinical practice; b) they randomize patients, c) they have a fixed duration, d) they are implemented "in laboratory", e) they only evaluate "psychiatric symptoms".
7. MYTH OF EQUIVALENCE. The Dodo bird verdict (debunked), about the so-called "paradox of equivalence".
8. MYTH OF THE THERAPEUTIC RELATIONSHIP. "It is the therapeutic relationship (or alliance) which explains the outcome of a psychotherapy".
Of course, there are other myths to be described and debunked. I am also well aware that this concerns only outcome research and not other research fields. Moreover, the myths do not pertain to the methodological skills that clinicians should also have in order to perform useful clinical research. So, there is a lot of work still to be done!
I think that one of the best ways to bridge research and clinical practice in psychotherapy is to utilize the theoretical methodologies and applied diagnostic questionnaires of the Psychopathology-Health Inventory (PHI) devised by my beloved late father, Dr. Max Hammer. This material is found in chapter five of our recently published book, Psychological Healing Through Creative Self-Understanding and Self-Transformation. (ISBN: 978-1-62857-075-5) The PHI scale explains how to assess a particular psychotherapy client's relative level of psychological health or degree of unhealthy psychological disturbance, as well as how to diagnose related psychological patterns and needs. Other psychotherapeutic diagnostic questionnaires are included in chapter two of that book. My father's two books also explain how psychotherapists can gain greater therapeutic insight and induce greater receptivity to constructive changes in clients by engaging in a responsive process of empathic communion with clients, and expressing genuine warmhearted caring to them. Another related topic discussed in my father's books is how psychotherapists can best help their clients develop genuine experiential self-understanding, as the basis of liberating self-transformation, including resolving or healing emotional pain and inner conflict, as well as developing inner peace, compassion for oneself and others, happiness, creativity, and awareness of the spiritual or Transpersonal level of reality, as the basis of optimal psychological health and fulfillment. My father's books are available through Amazon, Barnes and Noble, and our author website, http://sbprabooks.com/MaxHammer
http://www.implementationscience.com/
Keyword for studies regarding strategies to bridge the gap: "implementation science"
I have participated in SPR conferences. A lot of innovative studies from Norway, Germany, Switzerland, the USA, and the UK were presented. Fortunately there was no hostility towards psychoanalysis. First: today governments only fund experimental methods in psychotherapy. This method does not work in psychotherapy research. The only methodology that works in the field is natural data. I developed and presented at the SPR a quantitative method for analysing natural speech. Second: to analyse empirical data and successful psychotherapy you need a theory of the client's successful development. Most psychologists reject even the possibility of such a theoretical model. As a consequence of these deficits, research on psychotherapy is not useful for the practitioner. At the SPR a lot of practitioners conducted research. A good thing.
The Two-Way Bridge initiative is a collaboration between the Society of Clinical Psychology (Division 12 of the APA) and the Psychotherapy Division of the APA—Division 29. It is part of an overall effort to bridge the long-standing gap between psychotherapy research and practice.
This initiative provides a way for practicing therapists to be a part of the research process by disseminating their clinical experiences in using various empirically supported treatments, which can hopefully inform future research.
For the survey findings on the use of empirically supported treatments for panic disorder, social anxiety, and OCD, visit: www.stonybrook.edu/twowaybridge.
Thomas this is a great question and an important one for all of us, clinician and researcher alike. I'm wondering if you would be interested in summarising your findings here in a short piece for our eMagazine The Neuropsychotherapist for our readership (made up of academics, researchers and clinicians)?
I think one of the first steps to bridge this gap is for those on the side of "science" to reconsider the current practice of answering every critique of RCTs, empirically supported treatments, and the evidence-based practice model by labeling them as "MYTHS". It seems to me that most reasonable people can find some truth in the critiques, and that the matter is more about how much those partial truths (both about the science and the supposed myths) matter. The gap between research and practice will not shrink so long as each side invests itself in wholesale devaluing of the other's perspective. I think research-informed practice is important. I think there is a lot that science can teach practitioners and practitioners can teach scientists. I don't think the EBP or EST movements are good examples of how to achieve either of these goals.
I agree with Eric: research-informed practice is important. There is a lot that science can teach practitioners, provided they have the skills needed to distinguish between good science and bad science, and between relevant studies and irrelevant ones.
It's a matter of training. With good methodological training, I agree that clinicians too can: a) organise scientifically useful data gathering, b) introduce rigorous testing into their psychotherapeutic work, and c) select research questions to address to researchers.
But bad science still remains in the culture of too many practitioners (and researchers as well!). Some terms, concepts, models, and theories typical of the pre-scientific era of psychotherapy, born alien to the experimental method and unaware of it, have greatly influenced the clinical psychology and psychotherapy of today.
So, these colleagues are not aware of the myths I have described. Anyway, you are right that there seems to be some truth in them, as happens with many cultural myths. Researchers and clinicians, like any other people (me included), are embedded in a culture which influences their (our) thinking. For example, one of my cultural biases (and I suppose that of many other colleagues as well) is the following: becoming aware of one's own biases is very important.
I just got into this conversation and skimmed the very interesting responses. I walk in both worlds, because I am an academician with expertise in HRV and I am deeply involved with Somatic Experiencing, the trauma-healing modality created by Peter Levine. We have a Somatic Experiencing Research Group that is currently creating a relevant database to support the design and implementation of research studies. What has become clear from our exchanges is that the expertise of people who actually do research and publish it is essential, because the average clinician does not understand concepts like having an IRB to supervise the study, getting CITI training for the people involved, or even research design (no, just calling something a control group does not make it either useful or feasible :-) ).
Yes, in the United States all credible research has to be supervised by an Institutional Review Board (IRB), sometimes known as a Human Subjects Protection Organization (HRPO). These organizations are found at every research institution, but there are also independent IRBs to supervise research for investigators who are not part of a research institution. They charge in the range of $1300 to approve a study and then $700 or maybe $900 a year to renew it. All human subjects trials require annual renewal. Here our IRB charges money to review and approve an industry-sponsored study but not yet for an internal one. The purpose is to make sure that federal guidelines for the protection of research subjects (e.g., HIPAA) are followed, and the rules are quite detailed, especially about informed consent, safety, and privacy. This would not apply to a case study written up in a journal. CITI training is an online training that is required to ensure that the investigators know the rules. It involves looking at slides and then answering questions at the end of each module. I believe that you need an IRB in order to do the training, but you can google that. There have been too many times that research subjects were not protected or had no recourse, so these rules are in place, and in fact when they are significantly violated (when the IRB is not doing its job), the right to do research and to receive federal funds for it has been revoked as a penalty. European studies also have some version of this, and you can see a statement that the study was approved by .... in many papers.
Thank you Matthew Dahlitz,
for your invitation. I also like this discussion and think that a lot of relevant statements have been made. Please give me some more information about the journal, expected word count, time frame ... .
Regards Thomas
I think the answer lies in the concept of engineering. In all other fields, engineers take the science and translate it into practice. We have psychotherapy scientists and we have psychotherapy practitioners. We do not have enough psychotherapy engineers. Some EBP developers effectively use engineering concepts to translate their work into practice, but mostly in the guise of implementing one specific practice. We need some generalists. I like to think of my own work in outcomes management as engineering - neither science nor practice, but rather the translation of science into practice. You really can't take research approaches and make them work for outcomes in practice. In fact, in my experience, you can't even take program evaluation approaches and translate them into practice. The approach has to be different.
There is a huge family of validated self-report instruments for measuring outcomes on many valences: stress, anxiety, depression, somatic complaints, etc. In addition, there are many ways to measure physiology, e.g., 24-hour ambulatory HR and HRV. We have published a paper showing a decrease in HR and an increase in some HRV measures in depressed post-MI patients who received a tailored CBT intervention. I think with the increasing number of ecological mood-state devices that run on smartphones, the possibility of measuring outcomes is growing very quickly.
Use outcomes which are relevant and recognisable in research for the patient, carers, and clinicians alike. Ditch many of the current rating scales. Outcomes should also be binary if possible, or a way should be developed of translating the results into binary outcomes (better/not better, in work/not working); report all results measured in the research, not just the cherry-picked, significant outcomes. Undertake the research over a period of time that is more in tune with a lifetime's illness (i.e., longer than 6 to 12 weeks). All research should be set in the context of what we already know.
What can clinicians gain from research in psychoanalysis?
A personal summary by Imre Szecsödy (assoc. prof. Stockholm) from five consecutively organised workshops, held at the Annual Conferences of the EPF in Prague 2002, Sorrento 2003, Helsinki 2004, Vilamoura 2005 and Athens 2006..
The goal of psychoanalysis is complex; it cannot be defined more clearly or made more explicit than as an aspiration on the part of the analysand and the analyst to promote autonomy, knowledge, emancipation, and health, and to liberate the individual from some limitations and suffering. How do we reach our goals in psychoanalysis? What happens within and through the interaction between the analysand and the analyst? Does change lead to insight, or insight to change? What does it signify that patients may feel equally understood by analysts belonging to different schools of thought, despite their divergent and often conflicting views of what is relevant and correct? What is specific? Is analysis a process of acquired learning, or a new beginning owing to the analysand's relation to the analyst? What is curative? Are the factors that vary and distinguish between different schools non-specific or specific?
The study of the process of change is a complex undertaking that can raise more questions than it answers. Results do not explain how the change occurred or what influenced it or brought it about. The individual case report has a long tradition in the study of the psychoanalytic process. Immersed in the clinical material, the analyst, as a researcher, tries to identify (impressionistically) the different elements of the process: what changes, how it changes, and why. A problem with using individual case studies for research is the unchecked, or not systematically checked, subjectivity of the observer, and the way unknown systematic biases are introduced by selecting data for presentation according to unspecified canons of procedure for determining its relevance.
To maintain psychoanalysis as a discipline, as a theoretical system and as a treatment method, we have to be committed to reflecting on its own nature and structure, and we have to remain continuously interested in studying these questions. An examination of contemporary psychoanalysis reveals major disputes in both theory and technique, as well as conceded gaps in knowledge. The need within any scientific body to resolve disagreements and add to knowledge provides steady pressure for the development of improved research methods. Nevertheless, systematic and empirical research does encounter resistance within the psychoanalytic community. The main aim of these workshops was to enhance and deepen the discussion between "clinicians" and researchers, and to learn more about how we can approach the study of a central question: how does psychoanalysis work?
These workshops were neither the start nor the end of something; they can be seen as the continuation of a dialogue among analysts, as researchers and clinicians. It is in the service of this very aim that I attempt to give a summarising report from the workshops. First of all, I wish to convey my sincere gratitude to Henk-Jan Dalewijk, Peter Fonagy and David Tuckett for their stimulating trust, and to the presenters, discussants, co-chairs and all the participants for forming and maintaining a "work-group" atmosphere of open discussion, with the aim of learning from rather than convincing each other. To quote Bion, we could "meet conflicts and challenges by testing our conclusions in a scientific spirit, seeking knowledge, learning from experience and continually questioning the best way to achieve its goal; in which there is a clear awareness of the passage of time and of the processes which have to do with learning and development".
The discussion remains very interesting. I have been thinking about these very brilliant and engaged contributions. Part of "the gap" could be that much of what clinicians do is not being translated into research paradigms. Some of the activities during psychotherapy do not correspond to "manuals", yet they are indispensable for keeping the process going. Maybe research could open itself to these "hidden" aspects.
I completely agree with you, George, and that's the sense that I gave to my previous answer, the problem of 'time'. Here's my article.
I think the solution to this 'gap', as I said before, involves the psycho-socio-cultural aspects as well as the biological ones, and it goes hand in hand, as you rightly point out, with the different times that researchers and clinicians work with. In my theory of the structure and functioning of the psyche (Salatino, Psyche, 2013, on RG), time is considered the 'motor' of the operation, development and evolution of the psyche. It distinguishes two different times (which stand in a triple relation of opposition, complementarity and concurrency): the time of appearance, which coincides with the researchers' time, i.e., chronological time running from a 'before' to an 'after', linear and irreversible; and the internal or psychic time, which coincides with the time therapists deal with, running cyclically and hidden through past, present and future. The key to a possible solution is to coordinate these two times in the 'now' of the eternal present found in our psyche, and hence also in the patient's.
Thomas, what a great summary of proposals/ideas for solutions.
I'm convinced the gap between research and clinical practice also exists in the diagnostic field. From my experience, it certainly applies for psychophysiology. It would not surprise me that this can be extended to other diagnostic areas.
You refer to several authors. Would it be possible to provide the exact references?
Thank you.
Thank you all for a stimulating conversation; this is an area that has really taken my interest, and it is great to find a community of people thinking about similar problems. It is hard to contribute unique points to this discussion, but here are some extra things we need to think about to truly understand the research versus clinical practice debate.
Firstly: is all research relevant to clinicians, and does all of it need to be translated? Having trained in both practice and research, I find it hard to imagine that the millions of different scales for a single construct (e.g., social anxiety) add much additional benefit for clinicians.
Secondly: where does the incentive for translating research lie? In other domains there are commercial incentives for the translation of research (and arguably there are for big pharma in mental health as well). Speaking from an Australian perspective, there is little incentive to take foundational research beyond traditional peer-review formats, because those are the benchmarks for promotion. As outlined in "Dissemination and Implementation of Evidence-Based Psychological Interventions", edited by R. Kathryn McHugh and David H. Barlow, these traditional formats are the least effective drivers of behaviour change. Researchers may need to join together to push for other markers of productivity within their own institutions.
Thirdly: this idea comes from my own current research, but there is a lot of conjecture about what therapists do and do not do with research evidence, and I can find little information beyond a small collection of studies that have actually tried to establish how clinicians engage with research. Those that are available tend to focus on attitudes (rather than actual behaviours) or are limited to discrete professions (often with exceedingly low participation rates), despite there being numerous different professionals who may treat the same person. Although we are very clear that there needs to be evidence for what clinicians do, we seem to lack evidence about what clinicians actually do and what they believe they need in order to integrate evidence into their practice. We are currently conducting such a study with Australian mental health professionals, but it would be interesting to see whether the picture differs around the world and across different healthcare structures and institutions.
Great topic for debate. One of the challenges, it seems to me, is that with the supremacy of the large-scale RCT (which usually, of necessity, involves homogeneous participants), many practitioners find that the RCT does not apply to their individual clients. The client has comorbidities not present in those included in the RCT, or is challenged by aspects of a manualised therapy, or wants to take a different focus or use different goals. This just widens the evidence-practice gap. Or the busy clinician wants to engage in research to evaluate their practice but does not have the capacity or sufficient clients to make that possible.
Single case design research offers a way both to evaluate the individual client's progress (beneficial to the client) and to evaluate a therapy in a client who does not fit the mold of that large-scale RCT. These are also publishable and accessible types of studies.
See http://www.sagepub.com/upm-data/19353_Chapter_22.pdf for a great introductory chapter on how SCEDs can help to close the research-practice gap.
John Crawford (see his website here: http://homepages.abdn.ac.uk/j.crawford/pages/dept/SingleCaseMethodology.htm) offers free and downloadable packages to conduct statistical analyses of single case research, to assist the researcher or practitioner.
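As a small illustration of the kind of single-case metric a practitioner could compute in a few lines of code (this is my own sketch, not taken from the linked chapter or from Crawford's packages), here is the percentage of non-overlapping data (PND) between a baseline (A) phase and an intervention (B) phase, using hypothetical weekly symptom ratings.

```python
# Illustrative sketch: percentage of non-overlapping data (PND) for a simple
# A-B single-case design. All scores below are hypothetical.

def pnd(baseline, intervention, lower_is_better=True):
    """Share of intervention-phase points that beat every baseline point."""
    if lower_is_better:
        threshold = min(baseline)
        better = [x for x in intervention if x < threshold]
    else:
        threshold = max(baseline)
        better = [x for x in intervention if x > threshold]
    return 100.0 * len(better) / len(intervention)

# Example: weekly symptom ratings (lower = better) across the two phases.
baseline_scores = [14, 15, 13, 16, 14]
treatment_scores = [12, 11, 10, 12, 9, 8]
print(f"PND = {pnd(baseline_scores, treatment_scores):.0f}%")
```

PND is one of the simplest non-overlap indices; the Shadish article linked below discusses more rigorous statistical options for single-case designs.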
On a practical level, it can also be difficult for practitioners/clinicians to forge links with universities and get access to libraries and ethics committees etc.
This is mostly a comment on the subject of this topic. I am currently nearing the end of my doctoral program, and the final project for my Systems Approaches to Addictions course is on this topic. It is a very salient topic, and there are some really great answers here; I am new to systems thinking. So far it seems that implementation and feedback are some of the major barriers that feed into the gap. Practitioners do not see that the research findings apply to their population (I believe this was already mentioned by another member), while researchers need good feedback from the field so that they are studying what is going on in the "real world".
I'm interested in more feedback on this topic and thank everyone thus far for the contributions to my learning.
Ty Yanushka
If you are interested in single case designs as a way of bridging that gap, William Shadish recently published an article on Statistical Analyses of Single-Case Designs in Psychological Science: http://cdp.sagepub.com/content/23/2/139.abstract.
Here is my reply to the initial question:
"In efficacy research, the focus is on maximizing the power of treatments. thus, efforts are made to control the influence of therapist factors by constructing treatment manuals that can be applied in the same way to all patients within a particular diagnostic group, regardless of any particular clinician. This research gives scant attention to any curative role that might be attributed to therapist factors that are independent of the treatment model and procedures (p. 227)."
Beutler, L. E., Malik, M. L., Alimohamed, S., Harwood, T. M., Talebi, H., Noble, S. (2004). Therapist variables. In M.J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (pp. 227-306). New York: Wiley & Sons.
And another one:
According to Wampold, 70% of treatment efficacy is due to the common factors out of which 30% is due to the therapist variable.
Wampold, B.E. (2001). The great psychotherapy debate: Models, methods and findings. Mahwah, NJ: Lawrence Erlbaum, Publishers, 263 pp.
And finally, in a study of what makes "master therapists", Skovholt and Jennings (2004) stated that one of the most evident qualities of master therapists is the use of the "self" as an agent of change in the therapeutic relationship, more so than any particular psychotherapy technique or theory.
Skovholt, T. M., & Jennings, L. (Eds.) (2004). Master therapists: Exploring expertise in therapy and counseling. Boston, MA: Allyn & Bacon.
My question, then, is: why don't we look at the unique qualities that some therapists have, which make them effective either with a particular diagnostic group or across the board, and choose carefully the students who later go into practice, instead of being overly concerned with treatment fidelity at the expense of a very important variable?
Or, why don't therapists watch masters at work and learn how to approach particular diagnostic groups better?
Being a therapist is a solitary job, and therapists, just like musicians, practice with the orchestra but perfect their skills in solitude. And, just like soloists, master therapists do not belong to crowds. Maybe research should focus where excellence already exists instead of trying to discover it elsewhere.
Take a look at emerging efforts regarding practice-research networks. I'm attaching an article (Castonguay et al) from a special issue on this topic in an upcoming issue of Psychotherapy Research.
Thank you John! An interesting read, and it makes sense. I wrote a paper on bridging the gap for a graduate course, which got me interested in this topic, and much of Castonguay's article rings true to my findings.