What’s Wrong with Evidence-Based AOD Treatment?

On Looking into William R. Miller’s

“Disseminating Evidence-Based Practices in

Substance Abuse Treatment”

© 2010 by Alex Brumbaugh

What is my problem with “evidence based practice” (EBP) in the treatment of substance use disorders and addiction?

I need to take a look at that.

I have so much first-hand knowledge of the immeasurable pain and suffering involved in chronic substance use disorders, both for the patient (alcoholic or addict) themselves and for those who love them – that one would think I would be at the very front and center of the crowd demanding that treatment programs only use things that have been “proven” to work – things that meet the gold standard of “scientific evidence.”

And yet, I have a problem. Something must be wrong with me.

That reminds me of a joke. People new to the alcohol and drug treatment biz may not have heard this one. It is something they used to say in Minnesota: "If you aren't in treatment, or thinking about getting into treatment, there's something wrong with you."

My problem could be that I have a resentment against medicine, psychology, scientific research, and the criminal justice system for coming late to the party and now wanting to take it completely over.

That is a bit disrespectful on their part, after all, and not just a little arrogant.

Or is that just me?

As a faithful and long-time subscriber to Join Together Online, and hence one who bathes regularly in the Latest Research on substance abuse, and as someone who has now read (most of) the venerable William R. Miller’s exhaustive review (2006) in the Journal of Substance Abuse Treatment of EBPs and the efforts that have been made to get them safely ensconced in all the substance abuse treatment programs of the land, I have a sentiment that I can no longer keep bottled up:

What we really need in the substance abuse treatment field is a little less science and a little more common sense.

A caveat: William Miller is brilliant. Brilliant and prolific. It would be an honor, and great fun I am sure, to be in one of his classes on alcoholism at UNM.

Let me also say that I am a devotee of a couple of the most innovative and powerful EBPs around – namely Lisa Najavits’s brilliant Seeking Safety for substance abuse and trauma, and Robert Meyers’ and Daniel Squires’ rich and sorely needed Adolescent Community Reinforcement Curriculum.

Or is it arrogant on my part to express value judgments like this? Isn’t it just about the … evidence? Who, after all, am I?

But – insofar as I am allowed a personal opinion – let me also say that William Miller’s EBPs on brief intervention and motivational interviewing are stellar breakthroughs.

But back to his article. It is exhaustively comprehensive – 36 pages long, with 149 references.

A side note: Fourteen of those references are to publications authored by Miller himself. That’s a rush with which I am not completely unfamiliar, having myself once published an article in the Journal of Substance Abuse Treatment and having twice had the opportunity to cite it – once in a book I wrote in 1993 and again in the only other article I have ever published, in a journal called Addiction and Recovery that is now defunct.

But this isn’t about me.

Miller talks in his article about the “natural diffusion” of EBPs from the research community to the treatment community. He suggests however that something in that process is not flowing quite as “naturally” as it ought. Programs cling to the old ways, the ineffective, non-EBP ways.

He also talks about levels or standards of efficacy. The gold standard for EBPs is the clinical trial, of course – good old-fashioned, double-blind, replicable research studies. EBPs are also established by the consensus of professional people working in the field. This is what comprises the "Treatment Improvement Protocols" (TIPs) of the Center for Substance Abuse Treatment. (William Miller is so successful that he has a TIP of his own – TIP #35. Nice!)

On lower levels of efficacy, according to Dr. Miller, there are what he calls “unevaluated” treatment methods, for which there has been little or no research and whose efficacy, therefore, is not known.

That means that there are things that people are doing to other people in substance abuse treatment programs about whose value nobody knows anything because the Professional Scientific Researchers haven’t studied them yet.

Next come “disconfirmed” treatment approaches. Some research has been done on these approaches, but they have been – in Miller’s words – “found wanting.”

And finally we have treatment methods that have a long history of negative findings in clinical trials yet continue in widespread use.

“For example,” Miller writes, “many substance abuse programs continue to use educational lectures and films as a standard component of treatment, unaware of dozens of clinical trials showing no impact of such didactic approaches.”

To provide evidence for this breathtaking assertion (coming as it does from a person whose primary profession is teaching), Miller cites two sources. The first is a 1995 paper by D. A. Davis and others called "Changing Physician Performance: A Systematic Review of the Effect of Continuing Education Strategies." I am putting this article on my list of things to read to see if it successfully debunks the impact of educational lectures and films in substance abuse treatment. (Based on the title, such a finding would be so tangential to Davis's main topic that I will be curious to see how he works it in.)

The second source Miller cites for his broad assertion is a chapter he himself wrote for the 2003 Handbook of Alcoholism Treatment Approaches. This book has gone on my Amazon wish list. The cheapest used copy is $63.18. It lists new at $95.00. It's not at the top of my wish list, so I will need to wait a while to take a look at the "dozens of clinical trials" from which he is drawing his conclusions.

In the meantime, here is something: I got sober in 1983 at a men's alcohol rehab in Southern California. One night during the first dark week of my stay there, with my eyes still blurry and my body still shaking, the house manager rolled in a rickety 16-millimeter projector and showed an educational film by Father Joseph Martin called "Chalk Talk on Alcoholism."

I recall to this day that experience, sitting in that darkened room watching the grainy images of a Catholic priest standing at an old-fashioned free-standing blackboard. I don’t recall what he said. I remember that at one point he turned the blackboard around. He had written stuff on the other side. Whatever it was, I knew for the first time that I wasn’t alone in my personal struggle with alcohol.

I was forever changed. I never drank again.

Some impact!

In Miller's realm, educational films – more than being of questionable efficacy in the treatment of alcoholism – actually get a negative score. In the chapter he wrote for that 2003 book, he made a table. I found a copy of it on the internet for free. The table gives education (tapes, lectures, and films) a value of minus 443 (emphasis mine). He has a ranked list of 48 interventions, and education (tapes, lectures, and films) ranks last – #48.

Miller provides – as only a scholarly researcher could – the following explanations of the numbers and rankings used in the chart he made. I quote directly:

Cumulative Evidence Score (CES). The CES is figured in this way. For each study, the Methodological Quality Score is multiplied by the Outcome Logic Score (not shown). Then (we) added up all these scores for a particular modality. So the CES is a function of the number of studies, the scientific rigor of each study, and the outcomes for a treatment within each study. 

Mean Methodological Quotient Score. This is the average score of the scientific rigor of each study in a particular modality. Scores can range from 0 to 17. 

Mean Severity of Treatment Population. This reflects the average of how severely dependent the population studied was. Scores range from 1 (less severe alcohol related problems and dependence) to 4 (severe alcohol dependence). 

% Excellent. This is the percent of studies in each treatment category that had high Methodological Quotient Scores (>13 on scale of 1-17). 
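For readers who, like me, follow arithmetic better in concrete form, here is a minimal sketch of how such a cumulative score could be computed, based only on the description quoted above. The function name and the per-study numbers are mine, invented for illustration; the actual Outcome Logic Scores, sign conventions, and study counts Miller used are not shown in his explanation, so this is an assumption, not his published procedure.

```python
# Hypothetical sketch of a Cumulative Evidence Score (CES) calculation.
# Assumption: each study contributes (methodological quality x outcome logic),
# with the outcome score positive for favorable findings and negative for
# unfavorable ones. All numbers below are invented for illustration only.

def cumulative_evidence_score(studies):
    """Sum quality * outcome over all studies of a given treatment modality."""
    return sum(quality * outcome for quality, outcome in studies)

# Invented example: a handful of rigorous studies with negative outcomes
# quickly drives a modality's CES far below zero.
education_studies = [(15, -2), (12, -1), (16, -2), (10, -1)]   # hypothetical
brief_intervention_studies = [(14, 2), (13, 1), (11, 2)]       # hypothetical

print(cumulative_evidence_score(education_studies))            # -> -84
print(cumulative_evidence_score(brief_intervention_studies))   # -> 63
```

Notice what the arithmetic implies: the more often a modality is studied with unfavorable results, the deeper its negative score, so a heavily studied approach can sink far below a rarely studied one regardless of how either performs in any single trial.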

I am tempted to rest my case about less science and more common sense.

But instead, let’s press on.

Let's imagine that the EBP-related tidbit in question – that educational films are worse than worthless in treatment as to their impact – were "successfully and naturally diffused." What outcomes would Miller then hope to see among treatment program managers and administrators?

Would it be that they burn all the movies in the house – including those of Father Martin?

If not, why not?

Could it be that Miller would assert that the immensely positive impact that watching Father Martin's "Chalk Talk" had on me is zeroed out by a commensurate negative impact on the man sitting next to me on that dreary night in Southern California? It may have so shamed that man that he went out and got drunk, and subsequently died of an overdose. Who knows? People were going out and dying of overdoses all the time at that rehab. If the forensic scientists had been rigorously on the scene, it would have made – if true – quite a headline: "Alcoholic dies of educational overdose – relapses after watching Father Martin film."

How would William Miller respond if he and I were talking and I told him that watching that Father Martin film saved my life? Well, I don't believe it literally saved my life. To state the case with less drama and more scientific accuracy – which I am sure Miller would appreciate – I would say that what it did was contribute to saving my life, because it helped keep me in that rehab long enough for significant changes to happen.

There’s a thought. What if movies improve program retention? Can we prove that? Has anyone looked at that?

Here is something else that I am curious about. Would Dr. Miller – based on the studies he has reviewed – have an interest in the film’s content? Does it make a difference what the film is about or who is in it? Would he equate, for example, the intense didactic lectures of John Bradshaw, the fatherly ones of Ernie Larsen, or the electrifying ones of Michael Johnson with dramatic emotional submersions such as Lost Weekend, Days of Wine and Roses, or Basketball Diaries?

In the presumably numerous education-debunking studies that Miller carries in his briefcase (to which one is not privy without buying the $95 book), are there distinctions drawn if the material in the film happens itself to be evidence-based? I am thinking for example of the brilliant Commitment to Change video series by Dr. Stanton Samenow, the preeminent criminal justice researcher who has more than any other individual brought cognitive therapy to substance abuse, mental health, and reentry programs in federal and state correctional institutions.

Cognitive therapy gets a stellar score of 21 on Miller's efficacy table, ranking #13, right behind case management. Is cognitive therapy completely negated in value if it is presented in video format? If yes, the assumption is that the counselor in the program who is doing the cognitive therapy is in every case more efficacious than Dr. Stanton Samenow, even if she is a terrible therapist. It doesn't matter who she is or what mood she is in, or how poorly trained or culturally incompetent she is, because she is in the flesh, and Dr. Samenow is only digital. She beats him every time.

BTW, if movies are at the bottom of Miller’s efficacy table, is anyone interested in what is at the top?

Number 1 is “Brief Interventions” and number 2 is “Motivational Enhancement.”

On that subject, I know what someone is going to say. They are going to say, “Oh, I’ll bet you are just saying all this because you are probably in the substance abuse movie field and just want to bolster sales.”

The answer to that is, as a matter of fact – spoiler alert! – I am! I am an education and treatment consultant for the company that distributed that very Father Martin video that I first saw so many years ago.

Why am I doing that?

Because I know from personal experience how important those different voices of recovery can be for people who are struggling.

To which Dr. Miller would say … what? That that weighs for … what? That my professional life is a … negative what?

And what would I say to Dr. Miller about the fact that the interventions he largely developed personally, and profits from every day, are at the top of his list?

I would say, “Bravo!”

On that subject, if Dr. Miller so doubts the efficacy of educational videos, how come he has more than a dozen of them on motivational interviewing for sale on his website? He has some in three languages. They cost around $150 each. 

*  *  *

Let’s look closely at education itself – whose impact Miller discredits – as a substance abuse treatment tool.

Can one imagine an educational lecture on cocaine, delivered to a room full of cocaine addicts, being really, really, really bad for them?

I can. As an alcoholic and addict, I will be the first to say how much I hate patronizing, harping, didactic, arrogant "health lecturers" telling me about my body. I hate them even when they are well organized, even when they have PowerPoint and nice charts and graphs, and when I am sitting in a nice theatre-style cushy lodge chair with comfortable arms. I can see them making me want to go out and get shit-faced, blitzed!

Does that mean, on the other hand, that information about the neuroadaptation of dopamine receptors in the brains of cocaine users, and its impact on craving cycles, is of no value to someone struggling with recovery from cocaine – that such knowledge would somehow have no positive impact on the success of his or her recovery? Or that education about the relationship between stress and craving would consistently have zero impact on a recovering heroin addict?

That’s not my (for what it’s worth) experience.

The point is that the essential variable in the impact of education is the context in which it is presented. If a parent gives their child educational information about drugs, is that an efficacious intervention? It depends upon the context. For example, is the information delivered in a loving context or an angry or confrontational one? Is it presented in a timely manner, or while the kid is on the way out the door on prom night? One could possibly list a hundred contextual variables that would influence whether or not the presentation of educational information had an impact that was (a) beneficial or (b) harmful or (c) neutral.

How many variables can you think of?

The important question for purposes of this discussion is, were all these variables accounted for in the clinical trials on the educational lectures that produced Miller’s thesis that such components in treatment programs deserve a minus 443?

The follow-up questions would be, if not, what is Miller saying? And why is he saying it? And why would someone of such reputation and brilliance bring his full academic and professional weight to a categorical debunking of the role of education in a forum as prestigious as the Journal of Substance Abuse Treatment?

Okay, maybe he is just saying, “If you are going to give the alcoholic client a bunch of facts about alcohol and the liver and if that’s all you are going to do, don’t think you have done a good job in treating him.”

I couldn’t agree more. I am a former high school American History teacher, and hence not one to overvalue education, especially if it is spotty. We used to have a saying: “If you are going to cover the Civil War in two days, forget it because the students will too.”

Context is everything – in teaching, in child-raising, in substance abuse treatment.

Isn’t that just common sense?

*  *  *

Miller goes on in the article in question: “Similarly, controlled trials have shown little or no beneficial impact on substance use outcomes from interventions such as acupuncture, confrontational approaches, insight-oriented group or individual psychotherapy, or mandated (emphasis his) Alcoholics Anonymous attendance.”

As evidence for this weighty statement, he again cites only himself – the same chapter in the same 2003 book.

In his table in that chapter he gives Confrontational Counseling a minus 183, ranking it #45 on the list of 48 interventions. That’s pretty low, but the other three evidence-impoverished interventions mentioned in this statement don’t even make the list, so they must be really, really low!

I have no experience at all with confrontational approaches to substance abuse treatment. My intuition (for what it’s worth) would lead me to agree that such approaches don’t have much impact, although in my travels in substance abuse recovery circles, I have definitely met people whose experiences would differ.

Nor do I have much personal experience with insight-oriented psychotherapy in that setting, but my bias (for what it’s worth) is that it probably won’t help people much until they have been clean and sober for a couple of years (at which time it might be “just what the doctor ordered” to prevent relapse!).

Nor has anyone ever made me go to AA meetings. But I will tell you this: I can’t count the number of times that I have been sitting in an AA meeting listening to a speaker who attributed his or her recovery solely to mandated meetings. It is so common in the AA culture that they have a euphemism for it: it’s called “a nudge from the judge.”

If Miller is right in ascribing such low impact value to this, what is going on here? Are these people delusional? Is it that they were really ready for recovery but didn't know it, and so got themselves mandated to AA through some subliminal process?

How would Miller invalidate the experiences of these people?

I am genuinely curious.

The fourth thing on Miller’s list is acupuncture. He mentions it first, but I saved it for last because it is something with which I do happen to have a lot of experience. Those two articles I published – and that book I wrote – that’s what they were about. Acupuncture in the treatment of substance abuse.

Maybe the reason Miller does not ascribe a numeric score or ranking to acupuncture in his table is that there were so many favorable clinical trials in the 1980s and early 1990s (Brumbaugh, 1993) that they would skew the numbers produced by the poor clinical outcomes that followed.

I have first-hand knowledge of what accompanied that sad downward trend in the research outcomes, and it relates back to the issue of context.

In the first clinical trials evaluating acupuncture, the modality (a 3-5 needle ear protocol) was studied in the context of a treatment continuum, i.e. a residential detox for homeless men. The outcomes were very positive. Acupuncture advocates have never asserted that it is a stand-alone therapy; it works in concert with other therapeutic interventions, including medications, case management, cognitive therapy, and other things high on Miller’s list.

And yet the research studies increasingly and persistently began to look at acupuncture in a vacuum. One of the last studies involved cocaine addicts who were not involved in treatment or recovery programs at all, and who were paid a cash stipend to come to acupuncture sessions in a university setting. These volunteer subjects presented their ears to a hole in a piece of cardboard in order to ensure that the acupuncturist would not know which person was receiving sham acupuncture and which was receiving the therapeutic protocol, and to eliminate any relational factors that might alter the outcome of the study. No one spoke to these subjects. If a subject happened to ask if anyone knew of any recovery resources for cocaine addicts, no one could say anything lest it squirrel the study.

Dr. Miller says that he factors in the “scientific rigor” of the studies he uses to make his charts and report his findings. I’ll bet this study got really high marks for its rigor!

Speaking of scientific rigor versus common sense, here's something interesting: An acupuncture study just published in (of all places) the Journal of Substance Abuse Treatment (Meade, 2010) looked at whether or not a form of electric acupuncture stimulation was effective in treating opiate addicts in an inpatient setting. The acupuncture was administered in 30-minute treatments over four days, along with prescribed drugs (a combination of buprenorphine and naloxone). Even though the declared intent of the study was to address the symptoms associated with opioid withdrawal, such as nausea, irritability, and insomnia, in order to decide whether the 4-day treatment was effective or not, the researchers tested people to see if they had used opiates two weeks following discharge from the program!

So the presumption was that a treatment administered in 30-minute sessions over four days upon admission to an inpatient program might influence whether or not people were still abstinent two weeks after discharge.

Success at two weeks post-discharge is going to be influenced by far more variables than just a procedure that was done for four days following admission. And yet such studies are – in Miller's world – the gold standard used to determine what should and should not be done in treatment. (Ironically, this particular study found that those receiving acupuncture were more than twice as successful as the control group, and they also reported being less bothered by pain and experiencing greater improvements in overall health.)

Go figure!

I’m sure this study will prompt Miller to publish a retraction of his disapproval of the modality.

On a related subject, what does it really look like inside a treatment program that is doing rigorous, scientific, double blind clinical trials on substance abuse interventions? Where are these clinics? Do they represent all the kinds of clinics and programs where people get sober? Are they mostly based in university or VA hospitals? Who are the subjects? Are the only people who get studied those who are willing to volunteer for a (usually) government-funded research study and who are therefore willing (and able) to fill out all the reams of paper work and sign all the consent forms and waivers that will be required?

(That last one is a trick question. Of course they are.)

So, is that a typical substance abusing population?

Who knows?

Who knows anything, for sure?

We would probably be well advised not to conclude things that cannot be supported by evidence, by experience, and by common sense.

But if he wants to persist (and I am sure he will), here’s another thing that Dr. Miller can put on the bottom of his efficacy list:

Cats.

There have (so far as I know) been no double-blind clinical trials on the efficacy of having substance abusers interact with cats while in treatment.

I mention this because someone dear to me once successfully recovered from multiple addictions – alcohol, heroin, cocaine. She had gone through, and failed, numerous treatment programs, some of them very "high-end" famous ones in California, Arizona, Minnesota, and so on. She finally got sober in a women's halfway house. They didn't even have formal treatment, just peer support and community living. I asked her why that worked for her, what the difference was that helped her finally achieve long-term sobriety.

She said there was a cat who lived in that house, and it gravitated to her and began sleeping on her bed. She said that it was the first time in her life that she had experienced unconditional love. The cat didn’t care where she had been, what she had done.

My friend never drank or used again.

How’s that for impact?

We should study that.

But we can’t. You would have to have sham cats. And to make it double blind, the people couldn’t know whether the cat in their room was a sham one or really cared about them. You would have to keep that information from the cat as well. Someday science may be able to accomplish that (perhaps John Cameron could help), but it hasn’t done so yet.

*  *  *

A final thought: The way to diffuse EBP into the treatment milieu is to make it a condition of funding. Treatment programs, for the most part, do what they do not because they think it's so great, but because it gets reimbursed. Conversely, programs don't avoid things that work per se; they avoid things that don't get reimbursed, because they don't have any resources to do them.

Some of the recent treatment grants from SAMHSA not only require EBPs, but urge very specific ones (thanks to the lobbying efforts of whom?). That means that if you want to apply for the money, you have to promise to do the EBP. No education or movies or acupuncture or mandated AA or loving kittens are to be included in these programs lest their fidelity be compromised.

The important questions are: (1) Are we absolutely sure that’s what we want? (2) Do we really know what we’re doing?

 

 

Notes:

Brumbaugh, Alex (1993). Acupuncture: New Perspectives in Chemical Dependency Treatment. Journal of Substance Abuse Treatment, 10(1).

Meade, C. S., Lukas, S. E., McDonald, L. J., et al. (2010). A Randomized Trial of Transcutaneous Electric Acupoint Stimulation as Adjunctive Treatment for Opioid Detoxification. Journal of Substance Abuse Treatment, 38(1), 12–21.

Miller, William R., et al. (2006). Disseminating Evidence-Based Practices in Substance Abuse Treatment: A Review with Suggestions. Journal of Substance Abuse Treatment, 31(1), 25–39.

 
