
Some Psychological Interventions Are More Harmful Than Helpful

New research shows that some highly publicized programs are ineffective, and a few may do more harm than good


For most Americans, the motto of the medical community, “First, do no harm,” is both familiar and uncontroversial. Mental health care professionals subscribe to similar codes. But look around the scientific and public discussions about psychological interventions—programs to make you feel better, encourage healthier choices and treat mental health problems—and you will notice little concern about possible harm. The late psychologist Scott Lilienfeld attempted to raise awareness of this risk in his article “Psychological Treatments That Cause Harm,” in which he compiled a provisional list of “potentially harmful therapies” (PHTs) whose use should be reconsidered. But the scientific study of PHTs was complicated by an emerging concern in the field of psychology about the credibility of published research.

What makes research credible—that is, scientifically believable and persuasive—varies from researcher to researcher. Over the past decade, however, the psychological research community realized that the scientific literature contained frequent and troubling signs of being, well, incredible. Some of the warning signs were obvious, such as published claims of extrasensory perception and dramatic instances of scientific fraud. Other warning signs of lackluster credibility were subtler or more technical; they included:

Psychologists using tiny sample sizes that could not be expected to yield reliable information about the effects of psychological interventions.


Despite those tiny samples, psychologists almost always (more than 90 percent of the time) reporting statistical results that they claimed supported their psychological interventions.

Psychologists frequently including “statistical typos” in their published research (e.g., counts of participants not “adding up,” requiring corrections or retractions when detected); a sketch of one such arithmetic check appears after this list.
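To give a concrete sense of how such arithmetic checks work, here is a minimal sketch, in Python, of the GRIM test, a well-known check popularized by Nick Brown and James Heathers that asks whether a reported mean of whole-number data is even arithmetically possible given the reported sample size. This is an illustrative example of the genre, not necessarily one of the checks used in our study, and the numbers in the example are hypothetical.

```python
import math

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM test sketch: could whole-number data from n participants
    produce reported_mean once rounded to `decimals` places?

    Any mean of integer responses must equal (some integer sum) / n,
    so only certain rounded values are arithmetically possible.
    """
    implied_sum = reported_mean * n
    # Test the two integer sums closest to the implied total; if neither
    # reproduces the reported mean after rounding, the mean is impossible.
    for total in (math.floor(implied_sum), math.ceil(implied_sum)):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# A reported mean of 5.19 from 28 participants on an integer scale is
# impossible (145/28 rounds to 5.18 and 146/28 rounds to 5.21), so it
# would be flagged for a closer look; 5.18 is arithmetically fine.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True
```

A flag from a check like this does not prove misconduct (often it reflects a simple transcription slip), but it is exactly the kind of inconsistency that leads to the corrections and retractions mentioned above.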

Given these concerns, our research team wondered: What do the effects reported in this literature suggest about whether PHTs help or harm, and how credible are the studies behind them? Our findings were recently published in the journal Clinical Psychology: Science and Practice.

We collected data from every randomized controlled trial (the sort of studies used in medicine to determine if a drug or vaccine works) for a PHT that we could find (more than 70); in total, we reviewed more than 500 statistical tests of PHTs. We then extracted and analyzed warning signs of low credibility from each study; you can therefore think of our paper as containing “credibility report cards” for each PHT.

What did we find? First, the good news: statistical typos were very rare in published research on PHTs. Moreover, the literature underpinning grief counseling—a specific PHT intended to help clients cope with the death of a loved one—looked supportive of its effectiveness and reasonably credible.

Many of our findings, however, were concerning. For example, while it’s encouraging that statistical typos in this literature were rare, this may have something to do with the fact that fewer than 20 percent of the statistics were reported with the information needed to verify their correctness! Further, most of the studies we reviewed pitted PHTs against no treatment at all, which is quite literally the weakest bar of comparison possible, and one that could overstate the effectiveness of PHTs.

Most significantly, in our analysis, two interventions stood out as more likely to be harmful than helpful:

Critical incident stress debriefing, in which someone who has experienced an extremely stressful event (like a paramedic or firefighter) is required to participate in a small-group intervention shortly after the stressor.

Scared Straight programs, in which adolescents who have broken the law are exposed to inmates in actual prisons; the inmates attempt to scare them out of delinquency by describing the horrors of prison life.

Unfortunately, both of these treatments are often touted by their developers and featured on podcasts and TV shows. Drug Abuse Resistance Education (DARE) is similarly well promoted. A program that most millennials and Gen Zers were exposed to, DARE involves a uniformed police officer teaching students about the perils of drug use and drinking. We found the credibility of the DARE literature to be so-so, and its effects suggest the program doesn’t do much of anything at all. DARE has had an operating budget in the millions of dollars and has been deployed around the world; those resources could have been spent on programs that actually benefit student learning and well-being.

Our research suggests that being a provider or consumer of psychological interventions is tricky. Unlike medications, medical devices and vaccines, which the FDA evaluates, psychological treatments are not judged safe by any government body. Those involved therefore need to consider not only a psychotherapy’s potential to help but also its potential to harm. Making these considerations even more difficult, providers and consumers need to be aware that published research on the helpfulness and harmfulness of psychological interventions is not always credible. We think that, moving forward, more consumers of psychological interventions should feel comfortable asking providers what scientific evidence exists for a given intervention’s helpfulness and against its harmfulness. Mental health providers, meanwhile, could benefit from paying more attention to the possibility of harm, while also learning to spot some of the more straightforward warning signs of incredible research.

This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.