This post was originally published on Psychology Today.
Mylea Charvat, Ph.D. is the CEO & Founder at Savonix. Follow her on Twitter.

What do the terms “evidence-based,” “fetus,” and “transgender” have in common? They’re on a list of seven words banned by the current Presidential Administration from the Centers for Disease Control and Prevention’s (CDC) budget documents. While “fetus” and “transgender” have long trailed controversy because they refer to scientific realms that are now politicized, “evidence-based” doesn’t seem to fit in the same category. There are nowhere near as many op-eds debating the veracity of evidence-based treatments as there are for gender dysphoria. And yet the term was still considered contentious enough to be forbidden, even though “evidence-based” refers to the highest standard of treatment.

What Does Evidence-based Mean?

Evidence-based treatments are founded on scientific rigor and a robust body of evidence to support their claims. The alternative, also known as “the way it’s always been done,” stands on much shakier ground. The problem with treatments based on medical lore is that they have not been put through the wringer of the scientific process.

Dangers of Ignoring the Science — The Lobotomy

Take the gruesome but popular psychosurgical procedure of the 1950s: the lobotomy. The treatment, first described by Portuguese neurologist António Egas Moniz, consisted of disrupting neural connections in the prefrontal cortex and other parts of the brain to treat severe mental illness. The medical community’s desperation for a quick fix for serious psychiatric conditions led clinicians to overlook the considerable flaws in his research and the awful side effects of the invasive procedure. Moniz’s pioneering study should never have been so revered, because it was based on the immediate results of only 20 patients. With rare exceptions, 20 patients is far too small a sample size to deem a treatment successful. Moreover, when Moniz did follow up with his patients, he did so only days or weeks after the surgery, far too short an interval to determine the side effects of such an intrusive procedure.

Nonetheless, in 1949, Moniz was awarded the Nobel Prize in Physiology or Medicine for his research and development of the lobotomy. The recognition lent the treatment legitimacy and increased its use, despite the inadequate science behind it. By 1951, almost 20,000 patients had been lobotomized. Although some patients became more “agreeable,” many became shells of their former selves, and in the worst cases were left in vegetative states.

So how could this have been avoided? Such a high-risk procedure needed to be put under much greater scrutiny before it was introduced to the public.

How to Spot “Good Versus Dodgy” Science

Beyond the size of a study, it’s important to look at the study design of the research behind evidence-based treatments. The gold standard is the randomized controlled trial. In this type of study, researchers randomly assign patients to two groups: a control group and a treatment group. Random assignment is important because it precludes researchers from placing the patients they think will respond best to treatment into the treatment group. Additionally, a well-designed randomized controlled trial will have a large sample size. The greater the number of participants, the more likely it is that the results reflect the treatment in question rather than an outlier skewing the data one way or the other.

Source: Teoh, P. J., & Camm, C. F. (2013). Test, Learn, Adapt. Annals of Medicine and Surgery, 2(1), 22.
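To make random assignment concrete, here is a minimal sketch in Python. The participant IDs and the `randomize` helper are hypothetical illustrations, not drawn from any study mentioned above; the point is simply that chance alone, not a researcher’s expectations, decides who lands in which group.

```python
import random

def randomize(participant_ids, seed=None):
    # Shuffle a copy of the participant list so group membership is
    # decided by chance, not by who researchers expect to respond best.
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# 200 hypothetical participants split 100/100 at random.
groups = randomize(range(1, 201), seed=42)
```

With a seed supplied, the split is reproducible for auditing; without one, each run produces a fresh random allocation.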

Evidence-based treatments should have their underlying studies peer-reviewed and published in a reputable academic journal. After a study has been conducted, the researchers send their article to a journal editor, who distributes it to other scholars in the field for peer review. These highly qualified experts examine the study’s methods and analyses and decide whether the research is scientifically sound. The journal editor then evaluates whether the study meets the journal’s standards and whether it can be published. This process is taxing and can take anywhere from several months to a year.

When evaluating studies to determine if they contribute to the “evidence” part of evidence-based treatment, use good judgment and approach each study with a healthy dose of skepticism. Furthermore, for the same reason that it’s important for a study to have a large sample size, focus on evidence-based treatments that have a substantial body of corroborating evidence. Luckily for the field of psychology, the Society of Clinical Psychology has a brilliant resource that categorizes psychological treatments by the quality of their supporting research. Each therapy is categorized as either “Strong,” “Modest,” or “Controversial” based on its research support. The supporting studies are listed, as well as clinical resources for practitioners interested in implementing the treatment.

This comprehensive effort is exactly what the field needs if we are serious about championing evidence-based research. It’s a long and arduous process to sift through and evaluate peer-reviewed articles, but it is vital to the integrity of psychology. Even if the CDC has shied away from the word, it’s the duty of health-care professionals to seek out evidence-based treatments so that we can give the best care possible.