Evidence-Based Medicine

See the short introduction to these essays and the rest of my sociomedical essays here.

One of the magic words of modern medicine is EBM, or evidence-based medicine. It is a fine idea: all treatment should be based on evidence, which comes from scientific research. However, relying blindly on this principle has several problems. There are many more ways to do a study wrong than to do it right, and various biases can twist the results.

Clinical trials are often biased through study designs chosen to yield favorable results for sponsors. For example, the sponsor's drug may be compared with another drug administered at a dose so low that the sponsor's drug looks more powerful. Or a drug likely to be used by older people is tested in a young population, so that side effects are less likely to emerge.

A common form of bias stems from the standard practice of comparing a new drug with a placebo, when the relevant question is how well it compares with an existing drug. There are surprisingly few studies that actually compare treatments with each other. Usually the goal is only to show that one drug is "non-inferior" to another - that it does not perform noticeably worse or cause significantly worse side effects. Whether it is actually better may never be evaluated, and treatments already in use are almost never compared head to head.
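To make "non-inferior" concrete, here is a minimal sketch in Python with invented numbers (the response rates, sample size and the 10-point margin are all assumptions, not from any real trial): a new drug that is in fact slightly worse than the comparator can still be declared non-inferior, as long as the confidence interval for the difference stays above a pre-specified margin.

```python
import math

# Invented response rates: 400 patients per arm (hypothetical numbers).
n = 400
p_new, p_old = 248 / n, 260 / n      # 62% vs 65% response
margin = 0.10                        # pre-specified non-inferiority margin

diff = p_new - p_old
se = math.sqrt(p_new * (1 - p_new) / n + p_old * (1 - p_old) / n)
lower = diff - 1.645 * se            # one-sided 95% lower confidence bound

# Non-inferiority is declared if the whole interval lies above -margin,
# i.e. the new drug is at worst 10 percentage points less effective.
print(f"difference = {diff:+.3f}, lower bound = {lower:+.3f}")
print("non-inferior" if lower > -margin else "not shown non-inferior")
```

Under these assumed numbers the new drug is worse on its face (62 % vs 65 %), yet passes as non-inferior - which is exactly why non-inferiority results say so little about whether a drug is actually better.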

Besides bad study design, there are several reasons why research can produce apparently incorrect results, such as pure chance (especially in smaller studies) and fraud, both of which are likely more common than widely assumed. A result is usually considered statistically significant when the p value is less than 0.05 - but that means that even a completely ineffective treatment will produce a "significant" result in roughly one trial out of twenty by chance alone.
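This is easy to demonstrate by simulation. Here is a minimal sketch in Python (the sample size and number of trials are arbitrary assumptions): run many small placebo-versus-placebo trials of a drug with zero real effect, and about 5 % of them still come out "significant".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
trials, n = 10_000, 30              # 10,000 small trials, 30 patients per arm
false_positives = 0

for _ in range(trials):
    # A drug with zero real effect: both arms drawn from the same population.
    drug = rng.normal(0.0, 1.0, n)
    placebo = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:
        false_positives += 1

print(f"'significant' results: {false_positives / trials:.1%}")   # ~5.0%
```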

But these factors may not come into play at all if a treatment is not studied in the first place. Medical research is all about politics. Politics determines which studies get published, but also which studies get done in the first place. Some illnesses attract a lot of research money - breast cancer, for example, gets many times more than other cancers. Others, like CFS/ME, get a tiny fraction of what equally serious but less common illnesses receive. Some researchers have an agenda to show that medications work, others that alternative treatments work (or do not work). Still others are willing to study anything as long as it furthers their careers.

Medical studies require a large amount of two resources, money and time, both of which are in limited supply. Studies take time, publication takes time, doctors take time to read the results, and it may take even longer for those results to be adopted into widespread use. It has been estimated that it can take almost two decades for a treatment to become widely accepted.

Since the time continuum is difficult to warp, money is the more tractable problem - but it is problematic in its own way. Pharmaceutical companies have the money to run trials for drugs still in development or still under patent protection, but if a new use is discovered for an old drug, or for something that cannot be patented, it is very difficult to get funding for large-scale clinical trials.

Lack of funding has been a problem for, for example, low-dose naltrexone. Naltrexone is an opioid antagonist which has been off patent for decades. In small doses it can boost the secretion of endogenous opioids, which regulate the immune system, and it may be helpful in a variety of immunological and neurological illnesses, from cancer to Alzheimer's disease. There are hundreds of laboratory studies supporting the mode of action, but only a few small clinical trials. The same goes for many other drugs: it is not uncommon to find a few small studies with highly encouraging results and then no further research at all. It could be that some researchers have tried to replicate the results and failed, but often the problem is simply lack of funding.

Things like publication bias are often shrugged off as conspiracy theories, but it is clear that no reviewer can ever be fully objective. It is hard enough to get funding for a small pilot study; then you also need to get it published. Everyone wants groundbreaking research, yet it can be surprisingly difficult for work that breaks the prevailing dogma to be accepted into a medical journal.

One problem is that unless you are a pharmaceutical company, you always have to start with a small study - you cannot simply expect millions in funding to study thousands of patients. And your first study has to produce results that further studies can validate. A small study may not have enough statistical power to show that a treatment works, especially if it does not work for everyone, and the results may be doomed to obscurity.
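A small simulation makes the power problem concrete. This is a sketch with assumed numbers - a drug that gives a large benefit but only to 30 % of patients, tested in a pilot study of 20 patients per arm - and it shows that only a small fraction of such pilot studies would reach p < 0.05 even though the drug genuinely works for a substantial minority.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, n = 10_000, 20              # small pilot study: 20 patients per arm
detected = 0

for _ in range(trials):
    # Hypothetical drug: a large benefit (one standard deviation),
    # but only 30% of patients respond to it at all.
    responders = rng.random(n) < 0.3
    drug = rng.normal(0.0, 1.0, n) + responders * 1.0
    placebo = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:
        detected += 1

print(f"pilot studies reaching p < 0.05: {detected / trials:.0%}")  # roughly 15%
```

Under these assumptions the genuinely useful drug would look like a failure in most pilot studies - and a failed pilot usually means no follow-up funding.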

Review articles and meta-analyses gather data from several other studies - sometimes as many as hundreds, sometimes as few as two. They are at most as strong as the data they pool, and usually weaker. With enough data it is possible to reach almost any result by cherry-picking the right studies. Even when the same studies are selected, it is possible to draw very different, even opposite, conclusions. This explains why reviews and meta-analyses of the same treatment for the same disease often come to completely different conclusions.
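A toy example shows how much the choice of studies matters. The sketch below pools invented effect estimates from eight hypothetical small studies using a standard fixed-effect inverse-variance average (all numbers are made up for illustration): depending on which studies are included, the pooled result comes out positive, negative, or inconclusive.

```python
import numpy as np

# Invented effect estimates (mean differences) and standard errors
# from eight hypothetical small studies of the same treatment.
effects = np.array([0.40, 0.35, -0.10, 0.50, -0.20, 0.05, -0.30, 0.45])
ses = np.array([0.20, 0.25, 0.15, 0.30, 0.20, 0.10, 0.25, 0.35])

def pooled(idx):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    w = 1.0 / ses[idx] ** 2                 # weight = 1 / variance
    est = np.sum(w * effects[idx]) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, est - 1.96 * se, est + 1.96 * se

print("all eight studies:   %+.2f (%+.2f, %+.2f)" % pooled(np.arange(8)))
print("favorable four only: %+.2f (%+.2f, %+.2f)" % pooled(np.array([0, 1, 3, 7])))
print("skeptical three only:%+.2f (%+.2f, %+.2f)" % pooled(np.array([2, 4, 6])))
```

With these made-up numbers the same underlying literature can support "clearly effective", "possibly harmful" and "no evidence of effect", depending purely on the inclusion criteria.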

When it comes to such paradoxes, people tend to trust review authorities like the Cochrane group, known for its meta-analyses. But even the Cochrane group has occasionally come to disastrous conclusions, simply because the underlying data were misleading. Withdrawals of Cochrane meta-analyses are surprisingly common.

One good example is the Cochrane recommendation of exercise as a treatment for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME), even though it is not only ineffective but outright dangerous. In reality such studies have been done on populations which do not have CFS/ME but "chronic fatigue", an entirely different condition - an example of faulty study design and, in most cases, likely intentional fraud.

Good doctors are those who understand that sometimes we have to accept that science may have no answers for us. Every illness and every patient is different, and even sound science may not be applicable in every real situation. Many people have multiple illnesses and medical conditions, some of which may greatly complicate the treatment of others or may even require opposite kinds of treatment. Comorbid conditions are the norm rather than the exception.

Studies, on the other hand, are usually done on patients who have only a single medical condition, possibly two but rarely more, who are not pregnant, not unusually sensitive to medications, free of massive allergies, and usually taking only a limited number of other medications. They are likely not world-class athletes, businesspeople traveling the world or single mothers of several children, because such people usually have other things to do than participate in clinical trials. (Trial participants also often lie, both about the illnesses they have or do not have and about the effects of the drugs being tested.)

Clinical trials in general are reductionist. A study may occasionally (though rarely) compare two or more treatments. Sometimes a study is done to see which medication helps people who are already partially helped by one treatment (or who were helped by it at first, until its efficacy waned), or for whom a particular treatment has not worked; but such studies are rarely enough to actually guide treatment decisions. For example, if a patient has already suffered bad reactions to drugs X and Y (rather than X and Y merely not working), what should be tried next?

When the situation is complicated, the average doctor throws his hands in the air. His dogma offers no answers; for him, the patient is untreatable, a dangerous mess best avoided. Instead of evidence-based medicine, we may sometimes have to rely on medicine based more on rationality than on high-quality evidence. There may be a drug that is known to be safe, is anecdotally effective and has a sensible mode of action. Perhaps there are a few studies, but they are small and preliminary, not considered proper evidence. Perhaps a similar drug has been studied in the same or a similar condition, but that drug is not yet available, is too expensive or has too many side effects.

For scientists, science is often the top priority. For patients, it is getting diagnosed and treated. They want a treatment that works with the fewest possible side effects (often they must also consider whether they can afford it, or whether their insurance company or public coverage will pay for it), and one that fits their lifestyle, whether that involves sports, travel or just a busy life.

Many doctors would rather recommend a treatment known to have some, yet very limited, efficacy along with serious side effects - as is the case with most drugs for e.g. cancer and autoimmune diseases, and with some psychotropic medications. Patients, on the other hand, would often prefer to try a treatment with less solid evidence behind it but fewer side effects - one that may even turn out to be more effective than the well-studied new wonder drug. Some patients have already gone through all the new wonder drugs and are told there are no more options. EBM has exhausted its options, but rational medicine often has some choices left to offer.