Is Evidence-Based Medicine too rigid? (created 2010-08-19)


Someone was asking about criticisms of Evidence-Based Medicine (EBM): is its reliance on grading schemes and the hierarchy of evidence too rigid, or does EBM provide heuristics that can be adapted as needed? This is a hard question to respond to, but an important one. I view checklists and hierarchies as a necessary evil, and they are sometimes applied too rigidly.

As an example, a recent article in the Skeptical Inquirer criticized case-control studies (Park 2010), saying they were analogous to election polls, which only sometimes agree with the results of the election. Case-control studies are indeed a weak form of evidence, but when they produce an effect of large magnitude and are associated with a plausible mechanism, they can provide convincing evidence. Case-control studies, for example, correctly identified the link between aspirin use and Reye's syndrome, a critical step in the prevention of this disease (Monto 1999).
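To make the point about magnitude concrete, here is a minimal sketch of how an odds ratio and an approximate confidence interval are computed from a 2x2 case-control table. The counts are hypothetical, chosen only to illustrate a large effect; they are not the actual Reye's syndrome data.

```python
import math

# Hypothetical 2x2 case-control counts (NOT the actual Reye's syndrome data):
#              exposed  unexposed
# cases           a=28       b=7
# controls        c=20      d=80
a, b, c, d = 28, 7, 20, 80

# Odds ratio: odds of exposure among cases divided by
# odds of exposure among controls.
odds_ratio = (a * d) / (b * c)

# Approximate 95% CI on the log scale (Woolf's method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f}, 95% CI ({lower:.1f}, {upper:.1f})")
# With these counts: OR = 16.0, 95% CI (6.1, 41.9)
```

An odds ratio of this size, with a confidence interval far from 1, is hard to explain away by the modest confounding that typically worries us in observational designs.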

This is an important point that we sometimes neglect to teach. No study should be examined in isolation; it needs to be considered in the whole context of what is known about the problem. Replication is important, biological mechanisms are important, the presence of a dose-response relationship is important, and so forth. When these things are present, a case-control study can and should move higher on the hierarchy. When they are absent, a randomized trial should drop lower on the hierarchy.
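The dose-response consideration, at least, can be checked formally. The sketch below implements the Cochran-Armitage trend test on hypothetical counts; the dose scores and data are illustrative assumptions, not taken from any study cited here.

```python
import math

# Hypothetical dose groups with illustrative scores and counts.
dose_scores = [0, 1, 2, 3]        # e.g., none, low, medium, high exposure
cases       = [5, 12, 20, 30]     # events observed in each dose group
totals      = [100, 100, 100, 100]

N = sum(totals)
R = sum(cases)
p_bar = R / N  # overall event proportion

# Cochran-Armitage trend statistic: score-weighted departure of each
# group's event count from its expectation under "no trend".
T = sum(w * (r - n * p_bar) for w, r, n in zip(dose_scores, cases, totals))
var_T = p_bar * (1 - p_bar) * (
    sum(n * w**2 for w, n in zip(dose_scores, totals))
    - sum(n * w for w, n in zip(dose_scores, totals))**2 / N
)
z = T / math.sqrt(var_T)

# Two-sided p-value from the standard normal distribution.
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p = {p_value:.2g}")
```

A clear monotone trend like the one built into these numbers is exactly the sort of supporting evidence that should nudge an observational study up the hierarchy.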

Do practitioners of EBM look at the whole picture, or do they rigidly stick to a hierarchy? That is something that could be studied, but it would be difficult to distinguish when someone was too rigid in applying a hierarchy from when they were appropriately discounting weak evidence. The best example of this was the fuss over eight randomized trials of mammography. When the best two trials were pooled, mammography did not look so good. When the (slightly?) flawed remaining studies were included, mammography looked much better. So was relying on only the two best trials too rigid, or was it appropriate? I don't think there is a truly objective answer to that question. A nice summary of the controversy appears in Jackson 2002.
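The mechanics behind that kind of sensitivity question are easy to demonstrate. Here is a minimal fixed-effect, inverse-variance pooling sketch on invented per-trial log relative risks; the numbers are deliberately constructed so that the two "best" trials show no benefit while the full set does, and they are not the actual mammography trial results.

```python
import math

def pool_fixed_effect(log_rrs, ses):
    """Inverse-variance fixed-effect pooled relative risk with 95% CI."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Invented per-trial log relative risks and standard errors
# (NOT the actual mammography trials); the first two are the "best" trials.
log_rrs = [0.02, -0.01, -0.25, -0.30, -0.22, -0.28, -0.18, -0.35]
ses     = [0.10,  0.12,  0.15,  0.14,  0.16,  0.15,  0.13,  0.17]

best_two  = pool_fixed_effect(log_rrs[:2], ses[:2])
all_eight = pool_fixed_effect(log_rrs, ses)
print("Best two trials:  RR = %.2f (%.2f, %.2f)" % best_two)
print("All eight trials: RR = %.2f (%.2f, %.2f)" % all_eight)
```

The point is not the particular numbers but how sharply the pooled estimate can depend on which trials are admitted, which is why the inclusion decision, not the arithmetic, is where the real argument lies.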

An important point in favor of EBM is its transparency. If you are trying to dispute an expert opinion, your only serious option is to attack the expert. In EBM, the cards are all laid out on the table. If you don't like the way studies were selected in a systematic overview, you can suggest an alternate approach. Compare that with how you might criticize the bibliography in a subjective expert review. I can't see how you could do this without crawling inside the mind of the expert to determine whether the exclusion of some studies was a deliberate attempt to skew the results or had a rational basis. Lots of people do try to do this, attributing base motives to experts they dislike. I much prefer the objectivity of debate that EBM makes possible.

Another point in favor of EBM is that it is self-critical. If observational studies are not getting enough respect, then a systematic overview of observational versus randomized studies (Concato 2000) should answer the question.

This is the irony of the situation. Many critics of EBM use the tools of EBM to attack it. But this is actually EBM's greatest strength, because critical research about EBM allows it to improve itself. I'm something of an outsider (I'm a statistician and not a doctor), but in my experience with EBM (I first became aware of it in the late 1990s), it was practiced much too rigidly in its early history. It is still too rigid at times today, but less so than it once was.