A controversial meta-analysis (December 20, 2005).


Back in August 2005, the Lancet published an interesting meta-analysis on homeopathy. The full free text of this article is not available on the web, but if you can find a copy it is well worth reading.

The researchers identified 110 placebo-controlled homeopathy trials and matched them with 110 placebo-controlled conventional-medicine trials. In both sets, smaller studies and lower-quality studies showed stronger effects. But when the analysis was restricted to large trials of high quality, the effect of conventional medicine remained statistically significant (odds ratio 0.58, 95% CI 0.39 to 0.85) while the effect of homeopathy was not (odds ratio 0.88, 95% CI 0.65 to 1.19).
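You can see where "statistically significant" comes from with a quick back-of-the-envelope calculation (my own sketch, not from the paper): recover the standard error on the log-odds scale from the reported 95% confidence interval, then compute an approximate z-statistic for the null hypothesis that the odds ratio equals 1.

```python
import math

def z_from_or_ci(odds_ratio, ci_lo, ci_hi):
    # A 95% CI for an odds ratio spans +/- 1.96 standard errors
    # on the log scale, so the CI width gives us the standard error.
    se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)
    # z-statistic for the null hypothesis OR = 1 (log OR = 0).
    return math.log(odds_ratio) / se

# Figures quoted above from the meta-analysis
z_conventional = z_from_or_ci(0.58, 0.39, 0.85)  # roughly -2.7
z_homeopathy = z_from_or_ci(0.88, 0.65, 1.19)    # roughly -0.8

print(round(z_conventional, 2), round(z_homeopathy, 2))
```

The conventional-medicine estimate sits well beyond the 1.96 cutoff; the homeopathy estimate does not, which is just another way of saying its confidence interval crosses 1.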

The critics of this meta-analysis raise some interesting objections, and you can read some of them in the correspondence section of the December 17, 2005 issue of the Lancet. A common complaint, voiced in the letter by Harald Walach, Wayne Jonas, and George Lewith, is that the type of homeopathy tested in clinical trials is very different from that used in the real world. They also argue that the comparison of homeopathy and conventional medicine is an apples-to-oranges comparison.

Second, the six studies of conventional interventions are, by comparison, highly selected. The substances assessed within them have gone through the four clinical pharmacological stages of drug testing. Most newly developed pharmaceuticals do not make it to the last stage of large, multicentre phase IV trials. Therefore the allopathy trials chosen by Shang and colleagues tested medications that had already been largely proven to be efficacious, whereas most homoeopathy trials start from a far less systematic and rigorous evidence base. There have, after all, been very few placebo-controlled randomised trials in homoeopathy, which is why there is an absence of evidence. We are only just beginning to understand how to research homoeopathy and complementary medicine in general. This seems to be an argument for more research, not less.

I don't find this argument compelling. If homeopathy uses a lot of things that would not survive the four clinical pharmacological stages of drug testing, then a large portion of homeopathy, as it is currently practiced, is ineffective. If anything, this statement supports the use of Evidence-Based Medicine, since conventional medicine's practice of testing new therapies and, when indicated, adopting them in place of older therapies appears to lead to superior results.

The authors also argue that the placebo effect is stronger in homeopathy studies than in conventional medicine. They don't present any evidence for this statement other than pointing out that the placebo effect can vary substantially.

Two other sets of authors criticize the published meta-analysis for leaving out important details, such as the studies that were excluded from analysis. The authors do correct this in their reply in the December 17 issue and point to a web site (www.ispm.ch) with those details.

One of the letters referred to the crossword analogy of Susan Haack, which is worth quoting here:

The clues [of the crossword] are the analogue of experiential evidence, already-completed entries the analogue of background information. How reasonable an entry in a crossword is depends upon how well it is supported by the clue and any other already intersecting entries; how reasonable, independently of the entry in question, those other entries are; and how much of the crossword has been completed. An empirical proposition is more or less warranted depending on how well it is supported by experiential evidence and background beliefs; how secure the relevant background beliefs are, independently of the proposition in question; and how much of the relevant evidence the evidence includes. How well evidence supports a proposition depends on how much the addition of the proposition in question improves its explanatory integration. There is such a thing as supportive-but-less-than-conclusive evidence, even if there is no formalizable inductive logic.

This quote comes from the interesting publication

A far sharper criticism of the Lancet study was published by Domenico Mastrangelo and Cosimo Lore.

The authors argue that it is a mistake to compare homeopathy to a placebo because:

  1. the real nature of the placebo effect is unknown;
  2. it has never been explained in terms of interactions between molecules and hence must be based on “immaterial” interactions, if any (something like the “vital force” in homeopathy);
  3. “immaterial”, and hence non-measurable, interactions are commonly discarded as unproven by conventional medicine. On the other hand, this is the subject of the current dispute between homeopathy and conventional medicine;
  4. nevertheless, conventional medicine looks at the placebo effect as something “real”. As a matter of fact, controlled clinical trials are commonly planned to include a “control” group of patients to be treated with “sugar pills” and therefore the placebo effect, although mysterious and unexplainable, is still an integral part of the culture of conventional medicine. It would be good to know why homeopathy should not be treated in the same way;
  5. in spite of all the above, the placebo still has a curative effect since it refers to people cured by the administration of a “sugar pill” instead of an active drug.

They then go on to ask:

Should we still consider the placebo as a sort of unwanted effect of treatment, or would it perhaps be wiser and more advisable to try to better understand its nature and, eventually, exploit it to reduce the incidence of adverse or fatal drug reactions?

Many proponents of alternative medicine are fascinated by the placebo effect because it provides evidence of a mind-body interaction that might support and explain the efficacy of their medical approaches. But quite honestly, hitching your wagon to the placebo star will probably lead you nowhere. The placebo effect is an amalgam of different effects, such as the self-limiting nature of many diseases, regression to the mean, observer bias, and so forth. A major meta-analysis of placebo effects demonstrated that in many situations, a placebo arm of a study does not differ from a "no treatment" arm.

Mastrangelo and Lore then attack the controlled clinical trial (CCT):

The Lancet campaign against homeopathy was launched by experts on controlled clinical trials and it is therefore based on the unproven assumption that the CCT methodology is reliable, repeatable, accurate, and infallible. This is simply not true. In 1991, Dr. Harris L. Coulter [10], in his book “The Controlled Clinical Trial: An Analysis”, reported that “CCT cannot guarantee drug safety and efficacy because the theoretical requirements of CCT are both unrealistic and unscientific”. This point of view was more recently confirmed by scientists who reported that there is no evidence for large-scale CCTs other than the vested interests of the pharmaceutical industry to defy sound arguments which demonstrate that the methodology of these studies is deeply flawed.

As support for the last sentence, these authors cite an article by James Penston:

I have not yet read this article, but I have read the book by Dr. Penston

which appears to make much the same claim. I disagree with Dr. Penston's pessimistic assessment of medical research and have commented on it elsewhere (see the link to my other weblog entry at the bottom).

All research has flaws, and it is a mistake to think that only unflawed research can produce useful information. The trick is to recognize the difference between a trivial flaw and an important flaw and to discern when a flaw becomes so serious that it makes the research findings unusable.

The authors then point out some recent examples of commercial influences and fraud in medical research.

With this picture in mind, the reader may now evaluate more objectively the clinical and scientific relevance of the methodology behind CCTs and finally understand why large collections of such investigations, as performed in meta-analyses, would only lead to confusing, uncertain, and misleading conclusions.

The authors also point out that the tendency of a CCT to examine a single treatment for a single disease limits its ability to examine the efficacy of homeopathy.

Let us suppose that a pharmaceutical company has to test the efficacy (and safety) of the new drug “ASA” (acetylsalicylic acid) in the treatment of fever. According to CCT methodology, one would simply select a group of patients with fever, assign them, through the process of “randomization”, to either the active drug (ASA) or a placebo treatment, and look for differences in response. Hence: one disease (or symptom) – one treatment. Homeopathy, in contrast, teaches us that fever may manifest differently in different individuals and it may depend on several diverse causes. Therefore, homeopathy will use Aconitum to treat a fever with sudden onset, Arsenicum Album for a feverish, anxious, and fidgety child, Belladonna for a feverish child who has chills and a flushed and heated face and body, Bryonia for fever with strong thirst, Chamomilla for fever associated with teething, Ferrum Phosphoricum for moderate fever, Gelsemium for the child who sustains a fever and whose whole body feels achy and flushed, Mercurius solubilis for the feverish child with offensive-smelling breath, body, stool, and/or urine, etc., continuing with a list of tens or maybe hundreds of different remedies, each with a single and extremely specific indication. It is easy to see that limiting the homeopathic treatment to one remedy for a single indication, with no further specification, would inevitably end up destroying the essence of homeopathic treatment itself, thus resulting in ineffective treatment.

It is indeed true that the CCT ends up focusing too often on single treatments for single diseases, but it is very easy to design a CCT that will test homeopathy in an individualized approach. Send any patient with a fever to a homeopathic practitioner. Get an individualized prescription and have it filled by a different practitioner who dispenses either the indicated homeopathic solution or a placebo based on a random number table. The patient returns to the original homeopathic practitioner for a blinded evaluation of the effectiveness of the individualized treatment.
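The allocation step of that design can be sketched in a few lines (my own illustration; the helper names are hypothetical, not from any trial protocol): each patient gets an individualized prescription, a separate dispenser randomly substitutes a placebo, and the prescriber never learns which arm the patient was in.

```python
import random

def allocate_blinded(patients, prescribe, seed=0):
    """Sketch of the individualized trial design described above.
    `prescribe` is a hypothetical callable standing in for the
    homeopathic practitioner's individualized remedy choice."""
    rng = random.Random(seed)  # stands in for a random number table
    allocations = []
    for patient in patients:
        remedy = prescribe(patient)              # individualized prescription
        arm = rng.choice(["remedy", "placebo"])  # concealed random allocation
        dispensed = remedy if arm == "remedy" else "placebo"
        # Only `dispensed` goes back to the patient; `arm` stays with
        # the dispenser, so the prescriber assesses outcomes blinded.
        allocations.append({"patient": patient, "arm": arm, "dispensed": dispensed})
    return allocations

# Toy demo: the design tests the individualized system as a whole,
# not one remedy for one indication.
demo = allocate_blinded(["patient-%d" % i for i in range(20)],
                        lambda p: "remedy-for-" + p)
```

The point of the sketch is that nothing in randomization or blinding forces a one-disease, one-treatment design; the unit being tested is simply the whole individualized prescribing process.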

While the CCT does not deserve to rest untested and unchallenged at the top of the research hierarchy, it still does better than Mastrangelo and Lore claim it does. A good balanced review of the limitations of the CCT in testing alternative medicine appears in

and I comment extensively on this article as well as the much more negative viewpoint of Dr. Penston on another one of my weblog entries: