Fighting the claim that any size difference is clinically important (created 2010-08-05).

When working with people to select an appropriate sample size, it is important to establish the minimum clinically important difference (MCID). This is a difference such that any value smaller would be clinically trivial and any value larger would be clinically important. I am told quite often that any difference that might be detected is important. I could be flippant and tell them that their sample size is now infinite and that my consulting rate is proportional to the sample size, but I don't make flippant comments (out loud, at least). Here's how I might challenge such a claim.

The first comment I usually make is to question the claim that any difference is important. I ask them to think about some of their skeptical colleagues. Are there differences that those colleagues would sneer at? That sometimes helps get the discussion started.

There might be a few studies of a rather theoretical nature, such as the establishment of a biological mechanism or the identification of a molecular pathway, where any signal stronger than noise would establish the mechanism or pathway. I might still try to argue the point, but if someone really insisted, I would not force the issue.

But studies where any size difference is clinically important are rare. What may be going on is that the scientist doesn't want to commit to a particular value. There are alternatives to try in such a case, though I do not like to invoke them. You could instead ask, "What size difference would you expect to see?" Basing the calculation on the difference you expect to see, rather than the difference that matters clinically, is part of the lemming school of research (if all your friends jumped off a cliff, would you jump off too?).

Another way to handle this is to back-calculate the MCID. What size sample could you collect without killing yourself in the process? Plug that sample size into the equation and find the MCID that makes the power equal to 80 or 90%. You could argue that this is cheating, and I wouldn't disagree with you too strongly. You could also argue, though, that this process specifies the resolution of the research design: what size difference could a design with this sample size detect reliably? Then, even if the researcher can't say whether this is close to the MCID or not, perhaps some of the people reviewing the grant could.
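
To make that concrete, here is a rough sketch in Python of what the back-calculation looks like, using the power routines in statsmodels for a two-sample t-test. Every number in it (60 patients per group, a standard deviation of 12, 5% significance, 80% power) is made up for illustration; substitute whatever your design actually allows.

```python
# Back-calculate the smallest difference that a fixed, feasible sample
# size can detect reliably (all numbers here are illustrative).
from statsmodels.stats.power import TTestIndPower

n_per_group = 60      # the largest sample the investigator can manage
alpha = 0.05          # two-sided significance level
target_power = 0.80   # 0.90 is another conventional choice
sd = 12.0             # assumed standard deviation of the outcome

# Leaving effect_size as None tells solve_power to solve for the
# standardized effect size (Cohen's d) given everything else.
d = TTestIndPower().solve_power(
    effect_size=None,
    nobs1=n_per_group,
    alpha=alpha,
    power=target_power,
    ratio=1.0,
    alternative='two-sided',
)

print(f"Detectable standardized effect: d = {d:.2f}")
print(f"Detectable raw difference: {d * sd:.1f} units")
```

With 60 per group, the detectable standardized effect comes out to roughly half a standard deviation; whether that is anywhere near the MCID is exactly the judgment call that the researcher, or the grant reviewers, would then have to make.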

There's a second problem that I encounter. Sometimes I am given a value for the MCID that is so far off that even I can recognize it is bogus. I'm not a clinician, but that won't stop me from questioning a number that looks totally absurd. If someone specifies a difference that is way too small, showing them that it leads to a sample size of ten million is usually enough to get them to rethink what they consider to be a minimum clinically important difference. If the MCID is absurdly large, I usually pose a hypothetical scenario. If you say that only a tenfold increase in side effect rates, from 4% to 40%, is important, then I ask how one of your colleagues would react to an increase from 4% to 20%. Wouldn't that be a concern? How about an increase from 4% to 10%?
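
If you want to show how quickly the numbers escalate as the difference shrinks, a quick calculation for comparing two proportions does the job. The sketch below uses the statsmodels routines with Cohen's h as the effect size, a 4% baseline rate, and the conventional 5% significance and 80% power; the list of elevated rates is just for illustration.

```python
# Sketch of how the required sample size grows as the difference in
# side effect rates shrinks (baseline 4% versus several elevated rates).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.04
analysis = NormalIndPower()

for elevated in (0.40, 0.20, 0.10, 0.05):
    h = proportion_effectsize(elevated, baseline)  # Cohen's h
    n = analysis.solve_power(effect_size=h, alpha=0.05, power=0.80,
                             ratio=1.0, alternative='two-sided')
    print(f"{baseline:.0%} versus {elevated:.0%}: "
          f"about {n:.0f} patients per group")
```

The jump from a handful of patients per group for 4% versus 40% to several thousand per group for 4% versus 5% is usually enough to get the conversation about the MCID moving again.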

Now any time I pose a question like this about clinical or scientific judgment, it is a probe to see how well people can defend their choice. If you really do believe in a number that looks absurd to me and stringent challenges do not sway you, I am loath to substitute my judgment for yours.