P.Mean: Using a sub-optimal approach in meta-analysis (created 2008-12-06).
I am having difficulty understanding the meta-analysis of ordinal data in a Cochrane systematic review, and would appreciate advice and comments. One study in the meta-analysis had an ordinal efficacy outcome with categories None, Some, Good, and Excellent. The meta-analysis did 4 separate analyses, treating each category as if it were a dichotomous outcome. Aside from the fact that this generates (almost) more analyses than there are data, this approach seems unnecessary and uninterpretable. The Cochrane Handbook says: "Ordinal and measurement scale outcomes are most commonly meta-analysed as dichotomous data." And "Occasionally it is possible to analyse the data using proportional odds models where ordinal scales have a small number of categories, the numbers falling into each category for each intervention group can be obtained, and the same ordinal scale has been used in all studies." What should the authors of the systematic review have done?
They should have hired me to do their data analysis.
Seriously, treating ordinal data as a series of dichotomous outcomes is an acceptable approach but not the optimal approach. I should be careful when I say this, of course, because some people think that the optimal approach is the ONLY acceptable approach. I'm more of a pragmatist, so I would normally not fuss too much about this. Still, it helps to understand what you lose when you dichotomize.
First, as you note, there is a multiplicity issue. If the ordinal outcome has four levels, then there are three analyses (A vs BCD, AB vs CD, and ABC vs D). For the record, it would be very bad to compare the middle outcomes to the extremes (B vs ACD, for example). Multiplicity is not too serious a problem here unless the number of ordinal levels is greater than 5 or so.
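Here's a small sketch of what those three analyses look like mechanically. The category counts are invented for illustration; the point is just that each cumulative cut collapses the four ordered levels into a single low/high split.

```python
# Hypothetical frequencies for a four-level ordinal outcome A < B < C < D.
# (These numbers are made up; they are not from the review in question.)
counts = {"A": 12, "B": 30, "C": 41, "D": 17}
levels = ["A", "B", "C", "D"]

def cumulative_splits(counts, levels):
    """Return the dichotomies A|BCD, AB|CD, ABC|D as (label, low, high) totals."""
    splits = []
    for cut in range(1, len(levels)):
        low = sum(counts[c] for c in levels[:cut])    # everyone at or below the cut
        high = sum(counts[c] for c in levels[cut:])   # everyone above the cut
        label = "".join(levels[:cut]) + " vs " + "".join(levels[cut:])
        splits.append((label, low, high))
    return splits

for label, low, high in cumulative_splits(counts, levels):
    print(label, low, high)
```

Note that a middle-versus-extremes split like B vs ACD never appears here: only cuts that respect the ordering are produced.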
A more serious problem is the loss in precision. Dichotomizing, by its nature, throws away information. It treats A and B alike, for example, in the AB vs CD comparison. Gary McClelland created a wonderful Java applet that shows visually the problem with dichotomization. It talks about dichotomization of a continuous outcome, but the principle is the same.
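You can see the same effect in a quick simulation (my own illustration, not McClelland's applet): generate correlated pairs, then compare the observed correlation before and after a median split of the predictor. The split discards the within-half variation and attenuates the association.

```python
# Simulation showing attenuation from dichotomizing a continuous variable.
# All numbers here (slope, sample size) are arbitrary choices for illustration.
import random
import statistics

random.seed(1)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.6 * xi + random.gauss(0, 1) for xi in x]  # true slope 0.6

def corr(a, b):
    """Pearson correlation, computed from scratch to keep this self-contained."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / (statistics.pstdev(a) * statistics.pstdev(b) * len(a))

cut = statistics.median(x)
x_dich = [1.0 if xi > cut else 0.0 for xi in x]  # median split throws away detail

r_full = corr(x, y)
r_dich = corr(x_dich, y)
print(f"full: {r_full:.3f}  dichotomized: {r_dich:.3f}")
```

The dichotomized correlation comes out noticeably smaller than the full one, which is exactly the precision loss being described.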
Ordinal logistic regression is a bit messy, but it effectively pools the A vs BCD, AB vs CD, and ABC vs D dichotomies. Pooling leads to more precision and avoids the multiplicity issue.
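A proportional odds model needs specialized software, but the flavor of the pooling can be sketched by hand: compute the log odds ratio and its standard error for each cumulative split of a 2 x 4 table, then combine them with inverse-variance weights, as a fixed-effect meta-analysis would. The table below is invented, and this is only a heuristic (the three splits come from the same data, so they are correlated and this pooled standard error is optimistic); it is not the maximum-likelihood fit an ordinal logistic regression would give.

```python
import math

# Hypothetical 2 x 4 table: counts of None, Some, Good, Excellent in each arm.
# These numbers are made up for illustration.
treated = [10, 20, 40, 30]
control = [25, 30, 30, 15]

def split_log_or(t, c, cut):
    """Log odds ratio and SE for the split (levels below cut) vs (levels at or above cut)."""
    a, b = sum(t[:cut]), sum(t[cut:])    # treated: low, high
    cc, d = sum(c[:cut]), sum(c[cut:])   # control: low, high
    log_or = math.log((a * d) / (b * cc))
    se = math.sqrt(1 / a + 1 / b + 1 / cc + 1 / d)  # Woolf standard error
    return log_or, se

results = [split_log_or(treated, control, cut) for cut in (1, 2, 3)]

# Inverse-variance pooling across the three splits.
weights = [1 / se ** 2 for _, se in results]
pooled = sum(w * lo for (lo, _), w in zip(results, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

for (lo, se), label in zip(results, ("A|BCD", "AB|CD", "ABC|D")):
    print(f"{label}: log OR = {lo:.3f}, SE = {se:.3f}")
print(f"pooled: log OR = {pooled:.3f}, SE = {pooled_se:.3f}")
```

The pooled standard error is smaller than any of the three individual ones, which is the gain in precision (and the single pooled estimate sidesteps the multiplicity problem).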
When someone uses a sub-optimal approach (one with less power or precision) and still finds statistical significance, you shouldn't fuss about the issue at all. If the data look like a total bust, it is unlikely that even an optimal approach would have salvaged them. It is the borderline findings where you need to kick and fuss and shout.
This work is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2010-04-01. Need more information? I have a page with general help resources. You can also browse for pages similar to this one at Category: Modeling issues.