Changes in the adjusted hazard ratio, but not in the precision of the ratio (created 2009-01-19)

Does anyone know a good reference on why, in Cox regression of a clinical trial, including covariates often changes the treatment hazard ratio rather than narrowing the confidence interval? I can remember attending a talk on this years ago, but cannot remember the details.

This was a question posted on the Medstats email discussion group by one of the more prominent members (MB), and here is what I wrote in reply.

I can't offer any references, but let me suggest a few things that are important to remember.

* The word "often" is not backed by any explicit empirical data. Has anyone formally studied this, or commented on this, or is this something that you noticed yourself? Are you sure that you are not suffering from recall bias? Here are two published examples (taken from a convenience sample of PubMed Central articles):

In the first table, there is very little change in either the point estimates or the precision. In the second, both the point estimates and the precision change. So if you are alleging that point estimates are more likely to change after adjustment than precision, my first two examples from a convenience sample both contradict your premise.

* Covariate adjustment is done at the discretion of the researcher. Perhaps researchers are more likely to compute and include adjusted hazard ratios in their research when they change the point estimate than when they change the precision. This could be considered a form of selection bias.

* Referees and editors might be more likely to demand that information be cut to meet page limits when the adjusted and unadjusted point estimates are the same, so perhaps this could be considered a form of survivorship bias.

* Width is a tricky thing when discussing confidence intervals for ratios. A confidence interval for a ratio of 0.25 that ranges between 0.125 and 0.5 is a lot narrower than a confidence interval for a ratio of 4 that ranges between 2 and 8. Perhaps it would make more sense to examine the width of the confidence interval on a log scale.
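To make that concrete, here is a small Python sketch using the two hypothetical intervals above: they look very different on the ratio scale but have exactly the same width on the log scale.

```python
import math

# The two hypothetical confidence intervals from the point above.
ci_for_0_25 = (0.125, 0.5)   # interval around a ratio of 0.25
ci_for_4 = (2.0, 8.0)        # interval around a ratio of 4

# Widths on the original ratio scale look very different.
print(ci_for_0_25[1] - ci_for_0_25[0])   # 0.375
print(ci_for_4[1] - ci_for_4[0])         # 6.0

# Widths on the log scale are identical: both equal log(4).
print(math.log(ci_for_0_25[1]) - math.log(ci_for_0_25[0]))  # 1.386...
print(math.log(ci_for_4[1]) - math.log(ci_for_4[0]))        # 1.386...
```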

* Certainly in a linear regression model, covariate adjustment can decrease the width of the confidence interval, but it can in some cases increase the width of the interval (because of loss of degrees of freedom). Covariate adjustment in a linear regression model can increase the size of the treatment effect, decrease the size of the treatment effect, or leave the treatment effect unchanged. It is trivial to show examples of all of these cases. Why would you expect that a nonlinear regression model like Cox regression would be any different?
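Here is one such example, a minimal simulation sketch of my own (not from the original discussion) using statsmodels: adjusting for a strongly prognostic covariate narrows the confidence interval for a randomized treatment effect, while adjusting for a pure-noise covariate does little except spend a degree of freedom. Rearranging how the covariates relate to the treatment and the outcome produces the other cases.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
treat = rng.integers(0, 2, size=n)       # randomized 0/1 treatment
prognostic = rng.normal(size=n)          # strongly related to the outcome
noise = rng.normal(size=n)               # unrelated to the outcome
y = 1.0 * treat + 2.0 * prognostic + rng.normal(size=n)

def treatment_estimate(covariates):
    """Return the treatment coefficient and its CI width from an OLS fit."""
    exog = sm.add_constant(np.column_stack(covariates))
    fit = sm.OLS(y, exog).fit()
    low, high = fit.conf_int()[1]        # row 1 is the treatment coefficient
    return fit.params[1], high - low

print("unadjusted:           ", treatment_estimate([treat]))
print("adjusted (prognostic):", treatment_estimate([treat, prognostic]))
print("adjusted (noise):     ", treatment_estimate([treat, noise]))
```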

* Are you asking how you can see a shift in the adjusted hazard ratio without also seeing a narrowing of the confidence interval at the same time? This does seem counterintuitive. Intuition suggests that a covariate could only produce a marked change in the hazard ratio if it were strongly related to the outcome, and that such a strong relationship could not help but produce a narrower confidence interval at the same time. I don't fully grasp how covariate adjustment works in any nonlinear model. Logistic regression is another type of nonlinear model where strange things seem to happen during covariate adjustment. Perhaps there is something in the nonlinearity that produces this counterintuitive result. It may be something more mundane, though, such as the Cox regression model being more sensitive to loss of degrees of freedom.
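As one illustration of those "strange things," here is a small sketch of my own construction (not from the original post): a population in which the treatment odds ratio is exactly 2 within every level of a perfectly balanced covariate, yet the unadjusted odds ratio is noticeably closer to 1, even though the covariate is not a confounder. This behavior of odds ratios is usually described as non-collapsibility, and hazard ratios are non-collapsible in a similar way.

```python
import numpy as np
from scipy.special import expit

# Hypothetical population: treatment t is randomized, z is a balanced
# prognostic covariate, and the conditional odds ratio for treatment
# is exactly 2 at each level of z.
def p_event(t, z):
    return expit(-1.0 + np.log(2.0) * t + 2.0 * z)

def odds(p):
    return p / (1.0 - p)

# Conditional (covariate-adjusted) odds ratio, the same at z = 0 and z = 1.
print(odds(p_event(1, 0)) / odds(p_event(0, 0)))   # exactly 2.0

# Marginal (unadjusted) event probabilities, averaging over z.
p_treated = 0.5 * p_event(1, 0) + 0.5 * p_event(1, 1)
p_control = 0.5 * p_event(0, 0) + 0.5 * p_event(0, 1)
print(odds(p_treated) / odds(p_control))           # roughly 1.73
```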

Sorry that I could not provide any specific references, but I hope my comments are still helpful.