
[[Image available at http://www.pmean.com/images/MonthlyMean.png]]

The Monthly Mean newsletter, July 2013. Released 2013-08-19.

--> Introduction

--> Fighting a bad peer-review

--> The problem with alternating assignment

--> Article: The exaggerated relations between diet, body weight and mortality: the case for a categorical data approach

--> Joke: I was co-author on a paper...

--> Nick News: What is that strange creature?

--> Trivia: Fifty years ago, the Beatles...

--> Website: The Hawthorne effect: An old scientists' tale lingering "in the gunsmoke of academic snipers"

--> Tell me what you think.

--> Join me on Facebook, LinkedIn and Twitter

--> Permission to re-use any of the material in this newsletter

--> Introduction. Welcome to the Monthly Mean, a newsletter with articles about Statistics with occasional forays into research ethics and evidence based medicine. If you are having trouble reading this newsletter in your email system, please go to the web version (www.pmean.com/news/201307.html). If you are not yet subscribed to this newsletter, you can sign on at the newsletter page (www.pmean.com/news). If you no longer wish to receive this newsletter, there is a link to unsubscribe at the bottom of this email.

--> Fighting a bad peer-review. I get a lot of questions from people about what they consider bad advice from the peer review of a research paper. They say you can't fight City Hall, and you may feel you are in a similar situation because you can't fight a peer reviewer. That's not really true. You can fight City Hall and you can fight a bad peer review. But both are difficult. I've had a few people who were hoping that they could use the 4S defense (Steve Simon Said So) to fight a bad peer review, but one of the gravest injustices in the world is that the 4S defense is largely ineffective.

That's not surprising. In a battle of "experts," you would trust me more than an anonymous peer reviewer because you know me well. But the opposite is true for the editor. The editor has a cadre of peer reviewers that they know and trust, and it would take a lot to get that editor to overrule them. It could happen, but don't hold your breath waiting. So the default position when a peer reviewer asks for something is to cave in.

I don't cave in when the peer reviewer is asking for something that does violence to the data. By that I mean something that distorts the data so badly that I would feel uncomfortable having my name associated with it.

The way you fight back is by appealing to a higher authority. If a peer reviewer, for example, asks for a post hoc power calculation, I usually refer them to an article (there are several, in fact) published in the peer-reviewed literature that says in no uncertain terms that post hoc power calculations are a very bad idea.

This is trickier in other areas. Sometimes you know the best thing to do, but you can't find any peer-reviewed articles to back you up. Or there's a whole bunch of articles advocating conflicting viewpoints. A good example of this is the use of the Bonferroni correction for multiple comparisons. There are lots of articles that explain why you MUST use this correction in a variety of settings, but there are just as many articles that explain why you MUST NOT use it. If you appeal to a higher authority on Bonferroni, the referee can trump your peer-reviewed article with one that is "obviously" better. So you're stuck.

It helps if you can anticipate a research controversy in advance. You run the analysis the way you like, but then you run it a different way and keep that analysis in your hip pocket. When the peer reviewer asks for the alternate approach, you can say that you already ran it and you're glad to include it in the paper. And, quite honestly, if there is a lack of consensus in the research community and a variety of published opinions on the issue, who am I to say that my way is right and the peer reviewer's way is wrong?
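To make that concrete, here is a minimal sketch in Python of reporting the analysis both ways: the unadjusted p-values you prefer and the Bonferroni-adjusted version you keep in your hip pocket. It uses the statsmodels library, and the p-values are made-up numbers, purely for illustration.

from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from five comparisons, just for illustration.
raw_p = [0.012, 0.034, 0.049, 0.21, 0.67]

# The analysis in your hip pocket: the same p-values after a Bonferroni correction.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")

for p, padj, r in zip(raw_p, adj_p, reject):
    print(f"raw p = {p:.3f}   Bonferroni-adjusted p = {padj:.3f}   significant at 0.05? {r}")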

Actually, it is not even a question of "right" and "wrong." It is a question of whether you can defend your approach properly. You can defend yourself properly if you use Bonferroni and you can defend yourself properly if you reject Bonferroni.

If I sound a bit weasely on this, so be it. I used to be very prescriptive in my advice and fought hard against any deviation from the ONE TRUE WAY. As I get older, I am becoming more tolerant of the belief that there are many reasonable ways that you can analyze your data.

--> The problem with alternating assignment. Dear Professor Mean, I am reviewing a study that describes its randomisation technique as follows: "We assigned participants to two groups by randomized sequential allocation. The first patient was randomly assigned to the NE group by tossing a coin, the second patient to the CD group, and the other patients were assigned to the two groups, sequentially." Is this technique an adequate method of randomisation?

This is alternating (or sequential) assignment, and the only thing random about it is that an initial coin flip determines whether the alternating sequence is NE, CD, NE, CD, NE,... or CD, NE, CD, NE, CD,...

This is fast, because you flip a coin once and determine your entire sequence. True randomization would require a separate coin flip for each patient in the study.
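Here is a small Python sketch of the difference between the two schemes, using the group labels from the question (my own illustration, not code from the study being reviewed).

import random

random.seed(1)
groups = ("NE", "CD")

# Alternating assignment: a single coin flip chooses the starting group,
# and every assignment after that is determined.
start = random.randrange(2)
alternating = [groups[(start + i) % 2] for i in range(10)]

# True randomization: an independent coin flip for every patient.
randomized = [random.choice(groups) for _ in range(10)]

print("Alternating:", alternating)
print("Randomized: ", randomized)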

Alternating assignment has one (and only one) nice statistical property. Any particular patient has an equal probability of being in NE or CD. Just like true randomization, this method will prevent covariate imbalance, on average, unless your covariates have an odd pattern themselves. Alternating assignment, though, has serious problems with temporal patterns.

Have you ever noticed how plants grow in a garden? There is an alternating pattern of big cabbage, little cabbage, big cabbage, little cabbage. They all start out as the same size seed, but one plant will, for various reasons, grow slightly faster. Its roots will extend towards its neighbor, stealing some of the water and nutrients. It will then grow even faster at its neighbor's expense.

Alternating assignment would be a total disaster in this setting. A fertilizer applied to every other plant would either end up looking really really good or really really bad, just because of the big/little pattern in plant growth.
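A quick simulation shows how bad this can get. The sizes and noise level below are numbers I made up, and the fertilizer does absolutely nothing, yet alternating assignment makes it look like a miracle product.

import random

random.seed(2013)
n = 40

# Plant yields alternate big, little, big, little; the truth is that
# the fertilizer has zero effect on any plant.
yields = [10 + (5 if i % 2 == 0 else -5) + random.gauss(0, 1) for i in range(n)]

def mean(x):
    return sum(x) / len(x)

# Alternating assignment: fertilizer goes to every other plant,
# which happens to be all the big ones.
fert_alt = [yields[i] for i in range(n) if i % 2 == 0]
ctrl_alt = [yields[i] for i in range(n) if i % 2 == 1]
print("Alternating assignment, apparent fertilizer effect:",
      round(mean(fert_alt) - mean(ctrl_alt), 2))

# True randomization: an independent coin flip for each plant.
coin = [random.random() < 0.5 for _ in range(n)]
fert_rnd = [y for y, c in zip(yields, coin) if c]
ctrl_rnd = [y for y, c in zip(yields, coin) if not c]
print("Randomized assignment, apparent fertilizer effect:",
      round(mean(fert_rnd) - mean(ctrl_rnd), 2))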

It's a bit harder to imagine a problem with alternating assignment in a medical context, but it could happen here as well. Suppose you are measuring the amount of time that a physician spends with a series of patients. If the first patient runs a bit long, the physician will try to rush the next patient a bit to try to get things back on schedule. If one patient exits quickly, on the other hand, the physician may be more generous with his/her time for the next patient.

You could wave your hands around and argue that this type of temporal trend could not occur in your particular research setting, but there is no practical way to prove or disprove this assertion using the data from an alternating assignment. Thus, alternating assignment will always have a cloud over its head.

In theory, alternating assignment could be blinded from the patient and from the physician. Just keep the original coin flip blinded, and then unless there are some characteristic side effects or other aspects of the treatment that unblind the study, the blind could be maintained. In reality, anyone who does not take the trouble to use randomization over alternating assignment is probably not going to take the trouble to properly blind the trial either.

Using alternating assignment instead of randomization is enough to get your study excluded from most systematic overviews, so you could argue that alternating assignment is unethical. Why run a study if you know it won't contribute any data to the meta-analysis? Alternating assignment will give your study a miserable risk/benefit ratio.

There are a few settings where logistics mandate alternating assignments over randomization, but these are very rare.

I have a nice example of an epic fail involving lack of randomization. Although it is not really related to alternating assignment, it is still amusing. Note that even though the pictures look like they were done by a professional artist, I did them myself.

In general, treat alternating assignment as equivalent to a non-randomized study. You could try to make some arguments that it is almost as good as randomization, but most researchers will not buy those arguments.

--> Article: Welch HG, Schwartz LM, Woloshin S. The exaggerated relations between diet, body weight and mortality: the case for a categorical data approach. CMAJ. 2005;172(7):891-895. doi:10.1503/cmaj.1041310. Description: Many of the statistical adjustments used in research are difficult for the average reader to evaluate. Sometimes the effect of a covariate is not the same across the entire range, but the reader has no way to see if this might be the case. This paper advocates supplementing the adjusted model with a table reporting the outcome by a stratified version of the covariate.
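Here is a rough sketch of that idea in Python with pandas. The data and the BMI cutpoints are my own invention, not taken from the paper; the point is just to show a stratified table sitting alongside an adjusted model.

import pandas as pd

# Hypothetical data: body mass index and mortality for ten subjects.
df = pd.DataFrame({
    "bmi":  [17, 19, 22, 24, 27, 29, 31, 34, 36, 41],
    "died": [1,  0,  0,  0,  0,  1,  0,  1,  1,  1],
})

# Categorize the covariate using clinically familiar cutpoints.
df["bmi_group"] = pd.cut(df["bmi"],
                         bins=[0, 18.5, 25, 30, 100],
                         labels=["Underweight", "Normal", "Overweight", "Obese"])

# The supplementary table: sample size, deaths, and death rate in each category.
table = df.groupby("bmi_group", observed=False)["died"].agg(["count", "sum", "mean"])
table.columns = ["n", "deaths", "death_rate"]
print(table)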

--> Joke: This is a true story. I was co-author on a paper with a colleague, and we got back a peer review that called us out on a rookie mistake. I'm not embarrassed by this. Actually, I'm quite grateful when a review catches a mistake like that. It means that the process is working. What was amusing, though, is how they worded their criticism. They suggested that our work be reviewed by someone "more senior." They didn't know us, obviously. I was in my mid-fifties and my colleague was even older. I was tempted to write back and point out that the only people more senior than us were already dead.

--> Nick News: What is that strange creature?

[[Image available at http://www.pmean.com/news/images/201307/nick01.jpg]]

I was eating lunch when I noticed a strange creature sitting across from me.

[[Image available at http://www.pmean.com/news/images/201307/nick02.jpg]]

Some hair started to emerge from one of its orifices.

[[Image available at http://www.pmean.com/news/images/201307/nick03.jpg]]

Oh, it's just Nicholas. He was trying to keep warm in a chilly over-airconditioned restaurant by curling up inside his shirt.

--> Trivia: Fifty years ago, the Beatles made the three-peat chorus "Yeah, Yeah, Yeah" famous in the song, "She Loves You (Yeah, Yeah, Yeah)." This same three-peat appears in two other Beatles songs. What are they?

Last month's trivia question was: What television character had the following line: "You're worth two 43s, 86."? Several people recognized the series, "Get Smart," but incorrectly attributed this quote to Agent 86, aka Maxwell Smart, who was played by Don Adams. The quote was actually spoken TO Maxwell Smart. Frank Underwood was the first to note correctly that the speaker was Agent 99, who was played by Barbara Feldon. By the way, there is a song "99" by the band Toto that many people, including me, have mistakenly taken as a love song to the Barbara Feldon character, but it is actually a tribute to the early George Lucas movie, THX 1138.

--> Website: Bastian H. The Hawthorne effect: An old scientists' tale lingering "in the gunsmoke of academic snipers." Published on Absolutely Maybe, part of the Scientific American Blog Network. Description: Researchers frequently cite the "Hawthorne Effect" as a reason to run concurrent controls in any study. The original studies were conducted in a factory called the Hawthorne Works and appeared to demonstrate that ANY intervention would increase productivity. The conclusion was that it was the process of studying the workers that caused the increase, and not the particular intervention. This article challenges the historical accuracy of that description of the experiments and questions whether there really is any empirical data to support a Hawthorne Effect.

--> Tell me what you think. How did you like this newsletter? Give me some feedback by responding to this email. Unlike most newsletters where your reply goes to the bottomless bit bucket, a reply to this newsletter goes back to my main email account. Comment on anything you like, but I am especially interested in answers to the following three questions.
--> What was the most important thing that you learned in this newsletter?
--> What was the one thing that you found confusing or difficult to follow?
--> What other topics would you like to see covered in a future newsletter?

If you send a comment, I'll mention your name and summarize what you said in the next newsletter. It's a small thank you and acknowledgement to those who take the time to help me improve my newsletter. If you send feedback and you want to remain anonymous, please let me know.

I received feedback from several people. Jane Yank actually liked the feedback from Ed Gracely in the June issue about the May article about p-values. Confused? Basically, Ed pointed out that a borderline p-value (e.g., 0.078) associated with a large effect size is pretty good evidence that the sample size was inadequate. There are lots of papers out there with inadequate sample sizes, so this informal indicator is a useful rule of thumb. Jane also liked the article about placebos during the AIDS crisis and how to look at small two by two tables within a larger contingency table.
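For anyone who missed the June exchange, here is a small numerical illustration of Ed's point, with made-up summary statistics: an effect of nearly a full standard deviation measured on only nine patients per group still produces a borderline p-value, and a power calculation (using scipy and statsmodels) shows the study needed more than twice as many patients per group.

from scipy.stats import ttest_ind_from_stats
from statsmodels.stats.power import TTestIndPower

# Invented summary statistics: a large effect (about 0.9 standard deviations)
# observed with only 9 patients per group.
t, p = ttest_ind_from_stats(mean1=0.9, std1=1.0, nobs1=9,
                            mean2=0.0, std2=1.0, nobs2=9)
print(f"t = {t:.2f}, two-sided p = {p:.3f}")  # borderline, somewhere around 0.07

analysis = TTestIndPower()
print("Power with 9 per group:",
      round(analysis.power(effect_size=0.9, nobs1=9, alpha=0.05), 2))
print("Patients per group needed for 80% power:",
      round(analysis.solve_power(effect_size=0.9, power=0.8, alpha=0.05), 1))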

Frank Underwood correctly notes that the standardized residual can be used to detect patterns in a large contingency table. I should write up an article about residuals in general for a future newsletter.
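Until that article shows up, here is a quick sketch with an invented table: the standardized (Pearson) residual for each cell, observed minus expected divided by the square root of expected, flags the cells that are driving a significant chi-square statistic.

import numpy as np
from scipy.stats import chi2_contingency

# An invented 2 x 3 contingency table.
observed = np.array([[30, 10, 20],
                     [15, 25, 20]])

chi2, p, dof, expected = chi2_contingency(observed)
residuals = (observed - expected) / np.sqrt(expected)

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("Standardized residuals (cells beyond roughly +/- 2 deserve a closer look):")
print(np.round(residuals, 2))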

Laurie Stiles liked the report on Nicholas at scout camp.

I got several other general "nice" comments about the newsletter. I do need to hear these comments from time to time to help keep me going.

If you liked this newsletter, you can read the previous issue at my website. From that webpage, you can link back to other issues as well. Also feel free to look at my archive page, with links to individual newsletter articles. You're also welcome to take a peek at the next issue as I am developing it.

--> Join me on Facebook, LinkedIn, and Twitter. I'm just getting started with social media. My Facebook page is www.facebook.com/pmean, my page on LinkedIn is www.linkedin.com/in/pmean, and my Twitter feed name is @profmean. If you'd like to be a Facebook friend, LinkedIn connection (my email is mail (at) pmean (dot) com), or tweet follower, I'd love to add you. If you have suggestions on how I could use these social media better, please let me know.

--> Permission to re-use any of the material in this newsletter. This newsletter is published under the Creative Commons Attribution 3.0 United States License. You are free to re-use any of this material, as long as you acknowledge the original source. A link to or a mention of my main website, www.pmean.com, is sufficient attribution. If your re-use of my material is at a publicly accessible webpage, it would be nice to hear about that link, but this is optional.