Summarizing from PLoS Blogs:
A review article in the highly acclaimed CMAJ finds cigarettes may be beneficial for long distance runners.
Serum hemoglobin is related to endurance running performance. Smoking is known to enhance serum hemoglobin levels, and (added bonus) alcohol may further enhance this beneficial adaptation.
Lung volume also correlates with running performance, and training increases lung volume. Guess what else increases lung volume? Smoking.
Running is a weight-bearing sport, and therefore lighter distance runners are typically faster runners. Smoking is associated with reduced body weight, especially in individuals with chronic obstructive pulmonary disease (these folks require so much energy just to breathe that they often lose weight).
The point of the review article is that:
The review paper is a staple of medical literature and, when well executed by an expert in the field, can provide a summary of literature that generates useful recommendations and new conceptualizations of a topic. However, if research results are selectively chosen, a review has the potential to create a convincing argument for a faulty hypothesis. Improper correlation or extrapolation of data can result in dangerously flawed conclusions. The following paper seeks to illustrate this point, using existing research to argue the hypothesis that cigarette smoking enhances endurance performance and should be incorporated into high-level training programs.
In other words, data can be manipulated. And the blog post concludes:
The point being that whether you’re reading a blog post or a systematic review paper in a prestigious medical journal, you really do need to be skeptical at all times.
But let’s back up: what does the author want to be true?
The title of this paper should be, “Whose Fault Is It If This Article Gets Published?”
The author’s point is that it is up to readers to be vigilant when reading reviews, because cherry-picked data can lead to lies. This is true, but that isn’t what happened in this fake review article.
This article in CMAJ does not actually show how review articles can be misleading if you cherry-pick the right data. All of the references and information he uses are correct; what is faulty is the application of his logic.
The trick is that the journal’s reviewers should have detected his review as idiotic. If they failed at that, the problem wasn’t with the analysis of the data, but with the reviewers not catching the mangled conclusions.
Bad reviews and meta-analyses cherry-pick data and apply it to the currently accepted logic in the field, so a reviewer would be powerless against this unless they themselves investigated the referenced data, as well as the data that wasn’t used in the review. This is why many reviews are terrible.
The reason this is published in CMAJ is to shift the burden of responsibility onto the reader, and off the reviewer/journal. “We can’t catch everything!” You’re not supposed to, but you are expected to catch problems in logic.
See the title of this post? I took the review article’s “conclusions” and extended them, applying them to a group (everyone) that the science didn’t discuss. That is an error reviewers are expected to catch, and when they don’t, it’s their fault.