Tags: atlantic, medical research, misinformation, research
Today I experienced one of those small miracles where it seems like the entire universe has converged to say “Yes, I agree with you!”: I was e-mailed an article that expresses everything I have been examining and thinking about for the last 8 months: “Can any medical-research studies be trusted?” In recent months I have become increasingly involved in researching various medical discussions. Initially disgusted by the number of people quoting statistics and “they say” aphorisms on the internet without citing any kind of research, I turned to peer-reviewed medical journals, government agencies, and well-established professional societies, which seemed promising. Boy, was I wrong.
The first problem I encountered was a marked dearth of research on certain topics even when preliminary research and letters to the editor stressed the need for follow-up studies. Why had no one taken on the topics so easily presented to them?
My second problem was faulty or insufficient research. How were “peer-reviewed” journals approving studies that used narrow demographics or extremely small participant pools as their study populations? And what about the literature reviews and topic analyses that incorporated data ten years old or older? How about the number of studies that measure the long-term effects of a drug or procedure when the “long term” lasts six months?
The final straw was directly contradictory data between comparable research studies. What could account for one study claiming that vaccinating pregnant women in the third trimester prevents influenza in newborns 63% of the time, while another study claims that the protection is negligible (both supplying methods and hard numbers)? The bias of certain professional organizations (often funded by pharmaceutical companies) was obvious in some studies, but even bias cannot sway hard numbers, or so I believed.
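Part of how two honest studies can land on 63% versus “negligible” is simple sampling noise in small trials. The toy simulation below is not either study’s actual method; the attack rate, true efficacy, and arm sizes are made-up numbers purely for illustration. It shows that with a modest true effect and small arms, repeated “identical” trials produce wildly different efficacy estimates.

```python
import random

def simulate_trial(n_per_arm, attack_rate, true_efficacy, rng):
    """Simulate one two-arm trial and return the estimated vaccine efficacy.

    attack_rate: infection probability in the unvaccinated (control) arm.
    true_efficacy: true relative reduction of that probability in the vaccinated arm.
    All parameter values used below are illustrative assumptions, not real data.
    """
    vaccinated_rate = attack_rate * (1 - true_efficacy)
    control_cases = sum(rng.random() < attack_rate for _ in range(n_per_arm))
    vaccine_cases = sum(rng.random() < vaccinated_rate for _ in range(n_per_arm))
    if control_cases == 0:
        return None  # efficacy is undefined when the control arm has no cases
    # Estimated efficacy = 1 - (vaccinated attack rate / control attack rate)
    return 1 - (vaccine_cases / n_per_arm) / (control_cases / n_per_arm)

rng = random.Random(42)

# Twenty small trials (50 people per arm), all drawn from the SAME true effect.
small = [simulate_trial(50, 0.10, 0.40, rng) for _ in range(20)]
small = [e for e in small if e is not None]
print("small-trial estimates range:", round(min(small), 2), "to", round(max(small), 2))

# One large trial converges near the true 40% efficacy.
big = simulate_trial(100_000, 0.10, 0.40, rng)
print("large-trial estimate:", round(big, 2))
```

Run it a few times and the small trials will swing from estimates suggesting strong protection to estimates near zero or even negative, all from the same underlying reality; only the large trial settles near the true value. That, in miniature, is how two comparable studies can both supply hard numbers and still contradict each other.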
So when “Lies, Damned Lies, and Medical Science” by David H. Freedman in the November issue of the Atlantic showed up in my inbox, I was more than thrilled to know I was not alone. The article follows self-proclaimed “meta-researcher” John Ioannidis who, along with his team, has shown “that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong.”
We have become so accustomed to doctors, nutritionists, and scientists later retracting studies, and to newer studies refuting older ones, that we rarely question why this happens. Most people, I would surmise, assume that science and research have improved over time and thus provided us with new evidence and information. But, according to Ioannidis, this is not the case. Faulty research, again and again, is. The common errors he lists range from “what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.”
Does this mean scientists and researchers are lazy? Ignorant? Inherently evil? Why would such important and literally life-altering work be composed so shoddily? Bias. (Hmm…sound familiar?) It turns out that bias, even unintentional bias, has a way of working its way into every step of the research process and greatly influencing outcomes, whether that bias is self-inflicted or the product of outside pressure, such as from those funding the research. (I cannot help but point out the irony here, in that we must question whether bias played a role in Ioannidis’ research on research.) And while bias is the source of the faulty research, a number of factors perpetuate the misinformation, including sensationalism, lack of thorough research (i.e., ignoring or missing later refuting studies), and failure to replicate experiments.
So what is the point of research refuting research? I’ll stop summarizing the article and give you a chance to decide for yourself. But let me leave you with this last thought: Ioannidis’ research, like medical research, provokes us to examine things we held to be true. And just like medical research, it seems to come up short, leaving us with the question, “well what can we do about it?” Perhaps further research is needed. ; )