Medical Science — the Difficulty of Knowing Anything for Sure

© 2010 Peter Free

 

19 January 2011

 

 

Two prevailing causes of uncertainty in medical research and practice

 

There are two primary causes of provable uncertainty in medical research.  The first is the complex, multi-component subtlety of biological systems.  The second is research bias.

 

As a consequence, somewhere between 80 and 90 percent of published research findings turn out to be wrong.

 

Much of medical practice (apart from the obviously correct portion) is ineffective.  Or worse.

 

 

Research bias

 

A significant proportion of the wrong statements in medicine and medical research are made because researchers do not apply the scientific method objectively.

 

Meta-analysis is often not a cure for this.  Lumping together a bunch of flawed studies gets us essentially nowhere.

 

 

Certainty is an elusive number

 

Some years ago, Dr. John Ioannidis wrote a pair of papers demonstrating how statistically questionable most medical research claims are.

 

 

Citations to Ioannidis’ papers

 

John P. A. Ioannidis, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research, JAMA 294(2): 218-228 (13 July 2005)

 

John P. A. Ioannidis, Why Most Published Research Findings Are False, Public Library of Science Medicine [PLoS Med] 2(8): e124, doi: 10.1371/journal.pmed.0020124 (30 August 2005)

 

John P. A. Ioannidis, Why Most Published Research Findings Are False: Author’s Reply to Goodman and Greenland, Public Library of Science Medicine [PLoS Med], doi: 10.1371/journal.pmed.0040215 (26 June 2007)

 

 

Writer David H. Freedman recently wrote a follow-up piece on Ioannidis’ main points

 

David H. Freedman, author of Wrong: Why Experts Keep Failing Us — and How to Know When Not to Trust Them (Little, Brown & Company, 2010), wrote that Dr. Ioannidis is globally recognized as an expert on the credibility of medical research.  (I concentrate on Freedman’s article because his writing is more accessible to lay readers than Ioannidis’ is.)

 

Ioannidis thinks that up to 90 percent of published medical information that practitioners use is flawed:

 

Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.

 

In the [PLoS Med] paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. . . . 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.

 

© 2010 David H. Freedman, Lies, Damned Lies, and Medical Science, The Atlantic 306(4): 76-86 (November 2010) (paragraph split)
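
For readers who want to see the arithmetic, here is a rough sketch (in Python) of the kind of pre-study-odds calculation Ioannidis lays out in the PLoS Medicine paper cited above.  The parameter values below (a 1-in-10 prior chance that the tested relationship is real, 80 percent statistical power, the usual 0.05 significance threshold, and 20 percent researcher bias) are my own illustrative assumptions, not numbers taken from his paper.

# Rough sketch of the positive-predictive-value (PPV) reasoning in Ioannidis'
# PLoS Medicine paper.  Every parameter value below is an illustrative
# assumption chosen for this example, not a number from the paper itself.

def ppv(prior_odds, alpha, power, bias):
    """Probability that a reported 'positive' finding is actually true.

    prior_odds : pre-study odds that the tested relationship is real (R)
    alpha      : false-positive rate of the test (type I error)
    power      : 1 - beta, the chance of detecting a real effect
    bias       : fraction of would-be negative analyses that get reported
                 as positive anyway (u), via selective analysis or reporting
    """
    beta = 1.0 - power
    true_positives = power * prior_odds + bias * beta * prior_odds
    false_positives = alpha + bias * (1.0 - alpha)
    return true_positives / (true_positives + false_positives)

# A "modest" scenario: 1-in-10 prior odds, the usual 0.05 threshold,
# 80 percent power, with and without 20 percent bias.
print(round(ppv(prior_odds=0.10, alpha=0.05, power=0.80, bias=0.0), 2))  # ~0.62 with no bias
print(round(ppv(prior_odds=0.10, alpha=0.05, power=0.80, bias=0.2), 2))  # ~0.26 with modest bias
# Under the biased scenario, roughly three out of four "findings" are false.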

 

 

Distorting data and study design is easy and profitable

 

When you know where you want the research to wind up, it’s easy to get it there.  And it is not difficult to conceal the mistaken route to the flawed end by (i) writing around it, (ii) selectively choosing the data incorporated, and/or (iii) manipulating statistical methods.
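
To make point (iii) concrete, here is a small, hypothetical simulation (standard-library Python) of one common maneuver: quietly measure many unrelated outcomes and report whichever one happens to cross the p < 0.05 threshold.  The group sizes, outcome count, and threshold are assumptions chosen purely for illustration.

# Hypothetical illustration of one kind of statistical manipulation:
# measure many unrelated outcomes and report whichever crosses p < 0.05.
# Group size, outcome count, and threshold are assumed for illustration.
import math
import random
from statistics import NormalDist, mean

random.seed(1)
STD_NORMAL = NormalDist()            # standard normal, for two-sided p-values

def noise_outcome_p(n=30):
    """p-value comparing two groups drawn from the SAME distribution,
    so any 'significant' difference here is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (mean(a) - mean(b)) / math.sqrt(2 / n)   # known-variance z-test
    return 2 * (1 - STD_NORMAL.cdf(abs(z)))

studies, outcomes_per_study, lucky = 2000, 20, 0
for _ in range(studies):
    p_values = [noise_outcome_p() for _ in range(outcomes_per_study)]
    if min(p_values) < 0.05:         # report only the "best" outcome
        lucky += 1

print(lucky / studies)   # roughly 0.64: most null studies can still claim a "finding"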

 

Pressures for funding and career advancement are difficult to ignore.  Slanting a research article so that it is attention-getting is easy.

 

Research that subsequently refutes the flawed result is considered boring and almost never receives the attention (or memory) that it deserves.

 

(Analogously, think of how often retractions or corrections of ordinary news stories are printed as small blurbs in out-of-the-way corners of the print media.)

 

Systemically worse, once an attention-getting group has managed to distort the truth, other researchers climb on the bandwagon — without first re-testing the original group’s findings — so as to claim some of the limelight and the money that goes with it.

 

Incentives work against accurate re-testing.  There is not much recognition to be had in either (a) refuting an earlier finding (especially with the accompanying risk of offending someone powerful in the scientific or medical community) or (b) simply confirming that the original study was true.

 

In 2005 (in the JAMA article cited above):

 

[Dr. Ioannidis] zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. . . .

 

Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated.

 

If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.

 

Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested.

 

© 2010 David H. Freedman, Lies, Damned Lies, and Medical Science, The Atlantic 306(4): 76-86 (November 2010) (paragraph split)

 

 

The peer review system is part of the problem

 

Peer review is conducted by these same researchers, so it tends to favor established thinking, even when that thinking is wrong.

 

Dr. Ioannidis found that refuted papers persist in being cited as true, even 12 years afterward.  The problem, in his thinking (and mine), is systemic:

 

“Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”

 

© 2010 David H. Freedman, Lies, Damned Lies, and Medical Science, The Atlantic 306(4): 76-86 (November 2010) (quoting John Ioannidis)

 

 

Even without bias, subtle realities resist statistical detection at reasonable cost

 

Subtle medical effects and causal links — even in systems with very few variables — can be uncovered only with very large numbers of experimental and control subjects, both groups followed for much longer periods than most research allows.
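
To get a rough sense of scale, here is the standard normal-approximation sample-size formula for comparing two proportions, sketched in Python.  The event rates, power, and significance level are my own illustrative assumptions, not figures from any study discussed here.

# Rough illustration of why subtle effects need very large trials: the
# standard normal-approximation sample-size formula for comparing two
# proportions.  All rates and targets below are assumed for illustration.
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Approximate number of subjects needed in EACH arm to detect the
    difference between two event rates at the given alpha and power."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)      # two-sided significance threshold
    z_beta = z(power)               # desired statistical power
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    effect = abs(p_control - p_treated)
    return (z_alpha + z_beta) ** 2 * variance / effect ** 2

# A large effect (10% versus 20% event rate) versus a subtle one (10% versus 11%).
print(round(n_per_arm(0.10, 0.20)))   # roughly 200 subjects per arm
print(round(n_per_arm(0.10, 0.11)))   # roughly 15,000 subjects per arm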

 

Once we recognize how many variables actually affect outcomes, we begin to see that almost every study design will have analytical weaknesses that cannot be overcome in a single pass.

 

That’s why John Ioannidis considers science, generally, to be a “low-yield” enterprise.

 

Conclusion — don’t immediately believe much of what you hear or read in the medical “research” (or any other complex) field

 

It’s probably inflated or wrong.  And its errors are probably inadvertently or deliberately concealed.

 

In regard to my own health, I automatically ignore all statistically weak (small) studies.  They are good only for motivating subsequent researchers to do larger, more rigorous studies.
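
Here is one hedged, hypothetical illustration of why: when a small, underpowered study does manage to reach statistical significance, the effect it reports is almost always exaggerated (the so-called winner’s curse).  The true effect size, group size, and threshold below are assumed purely for illustration.

# Hypothetical "winner's curse" illustration: when a small, underpowered
# study does reach p < 0.05, the effect it reports is usually exaggerated.
# True effect size, group size, and threshold are assumed for illustration.
import math
import random
from statistics import NormalDist, mean

random.seed(2)
STD_NORMAL = NormalDist()
TRUE_EFFECT, N = 0.2, 25             # modest real effect, small groups

significant_estimates = []
for _ in range(20000):
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    observed = mean(treated) - mean(control)
    z = observed / math.sqrt(2 / N)                  # known-variance z-test
    if 2 * (1 - STD_NORMAL.cdf(abs(z))) < 0.05:      # "significant" result
        significant_estimates.append(observed)

print(round(mean(significant_estimates), 2))
# Roughly 0.64: the small studies that "succeed" report about three times
# the true effect of 0.2.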

 

I also question research that has industry sponsors or that was conducted by people with a financial stake in the outcome.  Medical research, as an occupation, lost its umbrella of professional integrity a long time ago.

 

Last, I almost never act on the advice of a single large study, no matter how statistically powerful its findings allegedly are.  Published papers never include enough detail to track down the data that the research group chose not to include.  Nor do they reveal the data-mining software that created impressions of statistical somethings from numerical nothings.

 

Caution like this, however, does not help the average patient, who has neither the relevant background nor the time to think about medical practice’s foundation or lack thereof.

 

That’s why I consider financial greed in the medical field to be such an undesirable thing.