Stirrings of Alarm at Greed’s Rising Distortion of the Scientific Process — (a) The Institute of Medicine’s Warning about Unsubstantiated Medical “Omics” Tests and (b) Academic Psychology’s Courageous Recognition that Some of Its Studies Require Reproducibility Confirmation
© 2012 Peter Free
16 April 2012
Our culture-wide acceptance of greed as a valid personal and professional paradigm is rotting the two professions most dependent on accurate truth-finding: science and medicine
With time, my sense of alarm at greed’s truth-suppressing effect on the scientific process appears to be spreading:
(1) The prestigious Institute of Medicine recently added a warning regarding the distortions involved in medicine’s new field, “omics.”
(2) And 50 prominent academic psychologists (who call themselves the Open Science Collaboration) have mounted the Open Science Framework.
The Framework is intended to replicate experiments that are reported in prestigious journals, so as to see whether the reproducibility tenet of sound science is actually being met.
Organization of the following discussion
I will begin with the Institute of Medicine report.
What the IOM says about scientific complexity and the need for elevated standards for scientific proof segues into academic psychology’s (i) recognition of the same problem and (ii) what to do about it.
First, regarding the Institute of Medicine report — what are “omics”?
“Omics,” in medical research practice, refers to:
diagnostic and prognostic tools based on patterns of nucleic acids, proteins, or other molecules in tissues such as blood . . . .
© 2012 Jocelyn Kaiser, Biomarker Tests Need Closer Scrutiny, IOM Concludes, Science 335(6076): 1554 (30 March 2012)
Examples of “omes”
Biology and medicine use the word “ome” to refer to the whole of something. Wikipedia points to the genome, metabolome, and proteome as examples.
(a) You are already familiar with the genome (the totality of the body’s genetic material). Of these three examples, it is probably the most rationally named, because it refers to something with a conceptually coherent way of working.
(b) The metabolome, on the other hand, is not a very good name.
It refers to metabolism’s generally small molecule metabolites — especially those that are involved in important biochemical processes, including hormonal influences, energy generation, and cell signaling.
The “metabolome” concept is almost meaningless, because it covers so many molecules that are doing radically different things in different ways.
(c) The proteome is a somewhat narrower description, and therefore (arguably) more useful. Proteome refers to the entire set of proteins that the body produces.
What led to the Institute of Medicine’s warning about misleading omics research?
The above cited Institute of Medicine Report came on the heels of its investigation of a science integrity scandal at Duke University.
Note
Reading about the Anil Potti scandal at Duke, here, will provide the necessary background to what follows.
The journal, Science, commented regarding the IOM’s findings:
The Duke case, which led to more than two dozen retracted papers, three canceled clinical trials, and lawsuits, is “a watershed illustration” of how systems to ensure the integrity of science can fail, the report says.
© 2012 Jocelyn Kaiser, Biomarker Tests Need Closer Scrutiny, IOM Concludes, Science 335(6076): 1554 (30 March 2012)
Citation — to the Institute of Medicine pre-publication report
Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials (Board on Health Care Services, Board on Health Sciences Policy) - Christine M. Micheel, Sharyl J. Nass, and Gilbert S. Omenn, Editors, Evolution of Translational Omics: Lessons Learned and the Path Forward, Institute of Medicine of the National Academies (pre-publication copy, March 2012)
The Duke affair is an indicator of more generalized problems in science and medical research
The elements of the Duke scandal point to the ethical rot at the heart of today’s avaricious (often narcissistic) culture.
The fact that the Institute of Medicine warned the entire medical community about the dangers of improperly examined omics research indicates that it, too, sees the threat to science integrity as having become culturally widespread.
Professional ethical codes once tried to guard us against the weaknesses that being human brings with it. Today, the onslaught of our culture’s acceptance of the moral legitimacy of greed has overwhelmed them.
Science and medicine both suffer. Our health will, too. It is impossible to do sound medicine on the foundation of quack science.
A key point — the causational complexity of natural phenomena allows avaricious researchers and institutions to manipulate “garbage” data in ways that “prove” false positives
Positive results (hypothesis proved) get people published.
However, science’s peer review process often does not catch false positives, methodological errors, improper statistical analysis, and flawed reasoning.
And prestigious journals have financial and elite-status interests in not detecting unwarranted results. You can read about this institutionalized nastiness, here. Continually reporting negative results would lose readership and erode individual journals’ elite status. Yet negative results are the most common finding in science.
Obviously, there is a bit of self-interested deception going on.
Note
I have written about the statistical complexity of proving anything for sure, here.
This mathematical phenomenon has nothing to do with greed; it stems from the mechanics of real-world proof. Proving anything in science or medicine requires exacting methodologies and adequate sample sizes.
Medicine, particularly, should pay strict (and expensive) attention to sample sizes and population distributions, as well as to defining the range of experimental (or observational) variables potentially involved.
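To make the sample-size point concrete, here is a back-of-envelope sketch of my own (not from the IOM report), using the standard normal-approximation formula for a two-group comparison. It shows how quickly the required sample size grows as the true effect shrinks:

```python
import math

# Hypothetical illustration: per-group sample size needed to detect a true
# standardized effect d with ~80% power at a two-sided 5% significance level,
# via the normal approximation n ≈ 2 * ((z_alpha + z_beta) / d)^2.
def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Conventional "large", "medium", and "small" effect sizes.
for d in (0.8, 0.5, 0.2):
    print(d, n_per_group(d))  # 0.8 → 25, 0.5 → 63, 0.2 → 392 per group
```

Notice that detecting a small but real effect requires hundreds of subjects per group, far more than the tiny studies criticized below typically enroll.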
Biomedical confounders (variables that have not been properly accounted for) are everywhere. Too few research teams pay enough attention to these. The Institute of Medicine is concerned about this absence of attention.
Everyone involved in the process knows, or should know, the distortions that are occurring. However, most do not oppose the integrity-defeating mess for fear of losing money and fame.
The Duke scandal is a good example of systemically blind collusion. That is probably why the Institute of Medicine pounced on it as an example of a system-wide breakdown.
Why omics, as a research field, is a good illustration of the difficulty of reaching evidence-based medical conclusions
If you have a decent statistical sense, you will immediately recognize that the more complex a system is, the more difficult it is to reach concrete conclusions about the precise ways in which its different components interact and affect each other.
This biological complexity is the gist of the Institute of Medicine’s warning about creating medical tests based on insufficiently tested omics research:
One major problem, [Gilbert] Omenn [chair of the IOM investigating committee] and his colleagues say, is “overfitting”:
Because the initial studies often look for patterns among hundreds of thousands of biomolecules using a small number of patient samples, it is easy to find false correlations with disease outcomes.
The report recommends a set of steps to validate any potential molecular signatures, such as repeating the test on blinded samples from a different institution.
Journals and funders should also require that data and models from papers are freely available so that other researchers can check the results.
© 2012 Jocelyn Kaiser, Biomarker Tests Need Closer Scrutiny, IOM Concludes, Science 335(6076): 1554 (30 March 2012) (paragraph split and reformatted)
Second — Academic psychology’s recognition of the complexity problem and how that necessitates reproducibility testing
Siri Carpenter, also writing in Science, points to the problem that concerns me about science generally:
The greater concern arises from several recent studies that have broadly critiqued psychological research practices, highlighting lax data collection, analysis, and reporting, and decrying a scientific culture that too heavily favors new and counterintuitive ideas over the confirmation of existing results.
Some psychology researchers argue that this has led to too many findings that are striking for their novelty and published in respected journals — but are nonetheless false.
© 2012 Siri Carpenter, Psychology’s Bold Initiative, Science 335(6076): 1558-1561 (30 March 2012) (paragraph split and reformatted)
False positives are often missed because no one bothers to replicate the studies that came up with the alleged connection.
Dr. Brian Nosek is coordinating academic psychology’s Open Science Collaboration. According to writer Siri Carpenter, he thinks that “negative results are virtually unpublishable.”
We can see that, because science careers depend on publication, the system corrupts itself into making false correlations.
Positive results are easy to manufacture. Use small sample sizes — in which statistically random fluctuations will appear to create a valid finding — and into print you go. Similarly, pretend after the fact that whatever chance produced was what you were testing for all along.
Note
One of the trends I have noticed in reviewing interesting science, medicine, and psychology studies (at BrainiYak) is the high number of absurdly small research studies that make it into print.
By definition, these are not worth the electrons used to retrieve them — unless one is simply looking for a teaser that will lead to a statistically proper subsequent investigation. But as Dr. Nosek indicates, I have yet to see one of those.
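Dr. Nosek’s point about unpublishable negative results can be made concrete with a small simulation of my own (again, purely illustrative): run many tiny studies of a nonexistent effect, “publish” only those that cross a significance-like threshold, and observe that the published record consists entirely of large, spurious effects.

```python
import random
import statistics

random.seed(1)

def tiny_null_study(n=8):
    """One small two-group study of an effect that does not exist."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    return diff, abs(diff) / se  # observed effect and a crude z-like statistic

results = [tiny_null_study() for _ in range(2000)]
published = [diff for diff, z in results if z > 2]  # only "positives" see print

print(len(published))                              # a steady trickle of false positives
print(statistics.mean(abs(d) for d in published))  # each one looking impressively large
```

The filter does the damage: the true effect is zero, yet every “published” result reports a sizable effect, because only chance extremes clear the publication bar.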
Dr. Nosek’s group now has 30 replication studies underway.
Caveats about the Collaboration’s replication methodology
There are three issues here:
(i) What is this replication effort targeted at?
(ii) What qualifies as a reproduced result?
(iii) Will the replication studies use the same absurdly small sample sizes, drawn from the same absurdly unrepresentative population?
Question One — What is the Collaboration targeted at proving?
The overarching goal of the Collaboration appears to be to assess whether psychology, as a whole, is doing sound science.
But critics point out that its emphasis on recent cognitive and social science is not representative enough to make any claims about the whole field’s integrity.
Question Two — What qualifies as a reproduced result?
There are problems with translating the findings from these reproducibility studies into a meaningful general critique of psychology science.
Given reality’s complexity, and its expected statistical variability, it is obviously difficult to determine just what is going to qualify as a reproduced result. Does the replication have to be statistically near identical, or can it simply trend the same way?
Question Three — Will the replication studies use the same absurdly small sample sizes, drawn from the same absurdly unrepresentative population?
If so, what would that tell us? Will two invalidly small studies of the same unrepresentative population sample reliably demonstrate anything?
At best, this approach would double the original sample size. But — being conducted by different investigators, in a different place, with a probably slightly variant operational framework — the tiny “meta” analysis would most probably introduce still more confounding variables and equally distorted “chance” variations. If the replication goes the same way as the original, statistically invalid study, we would merely have reproduced the identical error.
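The worry about trend-level “replication” can be quantified with one more toy simulation of my own: when there is no real effect, two tiny independent studies will “trend the same way” about half the time, so directional agreement alone demonstrates nothing.

```python
import random

random.seed(2)

def observed_direction(n=8):
    """Sign of the group difference in one tiny study of a null effect."""
    a = sum(random.gauss(0, 1) for _ in range(n)) / n
    b = sum(random.gauss(0, 1) for _ in range(n)) / n
    return 1 if b > a else -1

trials = 10_000
agree = sum(observed_direction() == observed_direction() for _ in range(trials))
print(agree / trials)  # hovers near 0.5: a coin flip, not evidence
```

Under the null hypothesis, same-direction agreement between two underpowered studies is no better than chance, which is why “trending the same way” cannot be the replication criterion.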
On the other hand, if the goal of the study is simply to say that the initial study was not dishonestly done, a replication might warrant a finding of non-fraud.
But non-fraud verification is vastly different from demonstrating that the initial study’s findings actually represent a genuinely positive result, which is what I am interested in.
Money is the predominating commonality in science distortions
The Collaboration’s (anticipated) small sample sizes would be the result of lack of funding.
And more generally, science researchers’ motivation to circumvent proper science methods similarly combines:
(a) a lack of adequate financial resources
with
(b) the lure of fame and riches.
The latter is what went wrong at Duke.
Economics and science
Paula Stephan, How Economics Shapes Science (Harvard University Press, 2012)
The moral? — Greed is a powerful distorter of Science, and right now there is no counter-balancing force for integrity
One of the drawbacks to a culture based predominantly on self-advancement is that there are few moral constraints against going as far as one can in that narcissistic direction.
The Institute of Medicine’s call for care in omics research is inarguably necessary, but unlikely to be heeded.
Why would our science/medical infrastructure turn itself around, simply because it recognizes that what its components are doing lacks scientific integrity?
Why would people care about seeking evidence-based truth, when there are short-cutting riches to be had?
Absent constraints — that financially punish carelessly achieved false positives, methodologically bad science, manipulated data, and error-filled publications — nothing is going to change.
To wit, Duke University’s “bad boy” Dr. Anil Potti is now (probably happily) ensconced as “an advocate for personalized cancer therapies.” His website even brags about his former association with Duke. There is not a word there about disgrace and retracted papers.
And you can bet (your life) that nothing unpleasant has happened to any of his supervisors and colleagues at Duke, or to the journals that published his and related people’s papers without first checking them out.
Peer review? A bad joke.
The science-medical system increasingly fails to find truth because most of us are avaricious. And our culture has elevated greed to the status of a moral imperative. Witness the various rampant permutations of the Gospel of Wealth and Prosperity Theology.
In this environment, medical science’s complexity merely serves as a farm for unwarranted, but nevertheless profitable correlations.
To protect themselves, medical consumers need to be aware that a good deal of what is advertised as medical advance is (a) nonsense and (b) occasionally harmful.
Pessimistic though this sounds, it is today’s reality. The fact that the prominent Institute of Medicine would feel it necessary to caution people against carelessly traveling obviously questionable investigative roads is proof enough. That so many prominent academic psychologists would be motivated to investigate whether their profession’s own findings are frequently bogus is equally telling.