An Obvious Phenomenon that Most of Us Don’t Recognize Often Enough — the Structural Funneling of Knowledge, including Science and Medicine, through Somebody Else’s Self-Interest and Often Uninformed Opinion Means that “Truth” Is Only Rarely Found
© 2011 Peter Free
16 December 2011
At first glance, this proposition seems absurd, until you think about truth-finding’s underlying social and economic mechanics.
Most of what we think we know (“truth”) has first been percolated through other people’s self-interest and uninformed opinions, without either of those accuracy-inhibiting influences being overtly recognized by them or us.
Structural societal mechanics inhibit truth-identification, whether in politics, science, or medicine.
This knowledge-distorting phenomenon shapes the state of our ignorance more actively than almost all of us recognize.
“Where does this happen, Pete?”
Truth-distortion is most obvious in politics. The battle among the candidates in the 2012 American presidential election is illustrative, except perhaps to those most thoughtlessly captured by the “Ignorant Isms” these people spew.
Where culture-originated self-deception is less obvious is in science and medicine.
I address science, particularly, because its self-chosen goal is truth-seeking. Consequently, anything that structurally inhibits truth-identification is an obstacle to overcome.
Citation 1 of 2 — an important 2008 paper by Neal S. Young, John P. A. Ioannidis, and Omar Al-Ubaydli
Neal S. Young, John P. A. Ioannidis, and Omar Al-Ubaydli, Why Current Publication Practices May Distort Science, PLoS Medicine 5(10): e201. doi:10.1371/journal.pmed.0050201 (07 October 2008)
This 2008 paper’s economic premise
The authors’ analysis showed how the social and economic mechanics of scientific endeavor distort science’s ability to define what is actually true.
The paper’s initial premise illustrates the value of looking at phenomena in novel ways:
This essay makes the underlying assumption that scientific information is an economic commodity, and that scientific journals are a medium for its dissemination and exchange.
© 2008 Neal S. Young, John P. A. Ioannidis, and Omar Al-Ubaydli, Why Current Publication Practices May Distort Science, PLoS Medicine 5(10): e201. doi:10.1371/journal.pmed.0050201 (07 October 2008)
The Young article’s “macro” economic findings
Summarized, the Young, Ioannidis, and Al-Ubaydli article theorized that:
(1) Journals deliberately enhance their economic value by imposing artificial scarcity, pretending to have only limited “print” space in a digital age.
(2) Prestigious journals’ artificially induced exclusivity enhances the value, to individual contributors, of being chosen, benefiting both them and the journal.
(3) Editors choose articles based on the number of future citations they are apt to receive.
(4) Article selection, therefore, is not representative of across-the-board findings in any scientific field.
(5) The bottleneck posed by journals’ scarcity and, presumably, by their bias for certain subjects or impacts narrows scientific endeavor. Scientists choose study projects based on what they think journal editors want. This economic phenomenon leads to a lack of diversity in scientific investigation, as well as to dubiously founded research choices.
(6) Publication biases encourage “conventional” scientific behavior and herd-like conformity, while simultaneously suppressing alternative approaches, data, and findings. Boom-bust science results.
(7) Journals do not suffer economic punishment for making truth-seeking errors in their biased unrepresentativeness. But their consumers may suffer harm because the journals’ published truth is often not actually true. “For example, initial clinical studies are often unrepresentative and misleading.” The short simulation after this list illustrates how the selection mechanism alone produces that distortion.
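To make that selection mechanism concrete, here is a minimal Monte Carlo sketch in Python. It is my illustration, not Young et al.’s model; the effect size, noise level, and five percent acceptance cutoff are all invented for demonstration.

```python
# A minimal Monte Carlo sketch of publication selection bias.
# This is an illustration, not Young et al.'s model; the effect size,
# noise level, and 5 percent acceptance cutoff are invented numbers.
import random
import statistics

random.seed(42)  # reproducible illustration

TRUE_EFFECT = 0.2   # the real underlying effect size
NOISE_SD = 0.5      # sampling noise in each study's estimate
N_STUDIES = 10_000  # studies conducted across a hypothetical field

# Each study estimates the true effect, plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# A "prestigious journal" accepts only the most striking 5 percent.
cutoff = sorted(estimates)[int(0.95 * N_STUDIES)]
published = [e for e in estimates if e >= cutoff]

print(f"True effect:               {TRUE_EFFECT:.2f}")
print(f"Mean across all studies:   {statistics.mean(estimates):.2f}")
print(f"Mean of published studies: {statistics.mean(published):.2f}")
# The published mean exceeds the true effect several-fold: the
# acceptance filter alone, without any misconduct, creates the gap.
```

With these invented numbers, the published subset’s mean lands several times above the true effect. No fraud is required; the acceptance filter alone manufactures the exaggeration that makes initial clinical studies “unrepresentative and misleading.”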
Citation 2 of 2 — distortion in the scientific endeavor is also evident on the “micro” economic scale
Judith G. M. Rosmalen and Albertine J. Oldehinkel, The Role of Group Dynamics in Scientific Inconsistencies: A Case Study of a Research Consortium, PLoS Medicine 8(12): e1001143. doi:10.1371/journal.pmed.1001143 (13 December 2011)
About this “micro” economics article
Micro economic factors distort science, even when researchers believe they have screened them out.
In coming to this conclusion, Judith Rosmalen and Albertine Oldehinkel wondered why an ostensibly coordinated group of studies, here on the relationship of cortisol to mental health, had published contradictory and inconsistent results, despite procedures instituted to prevent exactly that.
According to the authors, the consortium’s research teams had used different informants, questionnaires, cutoff levels, composites, statistical measures, and confounders. The hypothetical sketch below shows how such analytic choices alone can splinter results.
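Here is a hypothetical Python sketch of that splintering. The data, cutoffs, and effect sizes are invented; this is not the consortium’s data or methods, only an illustration of how defensible analytic choices diverge.

```python
# A hypothetical sketch, not the consortium's actual data or methods.
# One shared dataset, analyzed under different but individually
# defensible conventions, yields different headline numbers.
import random

random.seed(7)  # reproducible illustration

# Invented shared dataset: a continuous "cortisol" level per subject
# and a weakly related symptom score.
n = 200
cortisol = [random.gauss(10, 3) for _ in range(n)]
symptoms = [0.05 * c + random.gauss(0, 1) for c in cortisol]

def pearson_r(xs, ys):
    """Plain Pearson correlation, written out to stay dependency-free."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Team A treats cortisol as a continuous predictor.
print(f"Team A (continuous): r = {pearson_r(cortisol, symptoms):.3f}")

# Team B dichotomizes at a cutoff and compares group mean symptoms;
# two equally "reasonable" cutoffs give two different answers.
for cutoff in (8.0, 12.0):
    high = [s for c, s in zip(cortisol, symptoms) if c >= cutoff]
    low = [s for c, s in zip(cortisol, symptoms) if c < cutoff]
    diff = sum(high) / len(high) - sum(low) / len(low)
    print(f"Team B (cutoff {cutoff}): mean difference = {diff:.3f}")
```

Same dataset, defensible choices on each side, different headline numbers. Multiply that freedom across informants, questionnaires, composites, and confounder sets, and contradictory consortium papers stop being surprising.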
“Why did the cortisol research teams go so far off course?”
The research contributors’ motivations and influences were incompatible with coordinated truth-seeking:
(1) PhD candidates were put in charge of sub-sections of the cortisol questions, but their interest lay in efficiently getting a PhD, not in making sure their contributions harmonized with the consortium’s overall result.
One can imagine trying to buck one’s thesis advisor, the university department involved, and the need for professional originality, all at the same time. One can also guess where the PhD candidate’s loyalty will fall, as between (a) doing “ideal” science and (b) getting his or her mitts on a doctorate.
(2) The study’s sub-question research often had multiple authors. Collaboration generally requires not making interpersonal waves. And the drive to get each paper completed meant that those with the most riding on publication would want to avoid print-delaying discussions with the group’s internal or external critics.
(3) Pressure to publish undoubtedly also meant compromises in scientific quality. Publication is professionally far more rewarding than slower-moving scientific accuracy. Nobody gives people money for (a) being unpublished but accurate, as opposed to (b) being published, famous, and possibly inaccurate.
(4) Group “synergy” was low, a result of individual contributors’ differing self-interests and the length of time it takes to complete a longitudinal cohort study. Researchers were continually entering and leaving the consortium, which created obvious problems in cohesiveness.
“In short, the strong focus on individual achievements in science hampers group synergy, particularly in multicenter collaborations. . . . The fundamental problem is the way science manages cooperation versus competition.”
© 2011 Judith G. M. Rosmalen and Albertine J. Oldehinkel, The Role of Group Dynamics in Scientific Inconsistencies: A Case Study of a Research Consortium, PLoS Medicine 8(12): e1001143. doi:10.1371/journal.pmed.1001143 (13 December 2011)
Macro and micro distinctions aside — the basic problem is the universality of personal and organizational self-interest
Young et al. proposed fixes at the macro (publishing) level. Here, I paraphrase a few of their more pertinent ones:
(1) Digitally publish unflawed articles, regardless of their suspected present or future influence.
(2) Publish negative results in preference to positive findings, and demand reproducible evidence before publishing positive findings.
(3) Select articles based on methodological quality, rigor, and insightful interpretation.
(4) Create methods for deflating demonstrably false claims that were previously published in prestigious journals.
(5) Include wider data sets to accompany print articles.
(6) Publish critical reviews and summaries of biomedical information.
(7) Disincentivize “follow-the-leader” research behavior and reward people who do novel science.
Caveat — realism versus academics’ “pie-in-the-sky” solutions
Most of the Young paper’s suggestions overlook the combined power of (a) human and organizational self-interest and (b) uncertainty in science.
For example, take Young et al.’s suggestion for process improvement that I paraphrased in item (7) above:
Offer disincentives to herding and incentives for truly independent, novel, or heuristic scientific work.
© 2008 Neal S. Young, John P. A. Ioannidis, and Omar Al-Ubaydli, Why Current Publication Practices May Distort Science, PLoS Medicine 5(10): e201. doi:10.1371/journal.pmed.0050201 (07 October 2008) (at Item 8 in Box 1 under “Conclusions”)
Herd-like research behavior aims at achieving publication in the artificially induced scarcity environment that makes prestigious journals exclusive.
Imitative science will not go away until prestigious journals lose their lust for both (a) sensational studies and (b) the artificially induced publication scarcity that keeps them exclusive.
Yet, realistically speaking, why would journals give up the exclusivity they now possess by turning to a steady diet of accurate but scientifically negative findings? Until the whole of Science changes to value quality, rather than individual publication triumphs, I don’t see this shift in publication focus happening.
On the street level, just how is anyone going to distinguish between science aimed at editors’ preference for impact and “novel” science?
Novel science is, by definition, something we have not seen before and, therefore, have no reasonably accurate way of judging.
Furthermore, how is anyone going to disincentivize the herding behavior that Young’s paper is critical of?
Are journals going to publish the names of “crappy and unimaginative” scientists? Who will care?
Will they send “Guido the Beater” over to hammer these less than stellar researchers out of the profession?
Will universities suddenly add “innovative genius” to the list of requirements for a PhD degree? Who is going to assess that quality? And how would the “novel genius” requirement pose any less of a science-diminishing bottleneck?
Administratively, will the United States establish a National Science Oversight Board that has the power to fine its players for Infractions against Science, in a manner similar to the National Football League’s rules-enforcement practices?
I suspect that skeptical criticisms like these, regarding the achievability of reform at the macro (publishing) level, are why Rosmalen and Oldehinkel suggested that focusing on the micro level, the dynamics of the research groups themselves, is more likely to cure our distorted-science illness.
It is easier to control what one is directly involved in than that which one is not.
Rosmalen and Oldehinkel think that emphasizing science’s truth-seeking purpose, rather than simple paper production, might be beneficial at the group level. Steering committees could establish well-defined goals and responsibilities. And regular meetings would keep everyone on track.
Maybe the “micro” solution would work — but probably not
My experience in working with groups is that the undercurrent of individual self-interest always dominates, except perhaps under combat or near-combat conditions, when “life versus death” circumstances are experienced by trained and integrated fighting units. There, people’s dominating assumption (generally correct) is that, if they don’t cooperate almost selflessly, many, most, or all of them are going to be killed.
The scientific endeavor is inherently a psychological pole apart from the military’s group-first focus.
Without a change in Science’s culture, I have trouble imagining how an assemblage of science and medical “prima donna wannabes” is going to overcome the competing self-interests and potential rewards that divide them.
The moral? — There is little in our current way of generating medical and scientific knowledge that inspires confidence in these fields’ ability to obtain scientifically reliable results
I have written about this skeptical conclusion relatively frequently.
For example:
here (difficulty of knowing anything medical or scientific “for sure”),
here (pharmaceutical industry invents bogus illnesses and cures),
here (medical practice kills patients because research didn’t ask the right questions),
here (meaningless study endpoints combined with improper/confusing risk analysis),
here (failure of professional self-regulation in quality-delivery systems),
and
here (conflicts of financial interest that distort the validity of clinical practice guidelines)
The more we think we know, the less likely we are to have made an accurate assessment regarding what is actually true.