On the Meaninglessness of Meanings
In isolated silos where the distinguished meet, there
follows the wrath of groupthink, emboldened by weak minds exercising power
with even weaker science. Some time ago one could use the Fourier Transform,
with its multiple filters, to dissect an issue of great import and learn how the mechanics
and levers made that issue tick. That was real. Not anymore. That real
science held importance for humanity; it became the shoulders
for other giants to climb upon and see further. Time and the proliferation of meaningless
degrees in various disciplines now force students to see only the trees, never the forest. Alas, the
very idea of real progress has crumbled into mini-false-incrementalism. Today
the false prophets of science soar in popularity.
Commoditizing knowledge is a wonderful thing. It leads to
fecund minds. But commoditizing nonsense as science is a far, far more dangerous
thing than has ever been done at such a scale before. Now a
rudimentary knowledge of clicking boxes in a SAS or SPSS program lets one
conjure up meaningless values that define our scientific knowledge-base. We
extol the virtue of a binary choice between disparate variables as reality.
Correlations, as they call them, are sold in bulk as causality. Compound A “may” help
you live longer. The next day the inverse is true as a competing compound takes
shape. As John Stuart Mill observed, the sum of several antecedents is
requisite to produce the consequent (A System of Logic, 1843). The fault lies in
the choice of the antecedents.
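How cheaply such bulk correlations are manufactured can be shown in a few lines. The sketch below (Python, with purely simulated noise; the “compounds” and “longevity score” are hypothetical labels, not real data) screens two hundred unrelated variables against a single outcome; the best of them will routinely look publishable:

```python
# A minimal sketch (hypothetical variables, simulated noise only): screen
# many unrelated "exposures" against a single outcome and the "winner"
# looks publishable, even though nothing here is related to anything.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_exposures = 50, 200

outcome = rng.normal(size=n_subjects)                   # e.g. a "longevity score"
exposures = rng.normal(size=(n_exposures, n_subjects))  # pure noise "compounds"

# Correlate every exposure with the outcome and keep the best one.
results = [stats.pearsonr(x, outcome) for x in exposures]
pvals = np.array([p for _, p in results])
best = int(np.argmin(pvals))
r_best, p_best = results[best]

print(f"best of {n_exposures} compounds: r = {r_best:+.2f}, p = {p_best:.4f}")
# At the 5% level, ~10 of 200 null correlations are "significant" by
# chance alone; report only the winner and causality appears for free.
```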
Is science in general, and medical science specifically, in
decline? My answer is a simple “YES.” As the old advertising slogan went, “You’ve
come a long way, baby,” on the path to self-delusion. There is an added
layer of cruelty, greed, self-aggrandizement and recklessness built into this engine of deceit. It is an insult to science in general and to medicine in particular.
The whole enterprise is a hijacked engineering marvel of loathsome proportions.
Here is where humanity’s claim on life, liberty and the pursuit of happiness is
just a phrase for some.
The pursuit of “truth” today is wrapped around Ronald
Fisher’s p-value: the sanctified hallmark of significance that medical
literature has come to worship, wrongly, I might add. But first let us
deconstruct this barbarian indulgence of “Evidence” as it exists today.
Let me begin by quoting Peter Higgs, who did pioneering
research into fundamental particles back in the 1960s. He groaned, “Today I wouldn’t get an academic job. It’s
as simple as that. I don’t think I would be regarded as productive enough” (The
Guardian, 6 Dec 2013). Half a century on from that pioneering work, the statement rings truer than ever,
as real research has been reduced to a plethora of meaningless “studies.”
Today we are mired in a sea of storms, each storm a
concoction from the minds of some very intelligent and some very crafty
“scientists.” Both are involved in a race for success. The intellectual may
wish it, while the crafty scientist games it. This, as we move forward, will
become abundantly clear.
First, consider the explosion in medical literature.
From 1974 to 2014, the frequency of the words ‘innovative’, ‘groundbreaking’
and ‘novel’ in PubMed abstracts increased by 2500% or more. (1) The vibrancy of
the global scientific industry is evident when “more than 15,000,000 people are now authoring more than 25,000,000
scientific papers in a period that spans 15 years (1996–2011).” The
burgeoning material threatens to erode belief in all things considered
science. The pace has only quadrupled in the past 5 years as more
“peer-reviewed” journals have appeared, making it impossible for any article
to receive rigorous scrutiny. The journalistic bias toward print makes all such
studies (true or false, but mostly false) a magnet for Tantalus. (2)
One wonders how this could happen. The answer is simple: “if
your career depends on it,” anyone with a shred of intellectual curiosity would
answer. The majority of these tenure-seekers have a dim apprehension of the real
intellectual currents but a tight grasp on the statistical nuances that prop up
their version of reality, and such is the event horizon of our despair. Brischoux and Angelier in 2015 looked at the career stats of junior researchers hired
in evolutionary biology between 2005 and 2013. They found “persistent increases in the average number of publications at the time
of hiring: newly hired biologists now have almost twice as many publications as
they did 10 years ago (22 in 2013 versus 12.5 in 2005).” (3)
What is even more curious, and abundantly clear, is that most
new tenure-track positions are filled by graduates from a small number of elite
universities, typically those with very high publication rates. This may come as
a shock, but the truth is easy to see. The more elite your institution, the more
likely you are to get published in a peer-reviewed journal and the more impact you have.
You are cited more, and that makes the crazy merry-go-round of science spin at a dizzying pace as it feeds on itself.
In a similar vein, using quantitative metrics as a proxy to
assess social behavior opens the concept to exploitation and corruption.
Today, quantitative metrics are the hallmark of all qualitative measures and
can therefore be exploited to render the ether within them in solid form by any
sharp-eyed bureaucrat or “change creator.” And so much the better for the researcher who gains
fame at the cost of exploiting that ether. Evidence of this appears in the
psychological literature, where it is estimated that less than 1% of all
psychological research is ever replicated or duplicated! Imagine that for a
second. And there is more going on there than meets the eye. The main
reason is low-powered studies with small numbers of case-control
participants. It reeks of confirmation bias. The researcher, it appears, sets out
with a premise and proves it with an ideal set of limited variables. Case
closed. QED. In other cases, hypotheses are generated at random after the
data sets are in hand; a biased selection of the data easily generates the
implied logic of proof, leading to more post hoc hypothesis formation for future publications. (4)
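A short simulation, with entirely made-up numbers, shows why this low-powered regime is so corrosive. When samples are small, only exaggerated estimates can clear the p < 0.05 bar, so the “successful” studies overstate the true effect by construction:

```python
# A minimal sketch of the "winner's curse" in low-powered research: with
# tiny samples, the studies that do cross p < 0.05 overstate the true
# effect by construction. All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2      # small but real standardized effect
n_per_arm = 20         # small case-control arms -> low statistical power
n_studies = 5000

published = []  # effect estimates from "significant" studies only
for _ in range(n_studies):
    cases = rng.normal(true_effect, 1.0, n_per_arm)
    controls = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(cases, controls)
    if p < 0.05:
        published.append(cases.mean() - controls.mean())

print(f"power: {len(published) / n_studies:.0%}")  # roughly 10%
print(f"true effect 0.20 vs mean published effect {np.mean(published):.2f}")
# The literature assembled from these "wins" reports an effect several
# times larger than the truth, and most true effects go undetected.
```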
False discoveries in the literature also result from the
ongoing fraud of p-hacking (see http://fivethirtyeight.com/features/science-isnt-broken/).
More on that later, but in the same vein: if once-important
studies with high “impact factor” are retracted after their premise has
been used to guide other studies, one wonders at the proliferation of the tainted
content that fills the “evidence” in the scientific libraries. As Retraction Watch
states, “The biggest take-home: The number of retracted articles jumped from 500 in Fiscal Year 2014 to 684 in Fiscal Year 2015 — an increase of 37%. But in the same time period, the number of citations indexed for MEDLINE — about 806,000 — has only increased by 5%.” http://retractionwatch.com/2016/03/24/retractions-rise-to-nearly-700-in-fiscal-year-2015-and-psst-this-is-our-3000th-post/
The problem remains the “gaming” nature of this publishing
beast. Once it is set in motion, the authors with an axe to grind know the “how to,” and
all credibility is lost. Even those with the best of intentions and stringent
ethics get caught up with the statistical serpent. The funding organization may
pull strings to get the right results, muddying the waters even more. The
funders are consigning some, if not most, newly minted science researchers to the
purgatory of fake statistics.
And the wild success of this emptiness inflames the young researcher’s desires and inflates their egos, keeping the train of nonsense choo-chooing. Negative results are statistically manipulated to evoke weak signals through “p-hacking” techniques: selection bias in carefully codified variables that always yields the desired result. Everyone gets in the game, and that, then, becomes the norm. The analogy is evolutionary: just as the porcupine’s quills protect it from predators and guarantee survival, these tactics are adaptations to the environmental forces the modern-day scientist faces, especially in the publish-or-perish world.
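What p-hacking looks like mechanically can be sketched in a few lines. The simulation below (illustrative assumptions only) performs one common maneuver, optional stopping: test after every batch of subjects and stop the moment p dips below 0.05. There is no real effect, yet the false-positive rate climbs well above the nominal 5%:

```python
# A minimal sketch of one common p-hacking move, optional stopping: test
# after every batch of subjects and stop the moment p < 0.05. There is
# NO real effect here, yet the false-positive rate climbs well above
# the nominal 5%. Illustrative simulation only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments, batch, max_n = 2000, 10, 100

false_positives = 0
for _ in range(n_experiments):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(size=batch))  # both groups drawn from the
        b.extend(rng.normal(size=batch))  # same distribution: no effect
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:                      # "significant" -- stop, write it up
            false_positives += 1
            break

print(f"false-positive rate with peeking: {false_positives / n_experiments:.1%}")
# A single pre-planned test at n = 100 would hold the rate near 5%;
# peeking after every batch inflates it severalfold.
```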
Cohen states, “Statistical
power refers to the probability that a statistical test will correctly reject
the null hypothesis when it is false, given information about sample size,
effect size and likely rates of false positives.” Science, however, does not
have a mechanism that governs the reliability of the information it imparts.
There is no switch or buzzer that checks the brazen self-interest of the
individual scientist and guarantees an honest result. Science relies, and always will
rely, on the integrity of the scientist. Most times that integrity is tested
only through the rigor of others who read and seek to understand such results.
Vankov et al. suggest that statistical power in psychological science has
remained low to the present day (5, 6). The use of low statistical
power is widespread, with no difference between high-impact prestigious
journals and their low-impact counterparts. (7) What is even
more befuddling is that retractions seem to come more often from the “high impact”
journals! (8) Never has so much willful blindness meant so little in the
advancement of scientific literature. As Macleod et al. have stated (and this
should be disconcerting to all who hold a stake in the benefits of
science), 85% of research resources are wasted in biomedical
research. (9) And as the noise begins to reach deafening levels, it becomes
difficult to find the signal. “One can
decrease the rate of false positives by requiring stronger evidence to posit
the existence of an effect. However, doing so will also decrease the
power—because even true effects will sometimes generate weak or noisy
signals—unless effort is exerted to increase the size and quality of one’s
dataset. This follows readily from the logic of signal detection theory.”
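Cohen’s definition can be made concrete with a short calculation. The sketch below uses the standard normal-approximation formula for the power of a two-sided, two-sample test; it is an illustrative simplification, not any cited study’s analysis:

```python
# A minimal sketch of Cohen's definition: power as a function of effect
# size and sample size, via the standard normal approximation for a
# two-sided, two-sample test. Illustrative simplification only.
import numpy as np
from scipy.stats import norm

def approx_power(effect_size: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """P(reject H0 | H0 is false) under a normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * np.sqrt(n_per_arm / 2)
    return float(norm.cdf(noncentrality - z_crit))

for n in (10, 20, 50, 200):
    print(f"n = {n:3d} per arm, d = 0.3 -> power = {approx_power(0.3, n):.0%}")
# Only the largest design reliably detects even a modest true effect;
# the smaller ones sit squarely in the low-power regime described above.
```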
Ioannidis exposes more contradictions in highly cited
clinical research below: “Of 49 highly
cited original clinical research studies, 45 claimed that the intervention was
effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others
(16%) had found effects that were stronger than those of subsequent studies, 20
(44%) were replicated, and 11 (24%) remained largely unchallenged. Five of 6
highly-cited nonrandomized studies had been contradicted or had found stronger
effects vs 9 of 39 randomized controlled trials (P = .008). Among randomized
trials, studies with contradicted or stronger effects were smaller (P = .009)
than replicated or unchallenged studies although there was no statistically
significant difference in their early or overall citation impact. Matched
control studies did not have a significantly different share of refuted results
than highly cited studies, but they included more studies with
"negative" results” (10).
https://www.ncbi.nlm.nih.gov/pubmed/16014596
Ioannidis et al. write, “we
have evaluated by meta-analysis 370 studies addressing 36 genetic associations
for various outcomes of disease. We show that significant between-study
heterogeneity (diversity) is frequent, and that the results of the first study
correlate only modestly with subsequent research on the same association. The
first study often suggests a stronger genetic effect than is found by
subsequent studies. Both bias and genuine population diversity might explain
why early association studies tend to overestimate the disease protection or
predisposition conferred by a genetic polymorphism. We conclude that a
systematic meta-analytic approach may assist in estimating population-wide
effects of genetic risk factors in human disease” (11).
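The between-study heterogeneity Ioannidis invokes has a standard quantification: Cochran’s Q and the I² statistic. Here is a minimal sketch with invented effect estimates, chosen to mimic the pattern he describes (an outsized first study followed by smaller replications):

```python
# A minimal sketch of how between-study heterogeneity is quantified:
# Cochran's Q and the I^2 statistic over inverse-variance weights. The
# numbers below are invented, mimicking the pattern Ioannidis describes
# (an outsized first study, then smaller replications).
import numpy as np

effects = np.array([1.00, 0.35, 0.30, 0.20, 0.25])  # hypothetical log odds ratios
ses     = np.array([0.25, 0.15, 0.20, 0.10, 0.12])  # hypothetical standard errors

w = 1.0 / ses**2                           # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate

q = np.sum(w * (effects - pooled) ** 2)    # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q)                # share of variance beyond chance

print(f"pooled = {pooled:.2f}, Q = {q:.1f}, I^2 = {i2:.0%}")
# I^2 is substantial here, and the pooled estimate sits far below the
# first study's 1.00: the initial effect has shrunk on replication.
```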
It appears that Le Fanu might have been correct when he
professed that all Epidemiology departments should be closed. They should! As
Theodore Dalrymple states so eloquently, “for
what shall it profit an intellectual if he acknowledges the simple truth and
lose his Weltanschauung? Let millions suffer so long as he can retain his sense
of his own righteousness and moral superiority.” It is the perverted use of
statistical methodology in medical research (the far-reaching claws of
correlational fungibles) that has caused this rampant misuse and abuse,
and that threatens the very existence and true value of rigorous
medical science (12). This ongoing accrual of vast amounts of
disinformation is an abysmal bloom of the wanton ignorance pervading
scientific society. The virtue-signaling scions of conformity are pushing the rest of us into a dark place from which it may be difficult to return.
On exposing BS
I will leave you with the 2012 Amgen study, which revealed that only
6 of 53 (11%) landmark studies in biomedical research were reproducible. Imagine the
wishful thinking in the 47 others. The magnitude of this information is
staggering. (13) You needn’t look further than the latest headlines from Retraction Watch, where an estimated 107 papers were retracted from a SINGLE journal for peer-review fraud (see http://retractionwatch.com/2017/04/20/new-record-major-publisher-retracting-100-studies-cancer-journal-fake-peer-reviews/).
REFERENCES:
1. Vinkers CH, Tijdink JK, Otte WM. Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis. Br Med J. 2015;351:h6467.
2. Boyack KW, Klavans R, Sorensen AA, Ioannidis JP. A list of highly influential biomedical researchers, 1996–2011. Eur J Clin Invest. 2013;43:1339–1365.
3. Brischoux F, Angelier F. Academia’s never-ending selection for productivity. Scientometrics. 2015;103:333–336.
4. Makel MC, Plucker JA, Hegarty B. Replications in psychology research: how often do they really occur? Perspect Psychol Sci. 2012;7:537–542.
5. Cohen J. Statistical power analysis. Curr Dir Psychol Sci. 1992;1:98–101.
6. Vankov I, Bowers J, Munafò MR. On the persistence of low power in psychological science. Q J Exp Psychol. 2014;67:1037–1040.
7. Ioannidis JPA. Concentration of the most-cited papers in the scientific literature: analysis of journal ecosystems. PLoS ONE. 2006;1:e5. doi:10.1371/journal.pone.0000005.
8. Fang FC, Casadevall A. Retracted science and the retraction index. Infect Immun. 2011;79:3855–3859.
9. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383:101–104.
10. Ioannidis JPA. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294:218–228.
11. Ioannidis JPA, Ntzani EE, Trikalinos TA, Contopoulos-Ioannidis DG. Replication validity of genetic association studies. Nat Genet. 2001;29:306–309.
12. Le Fanu J. The Rise and Fall of Modern Medicine. New York: Little, Brown; 1999.
13. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012;483(7391):531–533. doi:10.1038/483531a.