Most arguments begin with the question “why?” No, not arguments as in shouting down someone’s ability to speak, breaking windows on someone’s property, or setting someone’s automobile on fire. Not those kinds. Here we are talking about the intellectual “why.” The one that requires reason and thinking, not the force of anger and vitriol.
I came across a “why” recently, and it had me in a vertigo, spinning in different directions. Without much ado, let me spell it out a bit.
Here is the portion of the article in question…
“Dr. Presley and colleagues used the Flatiron Health Database to identify patients with advanced NSCLC who received care at 191 oncology practices across the United States during 2011-2016. The 5,688 patients studied had stage IIIB, stage IV, or unresectable non-squamous NSCLC and received at least one line of treatment.
Overall, 15.4% received broad-based genomic sequencing of their tumor, while the rest received routine testing for EGFR and/or ALK alterations only, according to the results reported.
In the broadly tested group, merely 4.5% were given targeted treatment based on testing results. Another 9.8% received routine EGFR/ALK-targeted treatment, and 85.1% did not receive any targeted treatment.
The 12-month unadjusted mortality rate was 49.2% for patients undergoing broad testing, compared with 35.9% for patients undergoing routine testing.
In an instrumental variable analysis done to account for confounding, the 12-month predicted probability of death was 41.1% after broad testing and 44.4% after routine testing (P = .63).
Findings were similar in a propensity score–matched survival analysis (42.0% vs. 45.1%; hazard ratio, 0.92; P = .40), although there was some suggestion of a benefit of broad testing over routine testing in a Kaplan-Meier analysis among the entire unmatched cohort (HR, 0.69; P less than .001).”
And while one could easily glide through this information and come to some quasi-understanding of it, with a lurking intuitive prick in the temporal lobe of the brain, the question of “why” seems to wedge the door open for a wider discussion, one would think. And that is where we lay our scene…
The two most pressing arguments in these prediction-based paragraphs are:
1. Instrumental variable analysis
2. Propensity score–matched survival analysis
Big words that a statistician loves to go to sleep thinking about, and that the clinician or any other scientist is either tortured by or simply ignores and takes as a matter of fact.
Well then! What do we do about these two posers of great intellect and coruscating aura?
Perhaps dissect them to their innards. You know, the reductio ad absurdum, and see what cellular cilia are left behind that make these notions tick.
I’ll blindly take the second argument first in my coin toss. The term “propensity score–matched survival analysis” is pretty highfalutin when you get down to it. But as Albert Camus asked… “but what does it all mean…?”
Propensity score–matched survival analysis (a tool for causal inference in non-randomized studies) is, at its heart, a fine idea: take a cohort of patients in the control group and score each one from 0 to 1, and do the same for a comparable group of patients under the treatment arm. The score, the “propensity score,” is the estimated probability that a given patient receives the treatment given his or her measured characteristics, where 0 implies no chance of treatment and 1 implies certainty. You then place the distributions of these two groups on opposing sides of a horizontal line (top and bottom), as below…
Here comes an interesting and very clever part. If there is marginal or very little overlap, one can consider those people at either end of the spectrum (perhaps called outliers in other statistical methodologies) as being “trimmed off” from the discussion.
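To make the scoring-and-trimming idea concrete, here is a minimal sketch on entirely synthetic, made-up data (not the Flatiron data discussed above): a propensity score is fitted by a plain logistic regression, and patients falling outside the region where the treated and control score distributions overlap are “trimmed off.”

```python
import numpy as np

# Synthetic, hypothetical covariates and treatment assignment
# (illustration only; no real patient data).
rng = np.random.default_rng(0)
n = 1000
age = rng.normal(65, 10, n)
stage = rng.integers(3, 5, n).astype(float)  # stage III or IV

# Treatment depends on the covariates, as in any non-randomized study.
true_logit = -8 + 0.1 * age + 0.5 * stage
treated = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Fit a simple logistic regression by gradient descent on
# standardized features (no external ML library needed).
X = np.column_stack([np.ones(n),
                     (age - age.mean()) / age.std(),
                     (stage - stage.mean()) / stage.std()])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - treated) / n  # mean log-loss gradient step

ps = 1 / (1 + np.exp(-(X @ w)))  # estimated propensity scores in (0, 1)

# "Trim" units outside the common support: keep only scores inside
# the overlap of the treated and control distributions.
lo = max(ps[treated].min(), ps[~treated].min())
hi = min(ps[treated].max(), ps[~treated].max())
kept = (ps >= lo) & (ps <= hi)
print(f"kept {kept.sum()} of {n} patients inside the common support")
```

The trimming rule here (drop anyone outside the overlap of the two score ranges) is one common convention; as the text notes, exactly where to cut is a choice made by the analyst, not by the data.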
In other words, we (the smart intellectuals) will learn nothing from them, because, well, they did not “match.” OK, so far so good! Did I? Hmm, I meant to say: so far so good? You see the dilemma that unfolds in a simple mind. Like (as millennials love to say), what the hell?
But continuing on our journey, moving further down this clearly tortuous rabbit hole: after “trimming” the excess (fat, or unmatched cohorts) from the matched groups, we then measure the standard deviation (conventionally, of the logit of the propensity score). Here comes the next leap of faith, based solely on the enterprising, decision-making, value-based interpretation of the statistician par excellence. Using a “caliper,” one multiplies that standard deviation by an arbitrary number (0.2, or 20%, is the most used) to set the maximum allowed distance between matched pairs; but it can be changed to suit the motive of the designer, to get the appropriate result and make the experiment a wild success. Lowering it to 0.15 (or 15%) tightens that threshold, while raising it to 0.25 (or 25%) loosens it, and with it the threshold of the “Aha” moment. So the propensity of this “controlled” experiment is decided from the comfortable cushioned chair of an expert, as you can see.
Caliper? No, not the machinist’s measuring tool; the statistical one!
“But, but…” cries common sense. “Shut up” cries the sound from the ivory tower of excellence, “this is above your pay grade. Just shut up.”
Aww…
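The caliper arithmetic described above can be sketched in a few lines, again on made-up numbers: compute 0.2 times the standard deviation of the logit of the propensity scores, then match each treated patient to the nearest available control, discarding any pair farther apart than the caliper. Change the 0.2 and the set of matches changes with it, which is exactly the discretion being complained about here.

```python
import numpy as np

# Hypothetical propensity scores for illustration only.
rng = np.random.default_rng(1)
ps_treated = rng.beta(3, 2, 50)    # 50 treated patients
ps_control = rng.beta(2, 3, 200)   # 200 control patients

def logit(p):
    return np.log(p / (1 - p))

# Conventional caliper: 0.2 x SD of the logit of the propensity score.
all_logits = logit(np.concatenate([ps_treated, ps_control]))
caliper = 0.2 * all_logits.std()   # swap in 0.15 or 0.25 and watch it move

# Greedy nearest-neighbor matching without replacement, on the logit scale.
matches = []
available = list(range(len(ps_control)))
for i, pt in enumerate(ps_treated):
    j = min(available, key=lambda k: abs(logit(ps_control[k]) - logit(pt)))
    if abs(logit(ps_control[j]) - logit(pt)) <= caliper:
        matches.append((i, j))
        available.remove(j)

print(f"caliper = {caliper:.3f}; "
      f"matched {len(matches)} of {len(ps_treated)} treated patients")
```

Greedy matching without replacement is only one of several matching schemes; that choice, too, sits in the analyst’s cushioned chair.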
Let me not forget that other piece of formidable intelligence, folded like a ball of dough ready to make flatbread; the only thing needed is some heat. Our #1 statistical analysis, “instrumental variable analysis,” has to be dealt with, with a gentle but firm hand. The term appears very “scientific and seems an attractive method to control for unmeasured confounders in observational epidemiological studies.” (Wikipedia) You get the bold words in there, don’t you? It is a statistician’s dream to solve the riddle of non-compliance and find some causal inference between a treatment and its outcome where other unmeasured confounding variables might be at play. The attempt is to limit the unmeasured confounding variables. But how? Failing to do so violates the ignorability assumption and thus biases the causal effect estimates. But never mind that, we are on a quest to prove. And by golly we will prove!
Here the argument is to say that if Z influences X (the treatment), and Z changes Y (the outcome) only through X, then we can use Z to measure the effect of the treatment on Y. Simple enough. So it makes a causal inference about the intent-to-treat effect based solely on the effect of a single variable, that is, a single variable effecting change on the dependent category. The simplified formula would look like this: E[Y | Z = 1] - E[Y | Z = 0], where the presence or absence of Z determines the effect we attribute, through Z, to Y. But then we introduce another unmeasured variable, U, that affects both X and Y. To draw any causal inference between Z and Y, or even X, with U lurking around requires a pseudo-randomization between quasi-equivalent parameters, not the usual Mendelian randomization required to answer the effect of a treatment on an outcome. In other words, draw from a larger group of unmatched patients and fit them into the current “experiment” to create the appropriate graphs. As one’s sense gets fogged in and bogged down in this analytical enterprise, one realizes the true nature of current science, which seems based more on slippery mathematics and thought experiments through manipulation than on real experiments: Mendelian randomization vs. pseudo- or quasi-randomization of ‘disparate’ groups; or, if I can be permitted to rake the term ‘a priori’ over the coals, pulling the reluctant Bayes into this whole mishmash. By manipulating numbers to find causal links between treatment and disease rather than the real cause itself, aren’t we just creating false but intentional heuristics for future harm to society? No matter how extensive a list of variables (risks) it is, we simply cannot completely fathom the effect of a treatment and its related outcome using a simplified, contextually flawed argument that rests on only the one or two glaringly obvious risk variables. Can we?
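For what it is worth, the mechanics can be simulated in a few lines. The sketch below uses invented numbers and a deliberately planted unmeasured confounder U: the naive treated-vs-untreated comparison is biased by U, while the classic Wald estimator, the intent-to-treat difference E[Y | Z = 1] - E[Y | Z = 0] divided by the corresponding difference in treatment uptake, recovers the planted effect, but only because the simulation grants Z every assumption the method demands.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

Z = rng.integers(0, 2, n)            # instrument, randomized by construction
U = rng.normal(0, 1, n)              # unmeasured confounder
# Treatment uptake depends on both the instrument and the confounder.
X = (0.5 * Z + 0.5 * U + rng.normal(0, 1, n) > 0.5).astype(float)
# Outcome: the true treatment effect is 2.0, but U also moves Y.
Y = 2.0 * X + 1.5 * U + rng.normal(0, 1, n)

# Naive comparison of treated vs. untreated: contaminated by U.
naive = Y[X == 1].mean() - Y[X == 0].mean()

# Wald / IV estimator: ITT effect scaled by the uptake difference.
itt = Y[Z == 1].mean() - Y[Z == 0].mean()        # E[Y|Z=1] - E[Y|Z=0]
uptake = X[Z == 1].mean() - X[Z == 0].mean()     # E[X|Z=1] - E[X|Z=0]
wald = itt / uptake

print(f"naive difference: {naive:.2f} (biased upward by U)")
print(f"IV (Wald) estimate: {wald:.2f} (near the planted 2.0)")
```

The catch, of course, is the one made in the text: in the simulation we know Z is independent of U and touches Y only through X; in an observational oncology dataset, those are articles of faith.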
A simplified case in point would be ascertaining that gene X mutation causes disease X. Not knowing gene X’s modulation by other genes, epigenetic factors like RNA and exons, and the presence or absence of tumor suppressor genes, tumor promoter genes, and the EMT transitions in the milieu, one simply cannot answer that Treatment X will have a 100% Outcome Y. What it will tell us is that, given all the other variables influencing the outcome, the probability is Y% (and even that is dubious at best). And that is what patients and physicians need to know. Unfortunately, given the tools mentioned above, with their solid foundation of statistical tweaking, we cannot with certainty even say that. The “Y%” is a largely muddied concept of false premises and false promises. No wonder the big and upcoming poser, the Big Bluish IBM Watson, failed to render the right treatment and caused potential harm in posing as an Oncologist.
https://www.engadget.com/2018/07/27/ibm-watson-for-oncology-unsafe-treatment-plans-report
Ah, my dear inanimate blue-colored friend, as Shakespeare might say, “There are more things in heaven and earth than are in your database.”
Remember the yesteryears, when Jenner used cowpox against smallpox in 1796, and Fleming discovered lysozyme in 1923 and then, in 1928, penicillin. Or, for that matter, when Marie Curie (Skłodowska) discovered polonium and radium in 1898, after Henri Becquerel’s discovery of radioactivity in 1896. Today we make quasi-discoveries by manipulating numbers and using soft correlations to get our p-values under 0.05, to win a lectureship or tenure or a directorship and continued employment at a Fortune 500 company. No wonder
Amgen failed to replicate 89% of the 53 landmark studies. Only 6 could be considered "validated."
https://thenextregeneration.wordpress.com/2013/10/26/the-replicability-crisis-in-cancer-research/
Remember that incidental, irrelevant concepts breed perceptions that affect our judgment and behavior. A few of these misadventures are honest, but mostly they are intentional, and with the verbal force behind them, they are swift!
Representativeness judgments can influence estimates of probability, while representativeness heuristics can play all kinds of havoc with a limitless number of judgments. Imagine the Azande people of Central Africa believing that the burnt skull of the red bush monkey was an effective therapy for epilepsy, given the jerky and frenetic movements of the monkey itself. Or, for that matter, bloodletting as a means to cure pneumonia.
It is important to remember that concepts begotten of such representativeness heuristics oscillate in elegant and opposing waves, and only time gives each side the temporary podium to win over the society of humans. Consanguinity of thought, begotten from an intentionally biased point of view, leads to monstrous results; ask the Habsburgs, with their collapsing empire, their jutting jaws, and the short lives begotten of their intermixed (consanguineous) DNA.
Chasing the pool of money with such manipulative fervor is harming us all. We need a better handle on reality rather than existing in the virtual world of “IF THIS, THEN THAT.”