Observations

Contingency table exposes the motive & 78% false positives

We see the reasons why the molecular results in the 1st report were not revealed alongside their original clinical scores. In a real-world clinical situation, the 1st report's ADH population would be made up of 78% false positives (the point estimate, without margin for error, is 80.5%). In the illustration below, we see our breakdown of the clinical results above and below the detection point, further separated into those determined to be either mutation carriers or non-carriers. Simply put, big pharma cherry-picks[1] authors and funds their Authoritative report. This is tantamount to promoting off-label use, but the two supporting reports must be reconciled to see this. Big pharma has engineered a social condition, the outcome of which will be off-label prescriptions and a gross exaggeration of the addressable market.
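
To make the arithmetic concrete, here is a minimal sketch of the kind of contingency table described above. The counts are purely hypothetical, chosen only so that the false-positive share lands near the ~80% figure cited in the text; they are not the reports' actual data.

```python
# Hypothetical 2x2 contingency table: clinical call (above/below the detection
# point) cross-tabulated against the molecular result (carrier / non-carrier).
# These counts are illustrative only, not the reports' data.
above_cutoff_carriers    = 78      # flagged by the clinic, mutation confirmed (true positives)
above_cutoff_noncarriers = 322     # flagged by the clinic, no mutation found (false positives)
below_cutoff_carriers    = 172     # missed by the clinic, mutation confirmed (false negatives)
below_cutoff_noncarriers = 59_428  # missed by the clinic, no mutation found (true negatives)

flagged = above_cutoff_carriers + above_cutoff_noncarriers
false_positive_share = above_cutoff_noncarriers / flagged

print(f"Clinically flagged: {flagged}")
print(f"Share of the flagged pool without a mutation: {false_positive_share:.1%}")  # 80.5%
```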


Smoothly off-label: the trick in the 1st report is the basis for the Authoritative report

The 1st report was designed to take advantage of the overlap between phenotype and genotype – between those who clinically look like they might have the disease and those with a confirmed mutation. The authors went one step further. They took all of the clinical hits and then, after neglecting to use the molecular results to gauge accuracy, simply added in the molecular hits. They get the best of both worlds.


The authors opted to hide this breakdown of their molecular hits under a crude total, creating the illusion of a target-rich patient pool. But how does one hide the original molecular breakdown into clinical categories … after molecularly screening 60,000 people?

Epistemological demotion: rename the genotyped to “Phenotype.” Strikingly, for the 1st report, molecular results were gathered from about 60,000 Copenhagen residents. Despite this large emphasis on molecular detection, the study nonetheless focused the academic results on the clinical: the genotyped are actually demoted epistemologically and lumped together with the phenotyped. To use a legal analogy, this would be like putting in the work and finding forensic evidence but then reclassifying it and presenting it to the jury and judge as “even more circumstantial evidence than previously thought.” This would only make sense if one either did not understand the superiority of forensic evidence or understood it and therefore found no profit in the forensic conclusion.

Of course, accepting all mutation carriers below the clinical detection point makes sense. One wants to be sure that all ADH are accounted for. The clinic would retrieve false negatives without pulling in more false positives. But then again, this epistemological demotion does not make sense. In a professional prevalence study, such as the reports we are currently analyzing, it makes more sense to use the molecular to gauge the accuracy of the clinical. We want to know how many false positives there are. How accurate are we? What we have instead is a preference for inaccuracy, because the genotypic would expose the false positives and the phenotypic would conceal them. Now we can append the 100% accurate molecular to the inaccurate clinical. We dodge the standard discipline of putting our results into a contingency table. Inclusion criteria increase profit, and exclusion criteria decrease profit, because the former increase errors and the latter increase accuracy. In an academic study, who would do this and why? Who is interested in blending the phenotypic in with the genotypic to arrive at a larger number, by lowering accuracy? Here is the previous illustration with some quotes from the Authoritative report. It follows, in text, the method of its source for the results: the 1st report.
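
A short sketch, again with hypothetical numbers, of the contrast this paragraph draws: blending the molecular hits into the clinical pool produces a larger headline number, while the contingency-table discipline would instead use the molecular result to grade the clinical call. The sets and counts below are illustrative assumptions only.

```python
# Hypothetical populations (sets stand in for screened individuals).
clinical_positives = set(range(0, 400))     # scored above the clinical detection point
mutation_carriers  = set(range(322, 572))   # genetically confirmed carriers; 78 overlap with the clinical pool

# (a) The blended approach: simply add the molecular hits to the clinical pool.
blended_pool = clinical_positives | mutation_carriers
print("Blended patient pool:", len(blended_pool))           # 572 -- a larger headline number

# (b) The contingency-table discipline: use the molecular result to grade the clinical call.
true_positives  = clinical_positives & mutation_carriers
false_positives = clinical_positives - mutation_carriers
ppv = len(true_positives) / len(clinical_positives)
print(f"Clinical accuracy (PPV): {ppv:.1%}")                 # 19.5% -- i.e. 80.5% of the clinical calls are errors
print("False positives exposed:", len(false_positives))      # 322
```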


Gold Standard: What is missing from the 1st and Authoritative reports is a concern for false positives … a crosscheck of the clinical procedure with the gold standard: molecular testing.

 “The gold standard of diagnosis is the identification of the underlying genetic defect, which is possible in 80% of cases and enables the identification of affected relatives of the index patient.”[2]

Not so for the industry. The industry's gold standard is financial gain, not diagnostic accuracy. It is more profitable to discourage the molecular and emphasize the clinical. Those who look like mutation carriers will significantly outnumber actual mutation carriers. In the 1st and Authoritative reports, “FH” patients are simply the default left over once the molecular approach has been discouraged. This error-harvest will make up for the inaccessibility and irrelevance of many of the actual mutation carriers. To the degree that the medical community can be persuaded to regard this clinical approach as the “gold standard,” the clinical setting effectively renames these errors “True Positives.” This is significant, in no small part because it is so subtle. Here’s how the industry can change the indication without FDA approval: shift from the more accurate genotype to the less accurate phenotype. Any legitimacy given to lowering standards will effectively rename “off-label” to “on-label.”

Below: “New EAS Consensus Statement on FH: Improving the care of FH patients,” referring to the Authoritative report. [3] (Emphasis mine)


Here is Raul Santos, recipient of pharma “honoraria” and also one of the authors on the Authoritative report. (Emphasis mine)

Also, one need [sic] to consider that what mainly causes ASCVD in FH is the severely elevated LDL-C and not the genetic defects. Therefore, the phenotype is more important in the clinician’s point of view to identify and treat FH. This is clearly seen here in the study of Raal et al where there was variability on the HoFH phenotype.

….

To conclude HoFH is still a devastating disease with a huge burden of ASCVD and early mortality risk, however it is more frequent and heterogeneous than we once thought. Recognizing this heterogeneity and the overlap with HeFH is important for clinical management. Phenotype and not the genotype should be the physicians’ main concern, and those with most severe phenotype have to be more aggressively treated. Unfortunately FH as a whole is still underdiagnosed and undertreated and many opportunities to save lives are being lost.

Acknowledgments: RDS has received honoraria for consulting/speaker activities from: Astra Zeneca, Amgen, Aegerion, Akcea, Biolab, Boeringher-Ingelheim, Cerenis, Genzyme, Kowa, Pfizer, Sanofi/Regeneron, Unilever and Torrent.[4]

Ironically, the “FH” population is indeed “underdiagnosed.” Here is a different perspective, from Fahed and Nemer. In sum, clinical screening is not an efficient way to find mutation carriers.

“FH is a disease that shows great phenotypic variability.” … “Due to the paucity of data on genotype phenotype correlations, clinical diagnosis will miss a large percentage of FH patients. It is currently estimated that only 15 to 20% of patients with FH are actually diagnosed. A study on 643 Danish probands could not even find a single phenotypic characteristic to predict the existence of a mutation.” [5]


Problem: So “FH” is indeed underdiagnosed. There will be underdiagnoses … because clinical testing is not going to find the majority of mutation carriers, and this is because most mutation carriers are below the clinical detection point. The authors’ own 2nd report shows this. And the predominant clinical solution recommended in the Authoritative report is the cause of molecular underdiagnoses.

Molecular screening doesn’t work for the industry because finding all of these mutation carriers is difficult and unprofitable:

  • Because the majority of mutation carriers score below the clinical detection point, real-world clinics are not going to flag most of them, and thus entire populations would need to be screened for this group to become medically relevant. This is not a profitable strategy.
  • And even among those above the clinical detection point, a significant portion will not be found to carry a mutation. Tightening the standards improves the hit rate as a percentage, but of course this requires accepting a smaller pool of candidates (see the sketch after this list). This avenue is simply not profitable.
  • However, even if molecular screening were used, many mutation carriers would still be below the cutoff point. Studies show that many have compensating genes, sufficiently good environmental factors, or one of the milder mutations – and consequently, many in this false negative category may not require expensive, specialized medication. This is not profitable.
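
Below is a small simulation of the trade-off named in the second bullet. The score distributions, cutoffs, and population sizes are invented for illustration; the only point is the shape of the trade-off: a stricter clinical cutoff raises the hit rate but shrinks the candidate pool.

```python
import random

random.seed(0)

# Invented LDL-C-like scores: carriers trend higher, but the distributions overlap.
carriers     = [random.gauss(7.5, 1.5) for _ in range(250)]
non_carriers = [random.gauss(5.0, 1.0) for _ in range(59_750)]

for cutoff in (6.0, 7.0, 8.0):
    flagged_carriers = sum(score >= cutoff for score in carriers)
    flagged_total    = flagged_carriers + sum(score >= cutoff for score in non_carriers)
    hit_rate = flagged_carriers / flagged_total
    print(f"cutoff {cutoff}: flagged {flagged_total:>6}, hit rate {hit_rate:.1%}")

# Raising the cutoff improves the percentage of true carriers among the flagged,
# but only by shrinking the pool of candidates -- the trade-off described above.
```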

It’s a big problem when medical relevance is financially irrelevant … and that mismatch is a powerful force on behavior: tapping a mere clinical procedure will inflate the patient pool with quick, easy-to-acquire false positives. The industry can then simply add in the molecular results, instead of using them to gauge accuracy.


Solution: It is more profitable to steer the medical and investing communities away from genetic testing than toward it … while also trying to convince everyone that environmental solutions do not work well with such a genetically inherited disease. A molecular danger scares up a clinical profit. Tough sell. But that’s what’s happening. Treating the false positives with FH drugs is easy if you convince the medical community to stop short of molecular confirmation. The clinical procedure finds many more of those suffering from environmental factors than it finds carriers of LDLR mutations. Pharma admits that there are too many false negatives, but then uses that admission to neglect most false negatives and thus secure more false positives. It is financially rational; humanly insane.

This neglect of the true mutation carriers below clinical detection is deliberately engineered by the industry. The Authoritative report is a large factor in this effort. Molecular testing is just too expensive and reveals the majority of carriers to be below the detection point. The solution? Broadcast the fact that patients are indeed overlooked in order to elicit sympathy toward securing more clinically determined patients, thus making up for the deficit of on-target patients by prescribing the drugs to larger numbers of off-target patients. In the end, money twists health values into a logic-pretzel — like sending flood relief to Bakersfield because New Orleans is inconveniently underwater, and then validating this with the fact that the two cities’ populations just happen to be roughly the same. 

Deceptive “comparison” of populations: The irony is that the mutation prevalence rate in the 2nd report is very close to the 1st report’s clinical prevalence.  So why not emphasize the more forensic of the two approaches? … the molecular?  Why use the molecular to promote the clinical?

Besides the reasons already discussed, there is another reason for the molecular emphasis in the 2nd report. If the totals of the prevalence counts in the two reports are seen as “comparable,” then 2 mice over here can confirm that there are 2 elephants over there. This will not be so conspicuous if the average reader is unfamiliar with the terms and definitions in question: two distinct constituents of a total will be given extra cognitive room within which to be “compared.” With the clinical process, a tomato at a distance of 20 yards can look like a red apple, and I can then write down on paper, “It looks like there are 2 apples over there.” That statement is all that the reader sees. And I can later subject a red apple and a green apple to the more forensic taste test and then write that I once again have a total of “2”; and now, because this total “2” is truly comparable to the previous total “2,” that tomato is an apple, on paper. Of course, reconciling the forensic approach with the weaker report would render this deception conspicuous. So we do want the forensic report, but only insofar as it exists to recommend the weaker report and no more.
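
The arithmetic of that analogy can be made explicit. The figures below are hypothetical stand-ins, not the reports’ numbers; they only show that two prevalence totals can be nearly identical while the people behind them are largely different.

```python
# Hypothetical stand-ins for the two headline prevalence figures.
screened = 60_000
clinical_total  = 400   # phenotypic 'FH' count from a clinical cutoff (1st-report style)
molecular_total = 398   # mutation-carrier count from genotyping (2nd-report style)
shared          = 78    # individuals who would appear in both groups

print(f"Clinical prevalence:  {clinical_total / screened:.2%}")   # 0.67%
print(f"Molecular prevalence: {molecular_total / screened:.2%}")  # 0.66% -- 'comparable' totals
print(f"Yet only {shared / clinical_total:.0%} of the clinical group are carriers.")
# The near-match of the totals says nothing about whether they count the same people.
```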

Scienter: The proximity of the totals of the two reports can appear, from one perspective, to be only a coincidence.  But then again, this falsehood has to be deliberate. The authors are experts in their field, they are not unsophisticated, and they knew that the majority of mutation carriers originally scored below the clinical detection point. The authors had this same raw data in the 1st report – since it makes up almost 60% of the population of the 2nd report.  They could not have completed either report without this knowledge. From that perspective, the similarity in the prevalence numbers is not just a coincidence; it plays a necessary role in the deception.


[1] On “Cherry-picking” scientists, see page 113.

[2] Familial Hypercholesterolemia: Developments in Diagnosis and Treatment; Gerald Klose, Ulrich Laufs, Winfried März, Eberhard Windler; 2014

[3] http://www.sciencedirect.com/science/article/pii/S002191501300511X

[4] Homozygous Familial Hypercholesterolemia: phenotype rules! Commentary on the study of Raal et al. Raul D. Santos Lipid Clinic Heart Institute (InCor) University of Sao Paulo Medical School Hospital and Preventive Medicine Centre and Cardiology Program Hospital Israelita Albert Einstein, Sao Paulo, Brazil. Atherosclerosis. 2016 May; 248:252-4. doi: 10.1016/j.atherosclerosis.2016.03.015

[5] Familial Hypercholesterolemia: The Lipids or the Genes? Akl C Fahed and Georges M Nemer; 2011