Using the 1st and 2nd reports’ methods on the 1st report’s deductive ceiling

In the previous section we compared the 1st and 2nd reports’ methods on the 2nd report’s data. Now let’s apply the two methods to the 1st report’s “deductive ceiling,” that is, the maximum number of Top4 mathematically possible originally above the clinical detection point. Given that the majority of mutations are found in categories below probable and definite, and given the procedure of the very same authors’ 2nd report, 144 of the 184 remaining clinical FH would be false positives.[1] Using the deductive ceiling of 25 for the Top4 originally above the clinical detection point, let’s break down the 1st report. After giving the authors the best footing mathematically possible, the data according to both methodologies is as follows:

[Figure: The populations in the two reports are mostly different people, not the same people; they are swapped from one report to the other.]

Assuming the Ex-Top4 distribution among the clinical categories to be roughly similar to that of the Top4 and to other scientists’ results, we make the following deductions.[2] Given the 2nd report’s detailed breakdown, we see that the majority of mutation carriers were excluded from the 1st report because they were not molecularly targeted and were below the clinical cutoff. It was therefore impossible for them to be simultaneously included above the clinical detection point in the “FH” prevalence count of the 1st report: mutation carriers below the clinical cutoff cannot also be those above it. From one report to the other, the patient populations are swapped, but the “FH” label remains constant, creating the illusion that one FH total confirms the other. On the contrary, the latest prevalence number in the 2nd report does not prove the source[3] of the Authoritative prevalence count; it refutes it. It is deductively impossible for the 2nd report to correctly identify mutation carriers and for the 1st report not to be inflated with false positives.

  • This is an analysis based on a deductive limit. Under responsible statistical comparisons and estimates, the percentage of false positives would certainly be much higher.

Again, any objection one might have to juxtaposing these two procedures is precisely the point. These are the same two procedures used from one report to the other, on 60% of the same population. We must accept that the false positives, at a minimum, are 51% of the total claimed to be “FH” in the 1st report (144 ÷ 284 = 51%). Given that the 75 below the clinical cutoff are, by that fact, clinically undetectable (284 − 75 = 209), the minimum share of false positives in a real-world clinical setting is here estimated at 69% (144 ÷ 209 = 69%).
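The two percentages above can be verified with a short calculation. This is only a check of the arithmetic; the figures are taken from the text, and the variable names are mine:

```python
# Figures as given in the text for the 1st report.
false_positives = 144   # deduced minimum of false positives among the clinical "FH"
total_claimed = 284     # total claimed to be "FH" in the 1st report
below_cutoff = 75       # carriers below the clinical cutoff, hence clinically undetectable

# Minimum false-positive share of the total claimed "FH".
share_of_total = false_positives / total_claimed
print(f"{share_of_total:.0%}")  # 51%

# Share among those actually detectable in a clinical setting (284 - 75 = 209).
clinically_detectable = total_claimed - below_cutoff
share_of_detectable = false_positives / clinically_detectable
print(f"{share_of_detectable:.0%}")  # 69%
```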

50% to 70% false positives

[1] See page 64 for the adjustment of the 209 clinical results to 184. In brief, the 1st report’s clinical results among 69,016 are now adjusted to the same scale as the molecular results, 60,710.
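The 209-to-184 adjustment described in this footnote is a simple rescaling from the clinical denominator to the molecular one. A sketch, using only the numbers given in the footnote (variable names are mine):

```python
# Rescale the 1st report's clinical count to the molecular denominator.
clinical_fh = 209               # clinical results among 69,016 participants
clinical_denominator = 69_016   # population with clinical results
molecular_denominator = 60_710  # population with molecular results

adjusted = clinical_fh * molecular_denominator / clinical_denominator
print(round(adjusted))  # 184
```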

[2] For treatment of the distribution of the Top4 and Ex-Top4 within the clinical categories, see page 77: “Weakest link in my analysis is nonetheless very strong.”

[3] The source of the Authoritative report is supposed to be the 1st report.