The 2nd report serves as a “Rosetta Stone” for understanding the 1st report

The 2nd report attempts to confirm the prevalence reported in the 1st report. However, as my original analysis will demonstrate, it uses a different criterion to do so (molecular rather than clinical), and thus, after mathematical reconciliation, it deductively proves that the 1st report was inflated with false positives.

The majority of mutation carriers are below the cutoff point.

The authors calculated the total number of mutation carriers as 174 ÷ .387 = 450. Dividing the total screened population of 98,000 by 450 gives a prevalence of 1:217.78.[1] The 2nd report also provides the number of patients assigned each clinical score. For example, out of 98,000 individuals screened, 316 were assigned to the “Probable” category.[2] These are “above”[3] the clinical cutoff point. The report also includes the number of patients within each category who were found through genotyping to carry one of the four most prevalent mutations (“Top4”). So of the 316 assigned as “Probable FH,” 19 had one of the Top4 and 297 did not. Following the procedure used in the 2nd report, in which the Top4 are said to make up 38.7% of the total mutation spectrum, we would estimate 30 remaining mutation carriers for the Probable category: 19 Top4 ÷ .387 = 49 total; 49 total – 19 Top4 = 30. I refer to these 30 as “Ex-Top4.” Thus, 316 – 19 – 30 = 267 non-mutation carriers in the Probable category. Applying this procedure throughout, I have mathematically converted the results as follows.
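As a minimal sketch of this conversion procedure (my own reconstruction, not code from either report), the per-category arithmetic can be written out as follows; the 316 patients, 19 Top4 carriers, and 38.7% Top4 share are the figures cited above.

```python
# Sketch of the per-category conversion described above (my reconstruction,
# not code from either report). Inputs: the patients assigned to a clinical
# category and the Top4 carriers genotyped among them; the Top4 are taken to
# be 38.7% of the total mutation spectrum, as in the 2nd report.

TOP4_SHARE = 0.387

def convert_category(patients, top4_carriers, share=TOP4_SHARE):
    """Estimate total carriers, Ex-Top4 carriers, and non-carriers."""
    total_carriers = round(top4_carriers / share)   # 19 / .387 ≈ 49
    ex_top4 = total_carriers - top4_carriers        # 49 - 19 = 30
    non_carriers = patients - total_carriers        # 316 - 49 = 267
    return total_carriers, ex_top4, non_carriers

print(convert_category(316, 19))  # Probable category -> (49, 30, 267)
```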

Mathematical conversion of 2nd report results.

Significantly, there were 341 patients in the Probable and Definite categories combined (table above), and I calculate a maximum of 65 total mutation carriers among them (25 Top4[4] ÷ .387 = 65).[5] This leaves 276 false positives among the 341 patients determined clinically to be “FH.”
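The same arithmetic, applied to the combined Probable and Definite categories, is shown in this brief sketch (again my own check, using the 38.7% Top4 share):

```python
# Combined "Probable" + "Definite" categories: 341 patients, 25 Top4 carriers
# (19 probable + 6 definite; see footnote 4). Same 38.7% Top4 share as above.
patients, top4 = 341, 25
total_carriers = round(top4 / 0.387)          # 25 / .387 ≈ 65
false_positives = patients - total_carriers   # 341 - 65 = 276
print(total_carriers, false_positives)        # -> 65 276
```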

The basis of the 2nd report was molecular analysis, so this high false positive rate for the clinical scores is not an issue for the 2nd report itself. However, we are going to reconcile these data with the 1st report and its Corrigendum. Since the 1st and 2nd reports share 60% of the same population, and since the results in the 1st report are presented as clinical, the bulk of the clinical false positives in the 2nd report would make up a large part of the “FH” patients identified in the 1st report.

A proportion of false positives in this 2nd report would show up in the 1st report as true positives.

Mathematical conversion of the 2nd report demonstrated in a contingency table

[1] The report used a prevalence of 1:217. I could not duplicate this number without breaking the rules that usually apply when rounding numbers. It appears that the authors used a population of 98,098 in the “Results,” which would require rounding down from a prevalence of 1:217.996. For more on population discrepancies, click here.
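For reference, here is my own quick check of the two candidate divisions (450 carriers against each reported population):

```python
# My own check of the two candidate populations against the 450 carriers.
print(98000 / 450)  # 217.777... -> rounds to 1:218 (stated above as 1:217.78)
print(98098 / 450)  # 217.995... -> also rounds to 1:218; 1:217 requires rounding down
```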

[2] Click here for a screenshot.

[3] As in the tables above, the report itself listed the higher-scoring DLCN categories in the lower rows and the lower-scoring categories in the upper rows. For easy comparison, I will keep that orientation in my tables. However, when I write “above the cutoff,” I mean the higher-scoring “Probable & Definite” categories in the lower rows, not the orientation and order of the table rows themselves. Conversely, when I say “below the cutoff” or “below the detection point,” I mean the lower-scoring “Unlikely & Possible” categories.

[4] 19 probable + 6 definite = 25.

[5] Again, we will deal later with the issue of the distribution of the Ex-Top4 among the clinically assigned categories: click here.