How the authors got the 1:223 in the 1st report, a Corrigendum for the Corrigendum:
The authors blended molecular and clinical results and derived a prevalence rate against a population of 69,016, concluding 1:223. Here we will tease the two constituents apart and then compare each on the same scale. In the process, we will also adjust a key number by 12%.
In review: the 1st report’s prevalence was originally 1:137. To reach it, the authors lowered the clinical cutoff point into the third lower category, off-text, which allowed them to blend in a less accurate count. In short, off-text the clinical cutoff actually used was 5, while on-text they printed “6.” (Click here.) That off-text cutoff may have been called into question, since the authors later issued a corrigendum and an apology. To arrive at their new number, 1:223, they removed this slice from the third lower category, yet they still blended in the second lower category.
But there is a little more here. Even the 1:223 involves another form of blending. One key aspect of the 1st report is the mixing of the molecular results in with the clinical results. The reports did not present the molecular and the clinical as distinct tests, where one is a gold standard measuring the efficiency of the other. Rather, the results from one type of test were simply added to the results from the other. And so, after promoting (or is it demoting?) the molecular hits to passing clinical scores, they could divide 69,016, the total population of the clinical test, by their combined 309 “clinical” results. That yields the 1:223 found in the Corrigendum. But if we responsibly separate the two tests, calculate the results separately, account for the different population sizes, and then bring them back together on the same scale, what will we see?
We have two distinct diagnostic procedures, each with its own distinct population. There were 69,016 in the population who were clinically scored; of these, after excluding the Top4 molecular results, 209 were found to be “definite” or “probable.” That’s a rate of 1:330.2. But there were also 100 Top4 molecular hits, not all among these 69,016, but among the genotyped population of 60,710. That’s a separate rate of 1:607.1. Let’s combine these two into one procedure, as the authors did, but this time we will concern ourselves with the population size of each distinct test. We want to bring both tests alongside each other, at the same scale.
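The two separate rates, and the authors’ blended 1:223 for comparison, can be checked with a few lines of arithmetic. This is a minimal sketch in Python, using only the figures quoted above; the variable names are mine.

```python
# Figures quoted in the text (1st report / Corrigendum).
clinical_pop = 69_016    # clinically scored population
molecular_pop = 60_710   # genotyped population
clinical_hits = 209      # "definite"/"probable" scores, Top4 excluded
top4_hits = 100          # Top4 molecular hits

# The authors' blended figure: 69,016 divided by all 309 combined results.
blended = clinical_pop / (clinical_hits + top4_hits)   # ~223.4, i.e. 1:223

# The two tests kept separate, each against its own population.
clinical_rate = clinical_pop / clinical_hits    # ~330.2, i.e. 1:330.2
molecular_rate = molecular_pop / top4_hits      # 607.1, i.e. 1:607.1
```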
One way to do that is to convert both results to a per-million rate. The Top4 molecular hits come to 1,647.2 per million and the clinically passing scores (excluding the Top4) to 3,028.3 per million. Adding the two gives 4,675.5, and dividing 1 million by that 4,675.5 gives a prevalence of 1:213.88.
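The per-million conversion can be sketched the same way (Python, figures from the text; variable names are mine):

```python
# Convert each test's result to a per-million rate, add, and invert.
clinical_pm = 209 / 69_016 * 1_000_000   # ~3,028.3 per million
top4_pm = 100 / 60_710 * 1_000_000       # ~1,647.2 per million
combined_pm = clinical_pm + top4_pm      # ~4,675.5 per million
prevalence = 1_000_000 / combined_pm     # ~213.9, i.e. roughly 1:214
```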
Another way to bring the two tests onto the same scale is to gauge the proportional difference between the molecular and clinical test populations, then apply that difference to bring one of the two populations, and its FH determinations, onto the scale of the other. Since the molecular tests are the gold standard, and since that number is key to the deductive analysis I present, I will leave it alone and scale the clinical side down instead. The 60,710 molecular tests are 87.965% of the 69,016 clinical tests, so we reduce the 209 by the same proportion, giving 183.85 results out of 60,710, alongside the 100 Top4 molecular hits out of 60,710. Now both tests are set against the same base, 60,710, and dividing by the combined 284 results (the 184 clinical, equalized, plus the 100 molecular) we arrive at 1:213.77.
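The equal-base method can likewise be verified in a few lines (Python, figures from the text; variable names are mine):

```python
# Scale the clinical results down to the molecular base of 60,710.
molecular_pop = 60_710
clinical_pop = 69_016
scale = molecular_pop / clinical_pop    # ~0.87965, i.e. 87.965%
clinical_scaled = round(209 * scale)    # 183.85, rounded to 184
combined = clinical_scaled + 100        # 284 results against one base
prevalence = molecular_pop / combined   # ~213.77, i.e. 1:213.77
```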
Accordingly, in the following pages we will downsize the 69,016 population and the 209 clinical results by 12%, thus “equalizing” the two constituents of the 1st report’s results. Clinical “definite” and “probable,” to scale, becomes 184 instead of 209. This works in the authors’ favor.
Opportunity for deduction: The same core population is carried over to the 2nd report
“This prevalence of FH is comparable to our previous report in a smaller sample of the same population based on phenotypic DLCN criteria alone.” ~ 2nd report, Benn, et al. 2016
Let’s lead in with a point that at first appears to be only a quibble. The “smaller sample of the same population” was 60% of the total found in the 2nd report. That is, if I had 6 slices of a pie on a plate and later added 4 more slices, I could reflect that at first the plate had less pie and later it had more, but it should be understood that the original portion was much larger in comparison with the portion added.
As for the 1st report’s being solely phenotypic, that is not what it sounds like. Although the population of the clinical portion of the 1st report was 69,016, 60,710 of those were genotyped. “LDLR W23X, W66G, and W556S and APOB R3500Q mutations were genotyped in 60,710 individuals by TaqMan ….” (Click here.) The study took the genotyped hits, gave them a clinical score, and thereafter regarded them as phenotypes. Culturally, the molecular results help justify a clinical procedure which in actual practice will lack the resources to screen an entire population, molecularly.
To the population of 60,710 genotyped in the 1st report, 37,290 were added for the 2nd report, now totaling 98,000. These are not studies of two completely separate populations; we should remember that the very same 60,710 individuals have been carried over to the subsequent report. This last point is crucial. The fact that both reports involve the very same 60,000 people presents us with a decisive deduction: the number of mutation carriers in the 1st report cannot possibly exceed those of the 2nd report. This reveals a deductive ceiling, above which the numbers are not credible.
All 100 Top4 were detected molecularly among the 60,710. A portion of these, however, will also be found among the 69,016 clinically tested, which I estimate at around 15. (Click here.) We exclude all Top4 from the clinical test to avoid counting the originally passing scores twice in the step which follows. We first establish a clean separation of the populations, and an equally clean separation of the several constituents of the results: the Top4 on one side, and the Ex-Top4 above clinical detection plus the clinical false positives on the other. (The Ex-Top4 below clinical detection were not included in this study, and this fact is central to my analysis.) If I instead kept those Top4 which originally scored above clinical detection among the 69,016, I would have to introduce a human factor – my estimate. Performing the math with such a clean separation removes the need to introduce, unnecessarily, this additional risk.
When I refer to 209 clinical results, or the adjusted 184, I refer to the number of clinical results after excluding Top4 carriers. An undisclosed number of Top4 are necessarily shared between both constituents, and because we necessarily include them in the 100 Top4 molecular hits, we leave them out of the clinical total so as not to count them twice. (We also avoid the risk of having to work with an estimate.) 309 – 100 Top4 molecular hits = 209 otherwise clinical results, after excluding Top4.
In the 2nd report, the title, some illustrations, and the paragraph titled “Results” use 98,098. However, the detailed breakdown of unrounded numbers totals exactly 98,000. Elsewhere, still other numbers are used. Click here for details regarding this red flag.