#### The poisoned premise in the Danish reports

The 1st and 2nd Danish reports only tested for the 4 most frequent mutations. In the 2nd report, the authors used a ratio to calculate the remaining mutations. However, even when using the raw data,[1] one can see the contours of the problem (below): **the scores have a similar variability in both the target sample and in the total population.** As we’ve seen in the previous two pages, this single premise of similarity, where distinction is required, is all that we need to prove the swap.

Accounting for the remaining mutations which were not screened for makes things a bit more complex, but doing so renders the swap conspicuous (along with other shenanigans). The 1st report used a mostly circumstantial **scoring system** for its results (“Phenotyping”). The 2nd report used *genetic testing* for its prevalence result (“Genotyping”). **Now, industry-funded reports define FH *genetically* when assessing the urgency of underdiagnosis and then recommend the *scoring* procedure for actually finding patients for prescriptions.**

#### Therefore, the two Danish reports serve as a proxy for the industry’s publication strategy. When we reconcile the 2nd and 1st Danish reports we are also reconciling the two procedures promoted by the industry: one for prevalence (“underdiagnosis”) and the other for diagnosis (prescription sales).

How does this work out?

**What at first appears to be mathematical “whack-a-mole” ends up as a static, well-defined swap of patients: it is mathematically impossible for the genetic-based results in the 2nd report to be correct and for the results of the 1st report’s scoring system not to be inflated with false positives. Moving from *genetic matching* to *circumstantial scoring*, we identify *different* people.**

### The reconciliation of the 1st and 2nd Danish report procedures, a summary

My original analysis of these two reports is decisive. I was assisted by the fact that FH is a genetic disease, and thus there is a mathematical and forensic rigor operating in the background of this problem. Additionally, both Danish reports shared the *same* core population, presenting opportunities for deduction.

The deductive reconciliation of the two reports was quite elaborate. The summary illustration below is a representation of the conclusion.

To emphasize this crucial point, the reconciliation of the two reports serves as a proxy for the industry’s publication strategy, as can be seen in the recent Regeneron report: **when we reconcile the prevalence and diagnostic procedures found within the publication strategy, the force of deduction exposes the bait-and-switch: it is impossible for the same people to be both above and below clinical detection at the same time.**

### Charts of the Deductive Reconciliation of the 1st and 2nd Danish Reports

The 1st report did not break down genetic hits into their original scores: before publication, they were *promoted* to passing scores. Unraveling the original scores of these hits also unraveled the publication scheme. The authors’ resource for *both* publications was key to the solution: the 2nd Danish report used a population of roughly 100,000 people, *but roughly 60,000 of these were the very same people used in the 1st Danish report.* This shared population thus created an opportunity for deductive analysis. Taking the 2nd report’s number of genetic hits which were also passing clinical scores *as the maximum mathematically possible* in the corresponding category of the 1st report showed that patients are swapped as we move from the genetic-based to the clinical scoring procedure.

There were only 25 carriers of the four most frequent mutations with passing scores among the 2nd report’s 100,000; this meant that the portion of 60,000 used in the 1st report could not possibly have had more than 25 such carriers, just as a slice cannot be larger than the pie out of which it is cut. The higher the number of mutation hits, the fewer false positives there will be in the result. Thus, to give the authors the best footing mathematically possible, and to eliminate any doubt whatsoever, I used the entire 25 in the 1st report as a “deductive ceiling”: higher numbers are mathematically impossible. (Later in that report, I also estimated what this number might actually have been and arrived at 15.) In the chart below, we use the **1st report’s data**, but on the left we employ the 2nd report’s procedure and on the right, the 1st report’s procedure.

Only the four most frequent mutations were targeted. In the chart below, I call these the “Top 4.” I call the remaining mutations the “Ex-Top 4.” The 2nd report uses 38.7% as the proportion of Top 4, thus enabling a calculation of the Ex-Top 4. (See the full report for a step-by-step demonstration: http://FHprevalence.com.)
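The extrapolation step amounts to a single division (a sketch of the arithmetic only; using the 25 passing-score Top-4 carriers as the numerator is my illustrative assumption, not necessarily the report’s exact input):

```python
# Extrapolating total carriers from the Top-4 share (38.7%).
top4_share = 0.387    # Top-4 proportion of all carriers (2nd report)
top4_count = 25       # illustrative Top-4 count, from the text above

total_carriers = top4_count / top4_share      # all carriers implied
ex_top4_count = total_carriers - top4_count   # the "Ex-Top 4" remainder
print(round(total_carriers, 1), round(ex_top4_count, 1))
```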

Those counted genetically are *different* people from those counted with the scoring system. As a proxy for the industry’s publication strategy, we can see that the very people used to prove underdiagnosis in the determination of FH prevalence are abandoned and replaced with errors at the diagnostic stage. This 1st report was purported to be the “source” for the industry’s Authoritative report, the latter being cited even in FDA-submitted documents.

#### The “poisoned premise”: with a similar variability of FH scores pervading both populations, any cutoff above the minimum score necessarily excludes everyone below it, while that same similarity precludes using the cutoff to distinguish carriers among those above it. Under this poisoned premise, switching from genetic matching to cutoffs in a scoring system makes the swap of patients a mathematical necessity.
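A minimal numeric sketch of this premise (illustrative distributions of my own, not the reports’ data): when carrier scores and population scores vary similarly, roughly the same fraction of each group clears any cutoff, so the cutoff both excludes most carriers and fails to separate carriers from non-carriers above it.

```python
import random

random.seed(0)
# Illustrative only: carriers and the general population drawn from the
# same spread of scores (the "similar variability" premise).
carriers = [random.gauss(5, 2) for _ in range(1_000)]
population = [random.gauss(5, 2) for _ in range(1_000)]

cutoff = 8  # any passing-score cutoff
carriers_above = sum(s >= cutoff for s in carriers) / len(carriers)
population_above = sum(s >= cutoff for s in population) / len(population)

# Similar small fractions clear the cutoff in both groups: most carriers
# fall below it, and those above it are not distinguishable by score.
print(round(carriers_above, 2), round(population_above, 2))
```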

What if we apply the 1st report’s method to the **2nd report’s data**? These two procedures are two separate perspectives within the *same* population, and according to the authors, the results of the 2nd report were roughly “comparable” with those of the 1st report, which suggested a confirmation of the results. However, this is incorrect. Equal quantities still beg the question of the underlying entities. I may have *two* elephants over there and *two* mice over here, but their equal quantity does not change mice into elephants.

Using the 1st report’s method on the 2nd report’s data (right chart) would inflate the results with false positives, while swapping out genuine mutation carriers (left chart). 60% of those in the 1st report are also present here in the 2nd report. We cannot use the authors’ two prevalence rates above for comparison: they refer to mostly *different* people. The 1st report, the one inflated with false positives, is supposed to be the source for the “Authoritative” report.

- Let us carry forward a key observation: where are the majority of mutation carriers? Probable & Definite FH or Unlikely & Possible FH? They are mostly short of the passing score — mostly in the Unlikely & Possible clinical categories.

As mentioned, the 2nd report uses 38.7% as the proportion of Top 4 to total carriers. From there the authors calculate the total, which is supposed to account for the number of those with the mutations that were not targeted in the study. The 1st report, on the other hand, simply leaves it to the reader to assume, as is only natural, that the scoring results minus the four most frequent mutations (the “Top 4”) must equal the remaining carriers. They do not. This is key to the error in the 1st report, and this error can be captured in mathematical equations: see the following page for the mathematical proof.
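The gap the reader is invited to overlook can be shown with hypothetical numbers (mine, not the reports’): subtracting the Top 4 from the scoring results does not yield the remaining carriers when the scoring results contain false positives.

```python
# Hypothetical counts, for illustration only.
scoring_positives = 100    # patients passing the clinical scoring cutoff
top4_carriers = 25         # Top-4 mutation carriers among them
top4_share = 0.387         # 2nd report's Top-4 proportion of all carriers

# The reader's natural assumption: the rest must be Ex-Top-4 carriers.
assumed_ex_top4 = scoring_positives - top4_carriers            # 75
# What the 38.7% ratio actually implies for Ex-Top-4 carriers.
implied_ex_top4 = top4_carriers / top4_share - top4_carriers   # ~39.6

# The surplus cannot be Ex-Top-4 carriers; it must be false positives.
false_positives = assumed_ex_top4 - implied_ex_top4
print(round(false_positives, 1))
```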

[1] I.e., without considering the less frequent mutations which were not included in the study.