An Autopsy Confirmed: Hviid et al. (2019) and the Problems with Vaccine Safety Science
A Case Study in Misclassification, Bias, and the Manufacture of Certainty
In May 2025, a new peer-reviewed article by Jeremy R. Hammond, Jeet Varia, and Brian Hooker delivered a long-overdue and thorough postmortem on the 2019 observational study by Hviid et al., which had claimed to finally put to rest the alleged link between MMR vaccination and autism. That study was widely celebrated by regulators, the media, and industry-linked scientists as definitive. It was no such thing.
The Hammond et al. review, published in the Journal of Biotechnology and Biomedicine, does not simply critique the statistical conclusions of the Hviid paper—it dissects the very structure of its logic, methodology, and assumptions. Their verdict is unequivocal: the Hviid study could not have disproven a vaccine-autism link because it was never designed to find one. If anything, it was designed to avoid one.
One of the most striking findings in Hammond et al.’s analysis is that the Hviid sample significantly underrepresented the true rate of autism in Denmark during the study period. The Hviid dataset reported only 1% autism prevalence among the 650,943 children followed through 2013. But Danish national estimates—using the same diagnostic codes and birth cohorts—place autism prevalence during that time between 1.16% and 1.65%. The discrepancy is not trivial. It translates into more than two thousand missing autism cases. These cases were not randomly omitted—they were structurally excluded by design choices that effectively removed from the analysis the children most likely to receive an autism diagnosis: those who were still too young to have been diagnosed by the time follow-up ended.
This alone disqualifies any claim that Hviid et al. had sufficient statistical power to detect or rule out associations between MMR vaccination and autism. But it is not the only failing.
The authors of the 2019 study also excluded children with known genetic syndromes highly comorbid with autism, including Fragile X, Down syndrome, Angelman, Prader-Willi, and others. The rationale appeared to be that if a child had one of these conditions, then their autism “must” have been due to the underlying disorder, not vaccination. That assumption is not only untested—it is logically incoherent. These children were precisely the ones who would be most likely to exhibit adverse outcomes from an immunological trigger, yet they were removed from the sample entirely. A study purporting to assess risk in “genetically susceptible children” excluded the very definition of such children.
The definition of "genetic susceptibility" used in the study was, in fact, nothing more than whether a child had an older sibling previously diagnosed with autism. Children with no siblings, or whose sibling was diagnosed after the study began, were automatically categorized as not susceptible. Half the sample had no siblings and were thus excluded from consideration as genetically vulnerable. In other words, the authors constructed a definition of susceptibility so narrow and so functionally empty that it guaranteed the appearance of null results.
Hammond and colleagues also document a failure to control for healthy user bias. As has been seen in other studies, when parents of a child with autism decide not to vaccinate a younger sibling, or delay vaccination due to early warning signs, it creates the illusion that vaccinated children are at lower risk. Hviid et al. were aware of this bias—having cited Jain et al. (2015), which identified it clearly—but dismissed its relevance without adequately controlling for it. In doing so, they systematically distorted their own subgroup comparisons. Their conclusion that MMR was not associated with autism risk in girls, for example, rests on observed lower rates of autism among vaccinated girls. But this could just as easily reflect the fact that parents of at-risk girls avoided the vaccine, not that the vaccine reduced risk.
On the matter of reproducibility, the Hammond review highlights a contradiction that should have drawn more attention when the study was first published. The hazard ratios reported in the supplemental material suggest an increasing risk of autism in more recent birth cohorts, yet the actual case counts shown in the main figures display the opposite. Statistician Elizabeth Clarkson identified this incongruity early and contacted the authors to request clarification and access to the raw data. Hviid confirmed that the trend in the hazard ratios could not be derived from the published numbers, but declined to share the dataset, citing Danish law. That a study foundational to vaccine policy cannot be reproduced—and that its key trend indicators contradict its own data—should disqualify it from being used to support sweeping public health declarations.
Moreover, Hviid et al. did not study the U.S. vaccine schedule. They examined MMR in a population of Danish children who receive far fewer vaccine doses than their U.S. counterparts—just four by age one, compared to more than two dozen in the United States. Many vaccines given routinely in the U.S.—including rotavirus, hepatitis A, varicella, and others—are not part of the Danish schedule. The authors made no effort to explain why findings from this radically different exposure environment could be generalized to American children, yet the U.S. Centers for Disease Control and mainstream media treated it as definitive proof applicable everywhere.
The study also contained errors regarding the vaccine formulations themselves. The authors misreported the timeline of MMR product usage in Denmark, confusing the years in which Merck’s formulation was used with the years in which GlaxoSmithKline’s Priorix was in use. The children vaccinated with Priorix (GSK) during the final years of the study were, by the authors’ own metrics, too young to have received an autism diagnosis by the end of follow-up, meaning a large subset of potentially relevant cases was effectively excluded. Again, this introduces downward bias.
Finally, the authors' institutional affiliations and funding sources introduce significant conflicts of interest. All were affiliated with the Statens Serum Institut, which not only supplies vaccines to the Danish national program but also develops them. Funding was provided by the Danish Ministry of Health and the Novo Nordisk Foundation—an entity with massive pharmaceutical holdings, including in vaccine manufacturers. This is not incidental. These organizations have a vested interest in suppressing findings that might generate vaccine hesitancy, liability, or scrutiny.
As a scientist who critiqued this study in real-time when it was first published, I noted these same issues in my 2019 blog article, "An Autopsy on Hviid et al. (2019)'s MMR Vaccine Science-like Activities." At the time, I pointed out that the reported autism prevalence in the Hviid dataset was anomalously low, and that their statistical power claims could not be trusted without proper case ascertainment. I also warned that excluding genetically vulnerable children and truncating follow-up age would inherently bias the study toward the null. Now, these concerns have been validated and formalized in the academic literature.
What Hammond, Varia, and Hooker have accomplished is more than a rebuttal. It is a methodical dismantling of a public-relations artifact that was passed off as epidemiology. It shows that the tools of science can be wielded with rigor—or with rhetorical intent—and that the difference often lies in the structure of a study’s design, not the sophistication of its statistics.
In the vaccine safety community, we’ve long said that a study incapable of finding an effect cannot be used to prove its absence. The 2019 Hviid paper did not test whether MMR could contribute to autism in vulnerable children. It tested whether an artificially filtered, under-diagnosed, and structurally biased population could be used to affirm a policy narrative.
The answer, it seems, was yes. Until now.
Hammond JR, Varia J, Hooker B. Hviid et al. 2019 Vaccine-Autism Study: Much Ado About Nothing? Journal of Biotechnology and Biomedicine. 2025;8(2):118–140. doi:10.26502/jbb.2642-91280185