Why the Henry Ford Study Could Not Have Detected Association of Vaccines and Autism
You cannot find what you do not have data for.
Executive Summary: The Henry Ford Health System (HFHS) and Health Alliance Plan (HAP) conducted a retrospective cohort study of 18,468 children born between 2000 and 2016 to evaluate associations between vaccination and chronic health outcomes. The authors reported no statistically significant association between vaccination and autism. However, the observed autism rate (just 24 cases in over 18,000 children) is roughly twenty times lower than national prevalence. This article demonstrates that the study's design features, including short follow-up, premature censoring, diagnostic misclassification, and sparse outcome data, render its autism analysis uninformative. The study could not have detected a true association even if one were present.
The Study’s Autism Signal is Statistically and Mechanistically Suppressed
1. Study Design: Censoring Before Age of Diagnosis
Children were only followed until they disenrolled from HAP or until December 31, 2017. This means that once a child changed insurance plans or aged out of coverage, they were no longer observed—regardless of whether they later received an autism diagnosis. This form of censoring (i.e., right-censoring) erases outcomes that occur after disenrollment.
2. Follow-Up Duration: Too Short to Detect Autism
- Vaccinated children: Median follow-up was 970 days (~2.66 years)
- Unvaccinated children: Median follow-up was 461 days (~1.26 years)
According to the CDC, autism is rarely diagnosed before age 3, and reliable prevalence estimates use 8-year-olds as a standard because many diagnoses occur after age 5. The Henry Ford cohort exited observation well before most diagnoses typically emerge.
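How much of the diagnostic window does that follow-up actually capture? A minimal Monte Carlo sketch makes it concrete. The diagnosis-age distribution used here (normal, mean 4.5 years, SD 1.5 years, truncated below 1 year) is an illustrative assumption, not a figure from the study; the median follow-up durations are the study's:

```python
import random

random.seed(1)

# Illustrative assumption (not from the study): age at autism diagnosis
# ~ Normal(mean=4.5 y, sd=1.5 y), truncated below at 1 year.
ages = []
while len(ages) < 100_000:
    a = random.gauss(4.5, 1.5)
    if a >= 1.0:
        ages.append(a)

# Median follow-up from the study, converted to years
results = {}
for label, follow_up in [("vaccinated", 2.66), ("unvaccinated", 1.26)]:
    results[label] = sum(a <= follow_up for a in ages) / len(ages)
    print(f"{label}: ~{results[label]:.0%} of eventual diagnoses "
          f"fall within median follow-up")
```

Under these assumptions, only on the order of one in ten eventual diagnoses in the vaccinated group, and well under one in twenty in the unvaccinated group, would occur inside the observation window at all.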
3. Age Thresholds Further Exclude Children from Evaluation
The authors limited neurodevelopmental outcome analysis to children aged 2 or older. Since half of the unvaccinated children were followed for less than 1.3 years, most were structurally ineligible for autism evaluation.
4. Diagnostic Opportunity: Fewer Encounters, Fewer Diagnoses
Children in the vaccinated group averaged 7 medical encounters per year, compared with just 2 for the unvaccinated. An autism diagnosis usually requires repeated visits, referrals, and specialized assessments. With so few encounters, many children may never enter the diagnostic pipeline at all.
5. Related Diagnoses Fragment the Autism Signal
The study treats autism separately from other neurodevelopmental disorders such as speech disorder, ADHD, and developmental delay, even though these are commonly precursors to or components of an autism diagnosis.
Notably:
- Neurodevelopmental disorder (NDD): HR 5.53 (95% CI 2.91–10.51)
- Autism: HR 0.62 (95% CI 0.10–3.69)
This pattern suggests that autism cases may have been absorbed into broader or earlier diagnostic categories due to short follow-up.
6. Statistical Underpowering: 24 Autism Cases Are Not Enough
Hazard ratios with confidence intervals this wide indicate extreme imprecision. The autism HR spans everything from a 10-fold protective effect to a nearly 4-fold increased risk. Such results cannot support inference in any direction.
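That imprecision can be quantified directly from the published interval. The back-of-envelope below recovers the implied standard error of the log hazard ratio from the CI endpoints, then asks what effect size the study could have reliably detected (assuming the conventional 80% power at two-sided α = 0.05):

```python
import math

# Reported autism hazard ratio and 95% CI from the study
hr, lo, hi = 0.62, 0.10, 3.69

# Back out the approximate standard error of log(HR) from the CI width:
# upper/lower bounds are log(HR) +/- 1.96 * SE on the log scale.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(f"implied SE of log-HR: {se:.2f}")

# Smallest reliably detectable effect at 80% power, two-sided alpha = 0.05:
# |log(HR)| must exceed (1.96 + 0.84) * SE.
detectable = math.exp((1.96 + 0.84) * se)
print(f"smallest reliably detectable HR: ~{detectable:.0f}")
```

With an implied standard error near 0.9 on the log scale, the analysis could only have reliably flagged a hazard ratio in the neighborhood of 13-fold. Anything smaller was invisible by construction.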
7. Observed vs Expected Autism Prevalence
National autism prevalence among 8-year-olds is 2.78% (1 in 36).
- Expected in this cohort: 18,468 × 0.0278 ≈ 513 cases
- Observed: 24 cases
Even if we conservatively assume only 30% of diagnoses would be present before age 3, we would expect ~154 cases—still over 6 times more than were observed.
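The arithmetic above can be checked in a few lines:

```python
# Expected autism cases under national prevalence (CDC ADDM, 1 in 36)
n = 18_468
prevalence = 0.0278              # 2.78%
expected_full = n * prevalence
expected_early = expected_full * 0.30   # conservative: only 30% diagnosable early
observed = 24

print(f"expected (full ascertainment): ~{expected_full:.0f}")
print(f"expected (30% early-diagnosis): ~{expected_early:.0f}")
print(f"observed / early-expected: {observed / expected_early:.2f}")
```

Even against the heavily discounted early-diagnosis figure, the cohort recorded about one sixth of the cases it should have.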
8. Diagnostic Timeline Conflict
A typical autism diagnosis occurs between ages 3 and 6. The study's median observation time of 1.3–2.7 years guarantees that most children exited before reaching that window. The paper does not report follow-up stratified past age 5 or 8, which would be necessary for proper autism detection.
9. Cohort Selectivity vs HFHS System Scale
HFHS reports:
- >9,500 births per year (2018)
- >4 million outpatient visits annually
Yet the study cohort included ~1,086 births per year across 17 years (18,468 total). That’s just ~11–14% of HFHS’s birth population. The cohort is narrow, filtered by HAP enrollment and primary care attribution, and lacks full population representativeness.
10. Misapplied Statistical Tools
Cox proportional hazards regression is poorly suited to analyses with sparse event counts. With only 24 autism cases, the proportionality assumption cannot be meaningfully tested, estimates are prone to sparse-data bias, and hazard ratios lose interpretability.
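Twenty-four events also put a hard floor on precision before any modeling choices enter. A standard approximation gives SE(log HR) ≈ sqrt(1/d1 + 1/d0), where d1 and d0 are the event counts in each arm. The balanced 12/12 split below is the best possible case, not the study's actual (unreported) split:

```python
import math

# Precision floor for a two-group hazard ratio:
# SE(log HR) ~ sqrt(1/d1 + 1/d0), d1/d0 = events per arm.
# Best case for 24 total events is a perfectly balanced split.
d1 = d0 = 12
se_floor = math.sqrt(1 / d1 + 1 / d0)

# Ratio of the upper to the lower 95% CI bound implied by that SE
ci_ratio = math.exp(2 * 1.96 * se_floor)
print(f"SE floor: {se_floor:.2f}; upper/lower CI ratio: ~{ci_ratio:.0f}x")
```

Even in that best case the confidence interval spans a roughly 5-fold range; the study's actual interval (0.10–3.69) spans a 37-fold range, consistent with an even more lopsided split of the 24 cases.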
A Visual Explanation is Missing (but Implied)
- A bar chart of expected vs. observed autism cases (e.g., 513 expected vs 24 observed) would highlight the under-ascertainment.
- A timeline overlaying follow-up duration on the typical diagnostic age window would clarify how early censoring truncates case accrual.
Suggested Corrections for Future Analyses
A properly powered and valid autism–vaccination cohort study should:
- Follow children through at least age 8
- Capture diagnoses even after disenrollment (e.g., via state registries, linked claims, or shared electronic health records)
- Stratify and report outcomes by exact age at diagnosis
- Treat neurodevelopmental disorder as a unified spectrum or use survival models suited to sparse, lagged outcomes
- Include thousands of cases, not dozens
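How many cases is "thousands"? Schoenfeld's event-count formula for a two-arm Cox model gives a concrete target, assuming equal allocation and the conventional 80% power at two-sided α = 0.05:

```python
import math

# Schoenfeld's formula: required events D for a two-group Cox model
# D = (z_alpha + z_beta)^2 / (p * (1 - p) * ln(HR)^2)
z_alpha, z_beta = 1.96, 0.84   # two-sided 5% alpha, 80% power
p = 0.5                        # equal allocation between arms (best case)

needed = {}
for hr in (1.25, 1.5, 2.0):
    d = (z_alpha + z_beta) ** 2 / (p * (1 - p) * math.log(hr) ** 2)
    needed[hr] = math.ceil(d)
    print(f"HR {hr}: ~{needed[hr]} autism cases needed")
```

Detecting even a 2-fold hazard requires roughly three times the 24 cases this study observed; a modest 1.5-fold effect requires about 190, and a 1.25-fold effect over 600.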
Policy Relevance
The Henry Ford study is often cited to refute claims of an autism–vaccine link. But its design was incapable of detecting such an association. Its autism results are not evidence of no risk—they are evidence of methodological nullification. Misuse of such studies in policy discussions is not merely incorrect; it is a failure of public health epistemology.
Conclusion
The Henry Ford study’s autism analysis fails on structural, statistical, and epidemiological grounds:
- Most children were observed only until age 2–3
- The unvaccinated group had especially short follow-up
- Autism diagnoses require sustained observation and specialist input
- Only 24 autism cases were recorded when over 150 would be expected, even conservatively
The study did not find no association—it was incapable of finding one. Any claim to the contrary reflects a fundamental misunderstanding of time-to-event data and pediatric diagnostic trajectories.
References
- CDC ADDM Network. https://www.cdc.gov/ncbddd/autism/data.html
- Henry Ford Health System Fact Sheets (2014, 2018)
- Lamerato et al., "Impact of Childhood Vaccination on Short and Long-Term Chronic Health Outcomes in Children: A Birth Cohort Study," Senate Hearing Submission, 2025



