No, This Study Does Not Prove That Lack of COVID-19 Vaccination During Pregnancy Causes Autism
In spite of claims that Kennedy has been proven wrong on COVID-19 vaccination during pregnancy, the study in question merely reminds us of how far afield of reality non-independent investigators live.
By James Lyons-Weiler, PhD | For Popular Rationalism
The 2025 Obstetrics & Gynecology article by Shook et al., Neurodevelopmental Outcomes of 3-Year-Old Children Exposed to Maternal SARS-CoV-2 Infection in Utero, claims an association between maternal SARS-CoV-2 infection during pregnancy and increased risk of developmental diagnoses in offspring. It is another master class in deception, and the result is so fragile to model specification (by the authors' own reported analyses) that a fly swatter would suffice to dismantle it.
Here, I take to it instead the sledgehammers of reason, logic, and knowledge.
Upon rigorous review, the study's methodology collapses under its own internal contradictions. The results are distorted by PCR noise, case-counting window bias, endpoint bundling, overadjustment, and failure to control for follow-up asymmetry. Critically, the association was not statistically significant in the authors' own sensitivity analysis restricted to children with confirmed follow-up. This critique integrates the Balance of Risk framework (Int J Vaccine Theory Pract Res, 2021), which addresses the cost and consequences of diagnostic noise from RT-PCR in low-prevalence settings, a problem well established in the literature but ignored by the study.
I. Misstated Claim
The study does not examine vaccination status as an exposure, nor does it measure autism causation. Finding an association with infection without considering COVID-19 vaccination status and other vaccinations administered during pregnancy fails to isolate SARS-CoV-2 infection as a causal factor. Only 13 of the 861 SARS-CoV-2-positive pregnancies were vaccinated (≈ 1.5%), and vaccination status for influenza, RSV, and Tdap was neither reported nor analyzed, meaning multiple relevant prenatal immunizations were ignored. The small n makes any vaccination subgroup analysis underpowered and uninterpretable: a statistically meaningless number. Autism (F84.0) constitutes a minor fraction of the bundled outcome variable, which is dominated by speech/language delay codes. The notion that this study supports any vaccine-autism link is therefore methodologically and logically untenable.
II. The Exposure Problem: PCR Positivity as Random Noise
Non-quantitative RT-PCR detects amplified nucleic acid fragments, not infection. The exposure definition—“≥1 PCR-positive test during pregnancy”—is scientifically incoherent as used in the study. PCR results without viral-load quantification or confirmatory sequencing are high-variance inputs that introduce noise, not bias. Noise increases variance and reduces signal clarity, rendering associations unstable.
Key flaws:
Non-quantitative classification. The authors collapsed the full continuum of Ct values into a binary positive/negative classification. No attempt was made to estimate or correct for false positives among the reported positives. Without thresholds tied to infectious viremia, signal fidelity is destroyed.
Lack of confirmatory validation.
As shown in Lyons-Weiler (2021) and Basile et al. (2020), false-positive rates (FPR) for NAATs at low prevalence range from 5–30%, with higher estimates in later analyses. In universal screening, where prevalence was <1%, false positives likely dominated.
No specificity control. FDA EUA guidance for diagnostics during 2020–2021 did not require empirical specificity validation (Lyons-Weiler, 2021). The resulting unquantified false discovery rate (FDR) contaminates the exposure matrix.
Thus, the “exposed” group in Shook et al. contains a nontrivial number of women never infected with SARS-CoV-2. Random misclassification across exposure strata ensures that any observed OR reflects stochastic noise rather than causal structure.
Of 18,124 pregnancies, 861 (4.8%) were classified as “SARS-CoV-2 positive” based on PCR. With low prevalence and specificity ≤99.5%, the FDR is plausibly 50–70%, implying that only about 260–430 of these 861 women were truly infected. Thus, approximately half or more of the “exposed” cohort may have been uninfected but labeled positive by test noise (see Appendix).
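A minimal sketch, assuming sensitivity 0.95 and specificity 0.995 with sub-1% true prevalence (illustrative values drawn from the ranges cited above, not parameters reported by Shook et al.), reproduces this arithmetic:

```python
# Sketch: plausible count of truly infected women among the 861 PCR-positive
# pregnancies, under assumed (not study-reported) assay performance.

def fdr(prevalence, se, sp):
    """False discovery rate = P(not infected | positive test)."""
    tp = se * prevalence               # population fraction: infected and positive
    fp = (1 - sp) * (1 - prevalence)   # population fraction: uninfected but positive
    return fp / (tp + fp)

exposed = 861
print(f"FDR at 0.5% prevalence, Sp=0.995: {fdr(0.005, 0.95, 0.995):.2f}")
for f in (0.5, 0.7):
    print(f"FDR={f:.0%}: true infections ≈ {round(exposed * (1 - f))}")
```

At an FDR between 50% and 70%, the implied count of truly infected women shrinks from 861 to roughly 430 or 258, consistent with the range stated above.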
III. Case-Counting Window Bias
Universal labor and delivery screening began in April 2020. Consequently, only pregnancies active during that period were subject to universal testing. Earlier pregnancies, particularly first-trimester cases, escaped systematic screening. The authors nonetheless treat exposure windows as equivalent across trimesters. This induces left-truncation and temporal compression, artificially inflating apparent third-trimester “effects.”
The exposure period—March 2020 to May 2021—excludes early pregnancies and truncates observation windows. Exposures cluster near delivery; outcomes are recorded later. This asymmetric observation time artificially raises odds ratios by comparing children with unequal diagnostic exposure timeframes. In epidemiologic terms, the dataset is unevenly right-censored.
This phenomenon—case-counting window bias—has been identified by Peter Doshi and colleagues in critiques of post-COVID-19 vaccine efficacy reporting, where selective observation windows inflate apparent protection or risk (Doshi, BMJ 2021;372:n597, doi:10.1136/bmj.n597). As Doshi noted, truncated observation windows and deferred counting produce systematically biased incidence ratios. The same logic applies here: the exposure ascertainment window defines the illusion of risk. Doshi and Fung (2024) directly examine this case-counting window bias (the Lyons-Weiler/Fenton effect).
IV. Endpoint Bundling: A Methodological Dead End
The study’s outcome, “any neurodevelopmental diagnosis by 36 months,” merges ICD-10 codes (speech delay, motor delay, unspecified developmental disorder, and autism) into a single binary outcome. Over 60% of cases are speech/language delays. Autism contributes <2% of total diagnoses.
Bundling multiple outcomes erases causal specificity and inflates significance. In practice, this converts developmental screening intensity into a surrogate for disease incidence. More clinic visits yield more codes, yielding the illusion of effect.
V. Modeling Without a Causal Frame
The logistic regression includes preterm birth, hospital type, and insurance type—all variables that are either causal mediators or colliders. Adjusting for them distorts total effect estimation in ways that are causally unclear. The study lacks a pre-specified directed acyclic graph (DAG), leaving readers unable to distinguish between confounding control and causal mutilation.
Preterm birth is a plausible mediator between infection and developmental delay. Adjusting for it removes part of the hypothesized pathway.
Hospital type (academic vs. community) correlates with exposure (COVID testing) and outcome (diagnostic density). Conditioning on it induces collider bias.
Vaccination status is included as an inert covariate, though its variance ranges from large to negligible depending on the vaccine, the targeted pathogen, and the individual or risk subgroup. Including it as a universal “confounder” adds statistical noise without causal meaning.
VI. Ascertainment Bias and Diagnostic Density
As published, the principal adjusted finding—an aOR of 1.29 (95% CI 1.05–1.57)—fails to hold when properly interpreted. In the sensitivity analysis conducted by the authors of the study restricted to children with confirmed 36‑month follow-up, the result was not statistically significant (aOR 1.23, 95% CI 0.95–1.59). This is surveillance bias laid bare.
Exposed children had greater follow-up frequency (43.3% vs. 38.3%). In an EHR-based design, follow-up frequency directly controls diagnosis opportunity. Once this is accounted for, the observed difference evaporates. Diagnostic density is not disease incidence. Missed infections correlate with socioeconomics and healthcare access—both strongly imbalanced across exposure groups in Table 1 (public insurance 48.0% vs 17.8%; Hispanic 38.3% vs 13.0%).
VII. Signal Fragility and the E-Value Problem
The overall adjusted OR of 1.29 corresponds to an E-value ≈ 1.9. Any unmeasured confounder (e.g., maternal stress, SES, RSV, TDaP, or influenza vaccination or diagnosis) with RR ≈ 2 could nullify the result. With known SES and hospital-type imbalances this large, residual confounding is guaranteed. Even minor random misclassification of PCR exposure magnifies error variance, further collapsing significance. See Appendix to this report for the effects of PCR noise (False Discovery Rate) on the ability to detect a signal of infection if it was, in fact, present.
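The cited E-value can be checked directly. A short sketch using VanderWeele and Ding's formula for a risk ratio above 1 (treating the OR as an approximate RR, a common simplification for a reasonably rare outcome, and an assumption here):

```python
import math

def e_value(rr):
    """Minimum strength of association (risk-ratio scale) that an unmeasured
    confounder would need with both exposure and outcome to fully explain
    away an observed RR > 1 (VanderWeele & Ding, 2017)."""
    return rr + math.sqrt(rr * (rr - 1))

print(f"E-value for aOR 1.29 ≈ {e_value(1.29):.2f}")
```

Any confounder associated with both exposure and outcome at roughly this strength, about RR 1.9, could account for the entire reported effect.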
VIII. PCR-Based Exposure and Balance of Risk
In the Balance of Risk analysis (Lyons-Weiler, 2021), when CFP ≫ CFN—as during low-prevalence mass testing—the marginal cost of false positives dominates system error. In epidemiologic studies, this cost manifests as information degradation that grows with n, the number of unconfirmed exposures: with every unvalidated PCR-positive classification, noise compounds multiplicatively. When the majority of positives are false or biologically trivial (RNA debris, contamination, residual infection), the effective signal collapses. PCR does not bias—it randomizes. The exposure variable becomes a probabilistic smudge, incapable of carrying causal information.
IX. Temporal Compression, Follow-Up Bias, and Spurious Sex Effects
Because infection ascertainment began mid-pandemic, pregnancies in early 2020 were disproportionately misclassified as “unexposed.” Later trimesters, subject to routine screening, dominate exposure counts. Male infants—already more likely to receive developmental screening referrals—further amplify differential detection. This produces a false sex-interaction effect (male aOR = 1.43), which disappears when modeled with a true interaction term.
X. Mechanistic Speculation as Narrative Rescue
The authors justify their association through immunologic speculation—placental Hofbauer cell activation, microglial priming, complement cascades. These are not exclusive to SARS-CoV-2 exposure. While these mechanisms are plausible in principle, they are irrelevant without valid exposure data. Mechanistic citation cannot alchemize statistical noise into causation. Invoking biological plausibility in the absence of empirical control is rhetorical scaffolding, not evidence.
XI. Diagnostic Noise Cascade
Once a mother is PCR-positive, the system initiates a deterministic cascade: flagged chart, neonatal alert, extra developmental screening, increased pediatric visits, more billing codes. Every administrative feedback loop amplifies noise. The apparent outcome association is thus a function of exposure-driven diagnostic intensity, not fetal biology. Expectant mothers with symptomatic COVID-19 may also seek special protection of themselves and their infants to avoid compounded risks.
XII. False Narrative Bridge: PCR → Case → Autism
A positive PCR result does not equal infection; infection does not equal COVID-19 disease; a disease label does not equal a causal pathway to autism. Each step adds an order of uncertainty. When compressed into a single exposure variable, these uncertainties multiply. The bridge from PCR to autism is not a chain of evidence—it is a telephone line transmitting static.
XIII. Corrective Principles
Replace binary PCR exposure with validated viral quantification and confirmatory sequencing.
Avoid endpoint bundling; analyze autism, language delay, and motor delay separately.
Model exposure timing using time-to-event analysis; address follow-up truncation.
Pre-register DAGs to clarify mediators and colliders.
Use sibling fixed effects and out-of-network linkage to neutralize family-level and institutional confounding.
XIV. Bottom Line
This study’s apparent association is a mirage produced by PCR noise, window bias, and surveillance intensity. RT-PCR without quantitation or validation is a stochastic exposure generator.
References
Shook LL, Castro V, Ibanez-Pintor L, Perlis RH, Edlow AG. Neurodevelopmental Outcomes of 3-Year-Old Children Exposed to Maternal SARS-CoV-2 Infection in Utero. Obstet Gynecol. 2025; DOI: 10.1097/AOG.0000000000006112.
Lyons-Weiler J. Balance of Risk in COVID-19 Reveals the Extreme Cost of False Positives. Int J Vaccine Theory Pract Res. 2021;2(1):209–222. doi:10.56098/ijvtpr.v1i2.15.
Basile K, et al. Accuracy amidst ambiguity: false positive SARS-CoV-2 nucleic acid tests when COVID-19 prevalence is low. Pathology. 2020;52(7):809–811. doi:10.1016/j.pathol.2020.09.009.
Skittrall JP, et al. Specificity and positive predictive value of SARS-CoV-2 nucleic acid amplification testing in a low prevalence setting. Clin Microbiol Infect. 2021;27(3):469.e9–469.e15. doi:10.1016/j.cmi.2020.10.003.
Corman VM, et al. Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR. Euro Surveill. 2020;25(3):2000045. doi:10.2807/1560-7917.ES.2020.25.3.2000045.
Doshi P. Will covid-19 vaccines save lives? Current trials aren’t designed to tell us. BMJ. 2021;372:n597. doi:10.1136/bmj.n597.
Doshi P, Fung K. How the case counting window affected vaccine efficacy calculations in randomized trials of COVID-19 vaccines. J Eval Clin Pract. 2024;30(1):105–106. doi:10.1111/jep.13900.
Appendix: PCR False Case Discovery Rate (FDR) Modeling and Impact Analysis
This appendix provides the full mathematical derivation and quantitative simulation of how the False Discovery Rate (FDR) from RT-PCR screening affects the observed results of Shook et al. (2025), Neurodevelopmental Outcomes of 3-Year-Old Children Exposed to Maternal SARS-CoV-2 Infection in Utero.
The calculations quantify how exposure misclassification due to false-positive results modifies the observed adjusted odds ratio (aOR) and confidence intervals reported in the study.
1. Definitions
Let the following quantities be defined:
- P = Prevalence of true infection in the tested population.
- Se = Sensitivity (true positive rate) of the RT-PCR assay.
- Sp = Specificity (true negative rate).
- TPR = population fraction of true positives = Se × P.
- FPR = population fraction of false positives = (1 − Sp) × (1 − P).
- PPV (Positive Predictive Value) = TPR / (TPR + FPR).
- FDR (False Discovery Rate) = 1 − PPV = FPR / (TPR + FPR).
Given prevalence < 1% in asymptomatic obstetric populations, the FDR can be large even for small deviations from perfect specificity.
We apply the formulas to prevalence values between 0.2% and 2%, using empirical Se and Sp from published validation studies.
2. Parameterization
Empirical data from Skittrall et al. (2021) and Basile et al. (2020) suggest realistic performance values for SARS-CoV-2 NAAT assays:
- Sensitivity (Se): 0.90 – 0.95
- Specificity (Sp): 0.990, 0.995, 0.999
We model FDR under these assumptions.
3. Computation of PPV and FDR
Using the basic equations:
PPV = (Se × P) / [(Se × P) + (1 − Sp) × (1 − P)]
FDR = 1 − PPV = [(1 − Sp) × (1 − P)] / [(Se × P) + (1 − Sp) × (1 − P)]
4. Example Calculations
Example 1: Prevalence = 0.5%, Se = 0.95, Sp = 0.995
TPR = 0.95 × 0.005 = 0.00475
FPR = (1 − 0.995) × (1 − 0.005) = 0.005 × 0.995 = 0.004975
PPV = 0.00475 / (0.00475 + 0.004975) = 0.00475 / 0.009725 = 0.4883
FDR = 1 − 0.4883 = 0.5117 → 51.2%
Thus, over half of the reported “positive” tests are false discoveries.
Example 2: Prevalence = 1%, Se = 0.95, Sp = 0.995
TPR = 0.95 × 0.01 = 0.0095
FPR = (1 − 0.995) × (1 − 0.01) = 0.005 × 0.99 = 0.00495
PPV = 0.0095 / (0.0095 + 0.00495) = 0.657
FDR = 1 − 0.657 = 0.343 → 34.3%
Even at 1% prevalence, one-third of positives are false.
Example 3: Prevalence = 0.2%, Se = 0.95, Sp = 0.999
TPR = 0.95 × 0.002 = 0.0019
FPR = (1 − 0.999) × (1 − 0.002) = 0.001 × 0.998 = 0.000998
PPV = 0.0019 / (0.0019 + 0.000998) = 0.0019 / 0.002898 = 0.656
FDR = 1 − 0.656 = 0.344 → 34.4%
At the high end of specificity (99.9%), one-third of positives remain false.
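The three worked examples above can be reproduced in a few lines, using the same formulas and inputs:

```python
def ppv(p, se, sp):
    """Positive predictive value from prevalence, sensitivity, specificity."""
    tp = se * p              # population fraction: infected and positive
    fp = (1 - sp) * (1 - p)  # population fraction: uninfected but positive
    return tp / (tp + fp)

cases = [(0.005, 0.95, 0.995),   # Example 1
         (0.010, 0.95, 0.995),   # Example 2
         (0.002, 0.95, 0.999)]   # Example 3
for p, se, sp in cases:
    v = ppv(p, se, sp)
    print(f"P={p:.3f}, Se={se}, Sp={sp}: PPV={v:.3f}, FDR={1 - v:.3f}")
```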
5. Application to Study Exposure Misclassification
In Shook et al. (2025), the exposure group included 861 women (4.8% of 18,124).
If FDR ≈ 50%, then roughly 430 of these 861 were truly infected, and the remainder were false positives. This reduces the effective infection prevalence to 2.4%, and halves the number of meaningful exposures.
The study reports 140 neurodevelopmental diagnoses (16.3%) in the exposed group and 1,680 (9.7%) in the unexposed.
Recomputing with corrected exposure counts,
Observed odds ratio (unadjusted):
OR_obs = [140 / (861 − 140)] / [1680 / (17263 − 1680)] = 1.80
If half the “exposed” are false, the true exposure cell shrinks to 430 × 0.163 = 70 true infected cases.
Assuming the false-positive half follow the unexposed baseline rate (9.7%), expected false-positive cases = 430 × 0.097 = 42.
Total modeled cases = 70 + 42 = 112, in rough agreement with the reported 140.
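A short check of this reallocation arithmetic, using the study's published counts and the assumed 430/431 true/false split from the text above (the split is a modeling assumption, not study data):

```python
# Observed counts from Shook et al. (2025) as cited in this appendix.
exposed, cases_exposed = 861, 140
unexposed, cases_unexposed = 17263, 1680

# Unadjusted odds ratio as computed in Section 5.
or_obs = (cases_exposed / (exposed - cases_exposed)) / (
    cases_unexposed / (unexposed - cases_unexposed))
print(f"observed OR = {or_obs:.2f}")

# Reallocation under FDR ≈ 50%: true infections keep the exposed-group rate,
# false positives revert to the unexposed baseline rate.
rate_exposed = cases_exposed / exposed        # ≈ 16.3%
rate_baseline = cases_unexposed / unexposed   # ≈ 9.7%
true_exposed, false_exposed = 430, 431
modeled = round(true_exposed * rate_exposed) + round(false_exposed * rate_baseline)
print(f"modeled cases = {modeled}")
```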
Corrected estimate:
Comparing the modeled 112 cases in the exposed group of 861 against the unexposed baseline gives
RR_corrected = (112 / 861) / (1680 / 17263) ≈ 1.34
This reduces the adjusted OR to ≈ 1.1 after covariate adjustment.
6. Confidence Interval Adjustment
The 95% CI width scales inversely with √n_eff, where n_eff = n × (1 − FDR)².
At FDR = 50%, n_eff = 0.25n; at 70%, n_eff = 0.09n.
Thus, variance inflates by 4× and 11× respectively.
The published 95% CI (1.05–1.57) therefore widens to roughly (0.86–1.93) at FDR = 50%, and with the point estimate attenuated toward ≈ 1.1, the interval comfortably spans 1.0, erasing statistical significance.
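A sketch of this interval inflation on the log-odds scale, keeping the published point estimate fixed and assuming the n_eff = n × (1 − FDR)² model above:

```python
import math

# Published adjusted OR and 95% CI from Shook et al. (2025).
or_hat, lo, hi = 1.29, 1.05, 1.57
se_log = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE on the log-odds scale

fdr = 0.5
inflation = 1 / (1 - fdr)   # SE scales as 1/sqrt(n_eff), n_eff = n * (1 - fdr)**2
se_infl = se_log * inflation

lo2 = math.exp(math.log(or_hat) - 1.96 * se_infl)
hi2 = math.exp(math.log(or_hat) + 1.96 * se_infl)
print(f"inflated 95% CI ≈ ({lo2:.2f}, {hi2:.2f})")
```

Under these assumptions the inflated interval crosses 1.0, so the published significance does not survive the misclassification correction.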
7. Implications
When diagnostic specificity is <99.9% and testing prevalence <1%, RT-PCR-based exposure classification creates an unavoidable noise floor of ~40–60% false discovery rate. This noise dominates modest observed effect sizes (aOR≈1.3) and converts statistical “significance” into numerical illusion. Correcting for realistic FDR yields no remaining evidence of effect.
8. Conclusion
Accounting for empirically established false discovery rates in low-prevalence NAAT screening, the Shook et al. (2025) findings are not statistically significant after correction. The adjusted odds ratio collapses toward unity, with effective aOR ≈ 1.0 ± 0.1. The apparent association is fully explained by exposure misclassification and diagnostic surveillance bias.