A Rational Critique of "Real-world effectiveness and causal mediation study of BNT162b2 on long COVID risks in children and adolescents"
Case-counting window bias, in the form of not counting data until 28 days post-infection and 14 days after vaccination, is egregious given how well known the bias is.
A recently published article titled “Real-world effectiveness and causal mediation study of BNT162b2 on long COVID risks in children and adolescents” in eClinicalMedicine has been touted by one of its authors as showing that the COVID-19 mRNA jabs protect the pediatric population from long COVID. While the study addresses an important and timely topic, several methodological issues and analytical flaws significantly undermine the reliability and validity of its findings. As a researcher with expertise in this area, I am compelled to highlight these concerns and the broader implications they may have for public health policy.
The study aimed to evaluate the real-world effectiveness of the BNT162b2 COVID-19 vaccine in reducing the risks of long COVID in children and adolescents during the Delta and Omicron variant phases. It utilized data from 20 health systems within the RECOVER PCORnet network, focusing on pediatric cohorts aged 5–20 years.
The methods included constructing three independent cohorts: adolescents (12–20 years) during Delta (July–November 2021) and children (5–11 years) and adolescents (12–20 years) during Omicron (January–November 2022). Participants were stratified by vaccination status (first dose of BNT162b2 vs. unvaccinated), and outcomes were assessed via causal mediation analysis to separate direct and indirect effects on long COVID, factoring in confounders through propensity score adjustments.
The authors report that the vaccine’s effectiveness was higher during the Delta phase, with overall effectiveness of 95.4% among adolescents. For Omicron, effectiveness was 60.2% in children and 75.1% in adolescents. The study found that the vaccine’s primary benefit lay in preventing SARS-CoV-2 infection rather than in directly modifying the risk of long COVID.
Here are the concerns.
Case-Counting Window Bias (aka, the Lyons-Weiler/Fenton/Neil Effect)
The study built a case-counting window into its design by excluding infections occurring within 28 days after the index date for both vaccinated and unvaccinated groups. This design potentially skews early infections toward the unvaccinated group, inflating vaccine-efficacy estimates for preventing long COVID. The window essentially delays consideration of outcomes in the vaccinated group, masking potential early adverse effects or reduced immunity immediately following vaccination. This methodological choice could create discrepancies in risk attribution for infection and subsequent conditions like long COVID, introducing bias in favor of the vaccinated group.
The exclusion of infections occurring within the first 28 days post-infection and 14 days after the second vaccine dose introduces what is commonly referred to as case-counting window bias, or the Lyons-Weiler/Fenton/Neil effect. This arbitrary exclusion creates an artificial gap in the data, eliminating cases in which the vaccine’s effectiveness might be less pronounced because the immune response is still incomplete. By omitting these cases, the study inflates the apparent effectiveness of the vaccine and distorts the reality of short-term protection. This bias misguides policymakers, leading to overly optimistic assessments of vaccine performance during the critical period immediately following vaccination. Lyons-Weiler found that the COVID-19 mRNA jabs were only 75%, not 95%, efficacious; Fenton and Neil found that the delay in counting cases could produce high apparent efficacy even for a coin toss.
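The coin-toss point is easy to demonstrate. The following toy simulation (a sketch of the general mechanism Fenton and Neil describe, not a re-analysis of this study; the population size, follow-up length, and daily infection probability are all invented) gives two groups identical infection risk, then simply refuses to count cases in one group during a 28-day window while still crediting that group's person-time. The do-nothing "vaccine" comes out looking substantially protective, and the inflation grows with the ratio of window length to follow-up length.

```python
import random

random.seed(0)

N = 100_000        # people per group (invented)
DAYS = 60          # follow-up per person, in days (invented)
WINDOW = 28        # counting window: cases in the first 28 days are dropped
P_INFECT = 0.002   # identical daily infection probability in BOTH groups

def counted_cases(window):
    """Record each person's first infection, except that infections
    falling inside the counting window are never recorded, while the
    person-time is still attributed to the group in full."""
    cases = 0
    for _ in range(N):
        for day in range(DAYS):
            if random.random() < P_INFECT:
                if day >= window:       # cases inside the window vanish
                    cases += 1
                break                   # first infection ends follow-up
    return cases

unvax_cases = counted_cases(0)         # unvaccinated: counted from day 0
vax_cases = counted_cases(WINDOW)      # same risk, but a 28-day blackout

rr = vax_cases / unvax_cases
print(f"apparent efficacy of a do-nothing 'vaccine': {100 * (1 - rr):.1f}%")
```

With these (arbitrary) parameters the placebo shows an apparent efficacy of roughly the fraction of cases that fall inside the window, despite both groups facing exactly the same hazard on every day.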
Unverified Vaccination Status
Additionally, the study’s determination of vaccination status relies solely on electronic health record (EHR) data. This approach does not account for individuals who may have been vaccinated outside the care network, resulting in misclassification bias. Without cross-referencing state or national immunization registries, the accuracy of the vaccinated and unvaccinated classifications is questionable. Such inaccuracies could significantly skew the study’s findings and undermine its credibility for informing vaccination strategies.
Model Overfitting and Subjective Selection of Model and Covariates
The study also suffers from overadjustment and subjective selection of covariates. Variables such as prior testing behavior, healthcare utilization, and comorbidities are included in the analysis, despite some of these lying on the causal pathway between vaccination and long COVID outcomes. Adjusting for these variables risks obscuring the true effects of vaccination. Moreover, the lack of formal model selection techniques, such as stepwise regression, to justify covariate inclusion raises concerns about the robustness of the results. This subjectivity introduces the possibility of overfitting, which could lead to misleading conclusions about vaccine efficacy and safety.
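The danger of adjusting for variables on the causal pathway can be shown with a minimal simulation (all probabilities invented; vaccination is randomized here purely for illustration, unlike the observational study). Vaccination lowers infection risk, infection drives long COVID, and there is no other pathway. The unadjusted comparison correctly shows a protective total effect, but "adjusting" for infection status by stratifying on it erases the effect entirely, because infection is a mediator, not a confounder.

```python
import random

random.seed(1)

N = 200_000
rows = []
for _ in range(N):
    v = random.random() < 0.5                # vaccination (randomized toy)
    p_inf = 0.04 if v else 0.10              # vaccine lowers infection risk
    i = random.random() < p_inf
    p_lc = 0.15 if i else 0.01               # long COVID driven by infection
    lc = random.random() < p_lc
    rows.append((v, i, lc))

def risk(subset):
    """Long COVID risk within a subset of (vaccinated, infected, longcovid)."""
    return sum(lc for _, _, lc in subset) / len(subset)

vax = [r for r in rows if r[0]]
unvax = [r for r in rows if not r[0]]

# Total (unadjusted) effect: protective, as built into the simulation.
print("total effect:", risk(vax), "vs", risk(unvax))

# "Adjusting" for the mediator by stratifying on infection status:
vax_inf = [r for r in vax if r[1]]
unvax_inf = [r for r in unvax if r[1]]
print("among infected:", risk(vax_inf), "vs", risk(unvax_inf))
```

Within the infected stratum the two groups show essentially identical long COVID risk, so an analyst who conditions on infection would wrongly conclude the vaccine does nothing, even though its total effect on long COVID is strongly protective. Conditioning in the other direction, on poorly chosen covariates, can manufacture effects just as easily.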
“Collider Bias” Instead of Vaccine x Infection Interaction Term
The authors themselves point to another critical issue: the introduction of collider bias in the mediation analysis. By conditioning on infection status, the study creates a spurious association between vaccination and long COVID outcomes. Since both vaccination and long COVID risk influence infection status, the analysis inadvertently distorts estimates of both direct and indirect effects. This flaw challenges the validity of the study’s claims about the pathways through which vaccines reduce long COVID risk.
The study could have examined vaccine × infection and infection × vaccine interaction terms to address this; analysts are not helpless in the face of additional variables when the timing of events is known.
Further, given Jacques Fantini’s work, the effect of antibody-dependent disease enhancement could have been studied, especially if patients had been included from day 1 of exposure to the mRNA jabs. Giving up by labeling covariates “colliders” is not useful. Why was vaccination not considered the collider?
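The collider mechanism itself is worth seeing concretely. In this toy simulation (all parameters invented, not taken from the study), vaccination has no effect whatsoever on long COVID, which is driven entirely by an unmeasured susceptibility factor; both vaccination and susceptibility influence whether a person gets infected. Conditioning on infection, as the mediation analysis does, manufactures a vaccination/long-COVID association out of nothing.

```python
import random

random.seed(2)

N = 300_000
among_infected = []   # (vaccinated, long_covid) for infected people only
for _ in range(N):
    v = random.random() < 0.5      # vaccination (no effect on long COVID)
    u = random.random() < 0.3      # unmeasured susceptibility factor
    # Infection depends on BOTH vaccination and susceptibility,
    # so infection status is a collider between v and u.
    p_inf = 0.02 + (0.0 if v else 0.10) + (0.10 if u else 0.0)
    if random.random() < p_inf:
        p_lc = 0.20 if u else 0.02     # long COVID driven only by u
        among_infected.append((v, random.random() < p_lc))

def lc_risk(vaccinated):
    grp = [lc for v, lc in among_infected if v == vaccinated]
    return sum(grp) / len(grp)

# Any nonzero difference here is spurious: v has no causal path to
# long COVID except through the conditioning on infection itself.
print("vaccinated, infected:  ", lc_risk(True))
print("unvaccinated, infected:", lc_risk(False))
```

Among the infected, vaccinated individuals are enriched for the hidden susceptibility factor (they needed some reason to get infected despite the lower infection probability), so they show markedly higher long COVID risk even though the vaccine was given no causal effect on long COVID at all. The direction and size of such collider artifacts depend entirely on the hidden structure, which is exactly why conditioning on infection is hazardous.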
Heterogeneity of “COVID” and Diagnostic Definitions
The stratification of results into Delta and Omicron periods assumes homogeneity within each variant phase, overlooking the evolutionary changes within variants and the overlapping circulation of sublineages. This assumption introduces additional variability that the study fails to account for, potentially leading to erroneous conclusions about vaccine effectiveness during these periods. Such errors, taking the form of both unaccounted-for heterogeneity and temporal confounding, are particularly concerning given the importance of understanding variant-specific vaccine performance.
The definition of long COVID based on a “computable phenotype” also warrants scrutiny. Relying on diagnostic codes and symptom clusters introduces variability, particularly in pediatric populations, where symptoms often overlap with other conditions. Furthermore, underdiagnosis in unvaccinated groups due to disparities in healthcare access likely distorts the observed effectiveness estimates, skewing the results in favor of vaccinated individuals. This misclassification undermines the study’s claims and diminishes its value in addressing long COVID concerns.
Exclusion Bias
Finally, the exclusion of individuals vaccinated later in the study period disproportionately represents early adopters, who may differ systematically from later cohorts. This limits the generalizability of the findings to broader populations and fails to capture the full spectrum of vaccine uptake behaviors. The implications of such selective sampling are significant, as they may lead to policy recommendations that are less effective for diverse populations.
The implications of these methodological shortcomings are compounded by a recent tweet from one of the study’s authors, which overstates the findings and fails to acknowledge the critical limitations. The tweet claims that vaccination resulted in a 95.4% reduced risk of long COVID during the Delta period and a 75.1% reduced risk during the Omicron period for adolescents, with similarly high reductions reported for children during the Omicron period. However, these claims disregard the impact of case-counting window bias, which excludes infections within the first 28 days post-vaccination. By omitting this key limitation, the tweet presents an inflated view of vaccine efficacy, misleading the public and policymakers with a false sense of mathematical certainty.
Moreover, the tweet’s discussion of long COVID’s computable phenotype definition lacks transparency about the diagnostic challenges inherent in such an approach. The variability and potential for misclassification, particularly in pediatric populations, are significant yet unaddressed in the public statement. This omission undermines the validity of the reported reductions in long COVID risk and casts doubt on the reliability of the study’s conclusions.
The tweet further asserts that propensity score weighting and sensitivity analyses effectively adjusted for confounding and residual bias. While these methods are essential, their effectiveness depends on the robustness of the included variables and the formal assumptions underlying the models. The lack of transparency about covariate selection and the potential for over-adjustment raise questions about the validity of these adjustments. The tweet’s failure to address these concerns gives the impression of unqualified confidence in the study’s methodology, which is unwarranted given the identified flaws.
Another area where the tweet oversimplifies the findings is the use of causal mediation analysis to split vaccine effects into direct and indirect components. This analysis relies on subjective assumptions that are difficult to satisfy, such as the absence of unmeasured confounding between the mediator and the outcome. The study’s reliance on healthcare-seeking behavior and infection status as mediators almost certainly violates these assumptions, introducing bias into the results. By not addressing these limitations, the tweet provides an incomplete and potentially misleading interpretation of the mediation analysis.
In conclusion, while the tweet highlights important aspects of the study, it fails to convey the significant methodological limitations and biases that undermine the findings. Such omissions risk misinforming the public and policymakers, particularly when accurate and transparent communication is essential for public trust. The study’s limitations must be acknowledged and addressed in public statements or discussions to ensure a balanced and accurate understanding of its implications.
Given the significance of this study in shaping public health policy, its methodology and analysis must be robust, transparent, and free from systematic biases. The implications of the identified flaws extend beyond academic debate; they misinform policy decisions, erode public trust in science, and ultimately compromise public health outcomes, because the outcomes the study predicts will not match the public’s real-world experience. I urge the authors and the editorial board to re-evaluate the study’s findings in light of these concerns and to consider publishing an addendum, erratum, or reanalysis to address these issues. For example, they could show us what happens when coin-toss assignment of infected vs. uninfected and vaccinated vs. unvaccinated status is applied to data of the same size, as a check for the Lyons-Weiler/Fenton/Neil effect.
Ensuring the highest standards of scientific integrity is vital as we navigate the complex challenges of public health research.
Wu Q, Zhang B, Tong J, et al. Real-world effectiveness and causal mediation study of BNT162b2 on long COVID risks in children and adolescents. eClinicalMedicine. 2025;79.
Summary of the issues:
Analysis 1: Baseline Analysis
Finding: Comparison of vaccinated vs. unvaccinated individuals during a specified window post-vaccination.
Bias Identified: Potential for survivor bias and differences in baseline risk profiles.
Analysis 2: Stratification by Variant
Finding: Subdivision of cases into Delta and Omicron periods.
Bias Identified: Overlaps in sublineages and evolutionary dynamics ignored.
Analysis 3: Time-Dependent Effectiveness
Finding: Analysis of effectiveness over time since vaccination.
Bias Identified: Lack of accounting for prior exposure or natural immunity.
Analysis 4: Sensitivity Analysis on 14-Day Window
Finding: Considered individuals vaccinated 14 days prior to infection.
Bias Identified: Potential underestimation of early risk post-vaccination.
Analysis 5: Subgroup Analysis by Age
Finding: Separate effectiveness by age groups.
Bias Identified: Unaccounted comorbidities in older individuals.
Analysis 6: Dose-Dependent Effect Analysis
Finding: Focused on effects after two doses compared to one.
Bias Identified: Did not account for differences in timing or population characteristics.
Analysis 7: Reinfection Analysis
Finding: Comparison of vaccine effectiveness in primary vs. reinfection cases.
Bias Identified: Lack of adjustment for exposure risk variations.