The French Study as a Case Study for U.S. Students of Public Health: Learn How to Engineer a Protective Effect Using Epidemiologic Sleight of Hand
An Expanded Teaching Module for Second-Year MPH and Epidemiology Graduate Students, who can learn how NOT to conduct a study by studying this paper.
The Anatomy of a Methodological Mirage: A Case Study in All-Cause Mortality and mRNA Vaccines in France
This case study centers on a highly publicized observational study published in JAMA Network Open in December 2025. The paper, authored by Semenzato et al., analyzed data from over 28 million French adults aged 18 to 59 years and concluded that recipients of COVID-19 mRNA vaccines had a significantly lower risk of all-cause mortality over a 4-year period. Their main finding: a weighted hazard ratio (wHR) of 0.75, implying a 25% reduction in mortality among vaccinated individuals.
On its face, this finding may appear reassuring and well-supported, and this is precisely how the press is representing the result. However, this educational module will walk you through the methodological structure that guaranteed and produced this result. We will identify where key epidemiologic assumptions were violated, where inappropriate or misleading analytic techniques were applied, and where causal interpretation became untethered from the underlying data.
This module includes precise definitions of core terms, clarifies assumptions the original text made about reader knowledge, and integrates structured guidance for learning causal inference principles in applied epidemiology.
Background: What Is the Role of Epidemiology in Public Health?
The core purpose of epidemiologic research is to estimate the effect of an exposure on an outcome in a way that supports causal inference. In observational data, this requires tools that simulate some of the features of randomized experiments—such as controlling for confounding, aligning time zero across comparison groups, and defining the correct target estimand (the thing being estimated).
Definition: An estimand is the specific quantity a study attempts to estimate. It is defined by:
- The population of interest
- The exposure being compared
- The outcome being measured
- The time period of interest
- The method of causal comparison (e.g., ATE, ATT)
When studying the effect of COVID-19 vaccination on all-cause mortality, a valid epidemiologic analysis should estimate what would happen, on average, in the population if all individuals were vaccinated versus if all were not. That estimand is called the Average Treatment Effect (ATE). However, the French study does not target the ATE. Instead, it estimates an effect anchored to the unvaccinated group: strictly, the Average Treatment effect in the Untreated (ATU), which this module, following common shorthand, calls ATT. That is a narrower estimand, and one far more sensitive to the composition of the unvaccinated base population.
From the outset, then, the analysis was aimed at demonstrating a desired "fact", even though that fact is fragile to the assumptions built into the estimand.
We will now walk through each structural component of the study and explain how it was aligned—intentionally or unintentionally—to yield an artificial protective signal.
Step 1: Survivorship Conditioning and Immortal Time Bias
Definition: Immortal time bias occurs when the definition of exposure requires that participants survive a certain period of time. It is a critical error in observational studies in which a portion of the follow-up time (the "immortal" period) is incorrectly attributed to an exposed group even though the exposure (e.g., treatment) has not yet occurred. During this time, participants must survive in order to receive the treatment, creating a false survival advantage that makes treatments appear more effective than they are.
This can produce falsely protective associations if not properly accounted for.
In this study, no individual—vaccinated or unvaccinated—was included in the analysis until six months after their assigned index date. For vaccinated individuals, the index date was the actual date of first mRNA vaccination. For unvaccinated individuals, it was a synthetic date (see Step 2). By design:
Any vaccinated individual who died in the first 6 months after vaccination was excluded.
Any unvaccinated individual who died within 6 months of their assigned index date was also excluded.
By design, then, near-exposure mortality was excluded from this study. A six-month blackout before events begin to count is among the longest such windows we have seen, and delayed event counting on this scale is itself a major source of bias in studies structured to miss early harm.
The result is a dataset that begins with only survivors, not full cohorts. By conditioning on survival in the early post-exposure period, the study removes the time window in which acute adverse events (e.g., cardiovascular or neurological events) may be most likely to occur.
This is not a random design quirk. It removes exactly the portion of the timeline most relevant to evaluating vaccine-related mortality.
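A minimal simulation makes the point concrete. All hazards below are hypothetical: the exposed group is given a genuinely elevated death rate in the first 183 days, yet once both groups are restricted to 6-month survivors, as in the study's design, the harm signal disappears entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
followup = 4 * 365          # days of follow-up after the index date
base_hazard = 0.00002       # baseline daily death hazard (hypothetical)

# Unexposed: constant baseline hazard throughout follow-up.
t_unexp = rng.exponential(1 / base_hazard, n)

# Exposed: tripled hazard for the first 183 days (acute harm), then baseline.
acute = rng.exponential(1 / (3 * base_hazard), n)
t_exp = np.where(acute < 183, acute, 183 + rng.exponential(1 / base_hazard, n))

def risk(t, start=0):
    """Probability of death before end of follow-up, among those alive at `start`."""
    alive = t >= start
    return np.mean(t[alive] < followup)

rr_full = risk(t_exp) / risk(t_unexp)             # full cohorts
rr_cond = risk(t_exp, 183) / risk(t_unexp, 183)   # 6-month survivors only

print(f"risk ratio, full cohort: {rr_full:.2f}")      # > 1: acute harm visible
print(f"risk ratio, survivors only: {rr_cond:.2f}")   # ≈ 1: harm erased
```

The exclusion does not merely shrink the harm estimate; it deletes the only window in which this simulated harm exists.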
Step 2: Synthetic Timing and Calendar-Time Misalignment
Unvaccinated individuals in the study were not assigned a common calendar index date (e.g., November 1, 2021). Instead, the authors drew a synthetic index date for each unvaccinated person, sampled from the real distribution of vaccination dates among the vaccinated.
Why this matters:
It detaches the unvaccinated group from real viral waves, public mandates, and societal behaviors.
It artificially aligns their “exposure windows” to a distribution they never experienced.
If fraud were intended, this design would permit deliberate selection of dates from the distribution to bias the result. (One safeguard would be to run, say, 10,000 analyses with random date assignment and report the median and full distribution of estimates.)
This step does not restore temporal coherence to the analysis. A cohort-based analysis must preserve the real-world timing of exposure and outcome risks. This study replaces that logic with statistical symmetry, undermining biological plausibility.
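The resampling safeguard described above can be sketched as follows. Here `estimate_hr` is a hypothetical stand-in for the study's full weighted analysis, and every number in it is an illustrative assumption; the point is the machinery of repeating the date assignment many times and reporting a distribution rather than a single, cherry-pickable draw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-dose dates (days since rollout) among the vaccinated.
vax_dates = rng.integers(0, 365, size=10_000)

def estimate_hr(unvax_index_dates):
    """Toy stand-in for the study's weighted Cox pipeline: returns a noisy
    hazard ratio so the resampling loop has something to summarize.
    All constants here are invented for illustration."""
    drift = 0.001 * (unvax_index_dates.mean() - vax_dates.mean())
    return 0.90 + drift + rng.normal(0, 0.02)

# One analysis per random assignment of synthetic index dates.
hrs = np.array([
    estimate_hr(rng.choice(vax_dates, size=5_000, replace=True))
    for _ in range(1_000)
])

median_hr = np.median(hrs)
lo, hi = np.percentile(hrs, [2.5, 97.5])
print(f"median HR {median_hr:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```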
Step 3: Restriction to Healthcare-Engaged Individuals
Eligibility required at least one reimbursed healthcare claim in 2020. While this ostensibly ensures complete follow-up data, it also selectively removes the healthiest—and most system-disengaged—members of the French population.
This is important because vaccine hesitancy strongly correlates with medical disengagement, distrust, and reduced healthcare usage. By excluding these individuals, the study underrepresents the portion of the unvaccinated population most likely to be both unexposed and low-risk.
This introduces selection bias: the resulting unvaccinated group is not a population analog, but a healthcare-engaged subgroup that may be at higher baseline risk.
Step 4: Misuse of ATT Weighting Instead of ATE
Definitions:
ATT (Average Treatment effect in the Treated; when anchored to the untreated instead, the ATU) estimates what would have happened to that specific group had it received the opposite exposure.
ATE (Average Treatment Effect) estimates what would happen if everyone in the population were assigned to each treatment level.
The study authors chose ATT-style weighting, setting the unvaccinated group as the reference population. Vaccinated individuals are then reweighted to look like the unvaccinated group on observed characteristics.
This makes the estimate highly sensitive to the structure of the unvaccinated group. And as shown in the study’s own tables, this group is:
More deprived (social index quintile 5 overrepresented)
More likely to smoke or use alcohol
More likely to be hospitalized for COVID-19
Thus, ATT anchoring means vaccinated people are being compared to a structurally higher-risk baseline. Any difference will appear protective by construction.
The study never justifies this choice. The goal of public health vaccine policy is the population-level effect, not subgroup-specific counterfactuals.
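The two weighting schemes can be contrasted in a toy simulation with a single hypothetical confounder (a deprivation score) and known propensities. Under anchoring to the unvaccinated, the vaccinated are dragged to the unvaccinated group's covariate profile; under ATE weighting, both groups are pulled toward the population average.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical confounder: a deprivation score, higher among the unvaccinated.
x = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(0.8 * x))        # more deprived -> less likely vaccinated
treated = rng.random(n) < p          # vaccination indicator

# ATE weights: BOTH groups are reweighted toward the whole population.
w_ate = np.where(treated, 1 / p, 1 / (1 - p))

# The study's anchoring: vaccinated reweighted to resemble the UNVACCINATED,
# who are kept as-is (weight 1).
w_anchor = np.where(treated, (1 - p) / p, 1.0)

def wmean(mask, w):
    return np.average(x[mask], weights=w[mask])

mean_untreated = x[~treated].mean()
mean_anchor = wmean(treated, w_anchor)   # matches the untreated profile
mean_ate = wmean(treated, w_ate)         # matches the population profile

print(f"untreated mean deprivation: {mean_untreated:.2f}")
print(f"treated, anchored weights:  {mean_anchor:.2f}")
print(f"population mean:            {x.mean():.2f}")
print(f"treated, ATE weights:       {mean_ate:.2f}")
```

Anchoring to one group can be a legitimate choice, but the reference population should be stated and justified, because it determines whose covariate profile, and whose baseline risk, drives the comparison.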
Step 5: Overadjustment and Collider Stratification
Definition: A collider is a variable that is influenced by both the exposure and the outcome. Conditioning on it creates non-causal associations.
In addition to standard covariates (age, sex), the study adjusts for over 40 additional variables. Some of these are valid confounders. Others are post-treatment variables or colliders:
Cancer surveillance (may be affected by vaccination-induced healthcare contact)
Use of psycholeptic drugs (may be associated with both vaccine uptake and mortality)
Reimbursement indicators (can signal both exposure likelihood and outcome probability)
Adjusting for such variables introduces collider bias, which can attenuate or distort true exposure-outcome relationships.
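A short simulation shows the mechanism. Vaccination and death are generated independently (no true effect), but a shared downstream variable, here "healthcare contact," is raised by both; conditioning on it manufactures a spurious protective association. All effect strengths are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Vaccination and death simulated as INDEPENDENT: the true effect is null.
vax = rng.random(n) < 0.5
death = rng.random(n) < 0.02

# Collider: healthcare contact, raised by BOTH vaccination and the illness
# preceding death (hypothetical strengths, for illustration only).
contact = rng.random(n) < (0.2 + 0.3 * vax + 0.4 * death)

def risk_ratio(mask):
    return death[mask & vax].mean() / death[mask & ~vax].mean()

rr_all = risk_ratio(np.ones(n, dtype=bool))
rr_cond = risk_ratio(contact)

print(f"crude risk ratio: {rr_all:.2f}")          # ≈ 1: no true effect
print(f"conditioned on contact: {rr_cond:.2f}")   # spuriously < 1
```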
Step 6: Invalid Use of Negative Control Outcomes (NCOs)
Definition: A Negative Control Outcome is a variable that should not be causally affected by the exposure but shares similar confounding structure. It is used to detect residual bias.
The study uses traumatic injury and unintentional injury as negative controls.
However, these are not neutral. Injury rates are heavily influenced by occupational status, risk-taking behavior, and environmental exposure—all variables unequally distributed between vaccinated and unvaccinated populations. Vaccine effects can include syncope and seizure, mimicking accidental deaths.
When calibrated using these NCOs, the protective effect weakens from HR = 0.75 to HR ≈ 0.83—an implicit admission of unmeasured confounding. But the authors interpret this attenuation as robustness.
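In its simplest form, negative-control calibration subtracts the log of the NCO estimate from the log of the headline estimate. The NCO hazard ratio of 0.90 below is an assumed illustrative value, chosen only so the arithmetic lands near the paper's reported calibrated figure; real calibration methods fit a bias distribution across many controls.

```python
import math

hr_observed = 0.75   # headline weighted hazard ratio
hr_nco = 0.90        # assumed "effect" on an outcome the vaccine cannot cause

# Any nonzero log-HR on the negative control is treated as residual bias
# and removed from the headline estimate on the log scale.
hr_calibrated = math.exp(math.log(hr_observed) - math.log(hr_nco))
print(round(hr_calibrated, 2))   # → 0.83
```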
Step 7: Missing Cause-of-Death Data
Cause-of-death information was only available through December 2023, yet follow-up continued to March 2025. This means over 40% of all deaths in the study are not assigned to a known cause.
This is especially problematic when considering potential delayed adverse effects (e.g., neurological disorders, latent cancers), which may emerge in year 3 or 4. The absence of cause data makes these deaths indistinguishable from baseline noise.
Key Point: When causes are unknown, no claim can be made about what kinds of deaths were or were not increased by vaccination.
Step 8: Declining Effect Size Over Time
While the headline result reports an average hazard ratio over the full follow-up period, stratified analysis by time window shows the effect size weakening with each successive 3-month period. From an HR of 0.61 in the first 6–9 months, it rises to 0.79 by months 39–42.
This is consistent with early protection from COVID-19 mortality, followed by reversion to the population mean. Yet this erosion of benefit is not acknowledged in the abstract, headline, or public interpretation.
Final Diagnosis: A Study That Could Not Have Found Harm
Each design choice narrows the analysis in a way that selectively removes or obscures adverse effects:
Early deaths: excluded
Time misalignment: built in
Comparator group: structured for higher risk
Adjustment set: includes post-treatment mediators
Endpoint causes: missing for late period
Interpretive framing: avoids estimating the population average effect
This is not a study of vaccine safety. It is at best a target trial emulation whose target is structured not to reflect reality, but to simulate a predetermined conclusion.
Educational Guidance for Students
Key Concepts to Master:
Define your estimand clearly: Is your question population-level (ATE) or subgroup (ATT)?
Avoid immortal time and survivorship conditioning unless biologically justified
Preserve calendar-time alignment when using observational cohorts
Use valid negative controls that are conditionally independent
Understand the difference between confounders, mediators, copredictors, and colliders
Discussion Prompts:
1. Would using a fixed index date for unvaccinated individuals change the conclusions?
2. What alternative NCOs might have been used to test for confounding?
3. How might the results differ if deaths within the first 6 months post-vaccine were included?
4. Should studies with incomplete cause-of-death data report on cause-specific mortality at all?
Conclusion: Methodology is not neutral. It encodes assumptions. In public health, these assumptions must be made transparent and open to challenge, especially when policy decisions rest on their outcomes.
This case study offers a sobering reminder: sometimes, the absence of harm in a study is not evidence of safety—but of design that was never equipped to find risk.
Exercise: Redesigning a Study of COVID-19 mRNA Vaccination and All-Cause Mortality
Background
You have reviewed a large national cohort study that reported a 25% reduction in all-cause mortality among individuals aged 18–59 who received COVID-19 mRNA vaccines in France. However, the original study design included structural flaws that compromised its ability to produce unbiased or generalizable causal inference.
These included:
Conditioning on 6-month survival post-vaccination (survivor bias)
Using synthetic index dates for unvaccinated individuals (time misalignment)
Asymmetric inverse probability weighting (ATT estimand instead of ATE)
Inclusion of post-exposure covariates (collider bias)
Inappropriate use of negative control outcomes
Limited cause-of-death data
Assignment Instructions
Your task is to design an improved epidemiologic study to estimate the causal effect of COVID-19 mRNA vaccination on all-cause mortality in a general population. Your design should avoid the biases identified in the original study.
Part 1: Primary Study Design (Required)
Write a concise but detailed outline of a revised study that:
Defines the correct estimand (ATE or another appropriate choice) and justifies your choice.
Specifies clear eligibility criteria that do not condition on post-exposure survival or healthcare access.
Aligns time zero across vaccinated and unvaccinated individuals based on a biologically or policy-relevant calendar date.
Describes appropriate confounder control, distinguishing between baseline covariates and post-treatment variables.
Handles time-varying confounding and vaccine uptake dynamics, if applicable.
Specifies your model choice (e.g., Cox regression with ATE weights, g-methods, targeted maximum likelihood, etc.), and justifies it.
Proposes a sensitivity analysis to assess robustness (e.g., E-values, falsification endpoints, multiple definitions of exposure).
You may use diagrams (e.g., DAGs) to illustrate your logic if submitting electronically.
Part 2: Optional Extra Credit (Advanced Topics)
2a. Generalizability and Transportability
Describe a secondary design that tests whether your primary findings generalize beyond the included cohort. You may:
Stratify by region, socioeconomic status, or healthcare usage
Use bounding approaches or transport estimators
Propose a nested cohort that could serve as a validation sample
2b. Machine Learning for Heterogeneity of Risk
Design a predictive model (you do not need to code it) that could be trained within the vaccinated cohort only to estimate individual risk of all-cause mortality.
Specify:
What features you would include (e.g., age, comorbidities, SES, time since vaccination)
What training/validation design you would use to avoid overfitting and to obtain generalizable estimates of accuracy, sensitivity, specificity, ROC-AUC, etc.
What model class (e.g., gradient boosting, survival forests)
What outcome you would predict (e.g., 1-year or 4-year all-cause mortality)
How you would validate the model and interpret the findings
This part aims to surface subgroups within the vaccinated for whom mortality risk may be elevated—an essential consideration when generalizing “average” treatment effects to real-world practice.
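One hedged sketch of the validation logic follows, on simulated data only: the features, coefficients, and cohort are all hypothetical, and the "model" is an oracle risk score standing in for a fitted gradient-boosting or survival-forest model. It illustrates the held-out split and a ROC-AUC computed directly via the rank-sum (Mann-Whitney) identity.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# Hypothetical vaccinated-only cohort: age and comorbidity count drive
# 4-year mortality (all values simulated; no real data involved).
age = rng.uniform(18, 59, n)
comorb = rng.poisson(1.0, n)
true_logit = -8 + 0.08 * age + 0.5 * comorb
died = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Held-out split so the performance estimate does not reflect overfitting.
test = np.arange(n) >= 15_000

# Stand-in risk score (oracle coefficients for brevity); a real design would
# fit the model on the training split only.
score = 0.08 * age + 0.5 * comorb

def roc_auc(y, s):
    """ROC-AUC via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = roc_auc(died[test], score[test])
print(f"held-out ROC-AUC: {auc:.2f}")
```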
Follow-up questions (required)
Q1. Why is risk stratification within the vaccinated an important and highly relevant question?
Q2. Can you think of any bona fide reason for the coordinated, bias-aligned design choices made by the authors of the French study?
Q3. How would using a fixed calendar index date for both groups affect bias and interpretability?
Q4. Can you propose a DAG (Directed Acyclic Graph) that illustrates how adjusting for post-vaccination healthcare engagement introduces collider bias? What would happen if you adjusted for it anyway?
Q5. If deaths within 6 months post-vaccination were included, how might this change the results? How would you handle early follow-up period biases like reverse causation?
Q6. What would a “negative control population” look like in this setting? Could you use individuals receiving a flu vaccine during the same period? Why or why not?
Q7. How could you use machine learning to model “propensity to die” conditional on pre-vaccination characteristics, and what would be the ethical implications of deploying such models in surveillance?
Q8. Should causal language (“reduces mortality”) ever be used in observational studies that fail to meet assumptions of exchangeability and positivity? Why or why not?
Q9. The article states “Mrs Semenzato had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.” Do you think all authors of such studies should be held equally responsible for the integrity of the data and the accuracy of the data analysis?
Q10. The supplemental material states that the record-level data are not available for sharing. How could studies like these improve their credibility via overt and transparent data sharing?
Q11. Many readers have submitted statements of concern, to some of which the authors responded. After completing this educational exercise and reading the readers’ comments, do you think the authors successfully defended their study?
Q12. Should, in your opinion, the journal retract the study?


