The Illusion of Sufficient Oversight: How an Article on 260 Randomized Clinical Trials Failed to Ask the Most Basic Question
Why would TrialSite News be so unskeptical about vaccine studies after all of the evidence of study manipulation?
Because no one told us anything about study designs and we fill the vacuum with assumptions?
In June 2025, TrialSite News published an article titled:
“Comprehensive Review Lists Over 260 Active Control or Placebo-Controlled Vaccine Trials.”
The article reviews an analysis provided by Dr. Brad Spellberg, Chief Medical Officer at Los Angeles General Medical Center, Dr. Jake Scott, and a team of collaborating physicians on mysite.com.
TrialSite News should be applauded for finally acknowledging the need for long-term safety studies and the reality that vaccine injuries do occur. That’s more honesty than we typically see from most media outlets aligned with establishment public health.
However, the analysis is presented as a definitive response to critics of vaccine trial transparency, and both the analysis and the TSN article imply that the mere presence of hundreds of placebo-controlled studies should silence all concerns about the rigor and ethics of vaccine science.
Behind the numbers lies a serious omission — one that any thoughtful journalist, scientist, or regulator must ask before drawing conclusions about safety or efficacy:
“How many of these trials were large enough to detect harm?”
The article never asks. It never answers. And that silence carries consequences.
To their credit, those who compiled the data on 260 studies shared their organized file, which permitted our analysis.
What Is Statistical Power, and Why Does It Matter?
In clinical research, statistical power refers to the probability that a study will detect an effect if that effect truly exists.
Specifically:
For efficacy, the goal is to detect whether the vaccine prevents disease at a rate greater than chance.
For safety, the goal is to detect whether the vaccine causes harm at a rate greater than chance.
But statistical power depends heavily on sample size. If the study is too small, even real harms will not show up in the data. This is particularly crucial when the potential harm is infrequent but serious, such as myocarditis, blood clots, or autoimmune disorders.
How Small is Too Small?
Let’s say an adverse event (e.g., myocarditis) occurs in 1 out of every 100 vaccine recipients — that’s a 1% incidence rate. If a trial includes only 100 people in the vaccinated group, there is a 37% chance it won’t detect even a single case.
To detect that 1% rate with 95% confidence, you need at least 299 participants in the vaccinated group. Anything less, and you're running a high risk of a false negative — concluding the vaccine is safe when in fact it may not be.
This is not a controversial assertion. It’s a consequence of the binomial distribution, the core probability model underlying all binary outcomes in clinical research.
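The 37% and 299 figures above follow directly from the binomial model and can be checked with a few lines of Python (function names here are illustrative, not from the original analysis):

```python
def prob_detect_at_least_one(p: float, n: int) -> float:
    """Probability of observing at least one adverse event among n
    participants when the true per-person event rate is p.
    From the binomial model: 1 - P(zero events) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# A 1% event rate in a 100-person vaccinated arm:
p_miss = 1.0 - prob_detect_at_least_one(0.01, 100)
print(f"Chance of seeing zero cases: {p_miss:.0%}")  # about 37%

# At n = 299 the detection probability first reaches 95%:
print(f"Detection at n=298: {prob_detect_at_least_one(0.01, 298):.4f}")
print(f"Detection at n=299: {prob_detect_at_least_one(0.01, 299):.4f}")
```

Running this confirms that a 100-person arm misses a 1% harm more than a third of the time, and that 299 is the smallest arm size at which the detection probability crosses 95%.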
What Does the Article Actually Show?
TrialSite News presented a list of 260+ trials that it claimed were either active-controlled or placebo-controlled. But a detailed inspection of their own dataset — which includes sample sizes — shows something quite different:
Put simply: nearly 40% of the trials in their own list cannot reliably detect a 1% adverse event. That’s not a statistical footnote — it’s a structural failure.
Let’s look at this graphically:
Figure: Most Trials Cannot Detect Harms Even at 1–5% Frequency
This chart illustrates the minimum sample sizes required to detect adverse event rates with 95% confidence. Overlaid are the percentages of trials in TrialSite News's list that fall below each threshold — highlighting that 40% of trials are too small to detect a 1% harm, and nearly 20% can’t even detect a 3% harm. These are not edge cases. They are systemic.
Why This Matters: The Ethics of Detecting Harm
Historically, safety has been treated as an afterthought in most vaccine studies. A placebo-controlled design is only ethical if the trial is capable of meaningfully detecting both benefit and harm. In early phases, smaller studies may be justifiable to assess immunogenicity or short-term responses. But once safety is on the table, underpowered trials create a dangerous illusion of reassurance.
They give us:
No meaningful insight into rare adverse events
False claims of safety
Regulatory decisions made on blind spots
And when those trials are aggregated — as they are in the TrialSite article — without statistical context or critical interpretation, the harm is multiplied.
Definitions That Matter
Let’s pause to define the key terms often misused or misunderstood in this discussion:
Randomized Controlled Trial (RCT): A study design in which participants are randomly assigned to receive either the intervention (e.g., vaccine) or a comparator (e.g., placebo or another treatment).
Control Group: The group that does not receive the active intervention. This can be:
Placebo (Inert): A biologically inactive substance.
Active Comparator: A different vaccine or drug.
Adjuvanted Control: A vaccine or injection that contains immune-stimulating compounds but no active disease-targeting agent — sometimes misleadingly labeled "placebo."
Sample Size (N): The number of individuals in a trial group. This directly determines the trial’s power to detect real effects or harms.
Adverse Event (AE): Any negative health event following an intervention, whether causally related or not. Serious AEs include hospitalization, disability, or death.
Number Needed to Harm (NNH): The number of individuals who must receive the treatment, on average, before one additional person experiences harm attributable to the intervention. A concept related to Number Needed to Treat (NNT), but for safety.
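NNH follows directly from the absolute risk increase between trial arms. A minimal sketch, using purely hypothetical event rates chosen for illustration:

```python
def number_needed_to_harm(risk_treated: float, risk_control: float) -> float:
    """NNH = 1 / absolute risk increase (ARI).
    Only meaningful when the treated arm shows more harm than the control arm."""
    ari = risk_treated - risk_control
    if ari <= 0:
        raise ValueError("No excess harm observed in the treated arm")
    return 1.0 / ari

# Hypothetical: a 1.0% AE rate in the vaccine arm vs 0.2% in the
# placebo arm gives an ARI of 0.8%, i.e. roughly one harm per 125 doses.
print(round(number_needed_to_harm(0.010, 0.002)))  # 125
```

Note that an NNH computed from an underpowered trial inherits that trial's blind spots: if the arm was too small to observe the event at all, the ARI appears to be zero and no NNH can be calculated.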
What Journalism Owes the Public
Quality and objective journalism requires more than compiling lists and quoting statistics.
It requires:
Asking whether the trials were capable of answering the questions they set out to answer.
Scrutinizing definitions (what constitutes the placebo is a good start, there are other questions, see below).
Reporting sample size distributions, power limitations, and trial endpoints.
None of that is present in the article. What we get instead is an attempt to defuse criticism by citing volume: “260 trials with control groups — see? Everything’s fine.”
But volume is not validity. Presence of a control group does not equal protection of the public.
TSN’s acknowledgment of the existence of trials is not enough. Trials must be adequately designed to detect harm, powered to reveal rare outcomes, and independently reviewed to avoid bias. TSN fails to examine any of these parameters. And in doing so, it inadvertently reinforces the very illusion of oversight it claims to dismantle.
Here’s a structured list of the key questions a rigorous journal, regulator, IRB, or journalist should ask about each trial, beyond power — organized by domain:
I. Methodological Integrity
Is the control group truly inert?
Was the comparator a saline placebo, or was it an adjuvanted or active comparator?
Was it biologically active in any way that could mask harms?
What are the inclusion/exclusion criteria?
Were participants screened to exclude those most at risk of adverse reactions (e.g., autoimmune history, prior vaccine injury)?
How representative are the participants of the general population?
Was the vaccine given to people in the general population who were never included in the “randomized” clinical trials?
What is the trial phase and objective?
Was it designed to assess safety, efficacy, immunogenicity, or durability?
Was long-term safety a primary endpoint, or simply a secondary afterthought?
How were outcomes defined and measured?
Were adverse events pre-specified and rigorously defined?
Was there an active surveillance mechanism (e.g., follow-up calls, health records), or passive reporting?
How was the blinding done?
Was the blinding single, double, or not at all?
Could the design have led to functional unblinding due to different side effect profiles?
II. Safety Signal Sensitivity
What was the duration of follow-up?
Did it include long enough post-intervention monitoring to detect delayed effects (e.g., autoimmunity, neurological onset)?
How were serious adverse events (SAEs) adjudicated?
Independent review board? Sponsor-controlled review?
Were the standard Phase I, II, and III trials used to catalog potential adverse events and then confirm them?
Was the placebo group vaccinated shortly after the end of the trial, aborting any chance to study long-term effects of vaccination on health?
Was causality assessed, or was “not related” assumed unless proven?
Were all adverse events published?
Or were they selectively reported or aggregated (e.g., “non-specific symptoms”)?
Were rare but serious adverse events tracked over time or across studies (meta-signal analysis)?
III. Statistical and Reporting Practices
Were the trial’s primary and secondary endpoints pre-registered?
Was there outcome switching?
Were statistical methods defined in advance?
Were confidence intervals and absolute risk reductions reported?
Or only relative risk, which can exaggerate perceived benefits?
Was sample size justified with a formal power calculation for both benefit and harm?
Was attrition high?
Were dropouts differentially distributed between groups?
Was there an intention-to-treat analysis?
How was missing data handled?
Was imputation used?
Were sensitivity analyses performed?
IV. Ethical Transparency
Were participants fully informed of trial risks?
Did informed consent mention potential for serious adverse events?
Who funded the trial?
Was there sponsor involvement in protocol design, data collection, or analysis?
Was the trial ethically justified given the existence of effective alternatives?
Placebo controls are not ethically appropriate when known safer alternatives exist.
Was the data shared or accessible to independent researchers?
Did the trial result in regulatory action or label updates?
V. Generalizability and External Validity
Were children, elderly, or comorbid populations included?
If not, can findings be generalized?
Were important subgroups analyzed (e.g., by sex, age, comorbidity)?
Was the trial geographically and ethnically diverse?
Or is it biased toward certain regions/populations?
VI. Contextual Integrity
Was there prior evidence of harm for similar products?
Were known risk domains (e.g., spike protein expression, autoimmunity, vector persistence) addressed?
Was the trial compared or integrated with real-world evidence?
Do post-marketing surveillance data contradict the trial conclusions?
Was the cumulative risk across multiple doses considered?
Was this a single-dose or part of a multi-dose regimen?
Conclusion: Oversight Requires Asking the Right Questions
The presence of a control group does not make a study ethical or informative. If the sample size is too small to detect a 1% adverse event — or even a 3% one — then the trial is not evidence of safety. It is a structural blindfold.
And when journalism amplifies such trials without context, it does not elevate the scientific conversation. It dulls it. It props up a system that confuses motion for progress and methodology for meaning.
TrialSiteNews had a chance to present a nuanced, critically informed view of vaccine trial design.
Instead, they gave us a list — and called it proof.
Postscript: For the Statistically Curious
To calculate the probability of detecting at least one adverse event in n participants when the true rate is p, we use the binomial model:

P(at least one event) = 1 − (1 − p)^n

Solving for n when P ≥ 0.95 gives:

n ≥ ln(0.05) / ln(1 − p)

which yields the following minimum sample sizes:

1% event rate: N ≥ 299
3% event rate: N ≥ 99
5% event rate: N ≥ 59
If a trial’s N falls below these thresholds, then by definition, it cannot reliably detect harm at that rate.
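The thresholds can be reproduced by inverting the detection formula and rounding up to the next whole participant; a short Python sketch:

```python
import math

def min_n_for_detection(p: float, confidence: float = 0.95) -> int:
    """Smallest arm size n such that P(at least one event) >= confidence.
    From 1 - (1 - p)^n >= confidence, i.e.
    n >= ln(1 - confidence) / ln(1 - p)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

for rate in (0.01, 0.03, 0.05):
    print(f"{rate:.0%} event rate -> N >= {min_n_for_detection(rate)}")
# 1% -> 299, 3% -> 99, 5% -> 59
```

Any trial arm smaller than these values is, by construction, more than 5% likely to record zero events even when the harm is real.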
That’s not ideology. That’s analytics. We need a 360-degree view of vaccine safety science: a national vaccine study dashboard everyone can see, one that also educates its users. All of our health is influenced by these mass-vaccination programs, and we all deserve the chance to truly understand these issues if we choose.