25 Comments
Nov 1, 2022·edited Nov 1, 2022

Lee mistakenly stated that he calculated the FPR. The reviewers did correct it. And Weiler misunderstood the paper and built on that error with his own bad math.


I'm confused.

How can you know prevalence without relying on PCR? What if the estimated percent of false positives is large? Won't that undermine your confidence in prevalence estimates?

Is there some calculation of the rate of change of total positives that can give a reasonable estimate of when actual prevalence peaks? It seems to me that the peak of total positives will be larger than the peak of actual positives, because the false-positive component will inflate the total, and the total-positive peak will occur _after_ the actual-positive peak.

Am I being clear?

author

You have made my point exactly.


Why would false positives affect the timing of detection of peak infection?


False positives move the perceived peak because they increase as true prevalence increases and they last a couple of months.
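A minimal sketch of this lag effect. Every number here is assumed purely for illustration: a hypothetical epidemic wave, plus false positives generated by RNA remnants that persist for roughly two months after infection.

```python
import math

def true_positives(day):
    # Hypothetical epidemic wave peaking at day 50 (assumed shape).
    return 100.0 * math.exp(-((day - 50) / 15.0) ** 2)

def false_positives(day, persistence=60, rate=0.05):
    # Assume a small fraction of each past day's infections still sheds
    # detectable (non-infectious) RNA remnants for ~`persistence` days.
    return rate * sum(true_positives(d)
                      for d in range(max(0, day - persistence), day))

days = list(range(150))
observed = [true_positives(d) + false_positives(d) for d in days]
true_peak = max(days, key=true_positives)
observed_peak = max(days, key=lambda d: observed[d])
print(true_peak, observed_peak)  # the observed peak lags the true peak
```

Because the false-positive term accumulates from past infections, the observed (true + false) curve keeps rising for a while after true prevalence has already turned over.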


False positives would decrease as true positives increase, as they are a % of the negatives.

What you seem to be referring to is the time it takes to clear the viral RNA following infection. Is that really a 'false positive'?


RNA remnants can test as false positives: the test reads positive, but the sample is a true negative as regards infectious virus.


Can you explain how a sample can be both a 'false positive' and a 'true negative'?

And how would you characterize a sample with RNA remnants (from a person recovering from an infection)?


"Can you explain how a sample can be both a 'false positive' and a 'true negative'?"

Same thing.

"And how would you characterize a sample with RNA remnants (from a person recovering from an infection)?"

A sample?

Oct 31, 2022·edited Oct 31, 2022

"Thus, the outcomes of the trials in terms of number of cases in the vaccinated groups and number of cases in the unvaccinated groups are bogus."

Isn't there an even bigger problem with the vaccine efficacy trials, due to false negatives (true positives) being ignored? If a mere 20 false negatives occurred in each arm, vaccine efficacy would approach 50%. The NEJM article about Pfizer vaccine efficacy ignored this problem entirely.

If we knew the total number of PCR tests that were conducted in the Pfizer trial, we could apply an estimate of the false negative rate for this period of time to the trial's number of PCR tests to estimate the number of false negatives (true positives) for each arm.

I would appreciate seeing a post about this.
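A sketch of how undetected cases would shift apparent efficacy, assuming equal person-time per arm. The baseline counts (8 vaccinated cases vs 162 placebo cases) are the published Pfizer trial figures; the added counts `m` are purely hypothetical.

```python
# Apparent vaccine efficacy if `m` cases per arm went undetected
# (false negatives), assuming equal person-time in both arms.

def vaccine_efficacy(vax_cases, placebo_cases):
    return 1.0 - vax_cases / placebo_cases

for m in (0, 20, 100):
    ve = vaccine_efficacy(8 + m, 162 + m)
    print(f"{m} extra cases per arm -> apparent VE {ve:.1%}")
```

Under these assumptions, missed cases added equally to both arms pull the efficacy estimate toward zero, and the effect grows with the number missed.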

author

That's an article that's coming showing they knew the PCR primers were failing.

You can go to the back of the classroom now. :)


False negatives are not the same as true positives. It should be obvious that false negatives are correlated with the number of positives (and true positives), not with the number of people tested.

Oct 31, 2022·edited Oct 31, 2022

Can you explain how you calculated the FPR, if you only know the FDR?
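For what it's worth, the algebra here is that an FDR alone does not determine an FPR; you also need an assumed prevalence among those tested and a test sensitivity. A sketch, with every input below an illustrative assumption rather than a figure from this thread:

```python
# FDR = FP / (FP + TP), so FP = TP * FDR / (1 - FDR).
# With TP = prevalence * sensitivity per test and
# FPR = FP / (share of truly negative samples):

def fpr_from_fdr(fdr, prevalence, sensitivity):
    true_pos = prevalence * sensitivity          # true positives per test
    false_pos = true_pos * fdr / (1.0 - fdr)     # from the FDR definition
    return false_pos / (1.0 - prevalence)        # false positives per negative

# e.g. 37% FDR, 1% prevalence among those tested, 80% sensitivity:
print(f"{fpr_from_fdr(0.37, 0.01, 0.80):.4%}")
```

Change the assumed prevalence or sensitivity and the implied FPR changes with it, which is the point: the conversion is only as good as those assumptions.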


Shouldn't TP go to 1 when prevalence is 1 (as opposed to 0.7)?


https://jdee.substack.com/p/testing-testing-part-1

See the last slide. "Cases per 100 PCR tests" range from about 1 to 25 in the UK over time. Could someone do some edge-case calculations there to estimate the range of false discovery rates?
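A rough edge-case sketch for those figures: if false positives run at a fixed rate per test, the false discovery rate is roughly that rate divided by test positivity. The 0.5% FPR used here is an assumed value for illustration, not a measured one.

```python
ASSUMED_FPR = 0.005  # hypothetical false positives per test (assumption)

for positivity in (0.01, 0.05, 0.25):  # ~1 to 25 cases per 100 tests
    fdr = min(1.0, ASSUMED_FPR / positivity)
    print(f"positivity {positivity:.0%} -> approx FDR {fdr:.0%}")
```

Under this assumption, the same test swings from positives being mostly false at 1% positivity to almost entirely true at 25% positivity, which is why prevalence matters so much to these estimates.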


Watching COVID positivity never go much lower than 2% in the testing data (or was it 1%? I haven't looked it up) made me think that the 2 percent was mostly false positives. Just eyeballing it.


So the efficacy figures in the trials themselves were nonsense. But what about the Israeli study that was held up as definitive proof of efficacy? Can one walk through that and approximate how far off it could have been? I know Crawford at Rounding the Earth has pointed out that, given the threat of selection bias, the efficacy in that study may have been nothing but smoke and mirrors, but if we take this testing issue at face value, what would it say?


The point that FP rates (and related quantities) are affected by prevalence is an important one (and one that I teach when I cover those concepts!). This shows why it matters so much.

Nov 1, 2022·edited Nov 2, 2022

BMJ published a calculator showing this issue way back in March 2020...

https://www.bmj.com/content/369/bmj.m1808/rr-22


Thanks. Learned a lot!


No, the Marines study does not show a 37% FDR. It said that '95% complete viral genomes' were not obtained from 37% of samples - not the same thing.

But this does introduce an interesting question: how do you define a 'positive'? How complete a viral genome is needed to call a sample a 'true positive'? Do you need 95%, or is 80% enough?

But if we accept your assumption, and the 37% FDR holds for the initial testing of all 1847 participants, this study would indicate a maximum FPR of 0.33% - that out of every 1000 negative samples, about 3.3 would return a positive result. We have little to worry about: false positives will not significantly increase the positive detection rate. Even at a positive detection rate of 3%, true positives will outweigh false positives by roughly 10:1.
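A back-of-envelope check of those numbers. The study's initial positive count is not given in this thread; the 16 used below is inferred only to make the stated 37% FDR and ~0.33% FPR mutually consistent, and is an assumption, not a figure from the study.

```python
participants = 1847
positives = 16            # assumed; inferred from the figures above
assumed_fdr = 0.37

false_pos = assumed_fdr * positives       # implied false positives (~5.9)
negatives = participants - positives
fpr = false_pos / negatives               # false positives per negative sample
print(f"implied FPR ~ {fpr:.2%}")
```

With any positive count in that vicinity, the implied FPR stays in the low tenths of a percent, which is the basis of the claim that false positives cannot dominate once test positivity is a few percent.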
