Lee mistakenly stated that he calculated the FPR. The reviewers did correct it. And Weiler misunderstood the paper and built on that error with his own bad math.
I'm confused.
How can you know prevalence without relying on PCR? What if the estimated percent of false positives is large? Won't that undermine your confidence in prevalence estimates?
Is there some calculation of the rate of change of total positives that can give you a reasonable estimate of when actual prevalence peaks? It seems to me that the peak of total positives will be larger than the peak of actual positives, because the false positive component will inflate the total positive peak, and the total positive peak will occur _after_ the actual positive peak.
Am I being clear?
You have made my point exactly.
Why would false positives affect the timing of the detected peak of infection?
False positives move the perceived peak because they increase as true prevalence increases and they last a couple of months.
False positives would decrease as true positives increase, as they are a % of the negatives.
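The arithmetic behind this exchange is easy to sketch. A minimal model (the 1% FPR, 85% sensitivity, and the bell-shaped prevalence curve below are illustrative assumptions, not figures from this thread) shows that with a *constant* FPR, total positives peak in the same week as true prevalence, because the false-positive component shrinks as prevalence rises:

```python
import math

# Illustrative assumptions (not figures from the thread):
fpr, sens = 0.01, 0.85
weeks = range(20)
# True prevalence: a simple bell curve peaking at week 10.
prevalence = [0.10 * math.exp(-((w - 10) / 3) ** 2) for w in weeks]

totals = []
for p in prevalence:
    tp = sens * p        # true positives per test
    fp = fpr * (1 - p)   # false positives per test (a share of the negatives)
    totals.append(tp + fp)

peak_true = max(weeks, key=lambda w: prevalence[w])
peak_total = max(weeks, key=lambda w: totals[w])
print(peak_true, peak_total)  # -> 10 10: both peaks land on the same week
```

Under a constant FPR the false-positive share declines as prevalence rises, so it inflates but cannot delay the peak; a delayed peak would require the FPR itself to rise over time (e.g. lingering RNA from the preceding wave), which appears to be the scenario the earlier comment has in mind.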
What you seem to be referring to is the time it takes to clear the viral RNA, following infection. Is that really a 'false negative'?
RNA remnants can test as false positives, i.e., true negatives (as regards infectious virus).
Can you explain how a sample can be both a 'false positive' and a 'true negative'?
And how would you characterize a sample with RNA remnants (from a person recovering from an infection)?
"Can you explain how a sample can be both a 'false positive' and a 'true negative'?"
Same thing.
"And how would you characterize a sample with RNA remnants (from a person recovering from an infection)?"
A sample?
"Thus, the outcomes of the trials in terms of number of cases in the vaccinated groups and number of cases in the unvaccinated groups are bogus."
Isn't there an even bigger problem wrt vaccine efficacy trials due to false negatives (true positives) being ignored? If a mere 20 false negatives occurred in each arm, vaccine efficacy would approach 50%. The NEJM article about Pfizer vaccine efficacy ignored this little problem totally.
If we knew the total number of PCR tests that were conducted in the Pfizer trial, we could apply an estimate of the false negative rate for this period of time to the trial's number of PCR tests to estimate the number of false negatives (true positives) for each arm.
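The direction of this effect can be checked against the published case split (8 cases in the vaccinated arm vs. 162 in placebo, VE ≈ 95%); the "20 false negatives per arm" is the commenter's hypothetical, and whether it is realistic is not something this sketch settles:

```python
# Vaccine efficacy from case counts, assuming equal-sized arms:
# VE = 1 - (cases in vaccine arm / cases in placebo arm).
# 8 vs 162 are the published Pfizer trial counts; the extra 20 per
# arm is the hypothetical from the comment above.
def ve(cases_vax, cases_placebo):
    return 1 - cases_vax / cases_placebo

print(round(ve(8, 162), 3))            # -> 0.951, the reported efficacy
print(round(ve(8 + 20, 162 + 20), 3))  # -> 0.846, with 20 missed cases per arm
```

Adding the same number of missed cases to both arms pulls VE toward zero, though with these counts 20 per arm lands nearer 85% than 50%; reaching 50% would take on the order of 146 missed cases per arm.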
I would appreciate seeing a post about this .
That's an article that's coming showing they knew the PCR primers were failing.
You can go to the back of the classroom now. :)
False negatives are not the same as true positives. It should be obvious that false negatives are correlated with the number of positives (and true positives), not the number of people tested...
Can you explain how you calculated the FPR, if you only know the FDR?
Shouldn't TP go to 1 when prevalence is 1 (as opposed to 0.7)?
https://jdee.substack.com/p/testing-testing-part-1
See the last slide. "Cases per 100 PCR tests" ranges from about 1 to 25 in the UK, over time. Could someone do some edge-case calculations there to estimate the range of false discovery rates?
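One way to run those edge cases: for an assumed FPR, the false discovery rate is roughly FDR ≈ FPR × (1 − positivity) / positivity. The FPR values below are assumptions for illustration, not measured figures, and treating positivity as an upper bound on true prevalence makes these worst-case estimates:

```python
# Edge-case FDR estimates across the cited UK positivity range (~1 to
# ~25 per 100 tests), for several assumed FPR values (illustrative only).
for fpr in (0.001, 0.005, 0.01):
    for positivity in (0.01, 0.25):
        fp = fpr * (1 - positivity)  # worst case: treat positivity as prevalence
        fdr = fp / positivity
        print(f"FPR={fpr:.1%}  positivity={positivity:.0%}  FDR~={fdr:.1%}")
```

The spread is dramatic: at 1% positivity, even a 0.5% FPR makes roughly half of all positives false, while at 25% positivity the same FPR makes false positives a ~1.5% sliver.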
Watching COVID positivity never go much lower than 2% (recall, was it 1%? OK I do not care enough to look it up) in the testing data made me think that the 2 percent was mostly false positives. Just eyeballing it.
So the efficacy in the trials themselves was nonsense. But what about the study Israel did that was upheld as the definitive proof of efficacy? Can one walk through that and approximate how far off it could have been? I know Crawford at Rounding the Earth has pointed out that, given the threat of selection bias, the efficacy in that study may have been nothing but smoke and mirrors, but if we took it at face value, what would this issue with testing say about it?
The point about FP etc rates being affected by prevalence is an important point (one that I teach when I cover those concepts!) And this shows why it is so very important.
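The prevalence dependence can be made concrete with the textbook positive predictive value calculation; the sensitivity and specificity below are assumptions chosen for illustration:

```python
# Textbook PPV: the share of positive results that are true positives,
# as a function of prevalence. Sensitivity/specificity are assumed
# illustrative values, not figures from any particular assay.
def ppv(prevalence, sensitivity=0.9, specificity=0.995):
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

for prev in (0.001, 0.01, 0.05, 0.20):
    print(f"prevalence={prev:.1%}  PPV={ppv(prev):.1%}")
```

With these assumed values, PPV climbs from roughly 15% at 0.1% prevalence to about 98% at 20% prevalence: the same test gives mostly false positives in one regime and mostly true positives in the other.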
BMJ published a calculator showing this issue, this way back in March 2020...
https://www.bmj.com/content/369/bmj.m1808/rr-22
Thanks. Learned a lot!
No, the Marines study does not have a 37% FDR. It said that '95% complete viral genomes' were not obtained from 37% - not the same thing.
But this does introduce an interesting question - how do you define a 'positive'? How complete a viral genome is needed to count the sample as a 'true positive'? Do you need 95%, or is 80% enough?
But, if we accept your assumption that the 37% FDR holds for the initial testing of all 1847 participants, this study would indicate that the maximum FPR is 0.33% - that out of every 1000 negative samples, fewer than 3.3 would return a positive result. There is little reason to worry that false positives will significantly inflate the positive detection rate. Even at a positive detection rate of 3%, true positives will outweigh false positives by roughly 10:1.
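The back-calculation implied here can be reproduced. The 16 initial positives among the 1847 participants is an inferred assumption (it is the count that reproduces the ~0.33% figure), not a number stated in this thread:

```python
# Worst-case FPR back-calculated from an assumed FDR, following the
# reasoning above. positives = 16 is an inferred assumption, not a
# number stated in the thread.
n, positives, fdr = 1847, 16, 0.37

false_pos = fdr * positives            # ~5.9 of the positives assumed false
true_neg = n - positives + false_pos   # everyone else, plus the false positives
fpr = false_pos / true_neg
print(f"FPR ~= {fpr:.2%}")             # roughly 0.32%

# At a 3% positive detection rate, compare true vs false positives:
positivity = 0.03
fp_share = fpr * (1 - positivity)
tp_share = positivity - fp_share
print(f"TP:FP ~= {tp_share / fp_share:.1f}:1")
```

Under these assumptions the ratio comes out near 9:1, consistent with the "roughly 10:1" claim; the conclusion is only as good as the assumed FDR and positive count, which is exactly what the earlier objection disputes.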