No matter how accurate a COVID-19 test is, there are statistical reasons a test result can be confusing, especially when what you’re testing for is rare. You might think COVID-19 is commonplace, but probably less than 1% of the UK population has it at any one time.
All diagnostic tests produce some false positives and false negatives, which need to be accounted for, and this creates some interesting statistical paradoxes.
Our test is 100% sensitive (sensitivity is a test’s ability to correctly identify a positive) in people who are likely infectious – which is great because the level of false negatives (people who are infectious but not picked up by the test) will be very low.
In clinical trials, our test was proven to be 99.5% specific – which means it will correctly identify people who are definitely negative 99.5% of the time. Put another way, if you test 200 people who are definitely negative, statistically you may get one false positive. And if you test that person again, the probability of an individual getting two false positives in a row is 1 in 200 × 200 – or 1 in 40,000 (assuming the two results are independent).
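The repeat-testing arithmetic above can be sketched in a few lines of Python, assuming the two test results are independent and using the 99.5% specificity figure:

```python
specificity = 0.995
false_positive_rate = 1 - specificity  # 0.5%, i.e. 1 in 200

# Odds of one false positive for a definitely-negative person
one_false_positive = round(1 / false_positive_rate)          # 1 in 200

# Odds of two independent false positives in a row
two_in_a_row = round(1 / false_positive_rate ** 2)           # 1 in 40,000

print(f"1 in {one_false_positive}, two in a row: 1 in {two_in_a_row}")
```

The independence assumption matters: if something about the individual (say, a cross-reacting antibody) caused the first false positive, a retest may not be an independent draw, and the real odds of a second false positive would be higher than 1 in 200.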
But what does a positive result actually mean for the person who receives it? There’s a fascinating statistical quirk here – first identified in 1763, when Bayes’ theorem was published – that affects all diagnostic tests, from COVID-19 to all sorts of diseases, even workplace drug screening.
Let’s park COVID-19 testing for a moment and look at Bayes’ Theorem.
Say you have a test that is 95% sensitive (it picks up 95% of people who are definitely positive) and 95% specific (it correctly identifies 95% of people who are definitely negative). Let’s also assume what you’re testing for is present in 1% of the population you are testing.
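Bayes’ theorem lets us turn those figures into the number that actually matters: the probability that someone who tests positive really is infected. A minimal sketch, using the hypothetical 95%/95%/1% figures above:

```python
sensitivity = 0.95  # P(test positive | infected)
specificity = 0.95  # P(test negative | not infected)
prevalence = 0.01   # P(infected) in the tested population

# Total probability of a positive test: true positives plus false positives
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# Bayes' theorem: P(infected | test positive)
ppv = sensitivity * prevalence / p_positive
print(f"{ppv:.1%}")  # prints 16.1%
```

Even with a seemingly accurate test, only about 16% of positives are true positives here, because the 5% false-positive rate applies to the 99% of people who are not infected – a much larger group than the 1% who are.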