No matter how accurate a COVID-19 test is, there are statistical reasons a test result can be confusing, especially when what you’re testing for is rare. You might think COVID-19 is commonplace, but probably less than 1% of the UK population has it at any one time.

**All** diagnostic tests have false positives and false negatives that need to be accounted for, and this creates some interesting statistical paradoxes.

### False Negatives

Our test is 100% sensitive (its ability to correctly identify a positive) in people who are likely infectious – which is great because the level of **false negatives** (people who are infectious but not picked up by the test) will be very low.

In clinical trials, our test was proven to be **99.5% specific** – which means it will correctly identify people who are **definitely negative** 99.5% of the time. Put another way: if you test 200 people who are **definitely negative**, statistically you may get one false positive. And if you test that person again, the probability of an individual getting two false positives in a row is 1 in 200 x 200 – or 1 in 40,000.
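That repeat-testing arithmetic can be sketched in a few lines of Python. This is illustrative only, and it assumes the two test results are statistically independent:

```python
# With 99.5% specificity, the chance of a false positive on any one
# test is 1 - 0.995 = 0.005, i.e. 1 in 200.
specificity = 0.995
p_false_positive = 1 - specificity

# Assuming the two tests are independent, the chance of two false
# positives in a row is (1/200) squared, i.e. 1 in 40,000.
p_two_in_a_row = p_false_positive ** 2

print(f"Single false positive: 1 in {1 / p_false_positive:.0f}")
print(f"Two in a row:          1 in {1 / p_two_in_a_row:.0f}")
```

This is why retesting a positive is so informative: the odds of the same person being unlucky twice are tiny.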

### False Positives

But what about **false positives**? People who are **not infectious but test positive?** There’s a fascinating statistical quirk here, identified as far back as 1763, that affects **all** diagnostic tests – from COVID-19 to all sorts of other diseases, and even workplace drug screening.

Let’s park COVID-19 testing for a moment and look at **Bayes’ Theorem**.

Say you have a test that is **95% sensitive** (it picks up 95% of people who are definitely positive) and **95% specific** (it correctly rules out 95% of people who are definitely negative). Let’s also assume what you’re testing for is present in 1% of the population you are testing.

### Let’s test 10,000 people

…and 1% have what you’re testing for. That means 100 people are truly positive and the other 9,900 are truly negative.

So that’s what we would expect were the test perfect. But it’s not. It’s 95% ‘accurate’. Which – because the incidence of what you are looking for is low – leads to some interesting statistical quirks.

But here’s where it gets interesting. Because the prevalence of whatever you’re testing for is **low**, it’s the false positives that come to dominate the results.

Because 5% (the flip side of 95% specificity) of the 9,900 people you expect to test negative is a bigger number than the number of true positives, the results can mislead you. Let’s do the maths:
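Here is that maths worked through in Python, using the assumed figures from the example above (10,000 people tested, 1% prevalence, 95% sensitivity, 95% specificity):

```python
# Worked example: 10,000 people, 1% prevalence, 95% sensitive,
# 95% specific. All figures are the illustrative ones from the text.
population  = 10_000
prevalence  = 0.01
sensitivity = 0.95
specificity = 0.95

truly_positive = population * prevalence        # 100 people
truly_negative = population - truly_positive    # 9,900 people

# The test catches 95% of the 100 real positives...
true_positives = truly_positive * sensitivity           # 95 people
# ...but wrongly flags 5% of the 9,900 real negatives.
false_positives = truly_negative * (1 - specificity)    # 495 people

total_positive_results = true_positives + false_positives  # 590

# Positive predictive value: the chance a positive result is real.
ppv = true_positives / total_positive_results
print(f"{false_positives:.0f} false positives vs "
      f"{true_positives:.0f} true positives")
print(f"P(infected | positive test) = {ppv:.1%}")
```

So of the 590 people who test positive, only 95 actually have the condition: a positive result is real only about 16% of the time, despite the test being ‘95% accurate’.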

It may surprise you that a 95% ‘accurate’ test can deliver results where most of the positive results are false, but it is a well-known statistical anomaly: medics are well aware of it and take it into account when interpreting test results. The effect gets far smaller if specificity is closer to 100%, or if the prevalence of what the test is detecting is higher.
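That closing point can be illustrated with a short sketch. The function below applies Bayes’ Theorem directly; the parameter values are hypothetical, chosen only to show how the answer moves with specificity and prevalence:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(truly positive | tested positive),
    computed via Bayes' Theorem."""
    true_pos  = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Baseline from the worked example: most positives are false.
base        = ppv(0.01, 0.95, 0.95)    # ~16%
# Raise specificity towards 100% and the paradox shrinks...
better_spec = ppv(0.01, 0.95, 0.995)   # ~66%
# ...as it does when the condition is more common.
higher_prev = ppv(0.20, 0.95, 0.95)    # ~83%

for label, value in [("95% spec, 1% prevalence  ", base),
                     ("99.5% spec, 1% prevalence", better_spec),
                     ("95% spec, 20% prevalence ", higher_prev)]:
    print(f"{label}: {value:.0%}")
```

Either lever – a more specific test, or testing a population where the condition is more common – makes a positive result far more trustworthy.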

**But the COVID-19 test cassette is statistically 100% sensitive** when used in people who may be infectious, and clinical trials have shown it tracks PCR results 100% accurately at Ct values as high as 39.