Tyler and I have been pushing pooled testing for months. The primary benefit of pooled testing is obvious. If 1% are infected and we test 100 people individually, we need 100 tests. If we split the group into five pools of twenty then, if we’re lucky, we need only five tests. Of course, chances are that at least one pool will contain a positive and, taking this into account, we will require 23.2 tests on average (5 + (1 – (1 – .01)^20)*20*5). Thus, pooled testing reduces the number of needed tests by a factor of about 4. Or, to put it the other way, under these assumptions pooled testing increases our effective test capacity by a factor of about 4. That’s a big gain and well understood.
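The arithmetic can be checked in a few lines of Python. This is a hedged sketch under the same assumptions as above (100 people, five pools of twenty, two-stage testing, infections independent across individuals); the function name `expected_tests` is mine:

```python
# Expected number of tests under two-stage pooled testing:
# one test per pool, plus individual retests of every member
# of each pool that tests positive.

def expected_tests(n_people, pool_size, prevalence):
    n_pools = n_people // pool_size
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return n_pools + n_pools * p_pool_positive * pool_size

print(round(expected_tests(100, 20, 0.01), 1))  # 23.2
ratio = 100 / expected_tests(100, 20, 0.01)      # ~4.3x effective capacity
```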

An important new paper from Augenblick, Kolstad, Obermeyer and Wang shows that the benefits of pooled testing go well beyond this primary benefit. Pooled testing works best when the prevalence rate is low. If 10% are infected, for example, then it’s quite likely that all five pools will have at least one positive and thus you will still need nearly 100 tests (92.8 expected). But the reverse is also true: the lower the prevalence rate, the fewer tests are needed. And this means that *pooled testing is highly complementary to frequent testing*. If you test frequently, then the prevalence rate must be low, because the people who tested negative yesterday are very likely to test negative today. Thus, from the logic given above, the expected number of tests *falls* as you test more frequently (per test-cohort).

Suppose instead that people are tested ten times as frequently. Testing individually at this frequency requires ten times the number of tests, for 1000 total tests. It is therefore natural to think that group testing also requires ten times the number of tests, for more than 200 total tests. However, this estimate ignores the fact that testing ten times as frequently reduces the probability of infection at the point of each test (conditional on not being positive at the previous test) from 1% to only around .1%. This drop in prevalence reduces the number of expected tests – given groups of 20 – to 6.9 at each of the ten testing points, such that the total number is only 69. That is, testing people 10 times as frequently requires only slightly more than three times the number of tests. Or, put differently, there is a “quantity discount” of around 65% from increasing frequency.
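The quantity discount can be sketched with the same two-stage formula. This is an illustration under stated assumptions (independent infections, per-round prevalence of roughly 0.1% when testing ten times as often); with this simple approximation the per-round figure comes out near 7.0, close to the paper’s 6.9:

```python
# Two-stage pooled testing: one test per pool plus individual
# retests of every member of each positive pool.

def expected_tests(n_people, pool_size, prevalence):
    n_pools = n_people // pool_size
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return n_pools + n_pools * p_pool_positive * pool_size

per_round = expected_tests(100, 20, 0.001)   # ~7 tests per testing round
total = 10 * per_round                       # ~70 tests for ten rounds
naive = 10 * expected_tests(100, 20, 0.01)   # ~232 tests if prevalence stayed at 1%
```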

Peter Frazier, Yujia Zhang and Massey Cashore also point out that you could do an array protocol in which each person is tested twice, in two different groups–this doubles the number of initial tests but limits the number of false positives (both tests must be positive) and the number of needed retests. (See figure.)
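A minimal sketch of the array idea, under illustrative assumptions (a 4×5 grid of 20 people with one infected; the helper `array_flags` is hypothetical, not from their paper): each person sits in one row pool and one column pool, and is flagged for follow-up only if *both* pools come back positive.

```python
# Array protocol: pool by row and by column; flag a person only
# when both of their pools test positive.

def array_flags(grid):
    row_pos = [any(row) for row in grid]            # row-pool results
    col_pos = [any(col) for col in zip(*grid)]      # column-pool results
    return [[row_pos[r] and col_pos[c] for c in range(len(grid[0]))]
            for r in range(len(grid))]

# One infected person among 20, arranged in a 4x5 grid.
grid = [[False] * 5 for _ in range(4)]
grid[1][3] = True
flags = array_flags(grid)
print(sum(sum(row) for row in flags))  # 1 -- one person flagged after 9 pool tests
```

With one infected person, 4 row pools plus 5 column pools (9 tests) pinpoint the case exactly; an uninfected person is flagged only if their row and column each happen to contain a different positive.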

Moreover, we haven’t yet taken into account the point of testing, which is to reduce the prevalence rate. If we test frequently, we can reduce the prevalence rate by quickly isolating the infected population, and a lower prevalence rate in turn reduces the number of needed tests. Indeed, under some parameters it’s possible to increase the frequency of testing and at the same time reduce the *total* number of tests!

We can do better yet if we group individuals whose risks are likely to be correlated. Consider an office building with five floors and 100 employees, 20 per floor. If the prevalence rate is 1% and we test people at random then we will need 23.2 tests on average, as before. But suppose that the virus is more likely to transmit to people who work on the same floor and now suppose that we pool each floor. Holding the total prevalence rate constant, we are now likely to have a zero prevalence rate on four floors and a 5% prevalence rate on one floor. We don’t know which floor but it doesn’t matter–the expected number of tests required now falls to 17.8.
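The floor-pooling arithmetic can be checked directly. This is a sketch under the text’s assumptions (five pools of 20, one test per pool plus individual retests when a pool is positive, infections independent within each pool; the function name is mine):

```python
# Expected tests when pools may face different prevalence rates.

def expected_tests_by_pool(pool_prevalences, pool_size):
    # One test per pool, plus pool_size retests when that pool is positive.
    return sum(1 + (1 - (1 - p) ** pool_size) * pool_size
               for p in pool_prevalences)

# Random pooling: every pool faces the building-wide 1% rate.
random_pools = expected_tests_by_pool([0.01] * 5, 20)                  # ~23.2
# Floor pooling: risk concentrates, one floor at 5%, four at 0%.
floor_pools = expected_tests_by_pool([0.05, 0.0, 0.0, 0.0, 0.0], 20)   # ~17.8
```

The gain comes entirely from concentration: four pools almost never need retests, and the occasional 20 retests on the hot floor cost less than spreading the same risk across all five pools.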

The authors suggest using machine learning techniques to uncover correlations, which is a good idea, but much can be done simply by pooling families, co-workers, and so forth.

The government has failed miserably at controlling the pandemic. Tens of thousands of people have died who would have lived under a more competent government. The FDA only recently said they might allow pooled testing, if people ask nicely. Unbelievably, after telling us we don’t need masks (supposedly a noble lie to help limit shortages), the CDC is still disparaging testing of asymptomatic people (another noble lie?), which is absolutely disastrous. Paul Romer is correct: testing capacity won’t increase until we put soft drink money behind advance market commitments and start using techniques such as pooled testing. Fortunately or sadly, depending on how you look at it, it’s not too late to do better. Some universities are now proposing rapid, frequent testing using pooling. Harvard will test every three days. Cornell will test frequently. Delaware State will test weekly. Let’s hope the idea spreads from the ivory tower.