Group testing

In combinatorial mathematics, group testing refers to any procedure which breaks up the task of locating elements of a set which have certain properties into tests on subsets ("groups") rather than on individual elements. A familiar example of this type of technique is the false coin problem of recreational mathematics. In this problem there are n coins and one of them is false, weighing less than a real coin. The objective is to find the false coin, using a balance scale, in the fewest weighings. By repeatedly dividing the coins in half and comparing the two halves, the false coin can be found quickly, as it is always in the lighter half.[1]

Schemes for carrying out such group testing can be simple or complex and the tests involved at each stage may be different. Schemes in which the tests for the next stage depend on the results of the previous stages are called adaptive procedures, while schemes designed so that all the tests are known beforehand are called non-adaptive procedures. The structure of the scheme of the tests involved in a non-adaptive procedure is known as a pooling design.

Background

Robert Dorfman's paper in 1943 introduced the field of (Combinatorial) Group Testing. The motivation arose during the Second World War, when the United States Public Health Service and the Selective Service embarked upon a large-scale project to weed out all syphilitic men called up for induction. However, syphilis testing at the time was expensive, and testing every soldier individually would have been very costly and inefficient.

With n soldiers, individual testing requires n tests. If some 70-75% of the population were infected, individual testing would be reasonable. The goal, however, is to achieve effective testing in the more likely scenario where it does not make sense to test 100,000 people to find (say) only 10 positives.

The feasibility of a more effective testing scheme hinges on the following property: blood samples can be pooled, and a single test on the combined sample reveals whether at least one soldier in the pool has syphilis.

Modern interest in these testing schemes has been rekindled by the Human Genome Project.[2]

Formalization of the problem

We now formalize the group testing problem abstractly.

Input: The total number of soldiers n and an upper bound d on the number of infected soldiers. The (unknown) information about which soldiers are infected is described by a vector \mathbf{x} = (x_1, x_2, \ldots, x_n), where x_i = 1 if item i is infected and x_i = 0 otherwise.

The Hamming weight of \mathbf{x} is defined as the number of 1's in \mathbf{x}. Hence, |\mathbf{x}| \leq d, where |\mathbf{x}| is the Hamming weight. The vector \mathbf{x} is an implicit input: the positions of the 1's are unknown, and the only way to discover them is to run tests.

Formal notion of a Test

A query/test S is a subset of [n]. The answer to the query  S \subseteq [n] is defined as follows:

A(S) = \begin{cases} 1 & \text{if } \displaystyle\sum_{k\in S} x_k \geq 1,\\ 0 & \text{otherwise.} \end{cases}

Equivalently, the answer to a test is the logical OR of the queried entries, i.e.

A(S) = \displaystyle\bigvee_{i\in S} x_i.
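
For concreteness, here is a minimal sketch (the function name and the example vector are hypothetical, not from the source) of how a single test answer is computed:

    # Answer to a single group test: 1 if the queried subset pools at least one
    # defective item, 0 otherwise (logical OR of the selected entries of x).
    def answer(x, S):
        return 1 if any(x[i] for i in S) else 0

    x = [0, 1, 0, 0, 1, 0]           # hidden defect vector (0-based indexing)
    print(answer(x, {0, 2, 3}))      # 0: no defective item in this pool
    print(answer(x, {2, 3, 4}))      # 1: item 4 is defective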

Goal

Compute \mathbf{x}, minimizing the number of tests required to determine it.

The question boils down to one of combinatorial searching. In general, a combinatorial search problem has a set of n variables, each of which can take one of m possible values, and asks for assignments that satisfy a given constraint; the difficulty is that the space of candidate solutions can grow exponentially in the size of the input. Group testing has the additional complication that there are no direct questions or answers: any piece of information about \mathbf{x} can only be obtained through the indirect queries defined above.

Definition

t(d,n): Given a set of n items with at most d defectives, t(d,n) is defined as the minimum number of non-adaptive tests that one would have to make to detect all the defective items.

Consider the case when only one person in the group will test positive. If we test in the naive way, in the best case the very first person tested turns out to be the infected one, but in the worst case we may end up testing the entire group, with only the last person tested being the infected one. Hence, 1 \leq t(d, n) \leq n.

Testing Methods

There are two basic approaches by which the testing may be carried out:

  1. Adaptive Group Testing is where we test a given subset of items and then choose the next test based on the outcome of the current test.
  2. Non-adaptive Group Testing, on the other hand, is where all the tests to be performed are decided a priori.[3]

Definition

t^a(d,n): Given a set of n items with at most d defectives, t^a(d,n) is defined as the minimum number of adaptive tests that one would have to make to detect all the defective items.

Note that in the case of group testing for the syphilis problem, non-adaptive group testing is crucial: the soldiers may be spread out geographically, and adaptive group testing would require a great deal of coordination.

Mathematical representation of the set of non-adaptive tests

For S \subseteq [n], define the characteristic vector \chi_S \in \{ 0,1 \}^n by \chi_S(i) = 1 \Leftrightarrow i \in S. A non-adaptive scheme with t tests is given by subsets S_1, \ldots, S_t \subseteq [n], where \chi_{S_i} describes the i^{th} test. Let M be the t \times n test matrix whose i^{th} row is \chi_{S_i}, i.e. m_{i,j} = 1 if and only if j \in S_i; let \mathbf{x} be the input vector, written as a column, and let \mathbf{r} be the vector of test results. With multiplication taken to be logical AND (\bigwedge) and addition logical OR (\bigvee), the results are given by M \times \mathbf{x} = \mathbf{r}. To think of this in terms of testing, it helps to visualize the matrix multiplication: r_i = 1 if and only if m_{i,j} = 1 and x_j = 1 for some j, i.e. if and only if test i pools at least one infected person.

M = \begin{pmatrix} m_{1,1} & \cdots & m_{1,n} \\ \vdots & \ddots & \vdots \\ m_{t,1} & \cdots & m_{t,n} \end{pmatrix}

\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad \mathbf{r} = \begin{pmatrix} r_1 \\ \vdots \\ r_t \end{pmatrix}
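
As a small illustrative sketch (the matrix and vectors below are made-up examples, not taken from the source), the result vector can be computed as a Boolean matrix-vector product:

    # Boolean matrix-vector product M x = r, with AND for multiplication and OR
    # for addition: test i is positive iff it pools at least one defective item.
    def boolean_mat_vec(M, x):
        return [1 if any(m_ij and x_j for m_ij, x_j in zip(row, x)) else 0 for row in M]

    # t = 3 tests over n = 4 items (rows are the tests' characteristic vectors).
    M = [[1, 1, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1]]
    x = [0, 0, 1, 0]                 # only item 2 (0-based) is defective
    print(boolean_mat_vec(M, x))     # [0, 1, 1]: exactly the tests that pool item 2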

Bounds on t^a(d,n) and t(d,n)

 1 \leq t^a(d,n) \leq t(d,n) \leq n

The inequality t^a(d,n) \leq t(d,n) holds because any non-adaptive scheme can be run as an adaptive one, simply by performing all of its tests in the first step. Adaptive tests can be more efficient than non-adaptive tests, since later tests can be chosen based on what the earlier results reveal.

Lower bound on t^a(d,n)

Fix a valid group testing scheme with t tests. For two distinct vectors \mathbf{x} and \mathbf{y} with |\mathbf{x}|, |\mathbf{y}| \leq d, the resulting vectors must differ, i.e. \mathbf{r(x)} \neq \mathbf{r(y)}, where \mathbf{r(x)} denotes the result vector when the input is \mathbf{x}. If two valid inputs ever produced the same results, the scheme could not distinguish \mathbf{x} from \mathbf{y} and would err on at least one of them. The number of valid inputs is the volume of a Hamming ball of radius d in \{0,1\}^n, denoted Vol_2(d,n), and each valid input must map to a distinct result vector. Since a result vector consists of t bits, there are at most 2^t distinct results. Hence, 2^t \geq Vol_2(d,n), and taking the \log on both sides gives t \geq \log{Vol_2(d,n)}.

Now, Vol_2(d,n) \geq {n \choose d} \geq (\frac{n}{d})^d, so any valid scheme must perform at least d\log{\frac{n}{d}} tests.

Thus we have proved that t^a(d,n) \geq d\log\frac{n}{d}.
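
As an illustration (not from the source) using the numbers from the motivating scenario: for n = 100{,}000 soldiers and at most d = 10 infected, the bound gives t^a(d,n) \geq 10\log_2{10{,}000} \approx 133, so no adaptive scheme can guarantee success with fewer than about 133 tests, though this is still far fewer than 100{,}000 individual tests.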

Upper bound on t^a(d,n)

 t^a(d,n) \leq O(d\log{n}) .

Since we know that the number of positives is at most d, we run a binary search at most d times, or until there are no more defectives to be found. To simplify the problem, we first give a testing scheme that uses O(\log{n}) adaptive tests to find an index i such that x_i = 1. This subproblem is solved by splitting [n] into two halves, querying to find a half that contains a 1, and then recursing on that half to find the exact position of a 1 within it. This takes at most 2\lceil\log{n}\rceil tests, or \lceil\log{n}\rceil + 1 tests if the first query is performed on the whole set. Once a 1 is found, the search is repeated with the i^{th} coordinate removed. This is done at most d times, which justifies the bound of O(d\log{n}) tests. For a full proof and an algorithm for the problem refer to: CSE545 at the University at Buffalo
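
The following sketch (hypothetical code, not the algorithm from the cited course notes) implements this idea under the assumption that a query(S) routine reports whether the subset S contains a defective: an initial query checks whether any active item is defective, a binary search then locates one defective position using one query per halving step, and the item found is removed before the search repeats.

    # Adaptive group testing sketch: finds all defective positions with O(d log n) tests.
    # query(S) must return 1 if S contains at least one defective item, else 0.
    def adaptive_group_test(n, query):
        active = list(range(n))            # items not yet identified as defective
        defectives = []
        while active and query(active):    # one test: does anything remain defective?
            lo, hi = 0, len(active)        # invariant: active[lo:hi] contains a defective
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if query(active[lo:mid]):  # left half positive: search inside it
                    hi = mid
                else:                      # left half clean: a defective is on the right
                    lo = mid
            defectives.append(active.pop(lo))
        return defectives

    # Usage with a hidden defect vector (illustration only).
    x = [0, 1, 0, 0, 0, 0, 0, 1, 0, 0]     # d = 2 defectives, at positions 1 and 7
    calls = []
    def query(S):
        calls.append(1)
        return 1 if any(x[i] for i in S) else 0

    print(adaptive_group_test(len(x), query))   # [1, 7]
    print(len(calls))                           # total number of tests used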

Upper bound on t(1,n)

t(1,n) \leq \lceil\log{n}\rceil. This upper bound is for the special case where d = 1, i.e. there is at most one positive. In this case the matrix multiplication simplifies, and the result vector \mathbf{r} is just the binary representation of the index i of the defective item. The construction therefore uses \lceil\log{n}\rceil tests, and decoding is trivial because the binary representation of i gives the location directly. The group test matrix here is just the parity check matrix H_m for the [2^m - 1, 2^m-m-1, 3] Hamming code.
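
A minimal sketch of this bit-test construction (hypothetical code, not from the source; items are indexed from 1 to n so that the all-zero result vector can signal "no defective", which uses \lceil\log(n+1)\rceil tests):

    import math

    # Non-adaptive scheme for at most one defective (d = 1).
    # Test j pools all items whose 1-based index has bit j set, so the vector of
    # test outcomes is the binary representation of the defective item's index
    # (or the all-zero vector if no item is defective).
    def build_tests(n):
        t = max(1, math.ceil(math.log2(n + 1)))      # number of tests
        return [[i for i in range(1, n + 1) if (i >> j) & 1] for j in range(t)]

    def decode(results):
        # results[j] is the outcome of test j; reassemble the index bit by bit.
        return sum(bit << j for j, bit in enumerate(results))

    n = 12
    tests = build_tests(n)
    defective = 9                                    # hidden, for illustration
    results = [1 if defective in S else 0 for S in tests]
    print(decode(results))                           # 9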

Thus as the upper and lower bounds are the same, we have a tight bound for t(d,n) when  d = 1. Such tight bounds are not known for general d.

Upper Bounds for Non-Adaptive Group Testing

For non-adaptive group testing, upper bounds are obtained by shifting focus toward disjunct matrices, which have been used for many of the bounds because of their nice properties. Through the use of different constructions of disjunct matrices, it has been shown that \Omega(\frac{d^2}{\log{d}}\log{n}) \leq t(d,n). For upper bounds we currently have (i) t(d,n) \leq \mathcal{O}(d^2 \log{n}) (explicit construction) and (ii) t(d,n) \leq \mathcal{O}(d^2 \log^2{n}) (strongly explicit construction). It is worth noting that the currently known lower bound for t(d,n) is already a factor \frac{d}{\log{d}} larger than the upper bound for t^a(d,n). Note also that the smallest known upper bound and the largest known lower bound for t(d,n) differ only by a \log{d} factor, which is fairly small.
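
As an illustrative sketch (not from the source), non-adaptive results are commonly decoded by the naive rule: declare an item non-defective if it appears in any negative test, and defective otherwise. When the test matrix is d-disjunct and there are at most d defectives, this rule recovers \mathbf{x} exactly; the hypothetical code below only demonstrates the rule on a small 1-disjunct matrix.

    # Naive decoder for non-adaptive group testing (illustrative sketch).
    # An item is ruled out if it appears in any negative test; every remaining
    # item is declared defective.
    def naive_decode(M, r):
        n = len(M[0])
        candidates = set(range(n))
        for row, outcome in zip(M, r):
            if outcome == 0:
                # A negative test certifies that every item it pools is non-defective.
                candidates -= {j for j in range(n) if row[j] == 1}
        return [1 if j in candidates else 0 for j in range(n)]

    # Hypothetical 1-disjunct matrix: its 6 columns are the weight-2 vectors of
    # length 4, so no column is contained in another column.
    M = [[1, 1, 1, 0, 0, 0],
         [1, 0, 0, 1, 1, 0],
         [0, 1, 0, 1, 0, 1],
         [0, 0, 1, 0, 1, 1]]
    x = [0, 0, 0, 1, 0, 0]                       # item 3 (0-based) is defective
    r = [1 if any(a and b for a, b in zip(row, x)) else 0 for row in M]
    print(naive_decode(M, r))                    # [0, 0, 0, 1, 0, 0] recovers x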

See also

Notes

  1. A bit more precisely, if there is an odd number of coins to be weighed, pick one to put aside and divide the rest into two equal piles. If the two piles have equal weight, the bad coin is the one put aside; otherwise the one put aside was good and no longer has to be tested.
  2. Colbourn & Dinitz 2007, pg. 574, Section 46: Pooling Designs
  3. Colbourn & Dinitz 2007, pg. 631, Section 56.4

References