The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the count of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most useful items.
The problem often arises in resource allocation with financial constraints. A similar problem also appears in combinatorics, complexity theory, cryptography and applied mathematics.
The decision problem form of the knapsack problem is the question "can a value of at least V be achieved without exceeding the weight W?"
In the following, we have n kinds of items, 1 through n. Each kind of item i has a value vi and a weight wi. We usually assume that all values and weights are nonnegative. To simplify the representation, we can also assume that the items are listed in increasing order of weight. The maximum weight that we can carry in the bag is W.
The most common formulation of the problem is the 0-1 knapsack problem, which restricts the number xi of copies of each kind of item to zero or one. Mathematically the 0-1 knapsack problem can be formulated as:

maximize v1x1 + v2x2 + ... + vnxn
subject to w1x1 + w2x2 + ... + wnxn ≤ W, with xi ∈ {0, 1}
The bounded knapsack problem restricts the number of copies of each kind of item to a maximum integer value c. Mathematically the bounded knapsack problem can be formulated as:

maximize v1x1 + v2x2 + ... + vnxn
subject to w1x1 + w2x2 + ... + wnxn ≤ W, with xi ∈ {0, 1, ..., c}
The unbounded knapsack problem (UKP) places no upper bound on the number of copies of each kind of item.
Of particular interest is the special case of the problem with these properties: it is a decision problem, it is a 0-1 problem, and for each kind of item, the weight equals the value: wi = vi.
Notice that in this special case, the problem is equivalent to this: given a set of nonnegative integers, does any subset of it add up to exactly W? Or, if negative weights are allowed and W is chosen to be zero, the problem is: given a set of integers, does any nonempty subset add up to exactly 0? This special case is called the subset sum problem. In the field of cryptography, the term knapsack problem is often used to refer specifically to the subset sum problem.
If multiple knapsacks are allowed, the problem is better thought of as the bin packing problem.
The knapsack problem is interesting from the perspective of computer science because its decision form is NP-complete, yet it can be solved in pseudo-polynomial time by dynamic programming, there is a fully polynomial-time approximation scheme that uses the pseudo-polynomial algorithm as a subroutine, and many instances arising in practice can nonetheless be solved exactly.
The subset sum version of the knapsack problem is commonly known as one of Karp's 21 NP-complete problems.
There have been attempts to use subset sum as the basis for public key cryptography systems, such as the Merkle-Hellman knapsack cryptosystem. These attempts typically used some group other than the integers. Merkle-Hellman and several similar algorithms were later broken, because the particular subset sum problems they produced were in fact solvable by polynomial-time algorithms.
One theme in research literature is to identify what the "hard" instances of the knapsack problem look like,[1][2] or viewed another way, to identify what properties of instances in practice might make them more amenable than their worst-case NP-complete behaviour suggests.[3]
Several algorithms are freely available to solve knapsack problems, based on the dynamic programming approach,[4] the branch and bound approach,[5] or hybridizations of both approaches.[3][6][7][8]
If all weights (w1, ..., wn and W) are nonnegative integers, the knapsack problem can be solved in pseudo-polynomial time using dynamic programming. The following describes a dynamic programming solution for the unbounded knapsack problem.
To simplify things, assume all weights are strictly positive (wi > 0). We wish to maximize total value subject to the constraint that total weight is less than or equal to W. Then for each w ≤ W, define m[w] to be the maximum value that can be attained with total weight less than or equal to w. m[W] then is the solution to the problem.
Observe that m[w] has the following properties:

m[0] = 0
m[w] = max(vi + m[w − wi] : 1 ≤ i ≤ n, wi ≤ w)

where vi is the value of the i-th kind of item.
Here the maximum of the empty set is taken to be zero. Tabulating the results from m[0] up through m[W] gives the solution. Since the calculation of each m[w] involves examining n items, and there are W values of m[w] to calculate, the running time of the dynamic programming solution is O(nW). Dividing w1, w2, ..., wn, W by their greatest common divisor is an obvious way to improve the running time.
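As a concrete illustration, the recurrence above translates directly into a short program. The following is a minimal sketch in Python (the function name and example data are illustrative, not from the cited sources):

    def unbounded_knapsack(values, weights, W):
        """Maximum value attainable with total weight <= W, using
        unlimited copies of each item (weights are positive integers)."""
        m = [0] * (W + 1)                 # m[w] = best value with capacity w
        for w in range(1, W + 1):
            for v, wt in zip(values, weights):
                if wt <= w:
                    m[w] = max(m[w], v + m[w - wt])
        return m[W]

    # Example: values (10, 40, 50), weights (5, 4, 6), capacity 10 -> 90
    print(unbounded_knapsack([10, 40, 50], [5, 4, 6], 10))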
The O(nW) complexity does not contradict the fact that the knapsack problem is NP-complete, since W, unlike n, is not polynomial in the length of the input to the problem. The length of the input to the problem is proportional to the number of bits in W, log W, not to W itself.
A similar dynamic programming solution for the 0-1 knapsack problem also runs in pseudo-polynomial time. As above, assume w1, ..., wn, W are strictly positive integers. Define m[i, w] to be the maximum value that can be attained with weight less than or equal to w using the first i items.
We can define m[i, w] recursively as follows:

m[0, w] = 0
m[i, w] = m[i − 1, w] if wi > w (the new item exceeds the current weight limit)
m[i, w] = max(m[i − 1, w], m[i − 1, w − wi] + vi) if wi ≤ w
The solution can then be found by calculating m[n, W]. To do this efficiently we can use a table to store previous computations. This solution will therefore run in O(nW) time and O(nW) space. Additionally, if we use only a 1-dimensional array m[w] to store the current optimal values and pass over this array n times, rewriting from m[W] down to m[0] every time, we get the same result for only O(W) space.
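A minimal sketch of this space-saving variant in Python (illustrative naming); traversing the capacities from W down to each item's weight is what ensures every item is used at most once:

    def knapsack_01(values, weights, W):
        """0-1 knapsack in O(nW) time and O(W) space."""
        m = [0] * (W + 1)
        for v, wt in zip(values, weights):
            # High-to-low traversal so each item is counted at most once.
            for w in range(W, wt - 1, -1):
                m[w] = max(m[w], v + m[w - wt])
        return m[W]

    # Example: values (60, 100, 120), weights (10, 20, 30), capacity 50 -> 220
    print(knapsack_01([60, 100, 120], [10, 20, 30], 50))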
Another algorithm for 0-1 knapsack, discovered in 1974[9] and sometimes called "meet-in-the-middle" due to parallels to a similarly named algorithm in cryptography, is exponential in the number of different items but may be preferable to the DP algorithm when W is large compared to n. In particular, if the wi are nonnegative but not integers, we could still use the dynamic programming algorithm by scaling and rounding (i.e. using fixed-point arithmetic), but if the problem requires d fractional digits of precision to arrive at the correct answer, W will need to be scaled by 10^d, and the DP algorithm will require O(W·10^d) space and O(nW·10^d) time.
The "meet-in-the-middle" algorithm is as follows:
The algorithm takes O(2^(n/2)) space, and efficient implementations of step 3 (for instance, sorting the subsets of B by weight, discarding subsets of B which weigh more than other subsets of B of greater or equal value, and using binary search to find the best match) result in a runtime of O(n·2^(n/2)). As with the meet in the middle attack in cryptography, this improves on the O(2^n) runtime of a naive brute force approach (examining all subsets of {1...n}), at the cost of using exponential rather than constant space.
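To make step 3 concrete, the following is a compact Python sketch (our own naming, not from the cited paper): it enumerates the subsets of both halves, prunes dominated subsets of B, and binary-searches for the best partner of each subset of A.

    from bisect import bisect_right

    def subset_sums(items):
        """All (weight, value) pairs attainable by subsets of items."""
        sums = [(0, 0)]
        for v, w in items:
            sums += [(sw + w, sv + v) for sw, sv in sums]
        return sums

    def knapsack_mitm(values, weights, W):
        items = list(zip(values, weights))
        A, B = items[:len(items) // 2], items[len(items) // 2:]
        # Sort subsets of B by weight; keep only those not dominated by a
        # lighter-or-equal subset of greater-or-equal value.
        pruned, best_v = [], -1
        for w, v in sorted(subset_sums(B)):
            if v > best_v:
                pruned.append((w, v))
                best_v = v
        b_weights = [w for w, _ in pruned]
        best = 0
        for wa, va in subset_sums(A):
            if wa > W:
                continue
            # Binary search: heaviest surviving subset of B that still fits.
            idx = bisect_right(b_weights, W - wa) - 1
            if idx >= 0:
                best = max(best, va + pruned[idx][1])
        return best

    # Example: values (60, 100, 120), weights (10, 20, 30), capacity 50 -> 220
    print(knapsack_mitm([60, 100, 120], [10, 20, 30], 50))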
George Dantzig proposed a greedy approximation algorithm to solve the unbounded knapsack problem.[10] His version sorts the items in decreasing order of value per unit of weight, vi/wi. It then proceeds to insert them into the sack, starting with as many copies as possible of the first kind of item until there is no longer space in the sack for more. Provided that there is an unlimited supply of each kind of item, if m is the maximum value of items that fit into the sack, then the greedy algorithm is guaranteed to achieve at least a value of m/2. However, for the bounded problem, where the supply of each kind of item is limited, the algorithm may be far from optimal.
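A minimal sketch of this greedy procedure for the unbounded problem (Python, illustrative naming):

    def greedy_unbounded(values, weights, W):
        """Dantzig-style greedy heuristic: guaranteed at least half the
        optimal value for the unbounded knapsack problem."""
        # Visit item kinds in decreasing order of value per unit of weight.
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)
        total, capacity = 0, W
        for i in order:
            copies = capacity // weights[i]   # as many copies as still fit
            total += copies * values[i]
            capacity -= copies * weights[i]
        return total

    # Example: values (10, 40, 50), weights (5, 4, 6), capacity 10.
    # Greedy packs two copies of the densest item for 80; the optimum is 90.
    print(greedy_unbounded([10, 40, 50], [5, 4, 6], 10))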
Solving the unbounded knapsack problem can be made easier by throwing away items which will never be needed. For a given item i, suppose we could find a set of items J such that their total weight is less than the weight of i, and their total value is greater than the value of i. Then i cannot appear in the optimal solution, because we could always improve any potential solution containing i by replacing i with the set J. Therefore we can disregard the i-th item altogether. In such cases, J is said to dominate i. (Note that this does not apply to bounded knapsack problems, since we may have already used up the items in J.)
Finding dominance relations allows us to significantly reduce the size of the search space. There are several different types of dominance relations,[3] which all satisfy an inequality of the form:

Σj∈J wj xj ≤ α wi and Σj∈J vj xj ≥ α vi for some x ∈ Z+^n

where α ∈ Z+ and J ⊆ {1, ..., n} with i ∉ J. The vector x denotes the number of copies of each member of J.
The i-th item is collectively dominated by J, written as i ≪ J, if the total weight of some combination of items in J is less than wi and their total value is greater than vi. Formally, Σj∈J wj xj ≤ wi and Σj∈J vj xj ≥ vi for some x ∈ Z+^n, i.e. α = 1. Verifying this dominance is computationally hard, so it can only be used with a dynamic programming approach. In fact, this is equivalent to solving a smaller knapsack decision problem where V = vi, W = wi, and the items are restricted to J.
The i-th item is threshold dominated by J, written as i ≪≪ J, if some number of copies of i are dominated by J. Formally, Σj∈J wj xj ≤ α wi and Σj∈J vj xj ≥ α vi for some x ∈ Z+^n and α ≥ 1. This is a generalization of collective dominance, first introduced in [4] and used in the EDUK algorithm. The smallest such α defines the threshold of the item i, written ti = (α − 1) wi. In this case, the optimal solution could contain at most α − 1 copies of i.
The i-th item is multiply dominated by a single item j, written as i ≪m j, if i is dominated by some number of copies of j. Formally, xj wj ≤ wi and xj vj ≥ vi for some xj ∈ Z+, i.e. J = {j}, α = 1, xj = ⌊wi/wj⌋. This dominance could be efficiently used during preprocessing because it can be detected relatively easily.
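Because multiple dominance compares only pairs of items, it can be checked in O(n^2) time during preprocessing. A minimal sketch of such a filter in Python (our own naming; exact duplicate items are deliberately kept so both copies are not discarded):

    def remove_multiply_dominated(values, weights):
        """Drop each item i for which floor(wi/wj) copies of some single
        item j weigh no more than wi yet are worth at least vi."""
        n, kept = len(values), []
        for i in range(n):
            dominated = any(
                j != i
                and (weights[j], values[j]) != (weights[i], values[i])
                and weights[j] <= weights[i]
                and (weights[i] // weights[j]) * values[j] >= values[i]
                for j in range(n)
            )
            if not dominated:
                kept.append((values[i], weights[i]))
        return kept

    # Example: (value 10, weight 6) is dominated by two copies of (value 6, weight 3).
    print(remove_multiply_dominated([10, 6], [6, 3]))  # -> [(6, 3)]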
Let b be the best item, i.e. vb/wb ≥ vi/wi for all i. This is the item with the greatest density of value.
The i-th item is modularly dominated by a single item j, written as i ≪≡ j, if i is dominated by j plus several copies of b. Formally, wj + t wb ≤ wi and vj + t vb ≥ vi for some t ∈ Z+, i.e. J = {b, j}, α = 1, xb = t, xj = 1.
Knapsack problems can be applied to real-world decision-making processes in a wide variety of fields, such as finding the least wasteful way to cut raw materials,[11] selection of capital investments and financial portfolios,[12] selection of assets for asset-backed securitization,[13] and generating keys for the Merkle–Hellman knapsack cryptosystem.[14]
One early application of knapsack algorithms was in the construction and scoring of tests in which the test-takers have a choice as to which questions they answer. On tests with a homogeneous distribution of point values for each question, it is a fairly simple process to provide the test-takers with such a choice. For example, if an exam contains 12 questions each worth 10 points, the test-taker need only answer 10 questions to achieve a maximum possible score of 100 points. However, on tests with a heterogeneous distribution of point values—that is, when different questions or sections are worth different amounts of points—it is more difficult to provide choices. Feuerman and Weiss proposed a system in which students are given a heterogeneous test with a total of 125 possible points. The students are asked to answer all of the questions to the best of their abilities. Of the possible subsets of problems whose total point values add up to 100, a knapsack algorithm would determine which subset gives each student the highest possible score.[15]
The knapsack problem has been studied for more than a century, with early works dating as far back as 1897.[16] It is not known how the name "knapsack problem" originated, though the problem was referred to as such in the early works of mathematician Tobias Dantzig (1884–1956), suggesting that the name could have existed in folklore before a mathematical problem had been fully defined.[17]
The quadratic knapsack problem was first introduced by Gallo, Hammer, and Simeone in 1980.[18]
A 1998 study of the Stony Brook University algorithms repository showed that, out of 75 algorithmic problems, the knapsack problem was the 18th most popular and the 4th most needed after kd-trees, suffix trees, and the bin packing problem.[19]