Problem size
In the fields of algorithm analysis and computational complexity theory, the running time or space requirements of an algorithm are expressed as a function of the problem size. The problem size is a measure of the size of the input to the algorithm. It must be clearly defined before the analysis of an algorithm can be attempted.
For many problems, the problem size is taken to be the number of bits required to encode the input. For instance, if the problem is to square a given (nonzero) integer, the input size would typically be measured as one plus the floor of the base-two logarithm of the input integer, since that is the number of bits needed to encode the integer in binary notation. However, the encoding of the input is often not canonical; if, for instance, the problem is one in graph theory, then different problem sizes can be defined, since a graph can be encoded as a list of edges or alternatively as an adjacency matrix.
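The following is a minimal Python sketch, not taken from the article, illustrating these two notions of problem size; the helper names bit_size, edge_list_size, and adjacency_matrix_size are purely illustrative.

    # Illustrative only: two ways of measuring problem size.

    def bit_size(n: int) -> int:
        """Bits needed to encode a nonzero integer in binary,
        i.e. 1 + floor(log2 |n|)."""
        return abs(n).bit_length()

    def edge_list_size(edges: list[tuple[int, int]]) -> int:
        """Problem size if a graph is encoded as a list of edges:
        proportional to the number of edges."""
        return len(edges)

    def adjacency_matrix_size(num_vertices: int) -> int:
        """Problem size if the same graph is encoded as an adjacency
        matrix: proportional to the square of the number of vertices."""
        return num_vertices * num_vertices

    if __name__ == "__main__":
        print(bit_size(12))                      # 12 = 0b1100 -> 4 bits
        print(edge_list_size([(0, 1), (1, 2)]))  # 2 edge entries
        print(adjacency_matrix_size(3))          # 9 matrix cells

For the same graph, the two encodings can give very different sizes: a sparse graph on many vertices has a short edge list but a large adjacency matrix, which is why the choice of encoding must be fixed before the analysis.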