Blum's speedup theorem

In computational complexity theory, Blum's speedup theorem, first stated by Manuel Blum in 1967, is a fundamental theorem about the complexity of computable functions.

Each computable function has an infinite number of different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called optimal). Blum's speedup theorem shows that for any complexity measure there are computable functions with no optimal program: every program computing such a function can be sped up, for almost all inputs, by another program computing it. This also rules out the idea that one can assign to an arbitrary computable function f its computational complexity, understood as the complexity of an optimal program for f. Of course, this does not exclude the possibility of finding the complexity of an optimal program for certain specific functions.
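The theorem below is stated for an arbitrary Blum complexity measure, that is, a pair (\varphi, \Phi) in which \varphi is an admissible numbering of the partial computable functions and \Phi_i(x) measures the resources used by program i on input x, subject to two axioms: \Phi_i(x) is defined exactly when \varphi_i(x) is defined, and the predicate \Phi_i(x) = y is decidable uniformly in i, x and y. The running time of the i-th Turing machine on input x is the standard example of such a measure.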

Speedup theorem

Given a Blum complexity measure (\varphi, \Phi) and a total computable function f of two arguments, there exists a total computable predicate g (a Boolean-valued computable function) such that for every program i for g there exists a program j for g such that, for almost all x,

f(x, \Phi_j(x)) \leq \Phi_i(x)

The function f is called the speedup function. The fact that it may be as fast-growing as desired (as long as it is computable) means that the phenomenon of always having a program of smaller complexity persists even if by "smaller" we mean "significantly smaller" (for instance, quadratically or exponentially smaller).
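For example, instantiating the theorem with the speedup function f(x, y) = 2^y yields a computable predicate g such that every program i for g is outrun exponentially by some other program j for g: for almost all x,

2^{\Phi_j(x)} \leq \Phi_i(x), \quad \text{i.e.,} \quad \Phi_j(x) \leq \log_2 \Phi_i(x)

so no program computing this g comes within an exponential factor of being optimal.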

