Linear speedup theorem

In computational complexity theory, the linear speedup theorem for Turing machines states that given any constant c > 0 and any Turing machine solving a problem in time f(n), there is another machine that solves the same problem in time at most cf(n) + n + 2.
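
Stated symbolically (a restatement of the sentence above, writing time_M(n) for the worst-case running time of a machine M on inputs of length n; this notation is introduced here only for convenience):

\[
\forall c > 0:\quad \mathrm{time}_{M}(n) \le f(n) \;\Longrightarrow\; \exists M' \text{ deciding the same problem with } \mathrm{time}_{M'}(n) \le c\,f(n) + n + 2 .
\]

For example, with c = 1/100, a machine running in 2n² steps can be replaced by an equivalent one running in at most n²/50 + n + 2 steps.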

Here is a sketch proof for the case c = ½. Suppose the Turing machine M solves the problem in f(n) steps and that it has k tape symbols and s internal states. Construct a new machine M' with k³ tape symbols, each symbol representing a "chunk" of 3 adjacent symbols of machine M. The tape of machine M' is a compressed representation of the tape of machine M, with cell i of machine M' representing the chunk of cells 2i-1, 2i and 2i+1 of machine M (note that adjacent chunks overlap in one cell). M' begins by reading its input and writing this compressed representation; this initial pass accounts for the additional n + 2 steps in the statement of the theorem.

In one computation step, M' simulates the computation of M until M's head leaves the current chunk to the left or to the right. This simulation can be folded into a single transition of M', because M's behaviour inside a chunk is determined by its internal state, its head position within the chunk and the chunk contents, and there are at most 3sk³ such configurations; if M stays inside the chunk for longer, it has repeated a configuration and is looping. During this simulation M may accept, in which case M' also accepts, or M may loop inside the chunk, in which case M' does nothing (and so also loops). A final subtlety is that, because chunks overlap, every transition of M' from one cell to a neighbouring cell must be split into k transitions, one for each of the k symbols that might have been written onto the cell shared by both chunks. The construction guarantees that each computation step of M' corresponds to at least 2 computation steps of M: when M's head leaves a chunk, it enters the neighbouring chunk at its middle cell, so it needs at least 2 further moves before it can leave that chunk in turn. So M' takes no more than ½f(n) steps for the simulation. By adding delaying steps to M', we can ensure it takes exactly ½f(n) steps.
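
To make the step-counting argument concrete, here is a minimal Python sketch. It does not construct the full machine M' with k³ tape symbols; instead it runs an ordinary single-tape machine directly and, alongside the ordinary step count, counts the "macro steps" a chunked simulator would need, where one macro step lasts until the head leaves the current overlapping 3-cell chunk. The transition table at the bottom (a hypothetical machine that just scans right across a block of 1s) is chosen only to exhibit the roughly 2-to-1 ratio.

from collections import defaultdict

def run_with_macro_count(delta, start_state, accept_states, tape_input):
    """Run a single-tape TM and return (ordinary steps, chunked macro steps).

    delta maps (state, symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1}; a missing entry halts the machine.
    """
    # Tape cells default to the blank symbol '_'.
    tape = defaultdict(lambda: '_', enumerate(tape_input))
    state, head = start_state, 0
    steps = macro_steps = 0
    # Chunk i covers cells 2i-1, 2i, 2i+1, so adjacent chunks share one cell,
    # exactly as in the proof sketch above.
    chunk = (head + 1) // 2
    while (state, tape[head]) in delta and state not in accept_states:
        state, tape[head], move = delta[(state, tape[head])]
        head += move
        steps += 1
        lo, hi = 2 * chunk - 1, 2 * chunk + 1
        if head < lo or head > hi:
            # The head has left the chunk: this is where M' would spend one
            # of its steps.  The head lands on the middle cell of the
            # neighbouring chunk, so at least 2 more moves of M are needed
            # before the next macro step.
            macro_steps += 1
            chunk += 1 if head > hi else -1
    macro_steps += 1  # the final, partial visit to a chunk also costs one step of M'
    return steps, macro_steps

# Hypothetical example machine: scan right over a block of 1s, then halt.
delta = {('scan', '1'): ('scan', '1', +1)}
print(run_with_macro_count(delta, 'scan', {'accept'}, '1' * 20))

Running this prints (20, 11): the simulated machine takes 20 steps, while the chunked count is about half of that, matching the ½f(n) bound up to the final partial chunk.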

This proof generalizes readily to all values of c > 0. A related result, the tape compression theorem, gives the analogous constant-factor reduction in the space requirements of a Turing machine.
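
One way to carry out the generalization, assuming the same overlapping-chunk scheme as in the sketch above (the parameter j below is introduced here only for illustration), is to use chunks of 2j + 1 cells that overlap in one cell. M's head then enters each new chunk at its middle cell and must make at least j moves before leaving it, so each step of M' accounts for at least j steps of M:

\[
j \ge \frac{1}{c} \quad\Longrightarrow\quad \frac{f(n)}{j} \le c\,f(n),
\]

and, after the initial compression pass over the input, M' runs for at most cf(n) steps.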

The linear speedup theorem is the reason why complexity theory ignores constant multiplicative factors and expresses the complexity of algorithms with big O notation.
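
A standard corollary, stated here in the usual DTIME notation for deterministic time classes: for any time bound that grows faster than linearly, multiplicative constants do not change the class of problems solvable within it,

\[
f(n) = \omega(n) \quad\Longrightarrow\quad \mathrm{DTIME}\bigl(f(n)\bigr) = \mathrm{DTIME}\bigl(c\,f(n)\bigr) \quad \text{for every constant } c > 0 .
\]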
