Space hierarchy theorem

In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.

The foundation for the hierarchy theorems lies in the intuition that with either more time or more space comes the ability to compute more functions (or decide more languages). The hierarchy theorems are used to demonstrate that the time and space complexity classes form a hierarchy where classes with tighter bounds contain fewer languages than those with more relaxed bounds. Here we define and prove the space hierarchy theorem.

The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterministic space hierarchy theorems state that for all space-constructible functions f(n),

\operatorname{SPACE}\left(o(f(n))\right) \subsetneq \operatorname{SPACE}(f(n)),

where SPACE stands for either DSPACE or NSPACE, and o refers to the little o notation.

Statement

Formally, a function f:\mathbb{N} \longrightarrow \mathbb{N} is space-constructible if f(n) \ge \log~n and there exists a Turing machine which computes the function f(n) in space O(f(n)) when starting with an input 1^n, where 1^n represents a string of n 1s. Most of the common functions that we work with are space-constructible, including polynomials, exponentials, and logarithms.
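For instance (an illustration not spelled out above), f(n) = n^2 is space-constructible: on input 1^n, a machine can count the input length n in binary using O(\log~n) cells and then compute n^2 in binary, again within O(\log~n) cells, which is well within the allowed O(n^2).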

For every space-constructible function f:\mathbb{N} \longrightarrow
\mathbb{N}, there exists a language L that is decidable in space O(f(n)) but not in space o(f(n)).

Proof

The goal here is to define a language that can be decided in space O(f(n)) but not in space o(f(n)). Here we define the language L:

L = \{~ (\langle M \rangle, 10^k): M \mbox{ does not accept } (\langle M \rangle,
10^k) \mbox{ using space } \le f(|\langle M \rangle, 10^k|)  ~ \}

Now, for any machine M that decides a language in space o(f(n)), L differs in at least one place from the language of M: for some large enough k, M uses at most f(|\langle M \rangle, 10^k|) space on the input (\langle M \rangle, 10^k), so L and the language of M disagree on that input. The algorithm for deciding the language L is as follows:

  1. On an input x, compute f(|x|) using space-constructibility, and mark off f(|x|) cells of tape. Whenever an attempt is made to use more than f(|x|) cells, reject.
  2. If x is not of the form \langle M \rangle, 10^k for some TM M, reject.
  3. Simulate M on input x for at most 2^{f(|x|)} steps (using f(|x|) space). If the simulation tries to use more than f(|x|) space or more than 2^{f(|x|)} operations, then reject.
  4. If M accepted x during this simulation, then reject; otherwise, accept.

Note on step 3: Execution is limited to 2^{f(|x|)} steps in order to avoid the case where M does not halt on the input x, that is, the case where M uses only the required f(|x|) space but runs for an infinite amount of time. (A machine that uses at most f(|x|) cells has only exponentially many distinct configurations, so a sufficiently long computation within that space must repeat a configuration and therefore loops forever.)
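The decision procedure above can be summarized in the following Python sketch. This is only an illustration of the control flow of steps 1–4 and is not part of the original proof; the helpers decode and simulate are hypothetical placeholders for the input-format check and for a space- and time-bounded universal Turing machine.

    # A Python sketch of the decision procedure for L (illustration only).
    # The helpers `decode` and `simulate` are hypothetical placeholders for
    # the input-format check and for a space- and time-bounded universal
    # Turing machine, whose details the proof leaves abstract.

    def decode(x):
        """Return the encoded machine M if x has the form (<M>, 10^k), else None."""
        raise NotImplementedError   # the encoding of machine descriptions is not fixed here

    def simulate(machine, x, space_bound, step_bound):
        """Run `machine` on x; return 'accept', 'reject', or 'out_of_bounds' if
        the run would use more than space_bound cells or step_bound steps."""
        raise NotImplementedError   # stands in for a universal Turing machine

    def diagonal_decider(x, f):
        """Decide L for a space-constructible bound f, following steps 1-4."""
        space_bound = f(len(x))           # step 1: mark off f(|x|) cells
        step_bound = 2 ** space_bound     # step 3: cap the number of simulation steps

        machine = decode(x)
        if machine is None:               # step 2: wrong input format -> reject
            return False

        outcome = simulate(machine, x, space_bound, step_bound)

        # step 4 (diagonalization): flip the simulated machine's answer;
        # exceeding either bound (step 3) also leads to rejection.
        return outcome == 'reject'

Returning the negation of the simulated machine's answer is exactly the diagonalization step that guarantees L differs from every language decidable in o(f(n)) space.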

The above proof holds for the case of PSPACE, but some changes are needed for the case of NPSPACE. The crucial point is that while acceptance and rejection can easily be inverted on a deterministic TM (which step 4 relies on), this is not possible on a non-deterministic machine.
For the case of NPSPACE, L is first redefined as

L = \{~ (\langle M \rangle, 10^k): M \mbox{ accepts } (\langle M \rangle,
10^k) \mbox{ using space } \le f(|\langle M \rangle, 10^k|)  ~ \},

and step 4 of the algorithm is modified to:

  4. If M accepted x during this simulation, then accept; otherwise, reject.

We now prove by contradiction that L cannot be decided by a TM using o(f(n)) cells.
Assume that L can be decided by some TM M using o(f(n)) cells. Then, by the Immerman–Szelepcsényi theorem, \overline L can also be decided by a TM (which we will call \overline M) using o(f(n)) cells.
Here lies the contradiction, so the assumption must be false:

  1. If w = (\langle \overline M \rangle, 10^k) (for some large enough k) is not in L, then \overline M accepts w (because \overline M decides \overline L), and it does so using at most f(|w|) space; by the definition of L, this means w is in L (contradiction).
  2. If w = (\langle \overline M \rangle, 10^k) (for some large enough k) is in L, then by the definition of L, \overline M accepts w using at most f(|w|) space; but \overline M decides \overline L, so w is in \overline L, that is, w is not in L (contradiction).

Comparison and improvements

The space hierarchy theorem is stronger than the analogous time hierarchy theorems in several ways.

It seems to be easier to separate classes in space than in time. Indeed, whereas the time hierarchy theorem has seen little remarkable improvement since its inception, the nondeterministic space hierarchy theorem has seen at least one important improvement by Viliam Geffert in his 2003 paper "Space hierarchy theorem revised". This paper generalized the theorem in several ways.

Refinement of space hierarchy

If space is measured as the number of cells used regardless of alphabet size, then SPACE(f(n)) = SPACE(O(f(n))) because one can achieve any linear compression by switching to a larger alphabet. However, by measuring space in bits, a much sharper separation is achievable for deterministic space. Instead of being defined up to a multiplicative constant, space is now defined up to an additive constant. However, because any constant amount of external space can be saved by storing the contents into the internal state, we still have SPACE(f(n)) = SPACE(f(n)+O(1)).
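As a toy illustration of the linear-compression idea (not taken from the original text), the following Python sketch packs every c tape cells over an alphabet Σ into a single cell over the larger alphabet Σ^c, reducing the number of cells by a factor of c; a simulating machine would read and write such packed cells to mimic the original head movements.

    # Toy illustration of linear tape compression: every c cells over the
    # alphabet Sigma become one cell over the larger alphabet Sigma^c
    # (represented here as tuples of length c).
    from itertools import zip_longest

    def compress_tape(tape, c, blank='_'):
        """Pack consecutive groups of c symbols into single tuple-valued cells."""
        groups = zip_longest(*[iter(tape)] * c, fillvalue=blank)
        return [tuple(group) for group in groups]

    original = list('110010011')          # 9 cells over {0, 1}
    packed = compress_tape(original, 3)   # 3 cells over ({0, 1, _})^3
    print(packed)  # [('1', '1', '0'), ('0', '1', '0'), ('0', '1', '1')]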

In what follows, assume that f is space-constructible and that SPACE refers to deterministic space.

The proof is similar to the proof of the space hierarchy theorem, but with two complications: The universal Turing machine has to be space-efficient, and the reversal has to be space-efficient. One can generally construct universal Turing machines with O(log(space)) space overhead, and under appropriate assumptions, just O(1) space overhead (which may depend on the machine being simulated). For the reversal, the key issue is how to detect if the simulated machine rejects by entering an infinite (space-constrained) loop. Simply counting the number of steps taken would increase space consumption by about f(n). At the cost of a potentially exponential time increase, loops can be detected space-efficiently as follows: [1]

Modify the machine to erase everything and to go to a specific configuration A on success. Use depth-first search to determine whether A is reachable in the space bound from the starting configuration. The search starts at A and goes over configurations that lead to A. Because of determinism, this can be done in place and without going into a loop. Also (but this is not necessary for the proof), to determine whether the machine exceeds the space bound (as opposed to looping within the space bound), we can iterate over all configurations about to exceed the space bound and check (again using depth-first search) whether the initial configuration leads to any of them.
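The following Python sketch illustrates the backward search on an abstract configuration graph. It is a toy model under simplifying assumptions: configurations are opaque labels, and the predecessor map is built explicitly for readability, whereas the actual space-bounded construction enumerates predecessor configurations on the fly.

    # Toy model of the backward search: a deterministic, space-bounded machine
    # is viewed as a configuration graph in which every configuration has at
    # most one successor. We search backward from the unique accepting
    # configuration A; the configurations that lead to A form a tree, so the
    # depth-first search cannot run into a cycle.
    from collections import defaultdict

    def start_reaches_accept(successor, start, accept):
        """Return True iff `start` eventually reaches `accept` in the graph."""
        predecessors = defaultdict(list)   # explicit map, for readability only
        for conf, nxt in successor.items():
            predecessors[nxt].append(conf)

        stack = [accept]                   # depth-first search rooted at A
        while stack:
            conf = stack.pop()
            if conf == start:
                return True                # the start configuration leads to A
            stack.extend(predecessors[conf])
        return False                       # `start` loops or halts without reaching A

    # s -> a -> b -> A is an accepting run; c -> d -> c is an infinite loop.
    succ = {'s': 'a', 'a': 'b', 'b': 'A', 'c': 'd', 'd': 'c'}
    print(start_reaches_accept(succ, 's', 'A'))   # True
    print(start_reaches_accept(succ, 'c', 'A'))   # False

Because every configuration of a deterministic machine has exactly one successor, no configuration lying on a cycle can lead to the accepting configuration, which is why the search terminates without keeping a set of visited configurations.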

Corollaries

Corollary 1

For any two functions f_1, f_2: \mathbb{N} \longrightarrow
\mathbb{N}, where f_1(n) is o(f_2(n)) and f_2 is space-constructible, \mathrm{SPACE}(f_1(n)) \subsetneq \mathrm{SPACE}(f_2(n)).

This corollary lets us separate various space complexity classes. The function n^k is space-constructible for every natural number k, so for any two natural numbers k_1 < k_2 we can prove \mathrm{SPACE}(n^{k_1}) \subsetneq \mathrm{SPACE}(n^{k_2}). This demonstrates a detailed hierarchy within the class PSPACE; the following corollary extends the idea to real exponents.
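For instance, taking k = 1, 2, 3, \ldots gives a strictly increasing chain of deterministic space classes inside PSPACE:

\mathrm{SPACE}(n) \subsetneq \mathrm{SPACE}(n^2) \subsetneq \mathrm{SPACE}(n^3) \subsetneq \cdots \subseteq \mathrm{PSPACE}.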

Corollary 2

For any two nonnegative real numbers a_1 < a_2, \mathrm{SPACE}(n^{a_1})
\subsetneq \mathrm{SPACE}(n^{a_2}).

Corollary 3

\mathrm{NL} \subsetneq \mathrm{PSPACE}.

Proof

Savitch's theorem shows that \mathrm{NL} \subseteq \mathrm{SPACE}(\log^2n), while the space hierarchy theorem shows that \mathrm{SPACE}(\log^2n) \subsetneq \mathrm{SPACE}(n). Thus we get this corollary, along with the fact that TQBF \notin \mathrm{NL}, since TQBF is PSPACE-complete.
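Written as a single chain, the argument gives

\mathrm{NL} \subseteq \mathrm{SPACE}(\log^2n) \subsetneq \mathrm{SPACE}(n) \subseteq \mathrm{PSPACE},

so NL and PSPACE cannot be equal.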

This could also be proven using the non-deterministic space hierarchy theorem to show that NL ⊊ NPSPACE, and using Savitch's theorem to show that PSPACE = NPSPACE.

Corollary 4

\mathrm{PSPACE} \subsetneq \mathrm{EXPSPACE}.

This last corollary shows the existence of decidable problems that are intractable. In other words, their decision procedures must use more than polynomial space.

Corollary 5

There are problems in PSPACE requiring an arbitrarily large exponent to solve; therefore PSPACE does not collapse to DSPACE(n^k) for any constant k.
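One way to see this using Corollary 1: for every fixed k,

\mathrm{DSPACE}(n^k) \subsetneq \mathrm{DSPACE}(n^{k+1}) \subseteq \mathrm{PSPACE},

so no single class DSPACE(n^k) can equal PSPACE.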


References

  1. Sipser, Michael (1978). "Halting Space-Bounded Computations". Proceedings of the 19th Annual Symposium on Foundations of Computer Science.

