FO (complexity)

In descriptive complexity, a branch of computational complexity, FO is a complexity class of structures which can be recognized by formulas of first-order logic, and also equals the complexity class AC0. Descriptive complexity uses the formalism of logic, but does not use several key notions associated with logic such as proof theory or axiomatization.

Restricting predicates to be from a set X yields a smaller class FO[X]. For instance, FO[<] is the set of star-free languages. The two different definitions of FO[<], in terms of logic and in terms of regular expressions, suggest that this class may be mathematically interesting beyond its role in computational complexity, and that methods from both logic and regular expressions may be useful in its study.

Similarly, extensions of FO formed by the addition of operators give rise to other well-known complexity classes.[1] This allows the complexity of some problems to be established without reference to algorithms.

Definition and examples

The idea

When we use the logic formalism to describe a computational problem, the input is a finite structure, and the elements of that structure are the domain of discourse. Usually the input is either a string (of bits or over an alphabet) and the elements of the logical structure represent positions of the string, or the input is a graph and the elements of the logical structure represent its vertices. The length of the input will be measured by the size of the respective structure. Whatever the structure is, we can assume that there are relations that can be tested, for example "E(x,y) is true iff there is an edge from x to y" (in case of the structure being a graph), or "P(n) is true iff the nth letter of the string is 1." These relations are the predicates for the first-order logic system. We also have constants, which are special elements of the respective structure, for example if we want to check reachability in a graph, we will have to choose two constants s (start) and t (terminal).
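
For example, if the input is a graph, the first-order sentence \forall x \exists y\, E(x,y) expresses the property "every vertex has an outgoing edge", and \exists x\exists y\exists z\,(E(x,y)\wedge E(y,z)\wedge E(z,x)) expresses the existence of a triangle; a query in FO asks whether such a fixed sentence is true in the input structure.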

In descriptive complexity theory we almost always suppose that there is a total order over the elements and that we can check equality between elements. This lets us consider elements as numbers: the element x represents the number n iff there are (n-1) elements y with y<x. Thanks to this we may also have the primitive predicate "bit", where bit(x,k) is true iff the kth bit of x is 1. (We can replace addition and multiplication by ternary relations such that plus(x,y,z) is true iff x+y=z and times(x,y,z) is true iff x*y=z.)
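
For example, if x is the element representing the number 5 (binary 101), then bit(x,0) and bit(x,2) are true while bit(x,1) is false, and plus(y,z,x) is true when y and z represent 2 and 3.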

Formally

Let X be a set of predicates. The language FO[X] is defined as the closure under conjunction (\wedge), negation (\neg) and universal quantification (\forall) over elements of the structures. Existential quantification (\exists) and disjunction (\vee) are also often used, but these can be defined in terms of the first three symbols. The base case is the predicates of X applied to variables. One always implicitly has a predicate P_a(x) for each letter a of the alphabet, stating that the letter at position x is an a.
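
For example, \exists x\, \phi can be written as \neg\forall x\,\neg\phi, and A\vee B can be written as \neg(\neg A\wedge\neg B).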

The semantics of formulae in FO is straightforward: \neg A is true iff A is false, A\wedge B is true iff both A and B are true, and \forall x\, P(x) is true iff P(v) is true for every value v that x may take in the underlying universe. For a c-ary predicate P, P(x_1,\dots, x_c) is true if and only if the relation P holds of (n_1,\dots, n_c) when each x_i is interpreted as the element n_i.

Property

Warning

A query in FO is then to check whether a first-order formula is true in a given structure representing the input to the problem. One should not confuse this kind of problem with checking whether a quantified Boolean formula is true, which is the problem QBF and is PSPACE-complete. The difference between these two problems is that in QBF the size of the problem is the size of the formula and the elements are just Boolean values, whereas in FO the size of the problem is the size of the structure and the formula is fixed.

This is similar to Parameterized complexity but the size of the formula is not a fixed parameter.
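
As a concrete illustration (a minimal Python sketch, not part of the formalism), checking the fixed sentence \forall x \exists y\, E(x,y) on an input graph takes time polynomial in the size of the structure, while the formula itself never changes:

    def formula_holds(vertices, edges):
        # Evaluate the fixed sentence "forall x exists y E(x,y)":
        # every vertex of the input graph must have an outgoing edge.
        return all(any((x, y) in edges for y in vertices) for x in vertices)

    print(formula_holds({0, 1, 2}, {(0, 1), (1, 2), (2, 0)}))  # True
    print(formula_holds({0, 1, 2}, {(0, 1), (1, 2)}))          # False: vertex 2 has no outgoing edge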

Normal form

Every formula is equivalent to a formula in prenex normal form (where all quantifiers are written first, followed by a quantifier-free formula).
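
For example, (\forall x\, P(x))\wedge(\exists y\, Q(y)) is equivalent to the prenex formula \forall x\,\exists y\,(P(x)\wedge Q(y)), since the structures considered are non-empty.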

Operators

FO without any operators

In circuit complexity, FO(ARB), where ARB is the set of all predicates, i.e. the logic in which arbitrary predicates may be used, can be shown to be equal to AC0, the first class in the AC hierarchy. Indeed, there is a natural translation from FO's symbols to nodes of circuits, with \forall and \exists becoming \wedge and \vee gates of fan-in n.
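
For example, on graphs with n vertices the sentence \forall x\,\exists y\, E(x,y) translates to the depth-2 circuit \bigwedge_{v=1}^{n}\bigvee_{w=1}^{n}E(v,w), an AND of n OR gates of fan-in n over the input bits encoding the edge relation.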

FO(BIT) is the restriction of AC0 to families of circuits constructible in alternating logarithmic time. FO(<) is the set of star-free languages.

Partial fixed point is PSPACE

FO(PFP,X) is the set of boolean queries definable in FO(X) where we add a partial fixed point operator.

Let k be an integer, x and y be vectors of k variables, P be a second-order variable of arity k, and \phi be an FO(PFP,X) formula using x and P as variables. We can iteratively define (P_i)_{i\in N} such that P_0(x)=false and P_i(x)=\phi(P_{i-1},x) (meaning \phi with P_{i-1} substituted for the second-order variable P). Then either there is a fixed point, or the sequence (P_i) is cyclic.

PFP(\phi_{P,x})(y) is defined as the value of the fixed point of (P_i) on y if there is a fixed point, and as false otherwise. Since the P_i are k-ary relations, there are at most 2^{n^k} possible values for them, so with a polynomial-space counter we can check whether there is a loop or not.
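
The following Python sketch is only an illustration of this iteration; phi is assumed to be a hypothetical helper that evaluates \phi on the current stage P and a k-tuple t, and the loop bound shows how the 2^{n^k} count is used to detect a cyclic sequence of stages:

    from itertools import product

    def pfp(phi, universe, k, y):
        # Iterate the stage operator, starting from the everywhere-false relation P_0.
        P = set()
        for _ in range(2 ** (len(universe) ** k)):   # at most 2^(n^k) distinct stages
            Q = {t for t in product(universe, repeat=k) if phi(P, t)}
            if Q == P:                               # fixed point reached
                return y in P
            P = Q
        return False                                 # no fixed point: the sequence of stages is cyclic

    # Example of a non-converging operator: the stages alternate, so PFP is false.
    print(pfp(lambda P, t: t not in P, [0], 1, (0,)))   # False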

It has been proven that FO(PFP,BIT) is equal to PSPACE. This definition is equivalent to FO(2^{n^{O(1)}}).

Least Fixed Point is P

FO(LFP,X) is the set of boolean queries definable in FO(PFP,X) where the partial fixed point is limited to be monotone. That is, if the second order variable is P, then P_i(x) always implies P_{i+1}(x).

We can guarantee monotonicity by restricting the formula \phi to only contain positive occurrences of P (that is, occurrences preceded by an even number of negations). We can alternatively describe LFP(\phi_{P,x}) as PFP(\psi_{P,x}) where \psi(P,x)=\phi(P,x)\vee P(x).

Due to monotonicity, we only add vectors to the truth table of P, and since there are only n^k possible vectors we will always find a fixed point before n^k iterations. Hence it can be shown that FO(LFP,BIT)=P. This definition is equivalent to FO(n^{O(1)}).
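
A minimal Python sketch of this bounded iteration (illustrative only; lfp and phi are hypothetical names, not part of any standard library), using the transitive closure of an edge relation as an example of a monotone operator:

    from itertools import product

    def lfp(phi, universe, k):
        # Least fixed point of a monotone stage operator: tuples are only ever
        # added, so the iteration stabilises after at most n**k steps.
        P = set()
        for _ in range(len(universe) ** k):
            Q = P | {t for t in product(universe, repeat=k) if phi(P, t)}
            if Q == P:
                return P
            P = Q
        return P

    # phi(P,(x,y)) = E(x,y) or there exists z with E(x,z) and P(z,y)
    edges = {(0, 1), (1, 2), (2, 3)}
    universe = [0, 1, 2, 3]
    phi = lambda P, t: t in edges or any((t[0], z) in edges and (z, t[1]) in P
                                         for z in universe)
    print(lfp(phi, universe, 2))   # the transitive closure; contains (0, 3)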

Transitive closure is NL

FO(TC,X) is the set of boolean queries definable in FO(X) with a transitive closure (TC) operator.

TC is defined this way: let k be a positive integer and u,v,x,y be vectors of k variables. Then TC(\phi_{u,v})(x,y) is true if there exist vectors z_1,\dots,z_m such that z_1=x, z_m=y, and for all i<m, \phi(z_i,z_{i+1}) is true. Here, \phi is a formula written in FO(TC) and \phi(x,y) means that the variables u and v are replaced by x and y.
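
For example, reachability from s to t in a graph, the canonical NL-complete problem, is expressed by (\rm{TC}_{u,v}\, E(u,v))(s,t).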

FO(TC,BIT) is equal to NL.

Deterministic transitive closure is L

FO(DTC,X) is defined as FO(TC,X) where the transitive closure operator is deterministic. This means that when we apply DTC(\phi_{u,v}), we know that for all u, there exists at most one v such that \phi(u,v).

We can suppose that DTC(\phi_{u,v}) is syntactic sugar for TC(\psi_{u,v}) where \psi(u,v)=\phi(u,v)\wedge \forall x (x=v \vee \neg \phi(u,x)).

It has been shown that FO(DTC,BIT) is equal to L.

Normal form

Any formula with a fixed-point (resp. transitive closure) operator can, without loss of generality, be written with exactly one application of the operator, applied to the constant tuple 0 (resp. to the constant tuples 0 and n-1).

Iterating

We will define first-order with iteration, 'FO[t(n)]'; here t(n) is a (class of) functions from integers to integers, and for different classes of functions t(n) we will obtain different complexity classes FO[t(n)].

In this section we will write (\forall x P) Q to mean (\forall x (P\Rightarrow Q)) and (\exists x P) Q to mean (\exists x (P \wedge Q)). We first need to define quantifier blocks (QB): a quantifier block is a list (Q_1 x_1, \phi_1)\dots(Q_k x_k, \phi_k) where the \phi_i are quantifier-free FO-formulae and each Q_i is either \forall or \exists. If Q is a quantifier block, then we call [Q]^{t(n)} the iteration operator, which is defined as Q written t(n) times. One should note that there are then k\cdot t(n) quantifiers in the list, but only k variables, each of which is used t(n) times.
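
For example, if Q=(\exists x, \phi_1)(\forall y, \phi_2), then [Q]^2 abbreviates (\exists x\, \phi_1)(\forall y\, \phi_2)(\exists x\, \phi_1)(\forall y\, \phi_2): four quantifiers, but only the two variables x and y, each reused twice.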

We can now define FO[t(n)] to be the FO-formulae with an iteration operator whose exponent is in the class t(n). This yields, for example, the equalities FO[n^{O(1)}]=P and FO[2^{n^{O(1)}}]=PSPACE mentioned above.

Logic without arithmetical relations

Let the successor relation, succ, be a binary relation such that \rm{succ}(x,y) is true if and only if x+1=y.

Over first order logic, succ is strictly less expressive than <, which is less expressive than +, which is less expressive than bit. + and \times are as expressive as bit.
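
For example, < is definable from plus: x<y holds if and only if \exists z\,(\neg\rm{plus}(z,z,z)\wedge\rm{plus}(x,z,y)), since \rm{plus}(z,z,z) holds exactly when z represents 0.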

Using successor to define bit

It is possible to define the plus and then the bit relations with a deterministic transitive closure.

\rm{plus}(a,b,c)=(\rm{DTC}_{v,x,y,z}\,\rm{succ}(v,y)\wedge\rm{succ}(z,x))(a,b,c,0) and

\rm{bit}(a,b)=(\rm{DTC}_{a,b,a',b'}\,\psi)(a,b,1,0) with

\psi=\text{if } b=0 \text{ then } (\text{if } \exists m\,(a=m+m+1) \text{ then } (a'=1\wedge b'=0) \text{ else } \bot) \text{ else } (\rm{succ}(b',b)\wedge(a'+a'=a \vee a'+a'+1=a))

This just means that when we query for bit 0 we check the parity, and go to (1,0) if a is odd (which is the accepting state), else we reject. If we check a bit b>0, we divide a by 2 and check bit b-1.

Hence, in the presence of such operators, it makes no sense to consider successor alone without the other arithmetical predicates, since plus and bit become definable from it.

Logics without successor

FO(LFP) and FO(PFP) are two logics without any predicates apart from the equality predicate between variables and the letter predicates. They are equal, respectively, to relational-P and relational-PSPACE, the classes P and PSPACE over relational machines.[2]

The Abiteboul–Vianu theorem states that FO(LFP)=FO(PFP) if and only if FO(<,LFP)=FO(<,PFP), hence if and only if P=PSPACE. This result has been extended to other fixed points.[2] This shows that the presence of order in first-order logic is more a technical issue than a fundamental one.

References

  1. Immerman, Neil (1999). Descriptive Complexity. Springer. ISBN 0-387-98600-6.
  2. Serge Abiteboul, Moshe Y. Vardi, Victor Vianu: "Fixpoint Logics, Relational Machines, and Computational Complexity". Journal of the ACM, Volume 44, Issue 1 (January 1997), pp. 30-56. ISSN 0004-5411.
