Dynamic logic (modal logic)

From Wikipedia, the free encyclopedia

Dynamic logic is an extension of modal logic originally intended for reasoning about computer programs and later applied to more general complex behaviors arising in linguistics, philosophy, AI, and other fields.

Language

Modal logic is characterized by the modal operators □p (box p), asserting that p is necessarily the case, and ◇p (diamond p), asserting that p is possibly the case. Dynamic logic extends this by associating to every action a the modal operators [a] and <a>, thereby making it a multimodal logic. The meaning of [a]p is that after performing action a it is necessarily the case that p holds, that is, a must bring about p. The meaning of <a>p is that after performing a it is possible that p holds, that is, a might bring about p. These operators are related by [a]p ≡ ¬<a>¬p and <a>p ≡ ¬[a]¬p, analogously to the relationship between the universal and existential quantifiers.

Dynamic logic permits compound actions built up from smaller actions. While the basic control operators of any programming language could be used for this purpose, Kleene's regular expression operators are a good match to modal logic. Given actions a and b, the compound action a∪b, choice, also written a+b or a|b, is performed by performing one of a or b. The compound action a;b, sequence, is performed by performing first a and then b. The compound action a*, iteration, is performed by performing a zero or more times, sequentially. The constant action 0 or BLOCK does nothing and does not terminate, whereas the constant action 1 or SKIP or NOP, definable as 0*, does nothing but does terminate.
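
These regular operations can be illustrated concretely by modeling actions as binary relations on a finite set of states, the reading made precise in the possible-world semantics section below. The following Python sketch is only an illustration; the names choice, seq, star, and BLOCK are ours, not standard.

```python
# Illustrative sketch: actions modeled as binary relations on a finite
# set of states. A pair (s, t) in an action means the action can take
# state s to state t.

def choice(a, b):
    """a ∪ b: perform one of a or b."""
    return a | b

def seq(a, b):
    """a ; b: perform a, then b (relational composition)."""
    return {(s, u) for (s, t) in a for (v, u) in b if t == v}

def star(a, states):
    """a*: perform a zero or more times (reflexive-transitive closure)."""
    r = {(s, s) for s in states}   # zero iterations: SKIP
    while True:
        new = r | seq(r, a)
        if new == r:
            return r
        r = new

BLOCK = set()                      # 0: does nothing and does not terminate

states = {0, 1, 2, 3}
inc = {(s, s + 1) for s in states if s + 1 in states}   # a sample atomic action
```

Note that star(BLOCK, states) comes out as the identity relation, matching the remark that SKIP is definable as 0*.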

Axioms

These operators can be axiomatized in dynamic logic as follows, taking as already given a suitable axiomatization of modal logic including such axioms for modal operators as the above-mentioned axiom [a]p ≡ ¬<a>¬p and the two inference rules modus ponens (p, p→q ⊢ q) and necessitation (p ⊢ [a]p).

A1. [0]p

A2. [1]p ≡ p

A3. [a∪b]p ≡ [a]p ∧ [b]p

A4. [a;b]p ≡ [a][b]p

A5. [a*]p ≡ p ∧ [a][a*]p

A6. p ∧ [a*](p → [a]p) → [a*]p

Axiom A1 makes the empty promise that when BLOCK terminates, p will hold, even if p is the proposition false. (Thus BLOCK abstracts the essence of the action of hell freezing over.) A2 says that [NOP] acts as the identity function on propositions, that is, it transforms p into itself. A3 says that if doing one of a or b must bring about p, then a must bring about p and likewise for b, and conversely. A4 says that if doing a and then b must bring about p, then a must bring about a situation in which b must bring about p. A5 is the evident result of applying A2, A3 and A4 to the equation a* = 1∪a;a* of Kleene algebra. A6 asserts that if p holds now, and no matter how often we perform a it remains the case that the truth of p after that performance entails its truth after one more performance of a, then p must remain true no matter how often we perform a. A6 is recognizable as mathematical induction with the action n:=n+1 of incrementing n generalized to arbitrary actions a.
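
On a finite model these axioms can be checked directly by reading [a]p and <a>p as operations on sets of states. The model below and the helper names box and diamond are our own illustration, not part of the formal system.

```python
# Illustrative check of A3 and the box/diamond duality on a tiny model.
# The relations a, b and the set p are arbitrary choices.

def box(a, p, states):
    """[a]p: states from which every a-step lands in p."""
    return {s for s in states if all(t in p for (u, t) in a if u == s)}

def diamond(a, p, states):
    """<a>p: states from which some a-step lands in p."""
    return {s for s in states if any(t in p for (u, t) in a if u == s)}

states = {0, 1, 2}
a = {(0, 1), (0, 2), (1, 2)}
b = {(2, 0)}
p = {1, 2}

# A3: [a∪b]p ≡ [a]p ∧ [b]p
assert box(a | b, p, states) == box(a, p, states) & box(b, p, states)
# Duality: <a>p ≡ ¬[a]¬p
assert diamond(a, p, states) == states - box(a, states - p, states)
```

A1 also comes out as expected: with the empty relation for BLOCK, box returns every state vacuously, so [0]p holds everywhere.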

Derivations

The modal logic axiom [a]p ≡ ¬<a>¬p permits the derivation of the following six theorems corresponding to the above.

T1. ¬<0>p

T2. <1>p ≡ p

T3. <a∪b>p ≡ <a>p ∨ <b>p

T4. <a;b>p ≡ <a><b>p

T5. <a*>p ≡ p ∨ <a><a*>p

T6. <a*>p → p ∨ <a*>(¬p ∧ <a>p)

T1 asserts the impossibility of bringing anything about by performing BLOCK. T2 notes again that NOP changes nothing, bearing in mind that NOP is both deterministic and terminating whence [1] and <1> have the same force. T3 says that if the choice of a or b could bring about p, then either a could bring about p or b could. T4 is just like A4. T5 is explained as for A5. T6 asserts that if it is possible to bring about p by performing a sufficiently often, then either p is true now or it is possible to perform a repeatedly to bring about a situation where p is (still) false but one more performance of a could bring about p.

Box and diamond are entirely symmetric with regard to which one takes as primitive. An alternative axiomatization would have been to take the theorems T1-T6 as axioms, from which we could then have derived A1-A6 as theorems.

The difference between implication and inference is the same in dynamic logic as in any other logic: whereas the implication p→q asserts that if p is true then so is q, the inference p ⊢ q asserts that if p is valid then so is q. However the dynamic nature of dynamic logic moves this distinction out of the realm of abstract axiomatics into the common-sense experience of situations in flux. The inference rule p ⊢ [a]p, for example, is sound because its premise asserts that p holds at all times, whence no matter where a might take us, p will be true there. The implication p→[a]p is not valid, however, because the truth of p at the present moment is no guarantee of its truth after performing a. For example, p→[a]p will be true in any situation where p is false, or in any situation where [a]p is true, but the assertion x=1 → [x:=x+1]x=1 is false in any situation where x has value 1, and therefore is not valid.
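
The failure of validity can be seen by brute force over a few integer states. In this small check of ours, the deterministic action x:=x+1 lets [x:=x+1]p at state x be read simply as p evaluated at x+1.

```python
# Illustrative check that x=1 → [x:=x+1](x=1) is true in some states
# but false in the state x=1, hence not valid.

def implication(x):
    """x=1 → [x:=x+1](x=1), evaluated at the state where x has this value."""
    return (x != 1) or (x + 1 == 1)

assert implication(0)        # true: antecedent is false
assert implication(5)        # true: antecedent is false
assert not implication(1)    # false exactly where x has value 1
assert not all(implication(x) for x in range(0, 5))   # hence not valid
```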

Derived rules of inference

As for modal logic, the inference rules modus ponens and necessitation suffice also for dynamic logic as the only primitive rules it needs, as noted above. However, as usual in logic, many more rules can be derived from these with the help of the axioms. An example instance of such a derived rule in dynamic logic is that if kicking a broken TV once can't possibly fix it, then repeatedly kicking it can't possibly fix it either. Writing k for the action of kicking the TV, and b for the proposition that the TV is broken, dynamic logic expresses this inference as b→[k]b ⊢ b→[k*]b, having as premise b→[k]b and as conclusion b→[k*]b. The meaning of [k]b is that it is guaranteed that after kicking the TV, it is broken. Hence the premise b→[k]b means that if the TV is broken, then after kicking it once it will still be broken. k* denotes the action of kicking the TV zero or more times. Hence the conclusion b→[k*]b means that if the TV is broken, then after kicking it zero or more times it will still be broken. For if not, then after the second-to-last kick the TV would be in a state where kicking it once more would fix it, which the premise claims can never happen under any circumstances.

The inference b→[k]b ⊢ b→[k*]b is sound. However the implication (b→[k]b) → (b→[k*]b) is not valid, because we can easily find situations in which b→[k]b holds but b→[k*]b does not. In any such counterexample situation b must hold and [k]b must be true, while [k*]b must be false. This can happen in any situation where the TV is broken but can be revived with two kicks. The implication fails because it only requires that b→[k]b hold now, whereas the inference succeeds because it requires that b→[k]b hold in all situations, not just the present one.
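
Such a counterexample situation can be made concrete in a three-state model of our own devising: states 0 and 1 are broken TVs, state 2 is fixed, and two kicks lead from 0 to 2.

```python
# Illustrative counterexample model: kick relation k over three TV states.
states = {0, 1, 2}
k = {(0, 1), (1, 2)}            # two kicks fix the TV
broken = {0, 1}

def box(rel, p):
    """[rel]p: states from which every rel-step lands in p."""
    return {s for s in states if all(t in p for (u, t) in rel if u == s)}

def star(rel):
    """rel*: reflexive-transitive closure."""
    r = {(s, s) for s in states}
    while True:
        new = r | {(s, u) for (s, t) in r for (v, u) in rel if t == v}
        if new == r:
            return r
        r = new

# b → [k]b holds at state 0 (one kick leaves the TV broken) ...
assert 0 in broken and 0 in box(k, broken)
# ... but b → [k*]b fails there: kicking twice reaches the fixed state 2.
assert 0 not in box(star(k), broken)
```

The premise of the inference fails at state 1 (there, one kick does fix the TV), which is exactly why the sound inference rule does not apply to this model while the invalid implication is refuted at state 0.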

An example of a valid implication is the proposition x≥3 → [x:=x+1]x≥4. This says that if x is greater than or equal to 3, then after incrementing x, x must be greater than or equal to 4. In the case of deterministic actions a that are guaranteed to terminate, such as x:=x+1, must and might have the same force, that is, [a] and <a> have the same meaning. Hence the above proposition is equivalent to x≥3 → <x:=x+1>x≥4, asserting that if x is greater than or equal to 3 then after performing x:=x+1, x might be greater than or equal to 4.

Assignment

The general form of an assignment statement is x := e where x is a variable and e is an expression built from constants and variables with whatever operations are provided by the language, such as addition and multiplication. The Hoare axiom for assignment is not given as a single axiom but rather as an axiom schema.

A7. [x:=e]Φ(x) ≡ Φ(e)

This is a schema in the sense that Φ(x) can be instantiated with any formula Φ containing zero or more instances of a variable x. The meaning of Φ(e) is Φ with those occurrences of x that occur free in Φ, i.e. not bound by some quantifier as in ∀x, replaced by e. For example we may instantiate A7 with [x:=e](x=y²) ≡ e=y², or with [x:=e](b=c+x) ≡ b=c+e. Such an axiom schema allows infinitely many axioms having a common form to be written as a finite expression connoting that form.

The instance [x:=x+1]x≥4 ≡ x+1 ≥ 4 of A7 allows us to calculate mechanically that the example [x:=x+1]x≥4 encountered a few paragraphs ago is equivalent to x+1 ≥ 4, which in turn is equivalent to x≥3 by elementary algebra.
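
This mechanical calculation can be mimicked by treating predicates as functions of the state; the name wp_assign below is our illustrative label for the substitution that A7 licenses.

```python
# Illustrative sketch of the assignment axiom: with predicates as Python
# functions of the value of x, [x:=e]Φ(x) at x is just Φ(e(x)).

def wp_assign(e, phi):
    """Precondition of x := e(x) for postcondition phi, per A7."""
    return lambda x: phi(e(x))

pre = wp_assign(lambda x: x + 1, lambda x: x >= 4)   # [x:=x+1]x≥4
assert all(pre(x) == (x >= 3) for x in range(-10, 10))   # ≡ x≥3
```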

An example illustrating assignment in combination with * is the proposition <(x:=x+1)*>x=7. This asserts that it is possible, by incrementing x sufficiently often, to make x equal to 7. This of course is not always true, e.g. if x is 8 to begin with, or 6.5, whence this proposition is not a theorem of dynamic logic. If x is of type integer, however, then this proposition is true if and only if x is at most 7 to begin with, that is, it is just a roundabout way of saying x≤7.
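
For integer x, the equivalence of <(x:=x+1)*>x=7 with x≤7 can be checked by a bounded brute-force search (our own sketch; the iteration bound is an artifact of the check, not of the logic):

```python
# Illustrative check of <(x:=x+1)*>(x = 7) over integer starting values.

def can_reach_7(x, limit=100):
    """Can zero or more increments take x to 7? (bounded search)"""
    for _ in range(limit):
        if x == 7:
            return True
        x += 1
    return False

assert all(can_reach_7(x) == (x <= 7) for x in range(-5, 12))
```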

Mathematical induction can be obtained as the instance of A6 in which the proposition p is instantiated as Φ(n), the action a as n:=n+1, and n as 0. The first two of these three instantiations are straightforward, converting A6 to (Φ(n) ∧ [(n:=n+1)*](Φ(n)→[n:=n+1]Φ(n))) → [(n:=n+1)*]Φ(n). However the ostensibly simple substitution of 0 for n is not so simple, as it brings out the so-called referential opacity of modal logic: a modality can interfere with a substitution.

When we substituted Φ(n) for p, we were thinking of the proposition symbol p as a rigid designator with respect to the modality [n:=n+1], meaning that it is the same proposition after incrementing n as before, even though incrementing n may impact its truth. Likewise the action a is still the same action after incrementing n, even though incrementing n will result in its executing in a different environment. However n itself is not a rigid designator with respect to the modality [n:=n+1]; if it denotes 3 before incrementing n it denotes 4 after. So we can't just substitute 0 for n everywhere in A6.

One way of dealing with the opacity of modalities is to eliminate them. To this end, expand [(n:=n+1)*]Φ(n) as the infinite conjunction [(n:=n+1)⁰]Φ(n) ∧ [(n:=n+1)¹]Φ(n) ∧ [(n:=n+1)²]Φ(n) ∧ …, that is, the conjunction over all i of [(n:=n+1)ⁱ]Φ(n). Now apply A4 to turn [(n:=n+1)ⁱ]Φ(n) into [n:=n+1][n:=n+1]…Φ(n), having i modalities. Then apply Hoare's axiom i times to this to produce Φ(n+i), then simplify this infinite conjunction to ∀iΦ(n+i). This whole reduction should be applied to both instances of [(n:=n+1)*] in A6, yielding (Φ(n) ∧ ∀i(Φ(n+i)→[n:=n+1]Φ(n+i))) → ∀iΦ(n+i). The remaining modality can now be eliminated with one more use of Hoare's axiom to give (Φ(n) ∧ ∀i(Φ(n+i)→Φ(n+i+1))) → ∀iΦ(n+i).

With the opaque modalities now out of the way we can safely substitute 0 for n in the usual manner of first-order logic to obtain Peano's celebrated axiom (Φ(0) ∧ ∀i(Φ(i)→Φ(i+1))) → ∀iΦ(i), namely mathematical induction.

One subtlety we glossed over here is that ∀i should be understood as ranging over the natural numbers, where i is the superscript in the expansion of a* as the union of aⁱ over all natural numbers i. The importance of keeping this typing information straight becomes apparent if n is of type integer, or even real, for any of which A6 is perfectly valid as an axiom. As a case in point, if n is a real variable and Φ(n) is the predicate "n is a natural number", then axiom A6 after the first two substitutions, that is, (Φ(n) ∧ ∀i(Φ(n+i)→Φ(n+i+1))) → ∀iΦ(n+i), is just as valid, that is, true in every state regardless of the value of n in that state, as when n is of type natural number. If in a given state n is a natural number, then the antecedent of the main implication of A6 holds, but then n+i is also a natural number, so the consequent also holds. If n is not a natural number, then the antecedent is false and A6 remains true regardless of the truth of the consequent. We could strengthen A6 to an equivalence p ∧ [a*](p → [a]p) ≡ [a*]p without impacting any of this, the other direction being provable from A5, from which we see that if the antecedent of A6 does happen to be false somewhere then the consequent must be false there too.

Test

Dynamic logic associates to every proposition p an action p? called a test. When p holds, the test p? acts as a NOP, changing nothing while allowing the action to move on. When p is false, p? acts as BLOCK. Tests can be axiomatized as follows.

A8. [p?]q ≡ p→q

The corresponding theorem for <p?> is:

T8. <p?>q ≡ p∧q

The construct if p then a else b is realized in dynamic logic as (p?;a)∪(~p?;b). This action expresses a guarded choice: if p holds then p?;a is equivalent to a, whereas ~p?;b is equivalent to BLOCK, and a∪BLOCK is equivalent to a. Hence when p is true the performer of the action can only take the left branch, and when p is false the right.

The construct while p do a is realized as (p?;a)*;~p?. This performs p?;a zero or more times and then performs ~p?. As long as p remains true, the ~p? at the end blocks the performer from terminating the iteration prematurely, but as soon as it becomes false, further iterations of the body p?;a are blocked and the performer then has no choice but to exit via the test ~p?.

Quantification as random assignment

The random-assignment statement x:=? denotes the nondeterministic action of setting x to an arbitrary value. [x:=?]p then says that p holds no matter what you set x to, while <x:=?>p says that it is possible to set x to a value that makes p true. [x:=?] thus has the same meaning as the universal quantifier ∀x, while <x:=?> similarly corresponds to the existential quantifier ∃x. That is, first-order logic can be understood as the dynamic logic of programs of the form x:=?.
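
Over a finite domain this correspondence is exact, as a small sketch of ours shows, with x:=? modeled by ranging over every value in the domain:

```python
# Illustrative sketch: [x:=?] as ∀x and <x:=?> as ∃x over a finite domain.
domain = range(-3, 4)

def box_random(p):
    """[x:=?]p: p holds no matter what x is set to."""
    return all(p(x) for x in domain)

def diamond_random(p):
    """<x:=?>p: some value of x makes p true."""
    return any(p(x) for x in domain)

assert box_random(lambda x: x * x >= 0)      # ∀x. x² ≥ 0
assert diamond_random(lambda x: x == 2)      # ∃x. x = 2
assert not box_random(lambda x: x > 0)       # ¬∀x. x > 0
```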

Possible-world semantics

Modal logic is most commonly interpreted in terms of possible world semantics or Kripke structures. This semantics carries over naturally to dynamic logic by interpreting worlds as states of a computer in the application to program verification, or states of our environment in applications to linguistics, AI, etc. One role for possible world semantics is to formalize the intuitive notions of truth and validity, which in turn permit the notions of soundness and completeness to be defined for axiom systems. An inference rule is sound when validity of its premises implies validity of its conclusion. An axiom system is sound when all its axioms are valid and its inference rules are sound. An axiom system is complete when every valid formula is derivable as a theorem of that system. These concepts apply to all systems of logic including dynamic logic.

Propositional dynamic logic (PDL)

Ordinary or first-order logic has two types of terms, respectively assertions and data. As can be seen from the examples above, dynamic logic adds a third type of term denoting actions. The dynamic logic assertion [x:=x+1]x≥4 contains all three types: x, x+1, and 4 are data, x:=x+1 is an action, and x≥4 and [x:=x+1]x≥4 are assertions. Propositional logic is derived from first-order logic by omitting data terms; it reasons only about abstract propositions, which may be simple propositional variables or atoms or compound propositions built with such logical connectives as and, or, and not.

Propositional dynamic logic, or PDL, was derived from dynamic logic in 1977 by Michael Fischer and Richard Ladner. PDL blends the ideas behind propositional logic and dynamic logic by adding actions while omitting data; hence the terms of PDL are actions and propositions. The TV example above is expressed in PDL whereas the next example involving x:=x+1 is in first-order DL. PDL is to (first-order) dynamic logic as propositional logic is to first-order logic.

Fischer and Ladner showed in their 1977 paper that PDL satisfiability was of computational complexity at most nondeterministic exponential time, and at least deterministic exponential time in the worst case. This gap was closed in 1978 by Vaughan Pratt who showed that PDL was decidable in deterministic exponential time. In 1977 Krister Segerberg proposed a complete axiomatization of PDL, namely any complete axiomatization of modal logic K together with axioms A1-A6 as given above. Completeness proofs for Segerberg's axioms were found by Gabbay, Parikh (1978), Pratt (1979), and Kozen and Parikh (1981).

History

Dynamic logic was developed by Vaughan Pratt in 1974 in notes for a class on program verification as an approach to assigning meaning to Hoare logic by expressing the Hoare formula p{a}q as p→[a]q. The approach was later published in 1976 as a logical system in its own right. The system parallels A. Salwicki's system of Algorithmic Logic and Edsger Dijkstra's notion of weakest-precondition predicate transformer wp(a,p), with [a]p corresponding to Dijkstra's wlp(a,p), weakest liberal precondition. Those logics however made no connection with either modal logic, Kripke semantics, regular expressions, or the calculus of binary relations; dynamic logic therefore can be viewed as a refinement of algorithmic logic and predicate transformers that connects them up to the axiomatics and Kripke semantics of modal logic as well as to the calculi of binary relations and regular expressions.

The Concurrency Challenge

Hoare logic, algorithmic logic, weakest preconditions, and dynamic logic are all well suited to discourse and reasoning about sequential behavior. Extending these logics to concurrent behavior however has proved problematic. There are various approaches but all of them lack the elegance of the sequential case. In contrast Amir Pnueli's 1977 system of temporal logic, another variant of modal logic sharing many common features with dynamic logic, differs from all of the above-mentioned logics by being what Pnueli has characterized as an "endogenous" logic, the others being "exogenous" logics. By this Pnueli meant that temporal logic assertions are interpreted within a universal behavioral framework in which a single global situation changes with the passage of time, whereas the assertions of the other logics are made externally to the multiple actions about which they speak. The advantage of the endogenous approach is that it makes no fundamental assumptions about what causes what as the environment changes with time. Instead a temporal logic formula can talk about two unrelated parts of a system, which because they are unrelated tacitly evolve in parallel. In effect ordinary logical conjunction of temporal assertions is the concurrent composition operator of temporal logic. The simplicity of this approach to concurrency has resulted in temporal logic being the modal logic of choice for reasoning about concurrent systems, with their problems of synchronization, interference, independence, deadlock, livelock, fairness, etc.

These concerns of concurrency would appear to be less central to linguistics, philosophy, and artificial intelligence, the areas in which dynamic logic is most often encountered nowadays.

For a comprehensive treatment of dynamic logic see the book by David Harel et al. cited below.

References

  • V.R. Pratt, "Semantical Considerations on Floyd-Hoare Logic", Proc. 17th Annual IEEE Symposium on Foundations of Computer Science, 1976, 109-121.
  • D. Harel, D. Kozen, and J. Tiuryn, "Dynamic Logic". MIT Press, 2000 (450 pp).
  • D. Harel, "Dynamic Logic", In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, volume II: Extensions of Classical Logic, chapter 10, pages 497-604. Reidel, Dordrecht, 1984.
