De Morgan's laws

De Morgan's Laws represented with Venn diagrams

In propositional logic and boolean algebra, De Morgan's laws[1][2][3] are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.

The rules can be expressed in English as:

The negation of a conjunction is the disjunction of the negations.
The negation of a disjunction is the conjunction of the negations.

or informally as:

"not (A and B)" is the same as "(not A) or (not B)"

also,

"not (A or B)" is the same as "(not A) and (not B)".

The rules can be expressed in formal language with two propositions P and Q as:

\neg(P\land Q)\iff(\neg P)\lor(\neg Q)

and

\neg(P\lor Q)\iff(\neg P)\land(\neg Q),

where:

\neg is the negation operator (NOT),
\land is the conjunction operator (AND),
\lor is the disjunction operator (OR), and
\iff is a metalogical symbol meaning "can be replaced in a logical proof with".

Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality.
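
For example, a programmer can use the laws to rewrite a negated compound condition. A minimal sketch in Python (variable and function names are illustrative only):

# Minimal sketch (hypothetical names): rewriting a negated compound condition
# using De Morgan's laws. Both functions are logically equivalent.

def outside_range(x, low, high):
    # "not (x >= low and x <= high)" ...
    return not (low <= x <= high)

def outside_range_demorgan(x, low, high):
    # ... is equivalent to "(not x >= low) or (not x <= high)",
    # i.e. "x < low or x > high".
    return x < low or x > high

assert all(outside_range(x, 0, 10) == outside_range_demorgan(x, 0, 10)
           for x in range(-5, 16))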

Formal notation

The negation of conjunction rule may be written in sequent notation:

\neg(P \land Q) \vdash (\neg P \lor \neg Q).

The negation of disjunction rule may be written as:

\neg(P \lor Q) \vdash (\neg P \land \neg Q).

In rule form: negation of conjunction

\frac{\neg (P \land Q)}{\therefore \neg P \lor \neg Q}

and negation of disjunction

\frac{\neg (P \lor Q)}{\therefore \neg P \land \neg Q}

and expressed as a truth-functional tautology or theorem of propositional logic:

\begin{align}
  \neg (P \land Q) &\to (\neg P \lor \neg Q), \\
   \neg (P \lor Q) &\to (\neg P \land \neg Q),
\end{align}

where P and Q are propositions expressed in some formal system.
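
Because P and Q range over only four truth assignments, the tautologies can be checked mechanically. A minimal Python sketch (illustrative only) that enumerates every assignment:

from itertools import product

# Brute-force truth-table check of both laws over all assignments to P and Q.
for P, Q in product([False, True], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))
    assert (not (P or Q)) == ((not P) and (not Q))
print("De Morgan's laws hold for all truth assignments.")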

Substitution form

De Morgan's laws are normally shown in the compact form above, with negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as:

\begin{align}
  (P \land Q) &\equiv \neg (\neg P \lor \neg Q), \\
   (P \lor Q) &\equiv \neg (\neg P \land \neg Q).
\end{align}

This emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a substitution.
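
A minimal Python sketch of the substitution form, showing each operator rebuilt from the other one plus negation (function names are illustrative):

# Substitution form: invert both inputs, invert the output, and swap the operator.
def and_via_or(p, q):
    return not ((not p) or (not q))   # P AND Q == NOT(NOT P OR NOT Q)

def or_via_and(p, q):
    return not ((not p) and (not q))  # P OR Q == NOT(NOT P AND NOT Q)

for p in (False, True):
    for q in (False, True):
        assert and_via_or(p, q) == (p and q)
        assert or_via_and(p, q) == (p or q)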

Set theory and Boolean algebra

In set theory and Boolean algebra, it is often stated as "union and intersection interchange under complementation",[4] which can be formally expressed as:

\begin{align}
  \overline{A \cup B} &\equiv \overline{A} \cap \overline{B}, \\
  \overline{A \cap B} &\equiv \overline{A} \cup \overline{B},
\end{align}

where:

A and B are sets,
\overline{A} is the complement of A,
\cap is the intersection operator (AND), and
\cup is the union operator (OR).

The generalized form is:

\begin{align}
  \overline{\bigcap_{i \in I} A_{i}} &\equiv \bigcup_{i \in I} \overline{A_{i}}, \\
  \overline{\bigcup_{i \in I} A_{i}} &\equiv \bigcap_{i \in I} \overline{A_{i}},
\end{align}

where I is some, possibly uncountable, indexing set.

In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign".[5]
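
A small illustrative check in Python, using finite sets and an assumed universal set U (Python sets have no built-in complement, so one is defined relative to U):

# Illustrative check of both laws, plus the generalized form, on small finite sets.
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}
complement = lambda S: U - S

assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Generalized form over an indexed family of sets.
family = [{1, 2}, {2, 3}, {2, 5, 7}]
union_of_complements = set().union(*(complement(S) for S in family))
assert complement(set.intersection(*family)) == union_of_complements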

Engineering

In electrical and computer engineering, De Morgan's laws are commonly written as:

\overline{A \cdot B} \equiv \overline {A} + \overline {B}

and

\overline{A + B} \equiv \overline {A} \cdot \overline {B},

where:

A and B are Boolean variables or signals,
\cdot is the logical AND,
+ is the logical OR, and
the overbar is the logical NOT (complement).
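
One practical reading is that a NAND gate behaves like an OR gate fed by inverters, and a NOR gate like an AND gate fed by inverters. A small simulation sketch in Python (gate names and 0/1 encoding are illustrative):

# Gate-level sketch: NAND is equivalent to OR of inverted inputs,
# and NOR to AND of inverted inputs.
NOT  = lambda a: 1 - a
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))

for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == OR(NOT(a), NOT(b))
        assert NOR(a, b) == AND(NOT(a), NOT(b))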

Text searching

De Morgan’s laws commonly apply to text searching using the Boolean operators AND, OR, and NOT. Consider searching a set of documents for the words “cars” and “trucks”. De Morgan’s laws hold that these two searches will return the same set of documents:

Search A: NOT (cars OR trucks)
Search B: (NOT cars) AND (NOT trucks)

A corpus covering every combination of “cars” and “trucks” can be represented by four documents:

Document 1: Contains only the word “cars”.
Document 2: Contains only “trucks”.
Document 3: Contains both “cars” and “trucks”.
Document 4: Contains neither “cars” nor “trucks”.

To evaluate Search A, note that the search “(cars OR trucks)” hits Documents 1, 2, and 3, so its negation (Search A) hits everything else: Document 4.

Evaluating Search B, the search “(NOT cars)” hits the documents that do not contain “cars”, namely Documents 2 and 4. Similarly, the search “(NOT trucks)” hits Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) hits the documents common to both, which is Document 4.

A similar evaluation can be applied to show that the following two searches will return the same set of documents (Documents 1, 2, 4):

Search C: NOT (cars AND trucks),
Search D: (NOT cars) OR (NOT trucks).
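
These four searches can be simulated directly. A minimal Python sketch of the corpus described above (document numbering as above):

# A small model of the four-document corpus, to check Searches A-D.
docs = {
    1: {"cars"},
    2: {"trucks"},
    3: {"cars", "trucks"},
    4: set(),
}
all_ids = set(docs)
hits = lambda word: {i for i, words in docs.items() if word in words}

search_A = all_ids - (hits("cars") | hits("trucks"))              # NOT (cars OR trucks)
search_B = (all_ids - hits("cars")) & (all_ids - hits("trucks"))  # (NOT cars) AND (NOT trucks)
search_C = all_ids - (hits("cars") & hits("trucks"))              # NOT (cars AND trucks)
search_D = (all_ids - hits("cars")) | (all_ids - hits("trucks"))  # (NOT cars) OR (NOT trucks)

assert search_A == search_B == {4}
assert search_C == search_D == {1, 2, 4}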

History

The laws are named after Augustus De Morgan (1806–1871),[6] who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by the algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the result. Nevertheless, a similar observation was made by Aristotle and was known to Greek and medieval logicians.[7] For example, in the 14th century, William of Ockham wrote down the words that would result from reading the laws out.[8] Jean Buridan, in his Summulae de Dialectica, also describes rules of conversion that follow the lines of De Morgan's laws.[9] Still, De Morgan is given credit for stating the laws in the terms of modern formal logic and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial.[10] Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments.

Informal proof

De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula.

Negation of a disjunction

In the case of its application to a disjunction, consider the claim "it is false that either A or B is true", which is written as:

\neg(A\lor B).

Since it has been established that neither A nor B is true, it must follow that both A is not true and B is not true, which may be written directly as:

(\neg A)\wedge(\neg B).

If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that "since two things are both false, it is also false that either of them is true".

Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that "not A" and "not B" are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim.

Negation of a conjunction

The application of De Morgan's theorem to a conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as:

\neg(A\land B).

In order for this claim to be true, at least one of A and B must be false, for if both were true, then the conjunction of A and B would be true, making its negation false. Equivalently, at least one of "not A" and "not B" must be true. This may be written directly as:

(\neg A)\lor(\neg B).

Presented in English, this follows the logic that "since it is false that two things are both true, at least one of them must be false".

Working in the opposite direction again, the second expression asserts that at least one of "not A" and "not B" must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression is identical to the first claim.

Formal proof

The proof that (A\cap B)^c = A^c \cup B^c proceeds in two steps, by proving both (A\cap B)^c \subseteq A^c \cup B^c and A^c \cup B^c \subseteq (A\cap B)^c.

Let x \in (A \cap B)^c. Then x \not\in A \cap B. Because A \cap B = \{y \mid y \in A \text{ and } y \in B\}, it must be the case that x \not\in A or x \not\in B. If x \not\in A, then x \in A^c, so x \in A^c \cup B^c. Similarly, if x \not\in B, then x \in B^c, so x \in A^c \cup B^c. Thus, \forall x\,(x \in (A\cap B)^c \rightarrow x \in A^c \cup B^c); that is, (A\cap B)^c \subseteq A^c \cup B^c.

To prove the reverse direction, let x \in A^c \cup B^c, and assume x \not\in (A\cap B)^c. Under that assumption, it must be the case that x \in A\cap B; it follows that x \in A and x \in B, and thus x \not\in A^c and x \not\in B^c. However, that means x \not\in A^c \cup B^c, contradicting the hypothesis that x \in A^c \cup B^c; the assumption x \not\in (A\cap B)^c must therefore be false, meaning that x \in (A\cap B)^c. Therefore, \forall x\,(x \in A^c \cup B^c \rightarrow x \in (A\cap B)^c); that is, A^c \cup B^c \subseteq (A\cap B)^c.

If A^c \cup B^c \subseteq (A\cap B)^c and (A \cap B)^c \subseteq A^c \cup B^c, then (A\cap B)^c = A^c \cup B^c; this concludes the proof of De Morgan's law.

The other De Morgan's law, (A \cup B)^c = A^c \cap B^c, is proven similarly.
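
For finite sets, both laws can also be checked exhaustively. A minimal Python sketch over every pair of subsets of a small universe (illustrative only, not a substitute for the general proof):

from itertools import chain, combinations

# Exhaustive check of both set-theoretic laws over every pair of subsets
# of a small universe.
U = {1, 2, 3}
subsets = [set(c) for c in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

for A in subsets:
    for B in subsets:
        assert U - (A & B) == (U - A) | (U - B)
        assert U - (A | B) == (U - A) & (U - B)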

Extensions

De Morgan's Laws represented as a circuit with logic gates

In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is a prerequisite for finding the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory.
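
As an illustration of the negation-normal-form idea, a minimal Python sketch that pushes negations inward using De Morgan's laws (the tuple-based formula representation and function name are hypothetical):

# Sketch of conversion to negation normal form (NNF). Formulas are tuples:
# ("and", f, g), ("or", f, g), ("not", f), or an atom given as a string.
def to_nnf(f, negate=False):
    if isinstance(f, str):                       # atom
        return ("not", f) if negate else f
    op = f[0]
    if op == "not":
        return to_nnf(f[1], not negate)          # cancel or introduce a negation
    if op in ("and", "or"):
        # De Morgan: a negated "and" becomes an "or" of negations, and vice versa.
        new_op = op if not negate else ("or" if op == "and" else "and")
        return (new_op, to_nnf(f[1], negate), to_nnf(f[2], negate))
    raise ValueError(f"unknown operator: {op}")

# not (P and (not Q))  ->  (not P) or Q
assert to_nnf(("not", ("and", "P", ("not", "Q")))) == ("or", ("not", "P"), "Q")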

Define the dual of any propositional operator P(p, q, \dots), depending on elementary propositions p, q, \dots, to be the operator P^d given by

P^d(p, q, \dots) = \neg P(\neg p, \neg q, \dots).

This idea can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals:

 \forall x \, P(x) \equiv \neg \exists x \, \neg P(x),
 \exists x \, P(x) \equiv \neg \forall x \, \neg P(x).

To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its domain D, such as

D = {a, b, c}.

Then

 \forall x \, P(x) \equiv P(a) \land P(b) \land P(c)

and

 \exists x \, P(x) \equiv P(a) \lor P(b) \lor P(c).

But, using De Morgan's laws,

 P(a) \land P(b) \land P(c) \equiv \neg (\neg P(a) \lor \neg P(b) \lor \neg P(c))

and

 P(a) \lor P(b) \lor P(c) \equiv \neg (\neg P(a) \land \neg P(b) \land \neg P(c)),

verifying the quantifier dualities in the model.
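
The same finite-domain check can be written directly. A minimal Python sketch over D = {a, b, c} with an arbitrary illustrative predicate P:

# Checking the quantifier dualities over the small domain D = {a, b, c}.
D = ["a", "b", "c"]
P = lambda x: x in ("a", "c")   # any predicate works; this one is arbitrary

assert all(P(x) for x in D) == (not any(not P(x) for x in D))
assert any(P(x) for x in D) == (not all(not P(x) for x in D))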

Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond ("possibly") operators:

 \Box p \equiv \neg \Diamond \neg p,
 \Diamond p \equiv \neg \Box \neg p.

Aristotle observed this case in its application to the alethic modalities of possibility and necessity; in normal modal logic, the relationship of these modal operators to quantification can be understood by setting up models using Kripke semantics.

References

  1. Copi and Cohen
  2. Hurley
  3. Moore and Parker
  4. Boolean Algebra by R. L. Goodstein. ISBN 0-486-45894-6
  5. 2000 Solved Problems in Digital Electronics by S. P. Bali
  6. DeMorgan’s Theorems at mtsu.edu
  7. Bocheński's History of Formal Logic
  8. William of Ockham, Summa Logicae, part II, sections 32 and 33.
  9. Jean Buridan, Summula de Dialectica. Trans. Gyula Klima. New Haven: Yale University Press, 2001. See especially Treatise 1, Chapter 7, Section 5. ISBN 0-300-08425-0
  10. Augustus De Morgan (1806–1871) by Robert H. Orr
