Regular language

In theoretical computer science, a regular language is a formal language (i.e., a possibly infinite set of finite sequences of symbols from a finite alphabet) that satisfies the following equivalent properties:

  • it can be accepted by a deterministic finite automaton;
  • it can be accepted by a nondeterministic finite automaton;
  • it can be described by a regular expression;
  • it can be generated by a regular grammar.

Regular languages over an alphabet

The collection of regular languages over an alphabet Σ is defined recursively as follows:

  • The empty language Ø is a regular language.
  • The empty string language { ε } is a regular language.
  • For each a ∈ Σ, the singleton language { a } is a regular language.
  • If A and B are regular languages, then A ∪ B (union), AB (concatenation), and A* (Kleene star) are regular languages.
  • No other languages over Σ are regular.
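
The recursive definition above corresponds exactly to the basic regular-expression constructors. The following Python sketch is illustrative only (the helper names are inventions of this example, not part of the article); it builds pattern fragments for Ø, { ε }, { a }, union, concatenation, and Kleene star using the standard re module, restricted to the purely regular part of its syntax.

    import re

    def empty():             # Ø : matches nothing
        return r"(?!)"
    def eps():               # { ε } : only the empty string
        return r""
    def sym(a):              # { a } : a single symbol
        return re.escape(a)
    def union(r1, r2):       # A ∪ B
        return f"(?:{r1}|{r2})"
    def concat(r1, r2):      # AB
        return f"(?:{r1}{r2})"
    def star(r1):            # A*
        return f"(?:{r1})*"

    # Example: (a ∪ b)*, i.e. all strings over {a, b}
    pattern = star(union(sym("a"), sym("b")))
    assert re.fullmatch(pattern, "abba") is not None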

All finite languages are regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} which contain an even number of a's, or the language consisting of all strings of the form: several a's followed by several b's.

A simple example of a language that is not regular is the set of strings { aⁿbⁿ | n ≥ 0 }. Some additional examples are given below.
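
As an illustration (chosen for this example, not taken from the article), the "even number of a's" language can be decided by a single left-to-right scan that remembers only one bit, which is precisely what a two-state DFA does. No comparable fixed amount of memory suffices for { aⁿbⁿ | n ≥ 0 }, since that language requires counting arbitrarily high.

    def even_as(w):
        state = 0                  # 0 = even number of a's seen so far, 1 = odd
        for c in w:
            if c == "a":
                state ^= 1         # flip the parity on each 'a'
        return state == 0          # accept iff the count of a's is even

    assert even_as("abba") and not even_as("ab")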

Complexity results

In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR is not contained in AC0, since it (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0.[1] On the other hand, REGULAR does not contain AC0 either: for example, the nonregular language of palindromes can be recognized by constant-depth circuits and hence lies in AC0.
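
The equality REG = DSPACE(O(1)) can be made concrete: simulating any fixed DFA needs only enough working memory to hold the current state, independent of the input length. The simulator and the parity DFA below are a sketch assumed for this example.

    def run_dfa(delta, start, accepting, w):
        state = start
        for c in w:                         # stream the input symbol by symbol
            state = delta[(state, c)]       # working memory: one state, O(1)
        return state in accepting

    # DFA for the parity problem cited above: even number of 1 bits
    delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
    assert run_dfa(delta, 0, {0}, "1001")       # two 1s: accepted
    assert not run_dfa(delta, 0, {0}, "1000")   # one 1: rejected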

If a language is not regular, then any machine recognizing it requires Ω(log log n) space (where n is the input size).[2] In other words, DSPACE(o(log log n)) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.

Closure properties

The regular languages are closed under a wide variety of operations; that is, if L and P are regular languages, the following languages are regular as well:

  • the complement ¬L
  • the union L ∪ P
  • the intersection L ∩ P
  • the set difference L − P
  • the concatenation LP
  • the Kleene star L*
  • the reversal of L
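
To illustrate one entry of the list, here is a sketch (an example construction chosen here, not quoted from the article) of the classical product idea behind closure under intersection: run DFAs for L and P in lockstep and accept exactly when both accept.

    def intersect_dfas(d1, d2, w):
        (delta1, s1, acc1) = d1
        (delta2, s2, acc2) = d2
        q1, q2 = s1, s2
        for c in w:
            q1, q2 = delta1[(q1, c)], delta2[(q2, c)]   # product state (q1, q2)
        return q1 in acc1 and q2 in acc2                # accept iff both accept

    # Over {a, b}: even number of a's  AND  at least one b
    even_a = ({(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}, 0, {0})
    has_b  = ({(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1}, 0, {1})
    assert intersect_dfas(even_a, has_b, "aab")
    assert not intersect_dfas(even_a, has_b, "aa")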

Deciding whether a language is regular

To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language such as this is not regular, one uses the Myhill-Nerode theorem or the pumping lemma.
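
For instance (a worked example assumed here, not spelled out in the article), the Myhill-Nerode argument for the equal-count language is that the prefixes a, aa, aaa, ... lie in pairwise distinct equivalence classes: the suffix of i b's takes the prefix of i a's into the language but takes every other prefix of a's outside it. Infinitely many classes rule out any finite automaton.

    def equal_ab(w):                       # equally many a's and b's
        return w.count("a") == w.count("b")

    for i in range(1, 5):
        for j in range(1, 5):
            if i != j:
                # the suffix "b"*i distinguishes the prefixes "a"*i and "a"*j
                assert equal_ab("a" * i + "b" * i)
                assert not equal_ab("a" * j + "b" * i)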

There are two purely algebraic approaches to defining regular languages. If Σ is a finite alphabet and Σ* denotes the free monoid over Σ consisting of all strings over Σ, f : Σ* → M is a monoid homomorphism where M is a finite monoid, and S is a subset of M, then the set f⁻¹(S) is regular. Every regular language arises in this fashion.
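
A minimal sketch of this characterization, using the familiar "even number of a's" language as an assumed example: map strings into the finite monoid (Z/2Z, +) by counting a's modulo 2, and take S = {0}.

    def f(w):
        # monoid homomorphism f : Σ* → Z/2Z, since f(uv) = (f(u) + f(v)) mod 2
        return sum(1 for c in w if c == "a") % 2

    S = {0}
    def in_language(w):
        return f(w) in S                   # the language is exactly f⁻¹(S)

    assert in_language("abba") and not in_language("ab")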

If L is any subset of Σ*, one defines an equivalence relation ~ (called the syntactic relation) on Σ* as follows: u ~ v is defined to mean

uw ∈ L if and only if vw ∈ L for all w ∈ Σ*.

The language L is regular if and only if the number of equivalence classes of ~ is finite (a proof of this is provided in the article on the syntactic monoid). When a language is regular, the number of equivalence classes is equal to the number of states of the minimal deterministic finite automaton accepting L.
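
As an illustrative check (the language and the brute-force helper are choices made for this example, not part of the article), the classes of ~ can be approximated computationally: for the language of several a's followed by several b's, short prefixes separate into exactly three classes under short distinguishing suffixes, matching the three states of the minimal DFA for a*b*.

    from itertools import product

    def in_L(w):                           # membership in a*b*
        return "ba" not in w

    alphabet = "ab"
    words = ["".join(p) for n in range(4) for p in product(alphabet, repeat=n)]

    # two prefixes look equivalent if no tested suffix separates them
    signatures = {tuple(in_L(u + w) for w in words) for u in words}
    print(len(signatures))                 # prints 3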

A similar set of statements can be formulated for subsets of an arbitrary monoid M in place of the free monoid Σ*. In this case, equivalence over M leads to the concept of a recognizable language.

Finite languages

A specific subset within the class of regular languages is the finite languages – those containing only a finite number of words. These are regular, since one can form a regular expression that is the union of all the words in the language.
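
A minimal sketch of that construction (the word list is an arbitrary example): escape each word and join the pieces with the alternation operator.

    import re

    words = ["cat", "dog", "a+b"]                     # an arbitrary finite language
    pattern = "|".join(re.escape(w) for w in words)   # union of all the words

    assert re.fullmatch(pattern, "dog")
    assert re.fullmatch(pattern, "a+b")
    assert not re.fullmatch(pattern, "cow")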

References

  • Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Chapter 1: Regular Languages, pp. 31–90; subsection "Decidable Problems Concerning Regular Languages" of section 4.1: Decidable Languages, pp. 152–155.
  1. ^ M. Furst, J. B. Saxe, and M. Sipser. Parity, circuits, and the polynomial-time hierarchy. Math. Systems Theory, 17:13–27, 1984.
  2. ^ J. Hartmanis, P. L. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logic Design, pp. 179–190. 1965.

Automata theory: formal languages and formal grammars

Chomsky hierarchy | Grammars                   | Languages                  | Minimal automaton
------------------|----------------------------|----------------------------|--------------------------
Type-0            | Unrestricted               | Recursively enumerable     | Turing machine
n/a               | (no common name)           | Recursive                  | Decider
Type-1            | Context-sensitive          | Context-sensitive          | Linear-bounded
n/a               | Indexed                    | Indexed                    | Nested stack
n/a               | Tree-adjoining etc.        | (Mildly context-sensitive) | Embedded pushdown
Type-2            | Context-free               | Context-free               | Nondeterministic pushdown
n/a               | Deterministic context-free | Deterministic context-free | Deterministic pushdown
Type-3            | Regular                    | Regular                    | Finite
n/a               |                            | Star-free                  | Counter-Free

Each category of languages or grammars is a proper subset of the category directly above it, and any automaton in each category has an equivalent automaton in the category directly above it.