Type theory

In mathematics, logic, and computer science, a type theory is any of a class of formal systems, some of which can serve as alternatives to set theory as a foundation for all mathematics. In type theory, every "term" has a "type" and operations are restricted to terms of a certain type.

Type theory is closely related to (and in some cases overlaps with) type systems, a programming language feature used to reduce bugs. The types of type theory were originally created to avoid paradoxes in a variety of formal logics and rewrite systems, and "type theory" is sometimes used to refer to this broader application.

Two well-known type theories that can serve as mathematical foundations are Alonzo Church's typed λ-calculus and Per Martin-Löf's intuitionistic type theory.

History

The types of type theory were invented by Bertrand Russell in response to his discovery that Gottlob Frege's version of naive set theory was afflicted with Russell's paradox. This theory of types features prominently in Whitehead and Russell's Principia Mathematica. It avoids Russell's paradox by first creating a hierarchy of types, then assigning each mathematical (and possibly other) entity to a type. Objects of a given type are built exclusively from objects of preceding types (those lower in the hierarchy), thus preventing loops.

The term "type theory" is most commonly used when these types are combined with a term rewriting system. The most famous early example is Alonzo Church's lambda calculus. Church's theory of types[1] helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics, and it came to be referred to as a higher-order logic.

Some other type theories include Per Martin-Löf's intuitionistic type theory, which has been the foundation used in some areas of constructive mathematics and for the proof assistant Agda. Thierry Coquand's calculus of constructions and its derivatives are the foundation used by Coq and others. The field is an area of active research, as demonstrated by homotopy type theory.

Basic concepts

In a system of type theory, each term has a type and operations are restricted to terms of a certain type. A typing judgment M : A states that the term M has type A. For example, \mathrm{nat} may be a type representing the natural numbers and 0, 1, 2, ... may be inhabitants of that type. The judgment that 2 has type \mathrm{nat} is written as 2 : \mathrm{nat}.
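
For illustration, such judgments can be checked directly in a proof assistant based on type theory. The following sketch uses the Lean language and its built-in type Nat in place of \mathrm{nat}; it is an illustrative example, not part of any particular formal presentation.

    -- the typing judgment 2 : nat, written with Lean's built-in Nat
    #check (2 : Nat)            -- 2 : Nat

    -- a term that does not have this type is rejected by the type checker:
    -- #check ("two" : Nat)     -- type error if uncommented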

A function in type theory is denoted with an arrow \to. The function \mathrm{addOne} (commonly called the successor function) has the typing judgment \mathrm{addOne} : \mathrm{nat} \to \mathrm{nat}. Calling or "applying" a function to an argument is usually written without parentheses, so \mathrm{addOne}\ 2 instead of \mathrm{addOne}(2). (This allows for consistent currying.)
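
A sketch of the same ideas in Lean; the names addOne and add are hypothetical and chosen to match the text, and the second definition merely illustrates how parenthesis-free application supports currying.

    -- addOne : Nat → Nat, the successor function from the text
    def addOne (n : Nat) : Nat := n + 1

    -- application is written without parentheses: addOne 2, not addOne(2)
    #eval addOne 2              -- 3

    -- currying: add 1 is itself a function of type Nat → Nat
    def add (m n : Nat) : Nat := m + n
    #check add 1                -- add 1 : Nat → Nat
    #eval add 1 2               -- 3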

Type theories also contain rules for rewriting terms. These are called conversion rules or, if a rule only works in one direction, reduction rules. For example, 2 + 1 and 3 are syntactically different terms, but the former reduces to the latter. This reduction is written 2 + 1 \twoheadrightarrow 3.
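
In a proof assistant this reduction appears as computation: evaluating 2 + 1 reaches the normal form 3. A minimal Lean sketch:

    #eval 2 + 1                 -- 3: evaluation reaches the normal form
    #reduce 2 + 1               -- 3: reduction by the conversion rules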

Difference from set theory

There are many different set theories and many different systems of type theory, so what follows are generalizations.

Optional features

Normalization

The term 2 + 1 reduces to 3. Since 3 cannot be reduced further, it is called a normal form. A system of type theory is said to be strongly normalizing if every term has a normal form and every order of reductions reaches it. In a weakly normalizing system, every term has a normal form, but some orders of reduction may loop forever and never reach it.

For a normalizing system, some borrow the word element from set theory and use it to refer to all closed terms that reduce to the same normal form. A closed term is one with no free variables. (A term like x + 1, with its free variable x, is called an open term.) Thus, 2 + 1 and 3 + 0 are different terms, but both belong to the element 3.

A similar idea that works for both open and closed terms is convertibility. Two terms are convertible if there exists a term that they both reduce to. For example, 2 + 1 and 1 + 2 are convertible, as are x + (1 + 1) and x + 2. However, x + 1 and 1 + x (where x is a free variable) are not convertible, because both are already in normal form and the two normal forms differ. In a confluent, normalizing system, two terms can be tested for convertibility by checking whether they reduce to the same normal form.
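
These convertibility facts can be observed in Lean, where convertible terms are accepted as definitionally equal by reflexivity while non-convertible ones require an explicit lemma. The exact behaviour depends on how addition is defined, so the sketch below is illustrative rather than universal.

    -- closed terms: 2 + 1 and 3 + 0 both reduce to the normal form 3
    example : 2 + 1 = 3 + 0 := rfl

    -- open terms: x + (1 + 1) and x + 2 are convertible, since 1 + 1 reduces to 2
    example (x : Nat) : x + (1 + 1) = x + 2 := rfl

    -- x + 1 and 1 + x have distinct normal forms, so `rfl` is not enough;
    -- proving them equal needs a lemma such as commutativity of addition
    example (x : Nat) : x + 1 = 1 + x := Nat.add_comm x 1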

Dependent types

Main article: Dependent types

A dependent type is a type that depends on a term or on another type. Thus, the type returned by a function may depend upon the argument to the function.

For example, a list of \mathrm{nat}s of length 4 may be a different type than a list of \mathrm{nat}s of length 5. In a type theory with dependent types, it is possible to define a function that takes a parameter n and returns a list containing n zeros. Calling the function with 4 would produce a term with a different type than if it were called with 5.
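
A sketch of this example in Lean, using a hypothetical length-indexed list type Vec and a function zeros whose result type depends on its argument (Lean's standard library offers similar types; the names here are chosen for the example):

    -- a list of elements of α whose length appears in its type
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- the type of the result depends on the argument n
    def zeros : (n : Nat) → Vec Nat n
      | 0     => Vec.nil
      | n + 1 => Vec.cons 0 (zeros n)

    #check zeros 4              -- zeros 4 : Vec Nat 4
    #check zeros 5              -- zeros 5 : Vec Nat 5, a different type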

Dependent types play a central role in intuitionistic type theory and in the design of functional programming languages like Idris, ATS, Agda and Epigram.

Equality types (or "identity types")

Many systems of type theory have a type that represents equality of types and terms. This type is distinct from convertibility and is often called propositional equality.

In intuitionistic type theory, the equality type is known as I for identity. There is a type I\ A\ a\ b when A is a type and a and b are both terms of type A. A term of type I\ A\ a\ b is interpreted as meaning that a is equal to b.

In practice, it is possible to build the type I\ \mathrm{nat}\ 3\ 4, but there will not exist a term of that type. In intuitionistic type theory, new terms of the equality type are introduced by reflexivity: if 3 is a term of type \mathrm{nat}, then there exists a term of type I\ \mathrm{nat}\ 3\ 3. More complicated equalities can be created by forming a reflexive term and then reducing one side. So if 2+1 is a term of type \mathrm{nat}, then there is a term of type I\ \mathrm{nat}\ (2+1)\ (2+1), and reduction on one side yields a term of type I\ \mathrm{nat}\ (2+1)\ 3. Thus, in this system, the equality type expresses that two values of the same type are convertible by reductions.
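
In Lean, whose built-in equality type plays the role of I, the same construction is visible: reflexivity proves an equality whenever both sides reduce to a common term. A small sketch:

    -- the type I nat 3 4 can be written down (3 = 4), but it has no inhabitant
    #check ((3 : Nat) = 4)      -- 3 = 4 : Prop

    -- reflexivity plus reduction gives a term of type I nat (2+1) 3
    example : 2 + 1 = 3 := rfl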

Having a type for equality is important because it can be manipulated inside the system. There is usually no judgment to say two terms are not equal; instead, as in the Brouwer–Heyting–Kolmogorov interpretation, \neg(a = b) is mapped to (a = b) \to \bot, where \bot is the bottom type, which has no values. There exists a term with type (I\ \mathrm{nat}\ 3\ 4) \to \bot, but not one of type (I\ \mathrm{nat}\ 3\ 3) \to \bot.
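
A sketch of the negative case in Lean, where False plays the role of the bottom type \bot (the decide tactic is used here simply to discharge the arithmetic disequality):

    -- ⊥ is Lean's False; ¬(3 = 4) is by definition the type (3 = 4) → False
    example : ¬ ((3 : Nat) = 4) := by decide

    -- no term of type (3 = 3) → False can exist, since 3 = 3 is inhabited:
    example : (3 : Nat) = 3 := rfl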

Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type.

Inductive types

Main article: Inductive type

A system of type theory requires some basic terms and types to operate on. Some systems build them out of functions using Church encoding. Other systems have inductive types: a set of base types and a set of type constructors that generate types with well-behaved properties. For example, certain recursive functions called on inductive types are guaranteed to terminate.
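
A minimal sketch in Lean of an inductive type and a structurally recursive function on it; the names MyNat and double are hypothetical. The definition is accepted precisely because recursion on the structure of an inductive argument is guaranteed to terminate.

    -- an inductive type of natural numbers, generated by two constructors
    inductive MyNat : Type where
      | zero : MyNat
      | succ : MyNat → MyNat

    -- structural recursion on an inductive type always terminates
    def double : MyNat → MyNat
      | MyNat.zero   => MyNat.zero
      | MyNat.succ n => MyNat.succ (MyNat.succ (double n))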

Coinductive types are potentially infinite data types created by giving a function that generates the next element(s). See Coinduction and Corecursion.

Induction-induction is a feature for simultaneously declaring an inductive type and a family of types that depends on the inductive type.

Induction-recursion allows a wider range of well-behaved types, but requires that a type and the recursive function operating on it be defined at the same time.

Universe types

Types were created to prevent paradoxes, such as Russell's paradox. However, the motives that led to those paradoxes (being able to say things about all types) still exist. Many type theories therefore have a "universe type", which contains all other types (but not itself).

In systems where one might want to say something about universe types, there is a hierarchy of universe types, each containing the one below it in the hierarchy. The hierarchy is defined to be infinite, but any given statement may refer to only a finite number of universe levels.
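
The hierarchy can be seen directly in Lean, where each universe Type u is itself a term of the next universe Type (u+1) and no universe contains itself:

    #check (Nat : Type)         -- Nat : Type       (Type is shorthand for Type 0)
    #check (Type : Type 1)      -- Type : Type 1
    #check (Type 1 : Type 2)    -- Type 1 : Type 2
    -- an ascription like (Type : Type) is rejected; allowing it would
    -- reintroduce a paradox of the kind described below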

Type universes are particularly tricky in type theory. The initial proposal of intuitionistic type theory suffered from Girard's paradox.

Computational component

Many systems of type theory, such as the simply-typed lambda calculus, intuitionistic type theory, and the calculus of constructions, are also programming languages. That is, they are said to have a "computational component". The computation is the reduction of terms of the language using rewriting rules.

A system of type theory that has a well-behaved computational component also has a simple connection to constructive mathematics through the BHK interpretation.

Non-constructive mathematics in these systems is possible by adding operators on continuations such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity.

Systems of type theory

Major

Minor

Active

Practical impact

Programming languages

Main article: Type system

There is extensive overlap and interaction between the fields of type theory and type systems. Type systems are a programming language feature designed to identify bugs. Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory.

A prime example is Agda, a programming language which uses intuitionistic type theory for its type system. The programming language ML was developed for manipulating type theories (see LCF) and its own type system was heavily influenced by them.

Mathematical foundations

The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf developed intuitionistic type theory specifically to encode all of mathematics and serve as a new foundation for it. There is current research into mathematical foundations using homotopy type theory.

Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS).[2] Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy).

Proof assistants

Main article: Proof assistant

Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers. Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages.

Multiple type theories are supported by LEGO and Isabelle. Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory.

Linguistics

Type theory is also widely used in formal theories of the semantics of natural languages, especially Montague grammar and its descendants. In particular, categorial grammars and pregroup grammars make extensive use of type constructors to define the types (noun, verb, etc.) of words.

The most common construction takes the basic types e and t for individuals and truth-values, respectively, and defines the set of types recursively: e and t are types, and whenever a and b are types, so is the complex type \langle a,b\rangle.

A complex type \langle a,b\rangle is the type of functions from entities of type a to entities of type b. Thus one has types like \langle e,t\rangle, which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type \langle\langle e,t\rangle,t\rangle is a function from sets of entities to truth-values, i.e. an (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981).
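
This recursive type grammar can be written down directly. The Lean sketch below uses a hypothetical SemType datatype, a postulated domain Entity for type e, and Lean's Prop for type t, and interprets \langle a,b\rangle as the function type from a to b:

    -- Montague-style semantic types: basic types e and t, plus ⟨a,b⟩
    inductive SemType where
      | e  : SemType                       -- individuals
      | t  : SemType                       -- truth-values
      | fn : SemType → SemType → SemType   -- the complex type ⟨a,b⟩

    axiom Entity : Type                    -- hypothetical domain of individuals

    -- interpret ⟨a,b⟩ as the type of functions from ⟦a⟧ to ⟦b⟧
    def interp : SemType → Type
      | .e      => Entity
      | .t      => Prop
      | .fn a b => interp a → interp b

    -- ⟨⟨e,t⟩,t⟩ is the type of generalized quantifiers such as "everybody"
    def everybody : interp (.fn (.fn .e .t) .t) :=
      fun p => ∀ x, p x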

Social sciences

Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types.

Relation to category theory

Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way.[3]

The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance.

See also

References

  • W. Farmer, The seven virtues of simple type theory, Journal of Applied Logic, Vol. 6, No. 3 (September 2008), pp. 267–286.
  1. Alonzo Church, A formulation of the simple theory of types, The Journal of Symbolic Logic 5 (2): 56–68 (1940)
  2. ETCS in nLab
  3. John L. Bell (2012). "Types, Sets and Categories". In Akihiro Kanamori (ed.). Handbook of the History of Logic. Volume 6: Sets and Extensions in the Twentieth Century (PDF). Elsevier. ISBN 978-0-08-093066-4.

Further reading

External links
