Type system
In computer science, a type system defines how a programming language classifies values and expressions into types, how it can manipulate those types, and how the types interact. A type indicates a set of values that share a generic meaning or intended purpose (although some types, such as abstract types and function types, might not be represented as values in the running program). Type systems vary significantly between languages; perhaps the most important variations lie in their compile-time syntactic and run-time operational implementations.
A compiler may use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. For example, in many C compilers the "float" data type is represented in 32 bits, in accordance with the IEEE specification for single-precision floating-point numbers. C thus uses the corresponding floating-point operations (addition, multiplication, and so on) on those values.
The depth of type constraints and the manner of their evaluation affect the typing of the language. Further, a programming language may associate an operation with a different concrete algorithm for each type, in the case of type polymorphism. Type theory studies type systems, although the concrete type systems of programming languages originate from practical issues of computer architecture, compiler implementation and language design.
Basis
Assigning datatypes ("typing") gives meaning to collections of bits. Types usually have associations either with values in memory or with objects such as variables. Because any value is simply a sequence of bits in a computer, the hardware itself makes no distinction between memory addresses, instruction code, characters, integers and floating-point numbers. Types inform programs and programmers how they should treat those bits.
Major functions that type systems provide include:
- Safety - Use of types may allow a compiler to detect meaningless or probably invalid code. For example, we can identify an expression
"Hello, World" + 3
as invalid because one cannot add (in the usual sense) a string literal to an integer. As discussed below, strong typing offers more safety, but it does not necessarily guarantee complete safety (see type safety for more information).
- Optimization - Static type-checking may provide useful information to a compiler. For example, if a type says a value must align at a multiple of 4, the compiler may be able to use more efficient machine instructions.
- Documentation - In more expressive type systems, types can serve as a form of documentation, since they can illustrate the intent of the programmer. For instance, timestamps may be a subtype of integers -- but if a programmer declares a function as returning a timestamp rather than merely an integer, this documents part of the meaning of the function.
- Abstraction (or modularity) - Types allow programmers to think about programs at a higher level, without bothering with low-level implementation. For example, programmers can think of strings as values instead of as a mere array of bytes. Types can also allow programmers to express the interface between two subsystems; this localizes the definitions required for interoperability of the subsystems and prevents inconsistencies when those subsystems communicate (see the sketch following this list).
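As a brief sketch of types serving as documentation and as an interface between subsystems, consider the following TypeScript fragment; the LogSink interface and the report function are hypothetical names chosen for illustration.

// The interface localizes what a reporting subsystem and a logging subsystem agree on.
interface LogSink {
  write(level: "info" | "error", message: string): void;
}

// Callers depend only on the LogSink interface, not on any concrete logger implementation.
function report(sink: LogSink, message: string): void {
  sink.write("error", message);
}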
A program typically associates each value with one particular type (although a type may have more than one subtype). Other entities, such as objects, modules, communication channels, dependencies, or even types themselves, can also be associated with a type.
A type system, specified in each programming language, stipulates the ways typed programs may behave and makes behavior outside these rules illegal. An effect system typically provides more fine-grained control than a type system.
More formally, type theory studies type systems.
Type checking
The process of verifying and enforcing the constraints of types – type checking – may occur either at compile time (a static check) or at run time (a dynamic check). Static type checking is a primary task of the semantic analysis carried out by a compiler. If a language enforces type rules strongly (that is, generally allowing only those automatic type conversions which do not lose information), the language is referred to as strongly typed; if not, as weakly typed.
Static and dynamic typing
A programming language is statically typed if type checking may be performed without testing equivalence of run-time expressions. A statically typed programming language respects the phase distinction between the compile-time and run-time phases of processing. A language has a compile-time phase if separate modules of a program can be type checked separately (separate compilation), without information about all modules that exist at run time. A programming language is dynamically typed if the language supports run-time (dynamic) dispatch on tagged data. A programming language is dependently typed if the phase distinction is violated and, consequently, type checking requires testing equivalence of run-time expressions.[1]
In dynamic typing, type checking often takes place at runtime because variables can acquire different types depending on the execution path. Static type systems for dynamic types usually need to explicitly represent the concept of an execution path, and allow types to depend on it.
Dynamic typing often occurs in "scripting languages" and other rapid application development languages. Dynamic types appear more often in interpreted languages, whereas compiled languages favor static types. See typed and untyped languages for a fuller list.
The term duck typing refers to a form of dynamic typing in which a value's suitability for an operation is determined by the presence of the methods and properties it is actually used with, rather than by its declared type.
To see how type checking works, consider the following pseudocode example:
var x;         // (1)
x := 5;        // (2)
x := "hi";     // (3)
In this example, (1) declares the name x; (2) associates the integer value 5 to the name x; and (3) associates the string value "hi" to the name x. In most statically typed systems, this code fragment would be illegal, because (2) and (3) bind x to values of inconsistent type.
By contrast, a purely dynamically typed system would permit the above program to execute, since types are attached to values, not variables. The implementation of a dynamically typed language will catch errors related to the misuse of values – "type errors" – at the time of the computation of the erroneous statement or expression. In other words, dynamic typing catches errors during program execution.
A typical implementation of dynamic typing will keep all program values "tagged" with a type, and check the type tag before using any value in an operation. For example:
var x = 5;       // (1)
var y = "hi";    // (2)
var z = x + y;   // (3)
In this code fragment, (1) binds the value 5 to x; (2) binds the value "hi" to y; and (3) attempts to add x to y. In a dynamically typed language, the value bound to x might be a pair (integer, 5), and the value bound to y might be a pair (string, "hi"). When the program attempts to execute line 3, the language implementation checks the type tags integer and string, and if the operation + (addition) is not defined over these two types it signals an error.
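A minimal sketch of such an implementation strategy, written here as a TypeScript discriminated union (the names Tagged and plus are hypothetical):

// Every value carries a type tag alongside its payload.
type Tagged =
  | { tag: "integer"; value: number }
  | { tag: "string"; value: string };

// The implementation of + inspects the tags before operating on the payloads.
function plus(a: Tagged, b: Tagged): Tagged {
  if (a.tag === "integer" && b.tag === "integer") {
    return { tag: "integer", value: a.value + b.value };
  }
  throw new TypeError("+ is not defined for " + a.tag + " and " + b.tag);
}

Called with the pairs (integer, 5) and (string, "hi"), plus signals an error, mirroring the behaviour described above.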
Some statically typed languages have a "back door" in the language that enables programmers to write code that does not statically type check. For example, Java and C-style languages have "casts".
The presence of static typing in a programming language does not necessarily imply the absence of dynamic typing mechanisms. For example, Java uses static typing, but certain operations require the support of runtime type tests, which are a form of dynamic typing. See programming language for more discussion of the interactions between static and dynamic typing.
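An analogous back door can be sketched in TypeScript, where a type assertion satisfies the static checker but is never verified when the program runs; the variable names are hypothetical.

const data: unknown = "hi";

// The assertion is the back door: it satisfies the static checker without any run-time check.
const xs = data as number[];

try {
  xs.push(3);               // fails at run time, because the underlying value is a string
} catch (e) {
  console.log("runtime type error:", e);
}

// A runtime type test, comparable to Java's instanceof checks, avoids the failure up front.
if (Array.isArray(data)) {
  data.push(3);             // only reached when the check succeeds
}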
Static and dynamic type checking in practice
The choice between static and dynamic typing requires some trade-offs.
Static typing finds type errors reliably and at compile time. This should increase the reliability of the delivered program. However, programmers disagree over how commonly type errors occur, and thus over what proportion of the bugs that are written would be caught by static typing. Static-typing advocates believe programs are more reliable when they have been type-checked, while dynamic-typing advocates point to distributed code that has proven reliable and to small bug databases. The value of static typing, then, presumably increases as the strength of the type system is increased. Advocates of strongly typed languages such as ML and Haskell have suggested that almost all bugs can be considered type errors, if the types used in a program are sufficiently well declared by the programmer or inferred by the compiler.[2]
Static typing usually results in compiled code that executes more quickly. When the compiler knows the exact data types that are in use, it can produce optimized machine code. Further, compilers for statically typed languages can find shortcuts more easily. Some dynamically typed languages such as Common Lisp allow optional type declarations for this very reason: to enable such optimization. Static typing makes this pervasive. See optimization.
By contrast, dynamic typing may allow compilers and interpreters to run more quickly, since changes to source code in dynamically-typed languages may result in less checking to perform and less code to revisit. This too may reduce the edit-compile-test-debug cycle.
Statically-typed languages which lack type inference — such as Java — require that programmers declare the types they intend a method or function to use. This can serve as additional documentation for the program, which the compiler will not permit the programmer to ignore or drift out of synchronization. However, a language can be statically typed without requiring type declarations, so this is not a consequence of static typing.
Static typing allows construction of libraries which are less likely to be accidentally misused by their users. This can be used as an additional mechanism for communicating the intentions of the library developer.
Dynamic typing allows constructs that some static type systems would reject as illegal. For example, eval functions, which execute arbitrary data as code, become possible (however, the typing within that evaluated code might remain static). Furthermore, dynamic typing accommodates transitional code and prototyping, such as allowing a string to be used in place of a data structure.
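A brief TypeScript/JavaScript illustration of the eval case: the string below is ordinary data, and any type errors inside it can only be detected when that data is executed as code.

const source = '"Hello, World" + 3';   // data that will later be executed as code

// eval defeats static analysis: the checker cannot look inside the string.
const result = eval(source);           // JavaScript evaluates this to "Hello, World3"
console.log(result);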
Dynamic typing typically makes metaprogramming more powerful and easier to use. For example, C++ templates are typically more cumbersome to write than the equivalent Ruby or Python code. More advanced run-time constructs such as metaclasses and introspection are often more difficult to use in statically-typed languages. This has led some writers, such as Paul Graham, to speculate that many design patterns observed in statically-typed languages are simply evidence of "the human compiler" repeatedly writing out metaprograms.[3]
Strong and weak typing
One definition of strongly typed involves preventing an operation from succeeding on arguments which have the wrong type. A C cast gone wrong exemplifies the absence of strong typing: if a programmer casts a value in C, not only must the compiler allow the code, but the run time will allow it as well. This allows compact and fast C code, but it can make debugging more difficult.
Some pundits use the term memory-safe language (or just safe language) to describe languages that do not allow undefined operations to occur. For example, a memory-safe language will also check array bounds.
Weak typing means that a language will implicitly convert (or cast) types when used. Revisiting the previous example:
var x = 5;      // (1)
var y = "37";   // (2)
x + y;          // (3)
If the code above were written in a weakly typed language, it is not clear what result one would get. Some languages, such as Visual Basic, would produce runnable code yielding the result 42: the system converts the string "37" into the number 37 to make sense of the operation. Other languages, like JavaScript, would produce the result "537": the system converts the number 5 to the string "5" and then concatenates the two. In both Visual Basic and JavaScript, the resulting type is determined by rules that take both operands (the values to the left and right of the operator) into consideration. In some languages, such as AppleScript, the resulting type of a value is determined by the type of the left-most operand only.
Careful language design has also allowed languages such as VB.NET, C# and Java to appear weakly typed for usability (through type inference and other techniques), while preserving the type checking and protection their type systems offer.
Reduction of operator overloading, such as not using "+" for string concatenation in addition to arithmetic addition, can reduce some of the confusion caused by implicit conversion. Some languages use periods (.) or ampersands (&) for string concatenation, for example.
Safely and unsafely typed systems
A third way of categorizing the type system of a programming language uses the safety of typed operations and conversions. Computer scientists consider a language "type-safe" if it does not allow operations or conversions which lead to erroneous conditions.
Let us again have a look at the pseudocode example:
var x = 5;       // (1)
var y = "37";    // (2)
var z = x + y;   // (3)
In languages like Visual Basic, the variable z in this example acquires the value 42. While the programmer may or may not have intended this, the language defines the result specifically, and the program does not crash or assign an ill-defined value to z. In this respect such languages are type-safe.
Now let us look at the same example in C:
int x = 5;
char y[] = "37";
char* z = x + y;
In this example z will point to a memory address five characters beyond y, two characters past the end of the three-character array that holds "37" and its terminating zero character. The content of that location is undefined and might lie outside addressable memory, so dereferencing z at this point could cause the program to terminate. We have a well-typed, but not memory-safe program — a condition that cannot occur in a type-safe language.
Polymorphism and types
The term polymorphism refers to the ability of code (in particular, functions or classes) to act on values of multiple types, or to the ability of different instances of the same data structure to contain elements of different types. Type systems that allow polymorphism generally do so in order to improve the potential for code re-use: in a language with polymorphism, programmers need only implement a data structure such as a list or a dictionary once, rather than once for each type of element with which they plan to use it. For this reason computer scientists sometimes call the use of certain forms of polymorphism generic programming. The type-theoretic foundations of polymorphism are closely related to those of abstraction, modularity and (in some cases) subtyping.
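A small sketch of parametric polymorphism in TypeScript (the function name firstElement is hypothetical): a single generic definition is reused for lists of any element type.

// T is a type parameter; the same code works at every element type.
function firstElement<T>(items: T[]): T | undefined {
  return items[0];
}

const n = firstElement([1, 2, 3]);        // inferred as number | undefined
const s = firstElement(["a", "b", "c"]);  // inferred as string | undefined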
Duck typing
In some programming environments, two objects can be treated as having the same type even when their implementations have nothing in common. One example is the C++ duality between an iterator and a pointer: both provide a dereference (*) operation, implemented by widely different mechanisms.
This technique is called "duck typing", based on the aphorism, "If it waddles like a duck, and quacks like a duck, it's a duck!"
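The idea can be sketched in TypeScript, whose structural typing accepts any value that offers the expected operations, regardless of its declared class; dynamically typed languages apply the same idea at run time rather than at compile time. The names Duck, Mallard and makeItQuack are hypothetical.

interface Duck {
  quack(): string;
}

// Any value with a suitable quack method is accepted; no common ancestor is required.
function makeItQuack(d: Duck): string {
  return d.quack();
}

class Mallard {
  quack(): string { return "quack"; }
}

makeItQuack(new Mallard());                     // a class instance is accepted
makeItQuack({ quack: () => "robotic quack" });  // so is an unrelated object literal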
Explicit or implicit declaration and inference
Many static type systems, such as C's and Java's, require type declarations: the programmer must explicitly associate each variable with a particular type. Others, such as Haskell's, perform type inference: the compiler draws conclusions about the types of variables based on how programmers use those variables. For example, given a function f(x,y) which adds x and y together, the compiler can infer that x and y must be numbers, since addition is defined only for numbers. Therefore, any call to f elsewhere in the program that specifies a non-numeric type (such as a string or list) as an argument would signal an error.
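As a sketch, TypeScript infers types from initializers and return expressions, although, unlike the inference described above, it does not deduce parameter types from how a function body uses them; the parameter annotations below are therefore still written out.

const pi = 3.14;          // inferred as number
const greeting = "hi";    // inferred as string

function add(x: number, y: number) {
  return x + y;           // return type inferred as number
}

// add("a", "b");         // rejected by the checker: the arguments are not numbers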
Numerical and string constants and expressions in code can and often do imply type in a particular context. For example, the expression 3.14 might imply a floating-point type, while [1, 2, 3] might imply a list of integers, typically an array.
Types of types
A type of types is a kind. Kinds appear explicitly in typeful programming, for example in the Haskell programming language, where a type constructor returns a simple type after being applied to enough simple types. For example, the type constructor Either has the kind * -> * -> *, and its application Either String Integer is a simple type (of kind *). However, in most programming languages type construction is implicit and hard-coded in the grammar; there is no notion of kind as a first-class entity.
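A loose analogue can be written in TypeScript, where a generic type alias behaves like a type constructor that must be applied to type arguments before it denotes an ordinary type; TypeScript has no first-class notion of kind, so the parallel with * -> * -> * is only informal.

// Either by itself is not a type; it must be applied to two type arguments,
// much as the Haskell constructor of kind * -> * -> * must be.
type Either<L, R> =
  | { kind: "left"; value: L }
  | { kind: "right"; value: R };

// Applying it to two type arguments yields an ordinary type.
type StringOrNumber = Either<string, number>;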
Types fall into several broad categories:
- primitive types — the simplest kind of type, e.g. integer and floating-point number
- integral types — types of whole numbers, e.g. integers and natural numbers
- floating point types — types of numbers in floating-point representation
- composite types — types composed of basic types, e.g. arrays or records. Abstract data types have attributes of both composite types and interfaces, depending on the point of view.
- subtype
- derived type
- object types, e.g. type variable
- partial type
- recursive type
- function types, e.g. binary functions
- universally quantified types, such as parameterized types
- existentially quantified types, such as modules
- refinement types, types which identify subsets of other types
- dependent types, types which depend on run-time values
Compatibility: equivalence and subtyping
A type-checker for a statically typed language must verify that the type of any expression is consistent with the type expected by the context in which that expression appears. For instance, in an assignment statement of the form x := e, the inferred type of the expression e must be consistent with the declared or inferred type of the variable x. This notion of consistency, called compatibility, is specific to each programming language.
If the type of e and the type of x are the same, and assignment is allowed for that type, then this is a valid expression. In the simplest type systems, therefore, the question of whether two types are compatible reduces to that of whether they are equal (or equivalent). Different languages, however, have different criteria for when two type expressions are understood to denote the same type. These different equational theories of types vary widely, two extreme cases being structural type systems, in which any two types that describe values with the same structure are equivalent, and nominative type systems, in which no two syntactically distinct type expressions denote the same type (i.e., types must have the same "name" in order to be equal).
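TypeScript sits near the structural end of this spectrum: two syntactically distinct type names that describe the same structure are interchangeable, whereas a nominative system would treat them as unrelated. The type names below are hypothetical.

interface Celsius { degrees: number; }
interface Fahrenheit { degrees: number; }

const boiling: Celsius = { degrees: 100 };

// Accepted here: the two types describe the same structure, so they are equivalent.
// A nominative type system would reject the assignment because the names differ.
const alsoBoiling: Fahrenheit = boiling;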
In languages with subtyping, the compatibility relation is more complex. In particular, if A is a subtype of B, then a value of type A can be used in a context where one of type B is expected, even if the reverse is not true. Like equivalence, the subtype relation is defined differently for each programming language, with many variations possible. The presence of parametric or ad hoc polymorphism in a language may also have implications for type compatibility.
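A sketch of subtype compatibility, again in TypeScript with hypothetical names: a value of the subtype is accepted where the supertype is expected, but not the reverse.

interface Animal { name: string; }
interface Dog extends Animal { breed: string; }

function describe(a: Animal): string {
  return a.name;
}

const rex: Dog = { name: "Rex", breed: "collie" };
describe(rex);                  // accepted: Dog is a subtype of Animal

const someAnimal: Animal = { name: "Generic" };
// const d: Dog = someAnimal;   // rejected: Animal is not a subtype of Dog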
Controversy
There are often conflicts between those who prefer strongly and/or statically typed languages and those who prefer dynamic or free-form typing. The first group claims that heavy use of typing allows compilers and interpreters to catch more errors before they become a bigger problem. The second group claims that freer typing results in smaller, simpler code, which itself results in fewer errors because it is allegedly easier to inspect; they see reliance on types as resulting in a kind of programming bureaucracy. A related consideration is that a language with type inference often requires no manual type declarations, so this overhead is automatically lowered for such languages.
Which group an individual falls into may depend on the type of software, the skill of the team members, interactions with other systems, and the size of the programming team. Personal preferences tied to an individual's psychology may also play a role.
In general, smaller or nimbler projects seem to better fit freer typing, while formal, larger projects that rely on divisions of labor (programmer, analyst, tester, etc.) appear to work better under heavy typing.[citation needed]
Type system cross reference list
Programming language | Static / dynamic | Strong / weak | Safety | Nominative / structural |
---|---|---|---|---|
Ada | static | strong | safe | nominative |
assembly language | none | strong | unsafe | structural |
APL | dynamic | weak | safe | nominative |
BASIC | static | weak | safe | nominative |
C | static | weak | unsafe | nominative |
Cayenne | dependent | strong | safe | structural |
Centura | static | weak | safe | nominative |
C++ | static | strong | unsafe | nominative |
C#[4] | static | strong | both | nominative |
Clipper | dynamic | weak | safe | duck |
D | static | strong | unsafe | nominative |
Delphi | static | strong | safe | nominative |
E | dynamic | strong | safe | nominative + duck |
Eiffel | static | strong | safe | nominative |
Fortran | static | strong | safe | nominative |
Groovy | dynamic | strong | safe | duck |
Haskell | static | strong | safe | structural |
Io | dynamic | strong | safe | duck |
Java | static | strong | safe | nominative |
JavaScript | dynamic | weak | safe | duck |
Lisp | dynamic | strong | safe | structural |
Lua[5] | dynamic | strong | safe | structural |
ML | static | strong | safe | structural |
Objective-C[6] | dynamic | weak | safe | duck |
Pascal | static | strong | safe | nominative |
Perl 1-5 | dynamic | weak | safe | nominative |
Perl 6[7] | hybrid | hybrid | safe | duck |
PHP | dynamic | weak | safe | ? |
Pike | static+dynamic | strong | safe | structural |
Python | dynamic | strong | safe | duck |
Ruby | dynamic | strong | safe | duck |
Scheme | dynamic | strong | safe | nominative |
Smalltalk | dynamic | strong | safe | duck |
Visual Basic | hybrid | hybrid | safe | nominative |
Windows PowerShell | hybrid | hybrid | safe | duck |
xHarbour | dynamic | weak | safe | duck |
- ^ Pierce, B.: Advanced Topics in Types and Programming Languages, page 305
- ^ http://citeseer.ifi.unizh.ch/xi98dependent.html
- ^ http://www.paulgraham.com/icad.html
- ^ The C basis is unchanged. 3.0 has hybrid typing with Anonymous Types. Can be both unsafe and safe with use of 'unsafe' functions and code blocks.
- ^ Variables can change type with the use of metatables.
- ^ Applies to the Objective-C extension only.
- ^ Not yet released.