Functional calculus
In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. If f is a function, say a numerical function of a real number, and M is an operator, there is no particular reason why the expression
f(M)
should make sense. If it does, then we are no longer using f on its original function domain. This passes nearly unnoticed if we talk about 'squaring a matrix', though, which is the case of f(x) = x² and M an n×n matrix. The idea of a functional calculus is to create a principled approach to this kind of overloading of the notation.
The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed. In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials that annihilate an operator T. This family is an ideal in the ring of polynomials. Furthermore, it is a nontrivial ideal: let n be the finite dimension of the algebra of matrices; then {I, T, T², …, Tⁿ} contains n + 1 elements and is therefore linearly dependent. So ∑ αᵢ Tⁱ = 0 for some scalars αᵢ, not all zero. This implies that the polynomial ∑ αᵢ xⁱ lies in the ideal. Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial m, called the minimal polynomial of T. One has, for instance, that a scalar α is an eigenvalue of T if and only if α is a root of m. Also, m can sometimes be used to calculate the exponential of T efficiently.
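As an illustration (a minimal numerical sketch, not part of the article), the minimal polynomial can be computed along the lines of the argument above: stack the vectorised powers I, T, T², … and stop at the first linear dependence. The helper name minimal_polynomial and the tolerance are assumptions of this sketch.

```python
import numpy as np

def minimal_polynomial(T, tol=1e-9):
    """Monic coefficients c_0, ..., c_k (c_k = 1) of the minimal polynomial
    of T, found as the first linear dependence among I, T, T^2, ..."""
    n = T.shape[0]
    powers = [np.eye(n).ravel()]           # vec(T^0)
    for k in range(1, n + 1):              # Cayley-Hamilton bounds the degree by n
        powers.append(np.linalg.matrix_power(T, k).ravel())
        A = np.stack(powers, axis=1)       # columns are vec(T^0), ..., vec(T^k)
        # Least-squares attempt at  sum_{i<k} c_i vec(T^i) = -vec(T^k)
        c, *_ = np.linalg.lstsq(A[:, :-1], -A[:, -1], rcond=None)
        if np.linalg.norm(A[:, :-1] @ c + A[:, -1]) < tol:
            return np.append(c, 1.0)       # first dependence found: monic in x^k
    raise AssertionError("unreachable: Cayley-Hamilton guarantees dependence")

# T = 2I is annihilated already by x - 2, so the minimal polynomial has
# degree 1 even though the characteristic polynomial (x - 2)^2 has degree 2.
T = 2.0 * np.eye(2)
print(minimal_polynomial(T))               # ~ [-2.  1.], i.e. m(x) = x - 2
```

Consistent with the statement above, the roots of the returned polynomial are exactly the eigenvalues of T.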
The polynomial calculus is not as informative in the infinite-dimensional case. Consider the unilateral shift with the polynomial calculus; since no nonzero polynomial annihilates the shift, the ideal defined above is now trivial. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator it is rather clear what the definitions should be, as the sketch below illustrates.
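For a concrete sense of the diagonal case (a minimal sketch, assuming M is diagonalisable; apply_function is a hypothetical helper, not a standard API), one can define f(M) by applying f to the eigenvalues and changing basis back:

```python
import numpy as np

def apply_function(f, M):
    """Functional calculus for a diagonalisable M = V diag(w) V^{-1}:
    apply f entrywise to the eigenvalues w, then undo the change of basis."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(f(w)) @ np.linalg.inv(V)

# Sanity check: for f(x) = x^2 this agrees with ordinary matrix squaring.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.allclose(apply_function(lambda x: x**2, M), M @ M))  # True
```

For a multiplication operator the same recipe applies f pointwise to the multiplier; more general functional calculi make this idea precise for broader classes of functions and operators.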
For technical accounts see: