Copula (statistics)
In statistics, a copula is used as a general way of formulating a multivariate distribution in such a way that various general types of dependence can be represented. Other ways of formulating multivariate distributions include conceptually based approaches in which the real-world meaning of the variables is used to imply what types of relationships might occur. In contrast, the approach via copulas might be considered more raw, but it does allow much more general types of dependence to be included than would usually be invoked by a conceptual approach.
The approach to formulating a multivariate distribution using a copula is based on the idea that a simple transformation can be made of each marginal variable in such a way that each transformed marginal variable has a uniform distribution. When applied in a practical context, such transformations might be fitted as an initial step for each margin, or the parameters of the transformations might be fitted jointly with those of the copula.
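For example, a minimal Python sketch of this first step, assuming the empirical distribution function (ranks) is used to transform each margin, might look like the following; the function name and the rank-based estimator are illustrative choices rather than a prescribed method.

```python
import numpy as np
from scipy.stats import rankdata

def to_uniform_margins(x):
    """Map a sample to (0, 1) via its empirical distribution function.

    The resulting pseudo-observations are approximately uniform on (0, 1),
    which is the form a copula expects for each margin.
    """
    n = len(x)
    # rank / (n + 1) keeps the values strictly inside (0, 1)
    return rankdata(x) / (n + 1)

# Two dependent margins, transformed separately
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.8 * x + 0.6 * rng.normal(size=1000)
u, v = to_uniform_margins(x), to_uniform_margins(y)
```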
There are many families of copulas, which differ in the detail of the dependence they represent. A family will typically have several parameters that relate to the strength and form of the dependence. However, it is also possible to specify a dependence structure and have a copula emerge from it through a conditioning technique such as the D distribution.
Definition
A copula is a multivariate joint distribution defined on the n-dimensional unit cube $[0, 1]^n$ such that every marginal distribution is uniform on the interval [0, 1].
Specifically, $C\colon [0,1]^n \to [0,1]$ is an n-dimensional copula (briefly, n-copula) if:
- $C(\mathbf{u}) = 0$ whenever $\mathbf{u} \in [0,1]^n$ has at least one component equal to 0;
- $C(\mathbf{u}) = u_i$ whenever $\mathbf{u} \in [0,1]^n$ has all the components equal to 1 except the i-th one, which is equal to $u_i$;
- $C$ is n-increasing, i.e., for each hyperrectangle $B = \prod_{i=1}^n [x_i, y_i] \subseteq [0,1]^n$ the C-volume of B is non-negative:
$$V_C(B) = \sum_{\mathbf{z} \in \times_{i=1}^n \{x_i, y_i\}} (-1)^{N(\mathbf{z})}\, C(\mathbf{z}) \ge 0,$$
where $N(\mathbf{z}) = \#\{k : z_k = x_k\}$ counts the components of $\mathbf{z}$ equal to the lower endpoint $x_k$.
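For the bivariate case, the n-increasing condition can be illustrated numerically; the following Python sketch, using the product copula as an assumed example, computes the C-volume of a rectangle inside the unit square.

```python
def product_copula(u, v):
    # Independence (product) copula: C(u, v) = u * v
    return u * v

def c_volume(copula, x1, y1, x2, y2):
    """C-volume of the rectangle [x1, y1] x [x2, y2].

    This is the inclusion-exclusion sum from the n-increasing condition
    (n = 2): each corner enters with sign (-1)^{number of lower endpoints}.
    """
    return (copula(y1, y2) - copula(y1, x2)
            - copula(x1, y2) + copula(x1, x2))

# Non-negative for every rectangle inside the unit square
print(c_volume(product_copula, 0.2, 0.6, 0.3, 0.9))  # 0.4 * 0.6 = 0.24
```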
Sklar's theorem
The theorem proposed by Sklar [1] underlies most applications of the copula. Sklar's theorem states that given a joint distribution function H for p variables, and respective marginal distribution functions, there exists a copula C such that the copula binds the margins to give the joint distribution.
For the bivariate case, Sklar's theorem can be stated as follows. For any bivariate distribution function H(x, y), let $F(x) = H(x, +\infty)$ and $G(y) = H(+\infty, y)$ be the univariate marginal probability distribution functions. Then there exists a copula C such that
$$H(x, y) = C(F(x), G(y))$$
(where we have identified the distribution C with its cumulative distribution function). Moreover, if the marginal distributions F(x) and G(y) are continuous, the copula C is unique. Otherwise, the copula C is unique on the range of values of the marginal distributions.
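A rough Python sketch of this construction, assuming a Gaussian copula together with normal and exponential margins purely for illustration, evaluates H(x, y) = C(F(x), G(y)) as follows.

```python
import numpy as np
from scipy.stats import norm, expon, multivariate_normal

rho = 0.5
biv_norm = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_cdf(u, v):
    # C_rho(u, v) = Phi_rho(Phi^{-1}(u), Phi^{-1}(v))
    return biv_norm.cdf([norm.ppf(u), norm.ppf(v)])

def joint_cdf(x, y):
    # Sklar's theorem: H(x, y) = C(F(x), G(y)) with the chosen margins
    return gaussian_copula_cdf(norm.cdf(x), expon.cdf(y))

print(joint_cdf(0.3, 1.2))
```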
Fréchet-Hoeffding copula boundaries
Minimum copula: $W(u, v) = \max(u + v - 1,\, 0)$. This is the lower bound for all copulas. In the bivariate case only, it represents perfect negative dependence between variates.
For n-variate copulas, the lower bound is given by
$$W(u_1, \ldots, u_n) = \max\!\left(\sum_{i=1}^n u_i - n + 1,\, 0\right).$$
Maximum copula: $M(u, v) = \min(u, v)$. This is the upper bound for all copulas. It represents perfect positive dependence between variates.
For n-variate copulas, the upper bound is given by
$$M(u_1, \ldots, u_n) = \min(u_1, \ldots, u_n).$$
Conclusion: For all copulas C(u, v),
$$\max(u + v - 1,\, 0) \le C(u, v) \le \min(u, v).$$
In the multivariate case, the corresponding inequality is
$$\max\!\left(\sum_{i=1}^n u_i - n + 1,\, 0\right) \le C(u_1, \ldots, u_n) \le \min(u_1, \ldots, u_n).$$
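These bounds can be checked numerically; the sketch below, assuming the product copula as the test case, verifies the inequality on a grid over the unit square.

```python
import numpy as np

u, v = np.meshgrid(np.linspace(0.01, 0.99, 99), np.linspace(0.01, 0.99, 99))

W = np.maximum(u + v - 1.0, 0.0)   # Frechet-Hoeffding lower bound
M = np.minimum(u, v)               # Frechet-Hoeffding upper bound
C = u * v                          # product (independence) copula

# Every copula value must lie between the two bounds
assert np.all(W <= C) and np.all(C <= M)
```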
Gaussian copula
One example of a copula often used for modelling in finance is the Gaussian copula, which is constructed from the bivariate normal distribution via Sklar's theorem. For X and Y distributed as standard bivariate normal with correlation ρ, the Gaussian copula function is
$$C_\rho(u, v) = \Phi_\rho\!\left(\Phi^{-1}(u),\, \Phi^{-1}(v)\right),$$
where Φ denotes the standard normal cumulative distribution function and $\Phi_\rho$ the joint cumulative distribution function of the standard bivariate normal with correlation ρ. Here $U = F_X(X)$ and $V = F_Y(Y)$ are the values of the cumulative distribution functions of X and Y, so they lie in (0, 1). These are "percentile-to-percentile transformations", in which X and Y are transformed into the standard normal variates
- $A = \Phi^{-1}(U)$ and $B = \Phi^{-1}(V)$
Imagine that you have a time series for X in Excel. Then the first point A1 of your new variable A will simply be:
- = NORMSINV(PERCENTRANK(Time series range, X1)).
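Outside Excel, the same transformation can be sketched in Python; rankdata plays the role of PERCENTRANK and norm.ppf the role of NORMSINV, with the rank-based percentile estimate being an illustrative choice.

```python
import numpy as np
from scipy.stats import norm, rankdata

def percentile_to_normal(x):
    """Percentile-to-percentile transform of a series to standard normal.

    rankdata / (n + 1) estimates the percentile rank (kept strictly in (0, 1)),
    and norm.ppf is the inverse standard normal CDF.
    """
    u = rankdata(x) / (len(x) + 1)
    return norm.ppf(u)

x = np.array([1.2, -0.4, 2.3, 0.1, -1.7])
A = percentile_to_normal(x)
```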
Differentiating C yields the copula density function:
$$c_\rho(u, v) = \frac{\varphi_\rho\!\left(\Phi^{-1}(u), \Phi^{-1}(v)\right)}{\varphi\!\left(\Phi^{-1}(u)\right)\,\varphi\!\left(\Phi^{-1}(v)\right)},$$
where $\varphi_\rho$ is the density function of the standard bivariate normal variate with Pearson's product-moment correlation coefficient ρ, and $\varphi$ is the standard normal density.
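Uniform variates whose dependence follows a Gaussian copula can be generated by sampling correlated standard normals and applying Φ to each coordinate; the Python sketch below illustrates this, with the function name and parameter values chosen for illustration.

```python
import numpy as np
from scipy.stats import norm

def sample_gaussian_copula(rho, size, seed=0):
    """Draw (U, V) pairs whose dependence is the Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=size)
    # Applying the standard normal CDF to each coordinate gives uniform margins
    return norm.cdf(z)

uv = sample_gaussian_copula(rho=0.7, size=5)
```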
Archimedean copulas
One particularly simple form of an n-dimensional copula is
$$C(u_1, u_2, \ldots, u_n) = \psi^{-1}\!\left(\psi(u_1) + \psi(u_2) + \cdots + \psi(u_n)\right),$$
where $\psi$ is known as a generator function. Such copulas are known as Archimedean. Any generator function which satisfies the properties below is the basis for a valid copula:
- $\psi(1) = 0$;
- $\lim_{x \to 0} \psi(x) = \infty$;
- $\psi'(x) < 0$ ($\psi$ is decreasing);
- $\psi''(x) > 0$ ($\psi$ is convex).
Product copula: Also called the independent copula, this copula has no dependence between variates. Its density function is unity everywhere. Its generator is $\psi(t) = -\ln t$, and the resulting copula is $C(u, v) = uv$.
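The generator construction itself can be sketched in a few lines of Python; using the independence generator ψ(t) = −ln t as an assumed example recovers the product copula.

```python
import numpy as np

def archimedean_copula(psi, psi_inv):
    """Build an n-dimensional Archimedean copula from a generator and its inverse."""
    def copula(*u):
        return psi_inv(sum(psi(ui) for ui in u))
    return copula

# Independence (product) copula: psi(t) = -ln t, psi_inv(s) = exp(-s)
independence = archimedean_copula(lambda t: -np.log(t), lambda s: np.exp(-s))
print(independence(0.3, 0.7))        # 0.21 = 0.3 * 0.7
print(independence(0.3, 0.7, 0.5))   # the same construction works for n = 3
```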
Where the generator function is indexed by a parameter, a whole family of copulas may be Archimedean. For example:
Clayton copula:
$$C_\theta(u, v) = \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta}, \qquad \theta > 0,$$
with generator $\psi_\theta(t) = t^{-\theta} - 1$.
In the limit θ → 0, the Clayton copula approaches the product copula, so the random variables become statistically independent. The generator-function approach can be extended to create multivariate copulas simply by including more additive terms.
Gumbel copula:
$$C_\theta(u, v) = \exp\!\left(-\left[(-\ln u)^\theta + (-\ln v)^\theta\right]^{1/\theta}\right), \qquad \theta \ge 1,$$
with generator $\psi_\theta(t) = (-\ln t)^\theta$.
Frank copula:
$$C_\theta(u, v) = -\frac{1}{\theta}\ln\!\left(1 + \frac{(e^{-\theta u} - 1)(e^{-\theta v} - 1)}{e^{-\theta} - 1}\right), \qquad \theta \ne 0,$$
with generator $\psi_\theta(t) = -\ln\dfrac{e^{-\theta t} - 1}{e^{-\theta} - 1}$.
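The three families above can be evaluated directly from their closed forms; the Python sketch below is illustrative, with parameter values chosen arbitrarily.

```python
import numpy as np

def clayton(u, v, theta):
    # Generator psi(t) = t**(-theta) - 1, theta > 0
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

def gumbel(u, v, theta):
    # Generator psi(t) = (-ln t)**theta, theta >= 1
    return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta)))

def frank(u, v, theta):
    # Generator psi(t) = -ln((exp(-theta*t) - 1) / (exp(-theta) - 1)), theta != 0
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

u, v = 0.3, 0.7
print(clayton(u, v, 2.0), gumbel(u, v, 1.5), frank(u, v, 4.0))
```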
Applications
Copulas are used in the pricing of collateralized debt obligations.
References
Notes
- ^ Sklar (1959)
General
- David G. Clayton (1978), "A model for association in bivariate life tables and its application in epidemiological studies of familial tendency in chronic disease incidence", Biometrika 65, 141-151.
- Frees, E.W., Valdez, E.A. (1998), "Understanding Relationships Using Copulas", North American Actuarial Journal 2, 1-25.
- Roger B. Nelsen (1999), An Introduction to Copulas. ISBN 0-387-98623-5.
- S. Rachev, C. Menn, F. Fabozzi (2005), Fat-Tailed and Skewed Asset Return Distributions. ISBN 0-471-71886-6.
- A. Sklar (1959), "Fonctions de répartition à n dimensions et leurs marges", Publications de l'Institut de Statistique de L'Université de Paris 8, 229-231.
- W.T. Shaw, K.T.A. Lee (2006), "Copula Methods vs Canonical Multivariate Distributions: The Multivariate Student T Distribution with General Degrees of Freedom".