Invariant estimator
In statistics, an invariant estimator or equivariant estimator is a non-Bayesian estimator having a certain intuitively appealing quality, which is defined formally below. Roughly speaking, invariance means that the estimator's behavior is unchanged when both the measurements and the parameters are transformed in a compatible way. The requirement of invariance is sometimes imposed when seeking an estimator, leading to what is called the optimal invariant estimator.
Setting
Invariance is defined in the deterministic (non-Bayesian) estimation scenario. Under this setting, we are given a measurement x which contains information about an unknown parameter θ. The measurement x is modeled as a random variable having a probability density function f(x | θ) which depends on θ.
We would like to estimate θ given x. The estimate, denoted by a, is a function of the measurements and belongs to a set A. The quality of the result is defined by a loss function L = L(a, θ), which determines a risk function R = R(δ, θ) = E[L(δ(x), θ) | θ] for an estimator δ.
We denote the sets of possible values of x, θ, and a by X, Θ, and A, respectively.
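As a concrete illustration of the loss and risk functions, their values can be approximated by simulation. The sketch below is illustrative only: the normal location model, the squared error loss, the sample size and the two candidate estimators are assumptions made for the example, not part of the general setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(estimator, theta, n=5, reps=100_000):
    """Monte Carlo approximation of R(delta, theta) = E[L(delta(x), theta) | theta]
    under squared error loss, assuming X_1, ..., X_n are i.i.d. N(theta, 1)."""
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    estimates = np.apply_along_axis(estimator, 1, x)
    return np.mean((estimates - theta) ** 2)

# Two candidate estimators of the location parameter theta.
print(risk(np.mean, theta=2.0))    # sample mean: risk close to 1/n = 0.2
print(risk(np.median, theta=2.0))  # sample median: larger risk under normality
```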
Definition
An invariant estimator is an estimator that obeys the following two informal rules:
- Principle of rational invariance: the action taken in a decision problem should not depend on the transformation applied to the measurement used.
- Invariance principle: if two decision problems have the same formal structure (in terms of X, Θ, f(x | θ) and L), then the same decision rule should be used in each problem.
To define an invariant estimator formally, some definitions related to groups of transformations are needed first:
A group of transformations of X, denoted by G, is a set of (measurable) one-to-one transformations of X onto itself which satisfies the following conditions:
- If g1 ∈ G and g2 ∈ G then g1g2 ∈ G (where g1g2 denotes the composition g1(g2(x)));
- If g ∈ G then g⁻¹ ∈ G, where g⁻¹ is the inverse transformation satisfying g⁻¹(g(x)) = x for all x;
- e ∈ G (i.e. there is an identity transformation e(x) = x in G).
Datasets x1 and x2 in X are equivalent if x1 = g(x2) for some g ∈ G. All the equivalent points form an equivalence class. Such an equivalence class is called an orbit (in X). The x0 orbit, X(x0), is the set X(x0) = {g(x0) : g ∈ G}. If X consists of a single orbit then G is said to be transitive.
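For example, the translations of the real line form such a group: the composition of two translations is again a translation, every translation has an inverse translation, and the translation by zero is the identity; moreover, any point can be carried to any other, so the group is transitive and the whole real line is a single orbit. A small sketch (representing each transformation by its shift c is an assumption made for the illustration):

```python
# Represent the translation g_c(x) = x + c by its shift c.
def compose(c1, c2):   # g_{c1}(g_{c2}(x)) = x + c2 + c1, again a translation
    return c1 + c2

def inverse(c):        # g_c^{-1} = g_{-c}, since g_{-c}(g_c(x)) = x
    return -c

identity = 0.0         # g_0(x) = x

# Transitivity: for any x1, x2 there is a translation carrying x2 to x1,
# so the real line consists of a single orbit.
x1, x2 = 3.7, -1.2
c = x1 - x2
assert abs((x2 + c) - x1) < 1e-12
```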
A family of densities F is said to be invariant under the group G if, for every g ∈ G and θ ∈ Θ, there exists a unique θ* ∈ Θ such that Y = g(x) has density f(y | θ*). θ* will be denoted ḡ(θ).
If F is invariant under the group G, then the loss function L(θ, a) is said to be invariant under G if for every g ∈ G and a ∈ A there exists an a* ∈ A such that L(ḡ(θ), a*) = L(θ, a) for all θ ∈ Θ. The transformed value a* will be denoted g̃(a).
Ḡ = {ḡ : g ∈ G} is a group of transformations from Θ to itself and G̃ = {g̃ : g ∈ G} is a group of transformations from A to itself.
An estimation problem is invariant under G if there exist three such groups G, Ḡ and G̃ as defined above.
For an estimation problem that is invariant under G, an estimator δ(x) is an invariant estimator under G if, for all x ∈ X and g ∈ G, δ(g(x)) = g̃(δ(x)).
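For example, under the translation group of the previous example (with ḡ_c(θ) = θ + c and g̃_c(a) = a + c), the sample mean of a vector of measurements satisfies the defining relation δ(g(x)) = g̃(δ(x)), while an estimator such as the squared sample mean does not. A minimal numerical check, with an arbitrarily chosen sample and shift:

```python
import numpy as np

x = np.array([1.3, 0.7, 2.9, 1.1])  # an arbitrary sample
c = 5.0                              # an arbitrary translation g_c(x) = x + c

delta = np.mean  # candidate estimator
# Equivariance under translations: delta(g_c(x)) should equal delta(x) + c.
assert np.isclose(delta(x + c), delta(x) + c)

# A non-invariant estimator, e.g. the squared sample mean, fails the same check.
delta2 = lambda v: np.mean(v) ** 2
print(np.isclose(delta2(x + c), delta2(x) + c))  # False
```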
Properties
- The risk function of an invariant estimator δ is constant on orbits of Θ. Equivalently, R(θ, δ) = R(ḡ(θ), δ) for all θ ∈ Θ and ḡ ∈ Ḡ.
- The risk function of an invariant estimator is constant whenever Ḡ is transitive.
For a given problem, the invariant estimator with the lowest risk is termed the best invariant estimator. A best invariant estimator cannot always be achieved; a special case in which one can be achieved is when Ḡ is transitive.
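The constant-risk property can be checked numerically in a location setting: for a translation-invariant estimator such as the sample mean, a Monte Carlo estimate of the risk is (up to simulation error) the same at every θ. The model, the sample size and the values of θ below are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_risk(estimator, theta, n=5, reps=200_000):
    """Monte Carlo risk under squared error loss, with X_1, ..., X_n i.i.d. N(theta, 1)."""
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    est = np.apply_along_axis(estimator, 1, x)
    return np.mean((est - theta) ** 2)

# The sample mean is invariant under translations, so its risk does not depend on theta.
for theta in (-10.0, 0.0, 3.5):
    print(theta, mc_risk(np.mean, theta))  # all approximately 1/n = 0.2
```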
Example: Location parameter
θ is a location parameter if the density of X has the form f(x − θ). For Θ = A = ℝ and loss L = L(a − θ), the problem is invariant under the translation groups G = Ḡ = G̃ = {g_c : g_c(y) = y + c, c ∈ ℝ}. An invariant estimator in this case must satisfy δ(x + c) = δ(x) + c for all c ∈ ℝ, so it is of the form δ(x) = x + K (K ∈ ℝ). Ḡ is transitive on Θ, so the risk here is constant: R(θ, δ) = R(0, δ) = E[L(X + K) | θ = 0]. The best invariant estimator is the one that brings the risk R(θ, δ) to a minimum.
In the case that L is squared error, the minimizing constant is K = −E[X | θ = 0], so that δ(x) = x − E[X | θ = 0].
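As an illustration, suppose X is a single measurement from a standard exponential density shifted by θ, so that f(x − θ) = exp(−(x − θ)) for x ≥ θ. Then E[X | θ = 0] = 1 and the best invariant estimator under squared error is δ(x) = x − 1. The sketch below (the shifted-exponential model is an assumption of the example) verifies numerically that this choice of K minimizes the constant risk:

```python
import numpy as np

rng = np.random.default_rng(2)
x0 = rng.exponential(scale=1.0, size=500_000)  # draws of X with theta = 0

def risk(K):
    """Risk of delta(x) = x + K at theta = 0 (and hence at every theta) under squared error."""
    return np.mean((x0 + K) ** 2)

Ks = np.linspace(-2.0, 0.0, 201)
best_K = Ks[np.argmin([risk(K) for K in Ks])]
print(best_K)        # close to -E[X | theta = 0] = -1
print(-np.mean(x0))  # the theoretical minimizer K = -E[X | theta = 0]
```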
Pitman estimator
Given the estimation problem in which X = (X1, ..., Xn) has density f(x1 − θ, ..., xn − θ) and the loss is L(|a − θ|), the problem is invariant under G = {g_c : g_c(x) = (x1 + c, ..., xn + c), c ∈ ℝ}, Ḡ = {g_c : g_c(θ) = θ + c, c ∈ ℝ} and G̃ = {g_c : g_c(a) = a + c, c ∈ ℝ} (the additive groups).
The best invariant estimator δ(x) is the one that minimizes
  ∫ L(δ(x) − θ) f(x1 − θ, ..., xn − θ) dθ / ∫ f(x1 − θ, ..., xn − θ) dθ
(Pitman's estimator, 1939).
For the squared error loss case, this gives
  δ(x) = ∫ θ f(x1 − θ, ..., xn − θ) dθ / ∫ f(x1 − θ, ..., xn − θ) dθ.
If f(x | θ) is a normal density, the Pitman estimator coincides with the sample mean, δ(x) = x̄.
If f(x | θ) is a Cauchy density with scale parameter σ, the Pitman estimator does not reduce to the sample mean; for n > 1 it admits a closed-form expression in terms of the observations (Cohen Freue, 2007).
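The Pitman estimator can also be evaluated by numerical integration of the formula above. The sketch below is illustrative only: the helper pitman_location, the grid-based quadrature and the sample sizes are assumptions of the example. It shows the estimator agreeing with the sample mean for a normal sample and differing from it for a Cauchy sample.

```python
import numpy as np
from scipy import stats

def pitman_location(x, logpdf, grid_halfwidth=50.0, grid_points=20001):
    """Pitman estimator under squared error loss: the ratio of
    integral(theta * prod_i f(x_i - theta) dtheta) to
    integral(prod_i f(x_i - theta) dtheta), approximated on a finite theta grid."""
    theta = np.linspace(np.median(x) - grid_halfwidth,
                        np.median(x) + grid_halfwidth, grid_points)
    loglik = np.sum(logpdf(x[None, :] - theta[:, None]), axis=1)
    w = np.exp(loglik - loglik.max())  # likelihood, rescaled for numerical stability
    return np.sum(theta * w) / np.sum(w)

rng = np.random.default_rng(3)

x_norm = rng.normal(loc=2.0, scale=1.0, size=7)
print(pitman_location(x_norm, stats.norm.logpdf), x_norm.mean())        # essentially equal

x_cauchy = rng.standard_cauchy(size=7) + 2.0
print(pitman_location(x_cauchy, stats.cauchy.logpdf), x_cauchy.mean())  # generally differ
```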
References
- James O. Berger (1980). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. ISBN 0-387-90471-9.
- Gabriela V. Cohen Freue (2007). "The Pitman estimator of the Cauchy location parameter". Journal of Statistical Planning and Inference 137, 1900–1913.