Point estimation
In statistics, point estimation involves the use of sample data to calculate a single value (known as a point estimate, since it identifies a point in the parameter space) which serves as a "best guess" or "best estimate" of an unknown (fixed or random) population parameter.
More formally, it is the application of a point estimator to the data.
In general, point estimation should be contrasted with interval estimation: such interval estimates are typically either confidence intervals in the case of frequentist inference, or credible intervals in the case of Bayesian inference.
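For a concrete sense of the contrast, here is a minimal sketch in Python (with illustrative data values that are not from this article): it computes the sample mean as a point estimate of a population mean, together with an approximate 95% confidence interval as the corresponding interval estimate.

```python
# A minimal sketch: a point estimate (the sample mean) versus an interval
# estimate (an approximate 95% confidence interval from the normal
# approximation). The data values are illustrative, not from the article.
import math

data = [4.9, 5.1, 5.3, 4.7, 5.0, 5.2, 4.8, 5.1]

n = len(data)
mean = sum(data) / n                                # point estimate of the population mean
var = sum((x - mean) ** 2 for x in data) / (n - 1)  # unbiased sample variance
stderr = math.sqrt(var / n)                         # standard error of the mean

# Approximate 95% confidence interval (z = 1.96): an interval estimate,
# in contrast to the single-number point estimate above.
low, high = mean - 1.96 * stderr, mean + 1.96 * stderr

print(f"point estimate:    {mean:.3f}")
print(f"interval estimate: ({low:.3f}, {high:.3f})")
```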
Point estimators
- Minimum-variance mean-unbiased estimator (MVUE), which minimizes the risk (expected loss) under the squared-error loss function
- Best linear unbiased estimator (BLUE)
- Minimum mean squared error (MMSE) estimator
- Median-unbiased estimator, which minimizes the risk under the absolute-error loss function
- Maximum likelihood (ML) estimator; a numerical sketch contrasting it with an unbiased estimator follows this list
- Method of moments and generalized method of moments
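As a rough illustration of two entries in the list, consider estimating the variance of a normally distributed sample: the maximum-likelihood estimator divides the sum of squared deviations by n and is biased, while the minimum-variance unbiased estimator divides by n - 1. The sketch below, with made-up data, computes both.

```python
# A minimal sketch contrasting two estimators from the list above, applied to
# the variance of a normal sample: the maximum-likelihood estimate divides by
# n (and is biased downward), while the unbiased (under normality, MVUE)
# estimate divides by n - 1. The data values are illustrative only.
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]

n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations

var_ml = ss / n              # maximum-likelihood estimate of the variance
var_unbiased = ss / (n - 1)  # unbiased estimate of the variance

print(f"ML estimate of the variance:       {var_ml:.4f}")
print(f"Unbiased estimate of the variance: {var_unbiased:.4f}")
```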
Bayesian point estimation
Bayesian inference is based on the posterior distribution. Many Bayesian point estimators are statistics of central tendency of the posterior distribution, e.g., its mean, median, or mode:
- Posterior mean, which minimizes the (posterior) risk (expected loss) for a squared-error loss function; in Bayesian estimation, the risk is defined in terms of the posterior distribution.
- Posterior median, which minimizes the posterior risk for the absolute-value loss function.
- Maximum a posteriori (MAP) estimator, which finds a maximum (mode) of the posterior distribution; for a uniform prior probability, the MAP estimator coincides with the maximum-likelihood estimator (a worked conjugate-model sketch follows this list).
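These three estimators become concrete in a conjugate Beta-Bernoulli model: with a Beta(a, b) prior and k successes in n Bernoulli trials, the posterior is Beta(a + k, b + n - k), and the posterior mean, median, and mode all have simple expressions. The sketch below uses illustrative hyperparameters and data (assumptions for the example), with SciPy used only to evaluate the posterior median.

```python
# A minimal conjugate-model sketch: Beta(a, b) prior, k successes in n
# Bernoulli trials, posterior Beta(a + k, b + n - k). The prior and data
# values are illustrative assumptions.
from scipy.stats import beta  # used only for the posterior median

a, b = 2.0, 2.0   # prior hyperparameters (assumed for the example)
k, n = 7, 10      # observed successes and number of trials

post_a, post_b = a + k, b + n - k  # posterior Beta parameters

posterior_mean = post_a / (post_a + post_b)            # minimizes squared-error posterior risk
posterior_median = beta.ppf(0.5, post_a, post_b)       # minimizes absolute-error posterior risk
posterior_mode = (post_a - 1) / (post_a + post_b - 2)  # MAP estimate (valid since post_a, post_b > 1)

print(f"posterior mean:   {posterior_mean:.4f}")
print(f"posterior median: {posterior_median:.4f}")
print(f"posterior mode:   {posterior_mode:.4f}")
```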
The MAP estimator has good asymptotic properties, even for many difficult problems for which the maximum-likelihood estimator runs into difficulties. For regular problems, where the maximum-likelihood estimator is consistent, the maximum-likelihood estimator ultimately agrees with the MAP estimator.[1][2][3] Bayesian estimators are admissible, by Wald's theorem.[4][2]
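The asymptotic agreement between the MAP and maximum-likelihood estimators can be seen numerically in the same Beta-Bernoulli setting: with a fixed prior, the MAP estimate (a + k - 1)/(a + b + n - 2) and the maximum-likelihood estimate k/n converge as n grows. The loop below, with an assumed success fraction, is a small demonstration rather than a proof.

```python
# A minimal numeric sketch of the asymptotic agreement described above: in the
# Beta-Bernoulli model, the MAP estimate approaches the maximum-likelihood
# estimate k/n as the sample size grows. The prior and the success fraction
# are assumptions made for the example.
a, b = 2.0, 2.0      # fixed prior hyperparameters
true_fraction = 0.7  # assumed fraction of successes in the observed data

for n in (10, 100, 1000, 10000):
    k = round(true_fraction * n)             # observed successes
    mle = k / n                              # maximum-likelihood estimate
    map_est = (a + k - 1) / (a + b + n - 2)  # MAP estimate under the Beta(a, b) prior
    print(f"n={n:6d}  MLE={mle:.4f}  MAP={map_est:.4f}  gap={abs(mle - map_est):.5f}")
```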
Special cases of Bayesian estimators, such as the Kalman filter and the Wiener filter, are important in practice. Several methods of computational statistics, such as Markov chain Monte Carlo (MCMC) and the particle filter, have close connections with Bayesian analysis.
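As one hedged illustration of that connection, the sketch below uses a random-walk Metropolis sampler, a basic MCMC method, to approximate the posterior-mean point estimate of a Bernoulli success probability under a uniform prior. The data, proposal step size, and chain length are assumptions chosen for the example, not values from the article.

```python
# A minimal MCMC sketch: random-walk Metropolis sampling from the posterior of
# a Bernoulli success probability theta under a uniform prior, followed by
# averaging the draws to approximate the posterior-mean point estimate.
import math
import random

def log_posterior(theta, k, n):
    # Unnormalized log posterior: uniform prior on (0, 1) times Bernoulli likelihood.
    if theta <= 0.0 or theta >= 1.0:
        return float("-inf")
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

def metropolis_posterior_mean(k, n, draws=20000, step=0.05, seed=0):
    rng = random.Random(seed)
    theta = 0.5  # starting value of the chain
    samples = []
    for _ in range(draws):
        proposal = theta + rng.gauss(0.0, step)  # random-walk proposal
        log_ratio = log_posterior(proposal, k, n) - log_posterior(theta, k, n)
        if rng.random() < math.exp(min(0.0, log_ratio)):  # Metropolis accept/reject
            theta = proposal
        samples.append(theta)
    kept = samples[draws // 4:]  # discard the first quarter as burn-in
    return sum(kept) / len(kept)

# 7 successes in 10 trials (illustrative); the exact posterior mean under a
# uniform prior is (k + 1) / (n + 2) = 8/12, which the MCMC estimate should approach.
print(metropolis_posterior_mean(k=7, n=10))
```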
Properties of point estimates
See also
- Predictive inference
- Induction (philosophy)
- Philosophy of statistics
- Algorithmic inference
- Interval estimation
Notes
1. Ferguson, Thomas S. (1996). A Course in Large Sample Theory. Chapman & Hall. ISBN 0-412-04371-8.
2. Le Cam, Lucien (1986). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag. ISBN 0-387-96307-3.
3. Ferguson, Thomas S. (1982). "An inconsistent maximum likelihood estimate". Journal of the American Statistical Association 77 (380): 831–834. JSTOR 2287314.
4. Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. ISBN 0-387-98502-6.