Kernel regression

Not to be confused with Kernel principal component analysis.

Kernel regression is a nonparametric technique in statistics for estimating the conditional expectation of a random variable. The objective is to find a non-linear relation between a pair of random variables X and Y.

In any nonparametric regression, the conditional expectation of a variable Y given a variable X may be written:

\operatorname{E}(Y \mid X) = m(X)

where m is an unknown function.

Nadaraya-Watson kernel regression

Nadaraya (1964) and Watson (1964) both proposed estimating m as a locally weighted average, using a kernel as a weighting function. The Nadaraya-Watson estimator is:

\widehat{m}_h(x) = \frac{\sum_{i=1}^{n} K_h(x - X_i) \, Y_i}{\sum_{i=1}^{n} K_h(x - X_i)}

where K_h(t) = \frac{1}{h} K\left(\frac{t}{h}\right) is a kernel K scaled by a bandwidth h > 0. The fraction makes the estimator a weighted average of the Y_i, with weights that sum to 1.
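
As a concrete sketch, the estimator takes only a few lines of R with a Gaussian kernel; the function name nw_estimate and the toy data below are illustrative, not part of any package:

nw_estimate <- function(x, X, Y, h) {
  sapply(x, function(x0) {
    w <- dnorm((x0 - X) / h)  # Gaussian kernel weights; the 1/h factor cancels in the ratio
    sum(w * Y) / sum(w)       # locally weighted average of the responses
  })
}

# toy data: noisy sine curve
set.seed(42)
X <- runif(200, 0, 10)
Y <- sin(X) + rnorm(200, sd = 0.3)
m_hat <- nw_estimate(seq(0, 10, by = 0.1), X, Y, h = 0.5)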

Derivation

\operatorname{E}(Y \mid X = x) = \int y \, f(y \mid x) \, dy = \int y \, \frac{f(x, y)}{f(x)} \, dy

Using kernel density estimates for the joint density f(x,y) and the marginal density f(x) with a kernel K,

\hat{f}(x, y) = n^{-1} h^{-2} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right) K\left(\frac{y - y_i}{h}\right),
\hat{f}(x) = n^{-1} h^{-1} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)

we obtain the Nadaraya-Watson estimator.
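
To see this, note that for a symmetric kernel satisfying \int K(u) \, du = 1 and \int u \, K(u) \, du = 0, the substitution u = (y - y_i)/h gives \int y \, K\left(\frac{y - y_i}{h}\right) dy = h y_i, so the numerator becomes

\int y \, \hat{f}(x, y) \, dy = n^{-1} h^{-1} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right) y_i

and dividing by \hat{f}(x) cancels the common factor n^{-1} h^{-1}, leaving the ratio of kernel-weighted sums above.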

Priestley-Chao kernel estimator

\widehat{m}_{PC}(x) = h^{-1} \sum_{i=1}^{n} (x_i - x_{i-1}) \, K\left(\frac{x - x_i}{h}\right) y_i

where h is the bandwidth and the design points are assumed ordered, x_1 \le x_2 \le \dots \le x_n.
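
As a rough illustration, here is a minimal R sketch of this estimator with a Gaussian kernel; pc_estimate is a made-up name, and the sum runs from i = 2 because the spacing x_1 - x_0 is undefined for sample data:

pc_estimate <- function(x, xs, ys, h) {
  # xs, ys: observations with xs sorted in increasing order
  sapply(x, function(x0) {
    # sum over i = 2..n of (x_i - x_{i-1}) K((x0 - x_i)/h) y_i, scaled by 1/h
    sum(diff(xs) * dnorm((x0 - xs[-1]) / h) * ys[-1]) / h
  })
}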

Gasser-Müller kernel estimator

\widehat{m}_{GM}(x) = h^{-1} \sum_{i=1}^{n} \left[ \int_{s_{i-1}}^{s_i} K\left(\frac{x - u}{h}\right) du \right] y_i

where s_i = \frac{x_i + x_{i+1}}{2} for 1 \le i \le n - 1, so that each y_i is weighted by the kernel mass over the interval of design points surrounding x_i.
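
For a Gaussian kernel the bracketed integral has a closed form in terms of the normal CDF, which the following R sketch uses; gm_estimate is a made-up name, and taking s_0 = -\infty and s_n = +\infty is one common boundary convention, assumed here:

gm_estimate <- function(x, xs, ys, h) {
  # xs, ys: observations with xs sorted in increasing order
  n <- length(xs)
  s <- c(-Inf, (xs[-n] + xs[-1]) / 2, Inf)  # cut points s_0, ..., s_n
  sapply(x, function(x0) {
    # for a Gaussian kernel, h^{-1} * integral of K((x0 - u)/h) over [s_{i-1}, s_i]
    # equals pnorm((x0 - s_{i-1})/h) - pnorm((x0 - s_i)/h)
    w <- pnorm((x0 - s[-(n + 1)]) / h) - pnorm((x0 - s[-1]) / h)
    sum(w * ys)
  })
}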

Example

This example is based upon Canadian cross-section wage data consisting of a random sample taken from the 1971 Canadian Census Public Use Tapes for male individuals having common education (grade 13). There are 205 observations in total.

We estimate the unknown regression function with Nadaraya-Watson kernel regression via the R np package, which performs automatic (data-driven) bandwidth selection; see the np vignette for an introduction to the package.

The figure below shows the estimated regression function using a second-order Gaussian kernel, along with asymptotic variability bounds.

[Figure: Estimated regression function.]

Script for example

The following R commands use the npreg() function to compute the optimally smoothed fit and create the figure above; they can be pasted directly at the R prompt.

library(np)   # nonparametric kernel methods
data(cps71)   # 1971 Canadian Census wage data, n = 205
attach(cps71)

m <- npreg(logwage ~ age)   # local-constant (Nadaraya-Watson) fit with data-driven bandwidth

plot(m, plot.errors.method = "asymptotic",
     plot.errors.style = "band",
     ylim = c(11, 15.2))    # fitted curve with asymptotic variability bands

points(age, logwage, cex = .25)   # overlay the raw observations

Related

According to Salsburg (2002, pp. 290–291), the algorithms used in kernel regression were independently developed and used in fuzzy systems: "Coming up with almost exactly the same computer algorithm, fuzzy systems and kernel density-based regressions appear to have been developed completely independently of one another."

References

    • Nadaraya, E. A. (1964). "On Estimating Regression". Theory of Probability and its Applications 9 (1): 141–142. doi:10.1137/1109020.
    • Li, Qi; Racine, Jeffrey S. (2007). Nonparametric Econometrics: Theory and Practice. Princeton University Press. ISBN 0-691-12161-3. 
    • Simonoff, Jeffrey S. (1996). Smoothing Methods in Statistics. Springer. ISBN 0-387-94716-7. 
    • Salsburg, D. (2002). The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. W.H. Freeman. ISBN 0-8050-7134-2. 
    • Watson, G. S. (1964). "Smooth regression analysis". Sankhyā: The Indian Journal of Statistics, Series A 26 (4): 359–372. JSTOR 25049340. 

Statistical implementation

In Stata, kernel regression is available via the user-written kernreg2 command, for example:

     kernreg2 y x, bwidth(.5) kercode(3) npoint(500) gen(kernelprediction gridofpoints)
