Kernel (statistics): Difference between revisions

From Wikipedia, the free encyclopedia
{{short description|Window function}}
{{other uses|Kernel (disambiguation)}}

{{refimprove|date=May 2012}}
The term '''kernel''' is used in [[statistics|statistical analysis]] to refer to a [[window function]]. The term "kernel" has several distinct meanings in different branches of statistics.


==Bayesian statistics==
In statistics, especially in [[Bayesian statistics]], the kernel of a [[probability density function]] (pdf) or [[probability mass function]] (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted.<ref>{{cite journal |last1=Schuster |first1=Eugene |title=Estimation of a probability density function and its derivatives |journal=The Annals of Mathematical Statistics |date=August 1969 |volume=40 |issue=4 |pages=1187–1195 |doi=10.1214/aoms/1177697495|doi-access=free }}</ref> Note that such factors may well be functions of the [[parameter]]s of the pdf or pmf. These factors form part of the [[normalization factor]] of the [[probability distribution]], and are unnecessary in many situations. For example, in [[pseudo-random number sampling]], most sampling algorithms ignore the normalization factor. In addition, in [[Bayesian analysis]] of [[conjugate prior]] distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from).


For many distributions, the kernel can be written in closed form, but not the normalization constant.


==Nonparametric statistics==
{{further|Kernel smoothing}}

In [[nonparametric statistics]], a kernel is a weighting function used in [[non-parametric]] estimation techniques. Kernels are used in [[kernel density estimation]] to estimate [[random variable]]s' [[density function]]s, or in [[kernel regression]] to estimate the [[conditional expectation]] of a random variable. Kernels are also used in [[time-series]], in the use of the [[periodogram]] to estimate the [[spectral density]] where they are known as [[window functions]]. An additional use is in the estimation of a time-varying intensity for a [[point process]] where window functions (kernels) are convolved with time-series data.


[[File:Kernels.svg|thumb|500px|All of the kernels below in a common coordinate system.]]


Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov,<ref>Named for {{cite journal |last=Epanechnikov |first=V. A. |year=1969 |title=Non-Parametric Estimation of a Multivariate Probability Density |journal=Theory Probab. Appl. |volume=14 |issue=1 |pages=153–158 |doi=10.1137/1114019 }}</ref> quartic (biweight), tricube,<ref>{{cite journal|author=Altman, N. S.|author-link=Naomi Altman|year=1992|title=An introduction to kernel and nearest neighbor nonparametric regression|journal=The American Statistician|volume=46|issue=3|pages=175–185|doi=10.1080/00031305.1992.10475879|hdl=1813/31637|hdl-access=free}}</ref> triweight, Gaussian, quadratic<ref>{{cite journal|author1=Cleveland, W. S.|author1-link= William S. Cleveland |author2=Devlin, S. J.|author2-link= Susan J. Devlin |year=1988|title=Locally weighted regression: An approach to regression analysis by local fitting|journal=Journal of the American Statistical Association|volume=83|issue=403|pages=596–610|doi=10.1080/01621459.1988.10478639}}</ref> and cosine.


In the table below, if <math>K</math> is given with a bounded [[Support (mathematics)|support]], then <math> K(u) = 0 </math> for values of ''u'' lying outside the support.
Image:Kernel_epanechnikov.svg |[[V. A. Epanechnikov|Epanechnikov]] <br/> <math>K(u) = \frac{3}{4}(1-u^2)\ 1_{(|u|\leq1)}</math>
Image:Kernel_quartic.svg |Quartic <br/> <math>K(u) = \frac{15}{16}(1-u^2)^2\ 1_{(|u|\leq1)}</math>
Image:Kernel_triweight.svg |Triweight <br/> <math>K(u) = \frac{35}{32}(1-u^2)^3\ 1_{(|u|\leq1)}</math>
Image:Kernel_exponential.svg |[[Normal_distribution|Gaussian]] <br/> <math>K(u) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}u^2}</math>
Image:Kernel_cosine.svg |Cosine <br/> <math>K(u) = \frac{\pi}{4}\cos\left(\frac{\pi}{2}u\right)1_{(|u|\leq1)}</math>
| 98.6%
|-
! [[Epanechnikov_distribution|Epanechnikov]]
(parabolic)
| <math>K(u) = \frac{3}{4}(1-u^2) </math>
|data-sort-value="0.26516504294495535"| &nbsp; <math>\frac{3\sqrt{2}}{16}</math>
| not applicable
|}


==See also==
*[[Kernel smoother]]
*[[Stochastic kernel]]
*[[Positive-definite kernel]]
*[[Density estimation]]
*[[Multivariate kernel density estimation]]
*[[Kernel method]]


{{More footnotes|date=May 2012}}
| last = Li
| first = Qi
|author2=Racine, Jeffrey S.
| title = Nonparametric Econometrics: Theory and Practice
| publisher = Princeton University Press
| year = 2007
| isbn = 978-0-691-12161-1}}


*{{cite web
|first=Walter
|title=APPLIED SMOOTHING TECHNIQUES Part 1: Kernel Density Estimation
|url=http://staff.ustc.edu.cn/~zwp/teach/Math-Stat/kernel.pdf|access-date=6 September 2018}}


*{{cite journal|year=2002

Revision as of 20:45, 25 June 2024

The term kernel is used in statistical analysis to refer to a window function. The term "kernel" has several distinct meanings in different branches of statistics.

Bayesian statistics

In statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted.[1] Note that such factors may well be functions of the parameters of the pdf or pmf. These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from).

For many distributions, the kernel can be written in closed form, but not the normalization constant.

An example is the normal distribution. Its probability density function is

<math>p(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}</math>

and the associated kernel is

<math>k(x; \mu, \sigma^2) = e^{-\frac{(x-\mu)^2}{2\sigma^2}}</math>

Note that the factor in front of the exponential has been omitted, even though it contains the parameter <math>\sigma^2</math>, because it is not a function of the domain variable <math>x</math>.
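This can be checked numerically. The sketch below (plain Python with illustrative helper names and parameter values) evaluates the full normal pdf and its kernel, and confirms that their ratio is constant in x: it depends only on the parameters, and is exactly the omitted normalization factor.

```python
import math

def normal_pdf(x, mu, sigma):
    # Full pdf, including the normalization factor.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def normal_kernel(x, mu, sigma):
    # Kernel: only the factor that depends on the domain variable x is kept.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

mu, sigma = 1.5, 2.0
# The ratio pdf/kernel is a function of sigma but not of x, so it takes the
# same value at every point of the domain.
ratios = [normal_pdf(x, mu, sigma) / normal_kernel(x, mu, sigma)
          for x in (-3.0, 0.0, 4.0)]
print(ratios)
```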

Pattern analysis

The kernel of a reproducing kernel Hilbert space is used in the suite of techniques known as kernel methods to perform tasks such as statistical classification, regression analysis, and cluster analysis on data in an implicit space. This usage is particularly common in machine learning.
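As a minimal illustration of this usage (a sketch, not tied to any particular library; the Gaussian/RBF kernel and the gamma parameter are one common choice, and the names are hypothetical), the kernel is evaluated pairwise over a data set to form a Gram matrix, which kernel methods use in place of explicit coordinates in the implicit feature space:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel: an inner product in an implicit feature space.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
# Kernel methods (e.g. classification, regression, clustering) operate on
# this symmetric, positive-semidefinite Gram matrix rather than on
# explicit feature vectors.
gram = [[rbf_kernel(p, q) for q in points] for p in points]
```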

Nonparametric statistics

In nonparametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time-series, in the use of the periodogram to estimate the spectral density where they are known as window functions. An additional use is in the estimation of a time-varying intensity for a point process where window functions (kernels) are convolved with time-series data.

Commonly, kernel widths must also be specified when running a non-parametric estimation.
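A minimal sketch of kernel density estimation (plain Python; the Gaussian kernel, the toy data, and the bandwidth value are illustrative choices): each data point contributes a kernel bump, and the bandwidth h is the kernel width that must be specified.

```python
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, data, h):
    # Density estimate at x: average of kernel bumps centred on the data
    # points, rescaled by the bandwidth (kernel width) h.
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (len(data) * h)

data = [-1.2, -0.4, 0.1, 0.3, 1.8]
density_at_zero = kde(0.0, data, h=0.5)
```

A smaller h follows the sample more closely but is noisier; a larger h gives a smoother but more biased estimate.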

Definition

A kernel is a non-negative real-valued integrable function K. For most applications, it is desirable to define the function to satisfy two additional requirements:

  • Normalization: <math>\int_{-\infty}^{+\infty} K(u)\,du = 1;</math>
  • Symmetry: <math>K(-u) = K(u)</math> for all values of <math>u</math>.

The first requirement ensures that the method of kernel density estimation results in a probability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used.

If K is a kernel, then so is the function K* defined by K*(u) = λK(λu), where λ > 0. This can be used to select a scale that is appropriate for the data.
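This rescaling property can be checked numerically (a sketch; the Epanechnikov kernel and the midpoint integration rule are illustrative choices): K*(u) = λK(λu) squeezes the support by a factor of λ while preserving non-negativity, symmetry, and the unit integral.

```python
def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def rescaled(kernel, lam):
    # K*(u) = lambda * K(lambda * u); for lam > 1 the support shrinks.
    return lambda u: lam * kernel(lam * u)

def integrate(f, lo, hi, n=20000):
    # Midpoint rule: adequate for these bounded, piecewise-smooth kernels.
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

k_scaled = rescaled(epanechnikov, 2.0)   # support is now |u| <= 1/2
area = integrate(epanechnikov, -1.0, 1.0)
area_scaled = integrate(k_scaled, -1.0, 1.0)
```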

Kernel functions in common use

All of the kernels below in a common coordinate system.

Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov,[2] quartic (biweight), tricube,[3] triweight, Gaussian, quadratic[4] and cosine.

In the table below, if <math>K</math> is given with a bounded support, then <math>K(u) = 0</math> for values of u lying outside the support.

{| class="wikitable"
! Kernel !! Function, K(u) !! Efficiency[5] relative to the Epanechnikov kernel
|-
! Uniform ("rectangular window")
| <math>K(u) = \frac{1}{2}\ 1_{(|u|\leq1)}</math> ("boxcar function")
| 92.9%
|-
! Triangular
| <math>K(u) = (1-|u|)\ 1_{(|u|\leq1)}</math>
| 98.6%
|-
! Epanechnikov (parabolic)
| <math>K(u) = \frac{3}{4}(1-u^2)\ 1_{(|u|\leq1)}</math>
| 100%
|-
! Quartic (biweight)
| <math>K(u) = \frac{15}{16}(1-u^2)^2\ 1_{(|u|\leq1)}</math>
| 99.4%
|-
! Triweight
| <math>K(u) = \frac{35}{32}(1-u^2)^3\ 1_{(|u|\leq1)}</math>
| 98.7%
|-
! Tricube
| <math>K(u) = \frac{70}{81}(1-|u|^3)^3\ 1_{(|u|\leq1)}</math>
| 99.8%
|-
! Gaussian
| <math>K(u) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}u^2}</math>
| 95.1%
|-
! Cosine
| <math>K(u) = \frac{\pi}{4}\cos\left(\frac{\pi}{2}u\right)1_{(|u|\leq1)}</math>
| 99.9%
|-
! Logistic
| <math>K(u) = \frac{1}{e^u + 2 + e^{-u}}</math>
| 88.7%
|-
! Sigmoid function
| <math>K(u) = \frac{2}{\pi}\frac{1}{e^u + e^{-u}}</math>
| 84.3%
|-
! Silverman kernel[6]
| <math>K(u) = \frac{1}{2}e^{-\frac{|u|}{\sqrt{2}}}\sin\left(\frac{|u|}{\sqrt{2}} + \frac{\pi}{4}\right)</math>
| not applicable
|}
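The unit-integral requirement for the common bounded-support kernels can be verified numerically (a sketch using the standard formulas for these kernels, each supported on |u| ≤ 1, integrated with a simple midpoint rule):

```python
import math

# Standard bounded-support kernels, each defined on |u| <= 1.
kernels = {
    "uniform": lambda u: 0.5,
    "triangular": lambda u: 1.0 - abs(u),
    "epanechnikov": lambda u: 0.75 * (1.0 - u * u),
    "quartic": lambda u: (15.0 / 16.0) * (1.0 - u * u) ** 2,
    "triweight": lambda u: (35.0 / 32.0) * (1.0 - u * u) ** 3,
    "tricube": lambda u: (70.0 / 81.0) * (1.0 - abs(u) ** 3) ** 3,
    "cosine": lambda u: (math.pi / 4.0) * math.cos(math.pi * u / 2.0),
}

def integrate_on_support(k, n=20000):
    # Midpoint rule over the support [-1, 1].
    h = 2.0 / n
    return sum(k(-1.0 + (i + 0.5) * h) for i in range(n)) * h

areas = {name: integrate_on_support(k) for name, k in kernels.items()}
```

Each integral comes out to 1 (up to quadrature error), which is what makes the leading constants, such as 35/32 for the triweight kernel, the correct normalizations.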

See also

References

  1. ^ Schuster, Eugene (August 1969). "Estimation of a probability density function and its derivatives". The Annals of Mathematical Statistics. 40 (4): 1187–1195. doi:10.1214/aoms/1177697495.
  2. ^ Named for Epanechnikov, V. A. (1969). "Non-Parametric Estimation of a Multivariate Probability Density". Theory Probab. Appl. 14 (1): 153–158. doi:10.1137/1114019.
  3. ^ Altman, N. S. (1992). "An introduction to kernel and nearest neighbor nonparametric regression". The American Statistician. 46 (3): 175–185. doi:10.1080/00031305.1992.10475879. hdl:1813/31637.
  4. ^ Cleveland, W. S.; Devlin, S. J. (1988). "Locally weighted regression: An approach to regression analysis by local fitting". Journal of the American Statistical Association. 83 (403): 596–610. doi:10.1080/01621459.1988.10478639.
  5. ^ Efficiency is defined as <math>\sqrt{\int u^2K(u)\,du}\int K(u)^2\,du</math>.
  6. ^ Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
  • Li, Qi; Racine, Jeffrey S. (2007). Nonparametric Econometrics: Theory and Practice. Princeton University Press. ISBN 978-0-691-12161-1.
  • Comaniciu, D; Meer, P (2002). "Mean shift: A robust approach toward feature space analysis". IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (5): 603–619. CiteSeerX 10.1.1.76.8968. doi:10.1109/34.1000236.