Dirac measure

In mathematics, a Dirac measure is a measure δx on a set X (with any σ-algebra of subsets of X) that gives the singleton set {x} the measure 1, for a chosen element x ∈ X:

\delta_{x} \left( \{ x \} \right) = 1.

In general, the Dirac measure is defined by

\delta_{x} (A) = \begin{cases} 0, & x \not\in A; \\ 1, & x \in A \end{cases}

for any measurable set A ⊆ X.
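
For example, taking X to be the real line with its Borel σ-algebra and x = 0 (an illustrative choice, not part of the general definition), the formula gives

\delta_{0} \left( [-1, 1] \right) = 1, \qquad \delta_{0} \left( (0, 2] \right) = 0,

since 0 belongs to the first set but not to the second.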

The Dirac measure is a probability measure, and in terms of probability it represents the almost sure outcome x in the sample space X. We can also say that the measure is a single atom at x; however, treating the Dirac measure as an atomic measure is not correct when we consider the sequential definition of the Dirac delta as the limit of a delta sequence. The Dirac measures are the extreme points of the convex set of probability measures on X.
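
To check that δx is indeed a probability measure, note that δx(X) = 1, since x ∈ X, and that for any sequence of pairwise disjoint measurable sets A1, A2, … the point x lies in at most one of them, so that

\delta_{x} \left( \bigcup_{n} A_{n} \right) = \sum_{n} \delta_{x} \left( A_{n} \right).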

The name is a back-formation from the Dirac delta function, considered as a Schwartz distribution, for example on the real line; measures can be taken to be a special kind of distribution. The identity

\int_{X} f(y) \, \mathrm{d} \delta_{x} (y) = f(x),

which, in the form

\int_{X} f(y) \delta_{x} (y) \, \mathrm{d} y = f(x),

is often taken to be part of the definition of the "delta function", holds as a theorem of Lebesgue integration.
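
A brief sketch of why the identity holds: the complement of {x} has δx-measure zero, so any measurable function f agrees with the constant function f(x) δx-almost everywhere, and therefore

\int_{X} f(y) \, \mathrm{d} \delta_{x} (y) = \int_{X} f(x) \, \mathrm{d} \delta_{x} (y) = f(x) \, \delta_{x} (X) = f(x).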

The support of the Dirac measure δx is the singleton set {x}.
