Degenerate distribution


Degenerate
[Figure: probability mass function for k_0 = 0. The horizontal axis is the index i of k_i; the function is defined only at integer indices, and the connecting lines do not indicate continuity.]
[Figure: cumulative distribution function for k_0 = 0. The horizontal axis is the index i of k_i.]
Parameters: k_0 \in (-\infty,\infty)
Support: k = k_0
Probability mass function (pmf): \begin{cases} 1 & \mbox{for } k = k_0 \\ 0 & \mbox{otherwise} \end{cases}
Cumulative distribution function (cdf): \begin{cases} 0 & \mbox{for } k < k_0 \\ 1 & \mbox{for } k \ge k_0 \end{cases}
Mean: k_0
Median: k_0
Mode: k_0
Variance: 0
Skewness: 0
Excess kurtosis: 0
Entropy: 0
Moment-generating function (mgf): e^{k_0 t}
Characteristic function: e^{i k_0 t}

In mathematics, a degenerate distribution is the probability distribution of a discrete random variable whose support consists of only one value. Examples include flipping a two-headed coin and rolling a die whose sides all show the same number. While this distribution does not appear random in the everyday sense of the word, it does satisfy the definition of a random variable.

The degenerate distribution is localized at a point k_0 on the real line. The probability mass function is given by:

f(k;k_0) = \begin{cases} 1, & \mbox{if } k = k_0 \\ 0, & \mbox{if } k \ne k_0 \end{cases}

The cumulative distribution function of the degenerate distribution is then:

F(k;k_0) = \begin{cases} 1, & \mbox{if } k \ge k_0 \\ 0, & \mbox{if } k < k_0 \end{cases}
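
For illustration, the pmf and cdf above translate directly into code. The following Python sketch (the function names are ours, not standard library functions) evaluates both for a chosen k_0:

    def degenerate_pmf(k, k0):
        """pmf of the degenerate distribution: all mass sits at k0."""
        return 1.0 if k == k0 else 0.0

    def degenerate_cdf(k, k0):
        """cdf of the degenerate distribution: a unit step at k0."""
        return 1.0 if k >= k0 else 0.0

    # Distribution localized at k0 = 0, as in the plots above.
    assert degenerate_pmf(0, 0) == 1.0 and degenerate_pmf(1, 0) == 0.0
    assert degenerate_cdf(-0.5, 0) == 0.0 and degenerate_cdf(0, 0) == 1.0

The same distribution can also be built with scipy.stats.rv_discrete(values=([k0], [1.0])), which places probability 1 on the single value k_0.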


Constant random variable

In probability theory, a constant random variable is a discrete random variable that takes a constant value, regardless of any event that occurs. This is technically different from an almost surely constant random variable, which may take other values, but only on events with probability zero. Constant and almost surely constant random variables provide a way to deal with constant values in a probabilistic framework.

Let X : Ω → R be a random variable defined on a probability space (Ω, F, P). Then X is an almost surely constant random variable if

\Pr(X = c) = 1,

and is furthermore a constant random variable if

X(\omega) = c, \quad \forall\omega \in \Omega.

Note that a constant random variable is almost surely constant, but not necessarily vice versa: if X is almost surely constant, there may exist γ ∈ Ω such that X(γ) ≠ c (but then necessarily Pr({γ}) = 0; in fact, Pr(X ≠ c) = 0).
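
The distinction can be made concrete with a small sketch. The probability space below (Ω = [0, 1] with the uniform measure), the exceptional value 99.0, and the function names are our own illustrative choices, not part of the definition:

    import numpy as np

    c = 3.0

    def X_constant(omega):
        """Constant: equals c for every omega in the sample space."""
        return c

    def X_almost_surely_constant(omega):
        """Differs from c only on {omega = 0.5}, a set of probability
        zero under the uniform measure on [0, 1]."""
        return 99.0 if omega == 0.5 else c

    # Draw omega uniformly from [0, 1]; the exceptional point is hit
    # with probability zero, so both variables agree on every draw.
    rng = np.random.default_rng(seed=0)
    for omega in rng.uniform(0.0, 1.0, size=10_000):
        assert X_constant(omega) == X_almost_surely_constant(omega) == c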

For practical purposes, the distinction between X being constant or almost surely constant is unimportant, since the probability mass function f(x) and cumulative distribution function F(x) of X do not depend on whether X is constant or 'merely' almost surely constant. In either case,

f(x) = \begin{cases}1, &x = c,\\0, &x \neq c.\end{cases}

and

F(x) = \begin{cases}1, &x \geq c,\\0, &x < c.\end{cases}

The function F(x) is a step function; its only jump, of height 1, occurs at x = c.
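
As a quick numerical check (a sketch with arbitrary illustrative values of c and t), the mean, variance, and moment-generating function listed in the table above follow immediately from the fact that every sample equals c:

    import numpy as np

    c, t = 3.0, 0.7
    samples = np.full(10_000, c)        # every draw of a constant variable is c

    print(samples.mean())               # mean = c
    print(samples.var())                # variance = 0
    print(np.exp(t * samples).mean())   # empirical E[e^{tX}]
    print(np.exp(c * t))                # closed-form mgf e^{ct}, identical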

See also

Dirac delta function