Phase-type distribution
From Wikipedia, the free encyclopedia
A phase-type distribution is a probability distribution that results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov chain with one absorbing state. Each of the states of the Markov chain represents one of the phases.
The following probability distributions are all considered special cases of a phase-type distribution:
- Exponential distribution - 1 phase.
- Erlang distribution - 2 or more identical phases in sequence.
- Deterministic distribution (or constant) - The limiting case of an Erlang distribution, as the number of phases becomes infinite while the time spent in each state goes to zero.
- Coxian distribution - 2 or more (not necessarily identical) phases in sequence, with a probability of transitioning to the terminating/absorbing state after each phase.
- Hyperexponential distribution - 2 or more non-identical phases, exactly one of which occurs, each with some probability; the phases are mutually exclusive, or parallel. (Note: the exponential distribution is the degenerate case in which all the parallel phases are identical.)
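The "phases" view of the last few cases can be made concrete by simulation. The sketch below (function names, parameters, and the Monte Carlo check are my own, for illustration) builds an Erlang variable as identical exponential phases in sequence, and a hyperexponential variable as a random choice among parallel exponential phases:

```python
import random

def erlang_sample(k, rate, rng):
    """Erlang(k, rate): pass through k identical exponential phases in sequence."""
    return sum(rng.expovariate(rate) for _ in range(k))

def hyperexp_sample(probs, rates, rng):
    """Hyperexponential: exactly one of several parallel exponential phases occurs."""
    i = rng.choices(range(len(rates)), weights=probs, k=1)[0]
    return rng.expovariate(rates[i])

rng = random.Random(0)
n = 200_000

# Sample mean of Erlang(3, 2.0); the theoretical mean is k/rate = 1.5.
erlang_mean = sum(erlang_sample(3, 2.0, rng) for _ in range(n)) / n
print(erlang_mean)

# Sample mean of a two-branch hyperexponential;
# the theoretical mean is 0.5/1.0 + 0.5/5.0 = 0.6.
hyper_mean = sum(hyperexp_sample([0.5, 0.5], [1.0, 5.0], rng) for _ in range(n)) / n
print(hyper_mean)
```

With a large sample, both sample means land close to the theoretical values, which is exactly the "sequence vs. parallel" distinction the list above describes.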
Discrete phase-type distribution
Definition and theorems
Assume that \(P\) is the transition matrix of an \((m+1)\)-state discrete Markov chain with one absorbing state. Hence, arranging the matrix to have the \((m+1)\)-th state as the absorbing one, we will have

\[
P = \begin{pmatrix} T & \mathbf{T}^0 \\ \mathbf{0} & 1 \end{pmatrix}.
\]

Here, \(T\) is an \(m \times m\) matrix and \(\mathbf{T}^0\) is an \(m \times 1\) vector. As \(P\) is a stochastic matrix, we have

\[
T\,\mathbf{1} + \mathbf{T}^0 = \mathbf{1}.
\]

Here, \(\mathbf{1}\) is the \(m \times 1\) vector with all ones as its elements. Continuing from the previous equation, we have

\[
\mathbf{T}^0 = (I - T)\,\mathbf{1}.
\]

Hence, if \(I - T\) is non-singular we will have

\[
\mathbf{1} = (I - T)^{-1}\,\mathbf{T}^0.
\]
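As a quick numerical check, the sketch below (the matrix \(T\) is an arbitrary illustrative choice, not from the article) builds \(\mathbf{T}^0 = (I - T)\mathbf{1}\) and verifies that \((I - T)^{-1}\mathbf{T}^0\) recovers the all-ones vector:

```python
import numpy as np

# An arbitrary illustrative transient-state matrix T (2 phases);
# each row sums to less than one, the deficit being absorbed per step.
T = np.array([[0.5, 0.3],
              [0.2, 0.4]])
I = np.eye(2)
ones = np.ones(2)

# Stochasticity of the full matrix P forces T0 = (I - T) 1.
T0 = (I - T) @ ones
print(T0)                            # [0.2 0.4]

# Since I - T is non-singular, (I - T)^{-1} T0 is the all-ones vector.
print(np.linalg.solve(I - T, T0))    # [1. 1.]
```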
Here, we prove that \(I - T\) is in fact always non-singular. Assume the contrary. Then there exists a non-zero vector \(\mathbf{x}\) for which

\[
T\,\mathbf{x} = \mathbf{x}.
\]

Taking absolute values of the \(i\)-th component of both sides, we have

\[
|x_i| = \Bigl|\sum_{j=1}^{m} T_{ij}\, x_j\Bigr| \le \sum_{j=1}^{m} T_{ij}\, |x_j|.
\]

Note that equality happens only when all the \(x_j\) have the same sign, so that no cancellation occurs in the sum. Assume that \(x_k\) is the largest element of \(\mathbf{x}\) in absolute value. Hence,

\[
|x_k| \le \sum_{j=1}^{m} T_{kj}\, |x_j| \le |x_k| \sum_{j=1}^{m} T_{kj} \le |x_k|.
\]

Hence, all of these inequalities must hold with equality. This means all the \(|x_j|\) have to be equal to \(|x_k|\). Given the previous condition that all the \(x_j\) have the same sign, this means \(\mathbf{x} = c\,\mathbf{1}\) for some non-zero constant \(c\). Hence, substituting into \(T\mathbf{x} = \mathbf{x}\), we have

\[
T\,\mathbf{1} = \mathbf{1}.
\]
As the elements of \(T\mathbf{1}\) give the sums of the different rows of \(T\), \(T\mathbf{1} = \mathbf{1}\) means that \(T\) is itself stochastic. This results in \(\mathbf{T}^0 = (I - T)\mathbf{1} = \mathbf{0}\), which makes the absorbing state unreachable from the other states, making the time until absorption infinite: a contradiction. From this argument we find a constraint for the matrix \(T\): namely, at least one row of \(T\) should sum to less than one. A similar argument yields another, more important, conclusion: \(T\) cannot have any right eigenvalue with absolute value greater than or equal to one, i.e. the spectral radius of \(T\) is strictly less than one. This means that the matrix \(T^k\) tends to zero as \(k\) goes to infinity.
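These three facts, sub-stochastic rows, spectral radius below one, and vanishing powers, can be observed numerically. A minimal sketch, again with an arbitrary illustrative \(T\) of my own choosing:

```python
import numpy as np

# An arbitrary illustrative transient matrix; its rows sum to 0.8 and 0.6.
T = np.array([[0.5, 0.3],
              [0.2, 0.4]])

row_sums = T.sum(axis=1)
print(row_sums.max() < 1)        # True: at least one row sums to less than one

# Every eigenvalue lies strictly inside the unit circle ...
spectral_radius = max(abs(np.linalg.eigvals(T)))
print(spectral_radius)           # approximately 0.7 for this T

# ... so the powers of T vanish: absorption happens with probability one.
print(np.linalg.matrix_power(T, 200).max())
```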
The time until absorption of the Markov chain represented by the matrix \(P\) above is denoted by \(\tau\). Here, \(\boldsymbol{\alpha}\) is the initialization probability: a row vector of all non-negative entries summing up to one, giving the probability of starting in each transient state. The probability mass function of this random variable equals

\[
\Pr(\tau = k) = \boldsymbol{\alpha}\, T^{k-1}\, \mathbf{T}^0, \qquad k = 1, 2, \ldots
\]
Hence, for the distribution function we have

\[
\Pr(\tau \le k) = \sum_{n=1}^{k} \boldsymbol{\alpha}\, T^{n-1}\, \mathbf{T}^0 = \boldsymbol{\alpha}\,(I - T^k)\,\mathbf{1} = 1 - \boldsymbol{\alpha}\, T^{k}\, \mathbf{1}.
\]
Using the distribution function, we have

\[
E[\tau] = \sum_{k=0}^{\infty} \Pr(\tau > k) = \sum_{k=0}^{\infty} \boldsymbol{\alpha}\, T^{k}\, \mathbf{1} = \boldsymbol{\alpha}\,(I - T)^{-1}\,\mathbf{1},
\]

where the series converges because the spectral radius of \(T\) is less than one.
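The mass function, distribution function, and mean are straightforward to evaluate numerically. A sketch, with an arbitrary illustrative pair \((\boldsymbol{\alpha}, T)\) of my own choosing:

```python
import numpy as np

# Arbitrary illustrative parameters: start in phase 1 with probability one.
alpha = np.array([1.0, 0.0])
T = np.array([[0.5, 0.3],
              [0.2, 0.4]])
T0 = (np.eye(2) - T) @ np.ones(2)   # per-phase exit probabilities toward absorption

def pmf(k):
    """P(tau = k) = alpha T^(k-1) T0, for k = 1, 2, ..."""
    return alpha @ np.linalg.matrix_power(T, k - 1) @ T0

def cdf(k):
    """P(tau <= k) = 1 - alpha T^k 1."""
    return 1.0 - alpha @ np.linalg.matrix_power(T, k) @ np.ones(2)

# Summing the mass function reproduces the closed-form distribution function.
partial = sum(pmf(n) for n in range(1, 11))
print(abs(partial - cdf(10)) < 1e-12)    # True

# Mean time until absorption, alpha (I - T)^{-1} 1; it equals 3.75 here.
mean = alpha @ np.linalg.solve(np.eye(2) - T, np.ones(2))
print(mean)
```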
Also, trying to find the probability generating function, we have

\[
E[z^{\tau}] = \sum_{k=1}^{\infty} z^{k}\, \boldsymbol{\alpha}\, T^{k-1}\, \mathbf{T}^0 = z\, \boldsymbol{\alpha}\,(I - zT)^{-1}\,\mathbf{T}^0.
\]
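Two sanity checks follow from this closed form: \(G(1) = 1\) (absorption is certain), and \(G'(1) = E[\tau]\). The sketch below verifies both, with an arbitrary illustrative \((\boldsymbol{\alpha}, T)\) chosen for the example:

```python
import numpy as np

# Arbitrary illustrative parameters for the check.
alpha = np.array([1.0, 0.0])
T = np.array([[0.5, 0.3],
              [0.2, 0.4]])
T0 = (np.eye(2) - T) @ np.ones(2)

def G(z):
    """Probability generating function E[z^tau] = z alpha (I - zT)^{-1} T0."""
    return z * (alpha @ np.linalg.solve(np.eye(2) - z * T, T0))

g1 = G(1.0)
print(g1)        # should be 1: tau is finite with probability one

# Central-difference estimate of G'(1); should match E[tau] = 3.75 here.
h = 1e-6
deriv = (G(1.0 + h) - G(1.0 - h)) / (2 * h)
print(deriv)
```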
References
- Sheldon M. Ross. Introduction to Probability Models, 8th edition. Chapter 6: Continuous Time Markov Chains. Academic Press, December 2002.