Continuous-time Markov process


In probability theory, a continuous-time Markov process is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values from a set called the state space. The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.


Mathematical definitions

Intuitively, one can define a time-homogeneous Markov process as follows. Let X(t) be the random variable describing the state of the process at time t. Now prescribe that in some small increment of time from t to t+h, the probability that the process makes a transition to some state j, given that it started in some state i at time t, is given by

\Pr(X(t+h) = j \mid X(t) = i) = q_{ij}h + o(h),

where o(h) represents a quantity that goes to zero faster than h goes to zero (see the article on order notation). Hence, over a sufficiently small interval of time, the probability of a particular transition is roughly proportional to the duration of that interval.
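This linearization can be checked numerically. The sketch below (using a made-up two-state rate matrix Q; the exact transition probabilities over an interval h are obtained from the matrix exponential of Qh) shows that the gap between Pr(X(t+h) = j | X(t) = i) and q_{ij}h shrinks faster than h itself:

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical two-state rate matrix: q_01 = 2, q_10 = 1.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])

# Compare the exact transition probability over an interval h
# with its linearization q_01 * h.
ratios = []
for h in (0.1, 0.01, 0.001):
    p01 = expm(Q * h)[0, 1]              # Pr(X(t+h) = 1 | X(t) = 0)
    ratios.append(abs(p01 - Q[0, 1] * h) / h)
# The ratios shrink with h: the error is o(h), not merely O(h).
```

The shrinking error-to-h ratio is exactly what the o(h) term in the definition asserts.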

Continuous-time Markov processes are most easily defined by specifying the transition rates qij, which are typically given as the ij-th elements of the transition rate matrix, Q (sometimes called a Q-matrix by convention). Q is a finite matrix if and only if the state space of the process is finite; the state space may also be countably infinite, for example in a Poisson process, where the state space is the non-negative integers. The most intuitive continuous-time Markov processes have Q-matrices that are:

  • conservative—the i-th diagonal element qii of Q is given by
q_{ii} = -q_{i} = -\sum_{j\neq i} q_{ij},
  • stable—for any given state i, all elements qij (and qii) are finite.

(Note, however, that a Q-matrix may be non-conservative, unstable or both.) When the Q-matrix is both stable and conservative, the probability that no transition happens in some time r is

\Pr(X(s) = i ~\forall~ s\in(t, t+r]\, |\, X(t) = i ) = e^{-q_{i}r}.

That is, the probability distribution of the waiting time until the first transition is an exponential distribution with rate parameter qi (= −qii), and continuous-time Markov processes are thus memoryless processes.
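The two facts above — exponential holding times with rate q_i, followed by a jump to state j with probability proportional to q_ij — give a direct way to simulate a sample path. A minimal sketch, assuming a conservative, stable Q given as a nested list (the two-state matrix here is a made-up example):

```python
import random

def simulate_ctmc(Q, i0, t_max, rng=random.Random(42)):
    """Simulate a sample path of a continuous-time Markov process
    with conservative, stable rate matrix Q, starting in state i0,
    up to time t_max. Returns the list of (jump time, state) pairs."""
    t, i = 0.0, i0
    path = [(t, i)]
    while True:
        qi = -Q[i][i]                 # total exit rate q_i from state i
        if qi == 0:                   # absorbing state: no further jumps
            break
        t += rng.expovariate(qi)      # exponential holding time, rate q_i
        if t > t_max:
            break
        # Choose the next state j != i with probability q_ij / q_i.
        r = rng.random() * qi
        acc = 0.0
        for j, q in enumerate(Q[i]):
            if j == i:
                continue
            acc += q
            if r <= acc:
                i = j
                break
        path.append((t, i))
    return path

Q = [[-2.0, 2.0], [1.0, -1.0]]        # hypothetical rate matrix
path = simulate_ctmc(Q, 0, 10.0)
```

Because the holding times are exponential, the simulation never needs to remember how long the process has already spent in the current state — this is the memoryless property in action.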

The stationary probability distribution, π, of a continuous-time Markov process, Q, may (subject to some important technical assumptions) be found from the property

πQ = 0.

Note that

πe = 1,

where e is a column vector with all elements equal to 1.
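For a finite state space, the two conditions πQ = 0 and πe = 1 form a linear system that can be solved directly. A minimal NumPy sketch (the rate matrix Q is a made-up example):

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = 0 subject to pi e = 1 by stacking the
    normalization constraint onto the balance equations."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    # pi Q = 0  <=>  Q^T pi^T = 0; append the row sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
pi = stationary_distribution(Q)    # -> approximately [1/3, 2/3]
```

The least-squares solve handles the fact that the stacked system is overdetermined: one balance equation is redundant because the rows of Q sum to zero.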

A time-dependent (time-inhomogeneous) Markov process is a Markov process as above, but with the transition rates being functions of time, denoted qij(t).

Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov process, Q, is to first find its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by


s_{ij} = \begin{cases}
\dfrac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j, \\
0 & \text{otherwise}.
\end{cases}

From this, S may be written as

S = I - D_Q^{-1}Q,

where D_Q = diag(Q) is the diagonal matrix formed from the main diagonal of Q.
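The matrix identity S = I − D_Q⁻¹Q is a one-liner in NumPy. A small sketch, again with a made-up two-state Q (it assumes every q_ii is nonzero, i.e. no absorbing states):

```python
import numpy as np

def embedded_chain(Q):
    """Jump-chain transition matrix S = I - D_Q^{-1} Q,
    where D_Q = diag(Q). Assumes every q_ii is nonzero."""
    Q = np.asarray(Q, dtype=float)
    D_inv = np.diag(1.0 / np.diag(Q))   # D_Q^{-1}
    return np.eye(Q.shape[0]) - D_inv @ Q

Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
S = embedded_chain(Q)    # -> [[0, 1], [1, 0]]
```

Note that the diagonal of S is zero by construction: 1 − q_ii/q_ii = 0, matching the i = j case of the formula for sij.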

To find the stationary probability distribution vector, we must next find φ such that

φ(S − I) = 0,

with φ being a row vector, such that all elements in φ are greater than 0 and ||φ||1 = 1, and the 0 on the right side also being a row vector of 0's. From this, π may be found as

\pi = {-\phi D_Q^{-1} \over \left\|  \phi D_Q^{-1} \right\|_1}.

Note that S may be periodic, even if Q is not. Once π is found, it must be normalized so that its elements sum to 1.
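The whole EMC procedure can be sketched end to end. The three-state rate matrix below is a made-up example; φ is obtained here as the left eigenvector of S for eigenvalue 1, which is one common way to solve φ(S − I) = 0:

```python
import numpy as np

# Hypothetical conservative, stable rate matrix on three states.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  1.0, -2.0]])

D_inv = np.diag(1.0 / np.diag(Q))     # D_Q^{-1}
S = np.eye(3) - D_inv @ Q             # embedded Markov chain

# phi(S - I) = 0 with ||phi||_1 = 1: left eigenvector of S
# for the eigenvalue closest to 1.
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
phi = phi / phi.sum()                 # fixes the sign and normalizes

x = -phi @ D_inv                      # un-normalized stationary vector
pi = x / np.abs(x).sum()              # -> [0.25, 0.625, 0.125]
```

The minus sign compensates for the negative diagonal of D_Q (each q_ii = −q_i), so the entries of π come out positive.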

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
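For a finite state space, the one-step transition matrix of the δ-skeleton is the matrix exponential P(δ) = e^{Qδ}. A short sketch using SciPy (the two-state Q is a made-up example):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
delta = 0.5
P = expm(Q * delta)        # one-step matrix of the delta-skeleton

# Each row of P is a probability distribution over the states.
row_sums = P.sum(axis=1)
```

A stationary distribution of Q is also stationary for every δ-skeleton, since πQ = 0 implies πe^{Qδ} = π.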
