Continuous-time Markov process


In probability theory, a continuous-time Markov process is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values in a discrete set called the state space. The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.


Mathematical definitions

Intuitively, one can define a Markov process as follows. Let X(t) be the random variable describing the state of the process at time t. Now prescribe that in some small increment of time from t to t + h, the probability that the process makes a transition to some state j, given that it started in some state i ≠ j at time t, is given by

\Pr(X(t+h) = j | X(t) = i) = q_{ij}h + o(h),

where o(h) represents a quantity that goes to zero faster than h as h goes to zero (see the article on order notation). Hence, over a sufficiently small interval of time, the probability of a particular transition is roughly proportional to the duration of that interval.
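For instance, if q_{ij} = 2 transitions per unit time, then over a short interval h = 0.01 the probability of a transition from i to j is approximately q_{ij}h = 0.02, with the o(h) error becoming negligible as h shrinks further.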

Continuous-time Markov processes are most easily defined by specifying the transition rates qij, and these are typically given as the ij-th elements of the transition rate matrix, Q (sometimes called a Q-matrix by convention). Q is a finite or infinite matrix according to whether the state space of the process is finite or countably infinite (for example, in a Poisson process the state space is the non-negative integers). The most intuitive continuous-time Markov processes have Q-matrices that are:

  • conservative—the i-th diagonal element qii of Q is given by
q_{ii} = -q_{i} = -\sum_{j\neq i} q_{ij},
  • stable—for any given state i, all elements qij (and qii) are finite.

(Note, however, that a Q-matrix may be non-conservative, unstable or both.) When the Q-matrix is both stable and conservative, the probability that no transition happens in some time r is

\Pr(X(t+r) = i | X(s) = i ~\forall~ s\in[t, t+r)) = e^{-q_{i}r}.

Therefore, the probability distribution of the waiting time until the first transition is an exponential distribution with rate parameter qi (= −qii), and continuous-time Markov processes are thus memoryless processes.
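These two facts, exponential holding times with rate q_i and transitions governed by the rates q_{ij}, suggest a direct simulation scheme. The following Python sketch illustrates it; the two-state Q-matrix, the function name simulate_ctmc, and the particular rates are hypothetical choices for illustration, not part of the formal definition.

import numpy as np

def simulate_ctmc(Q, start, t_end, rng=None):
    # Simulate one sample path of a continuous-time Markov process:
    # hold in state i for an exponential time with rate q_i = -Q[i, i],
    # then jump to state j with probability q_ij / q_i.
    rng = rng or np.random.default_rng()
    t, state = 0.0, start
    times, states = [t], [state]
    while True:
        rate = -Q[state, state]           # total exit rate q_i
        if rate <= 0:                     # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)  # memoryless holding time
        if t >= t_end:
            break
        probs = Q[state].copy()
        probs[state] = 0.0
        probs /= rate                     # jump probabilities q_ij / q_i
        state = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(state)
    return times, states

# Hypothetical conservative, stable two-state Q-matrix:
# state 0 is left at rate 3, state 1 at rate 1.
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])
times, states = simulate_ctmc(Q, start=0, t_end=10.0)

Because the holding times are exponential, the simulation never needs to remember how long the process has already spent in its current state.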

The stationary probability distribution, π, of a continuous-time Markov process with transition rate matrix Q may be found from the property

πQ = 0.

together with the normalization condition

πe = 1,

where e is a column vector with all elements equal to 1.
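For a finite state space, these two conditions determine π as the solution of a single overdetermined linear system. A minimal Python sketch, reusing the hypothetical two-state Q-matrix from the simulation above:

import numpy as np

Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])

# Solve pi Q = 0 together with pi e = 1 in one least-squares system:
# transpose Q and append a row of ones for the normalization.
n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # [0.25 0.75] for this particular Q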

Related processes

Given that a process that started in state i has experienced a transition out of state i, the conditional probability that the transition is into state j is

{q_{ij} \over \sum_{k \neq i} q_{ik}}= {q_{ij} \over q_i}.

Using these probabilities, the sequence of states visited by the process (the so-called jump process) can be described by a (discrete-time) Markov chain. The transition matrix P of the jump chain has elements pij = qij/qi for i ≠ j, and pii = 0.

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
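For a finite state space, both derived processes are straightforward to construct numerically. The sketch below builds the jump-chain matrix P directly from Q and, using the standard matrix-exponential form of the transition function of a finite Q-matrix, the one-step matrix of the δ-skeleton as e^{δQ}; the value δ = 0.5 is an arbitrary illustrative choice.

import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])  # hypothetical generator from above

# Jump chain: p_ij = q_ij / q_i off the diagonal, p_ii = 0.
q = -np.diag(Q)               # exit rates q_i
P = Q / q[:, None]
np.fill_diagonal(P, 0.0)

# delta-skeleton: observing X at spacing delta gives a discrete-time
# Markov chain with one-step transition matrix exp(delta * Q).
delta = 0.5
P_skeleton = expm(delta * Q)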

Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov process with transition rate matrix Q is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, such that

s_{ij} = { q_{ij} \over \sum_{k \neq i} q_{ik}},

for i ≠ j, and sij = 0 otherwise.

From this, S may be written as

S = I - D_Q^{-1}Q,

where DQ = diag{Q} is the diagonal matrix formed from the main diagonal of Q.

To find the stationary probability distribution vector, we must next find φ such that

φ(I − S) = 0,

where φ is a row vector such that all elements of φ are greater than 0 and ||φ||1 = 1 (the 1-norm, ||x||1, is explained in Norm_(mathematics)), and the 0 on the right-hand side is a row vector of 0's. From this, π may be found as

\pi = {-\phi D_Q^{-1} \over \left\|  \phi D_Q^{-1} \right\|_1}.

Note that S may be periodic, even though the underlying continuous-time process is not. The division by the 1-norm in the expression above ensures that π is normalized to a probability vector.
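A minimal Python sketch of the whole procedure, again using the hypothetical two-state Q-matrix from above. Because S may be periodic (it is for this Q), φ is found by eigendecomposition rather than by power iteration.

import numpy as np

def stationary_via_emc(Q):
    # S = I - D_Q^{-1} Q is the EMC's one-step transition matrix.
    n = Q.shape[0]
    D_inv = np.diag(1.0 / np.diag(Q))   # D_Q^{-1}
    S = np.eye(n) - D_inv @ Q
    # phi is the left eigenvector of S for eigenvalue 1.
    w, v = np.linalg.eig(S.T)
    phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    phi /= phi.sum()                    # fix sign and set ||phi||_1 = 1
    unnorm = -phi @ D_inv               # -phi D_Q^{-1}
    return unnorm / np.abs(unnorm).sum()

Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])
print(stationary_via_emc(Q))  # [0.25 0.75], matching pi Q = 0 directly

For this Q the EMC is S = [[0, 1], [1, 0]], which is periodic with period 2, so power iteration on S would oscillate; the eigenvector computation sidesteps this.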
