Markov chain

From Wikipedia, the free encyclopedia

In mathematics, a Markov chain, named after Andrey Markov, is a discrete-time stochastic process with the Markov property.

A Markov chain describes the state of a system at a sequence of successive times. At each time step the system may move from the state it was in the moment before to another state, or it may stay in the same state; these changes of state are called transitions. The Markov property means that the conditional probability distribution of future states, given the present state and all past states, depends only on the present state and not on the past.

Definition

A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that, given the present state, the future and past states are independent. Formally,

\Pr(X_{n+1}=x|X_n=x_n, \ldots, X_1=x_1, X_0=x_0) = \Pr(X_{n+1}=x|X_n=x_n).\,

The possible values of Xi form a countable set S called the state space of the chain. (There are also continuous-time Markov processes, which have a countable state space but a continuous time index.) Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to another.
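
As an illustration (not part of the original article), the following Python sketch simulates a small Markov chain by repeatedly sampling the next state from the transition probabilities of the current state. The three-state "weather" chain and its probabilities are invented purely for the example.

    import random

    # Hypothetical three-state chain; each row lists Pr(next state | current state).
    transitions = {
        "sunny":  {"sunny": 0.8, "cloudy": 0.15, "rainy": 0.05},
        "cloudy": {"sunny": 0.4, "cloudy": 0.4,  "rainy": 0.2},
        "rainy":  {"sunny": 0.2, "cloudy": 0.4,  "rainy": 0.4},
    }

    def simulate(start, steps, rng=random.Random(0)):
        """Sample a trajectory X_0, X_1, ..., X_steps of the chain."""
        state, path = start, [start]
        for _ in range(steps):
            next_states = list(transitions[state])
            weights = [transitions[state][s] for s in next_states]
            state = rng.choices(next_states, weights=weights, k=1)[0]
            path.append(state)
        return path

    print(simulate("sunny", 10))

Note that the next state is drawn using only the current state, which is exactly the Markov property in the definition above.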

A finite state machine driven by a sequence of independent, identically distributed inputs can be viewed as a Markov chain: if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state y and not on the time n.

A time-homogeneous Markov chain (or, a Markov chain with time-homogeneous transition probabilities) is a process where one has

\Pr(X_{n+1}=x|X_n=y) = \Pr(X_{n}=x|X_{n-1}=y)\,

for all n. A general, non-time-homogeneous Markov chain does not require this property, and so one may have

\Pr(X_{n+1}=x|X_n=y) \neq \Pr(X_{n}=x|X_{n-1}=y)\,

in general.

Properties of Markov chains

Define the probability of going from state i to state j in n time steps as

p_{ij}^{(n)} = \Pr(X_n=j\mid X_0=i) \,

and the single-step transition as

p_{ij} = \Pr(X_1=j\mid X_0=i) \,

The n-step transition probabilities satisfy the Chapman-Kolmogorov equation: for any k with 0 < k < n,

p_{ij}^{(n)} = \sum_{r \in S} p_{ir}^{(k)} p_{rj}^{(n-k)}

The marginal distribution Pr(Xn = x) is the distribution over states at time n. The initial distribution is Pr(X0 = x). The evolution of the process through one time step is described by

\Pr(X_{n+1}=j) = \sum_{r \in S} p_{rj} \Pr(X_n=r) = \sum_{r \in S} p_{rj}^{(n+1)} \Pr(X_0=r)

The superscript (n) is intended to be an integer-valued label only; however, if the Markov chain is time-homogeneous, then this superscript can also be interpreted as raising the one-step transition matrix to the n-th power, as discussed further below.
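
For a time-homogeneous chain these relations can be checked numerically: the n-step probabilities are obtained by repeated matrix multiplication, the Chapman-Kolmogorov sum over intermediate states reproduces p_ij^(n), and the marginal distribution at time n follows by applying the transition probabilities to the initial distribution. A minimal sketch, assuming NumPy and an invented two-state matrix:

    import numpy as np

    P = np.array([[0.9, 0.1],     # invented one-step transition matrix p_ij
                  [0.5, 0.5]])

    def n_step(P, n):
        """p_ij^(n) as the n-th matrix power of P (time-homogeneous case)."""
        return np.linalg.matrix_power(P, n)

    n, k = 5, 2
    lhs = n_step(P, n)
    rhs = n_step(P, k) @ n_step(P, n - k)   # Chapman-Kolmogorov: sum over intermediate states r
    assert np.allclose(lhs, rhs)

    mu0 = np.array([1.0, 0.0])              # initial distribution Pr(X_0 = x)
    mu_n = mu0 @ n_step(P, n)               # marginal distribution Pr(X_n = x)
    print(mu_n)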

Reducibility

A state j is said to be accessible from state i (written i → j) if, given that we start in state i, there is a non-zero probability that at some time in the future we will be in state j. That is, there exists an n ≥ 0 such that

\Pr(X_{n}=j | X_0=i) > 0.\,

A state i is said to communicate with state j (written i ↔ j) if both i is accessible from j and j is accessible from i. A set of states C is a communicating class if every pair of states in C communicates with each other. (It can be shown that communication in this sense is an equivalence relation.) A communicating class C is closed if the probability of leaving the class is zero, namely if i is in C but j is not, then j is not accessible from i.

Finally, a Markov chain is said to be irreducible if its state space is a communicating class; this means that, in an irreducible Markov chain, it is possible to get to any state from any state.

Periodicity

A state i has period k if any return to state i must occur in some multiple of k time steps and k is the largest number with this property. For example, if it is only possible to return to state i in an even number of steps, then i is periodic with period 2. Formally, the period of a state is defined as

k = \gcd\{ n: \Pr(X_n = i | X_0 = i) > 0\}

(where "gcd" is the greatest common divisor)

If k = 1, then the state is said to be aperiodic; otherwise (k>1), the state is said to be periodic with period k.
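
The period can be computed directly from this definition by collecting the step counts n at which a return has positive probability and taking their gcd. The sketch below is illustrative only: it assumes NumPy, uses an invented two-state chain that alternates deterministically (hence has period 2), and truncates the gcd at a finite horizon, which suffices for small finite chains.

    import numpy as np
    from math import gcd
    from functools import reduce

    def period(P, i, horizon=50):
        """gcd of { n >= 1 : p_ii^(n) > 0 }, truncated at a finite horizon."""
        return_times = []
        Pn = np.eye(len(P))
        for n in range(1, horizon + 1):
            Pn = Pn @ P
            if Pn[i, i] > 0:
                return_times.append(n)
        return reduce(gcd, return_times) if return_times else 0

    P = np.array([[0.0, 1.0],   # invented chain that alternates between its two states
                  [1.0, 0.0]])
    print(period(P, 0))          # -> 2, i.e. state 0 is periodic with period 2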

It can be shown that every state in a communicating class must have the same period.

An irreducible Markov chain is said to be ergodic if its states are aperiodic and positive recurrent (for a finite state space, aperiodicity alone suffices).

Recurrence

A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return back to i. Formally, let the random variable Ti be the next return time to state i (the "hitting time"):

T_i = \min\{ n \ge 1 : X_n = i \mid X_0 = i \}

Then, state i is transient if Ti is not finite with some probability:

\Pr(T_i < \infty) < 1

If a state i is not transient (it has finite hitting time with probability 1), then it is said to be recurrent or persistent. Although the hitting time is finite, it need not have a finite average. Let Mi be the expected (average) return time,

M_i = E[T_i]\,

Then, state i is positive recurrent if Mi is finite; otherwise, state i is null recurrent (the terms non-null persistent and null persistent are also used, respectively).

It can be shown that a state is recurrent if and only if

\sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty
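
This criterion can be illustrated numerically on a small chain with one transient and one absorbing (hence recurrent) state: the partial sums of p_ii^(n) stay bounded for the transient state but grow without bound for the recurrent one. The chain below is invented for the example and the sketch assumes NumPy.

    import numpy as np

    P = np.array([[0.5, 0.5],   # state 0 leaks into state 1, so it is transient
                  [0.0, 1.0]])  # state 1 is absorbing, hence recurrent

    partial = np.zeros(2)
    Pn = np.eye(2)
    for n in range(200):
        partial += Pn.diagonal()      # accumulate p_00^(n) and p_11^(n)
        Pn = Pn @ P

    print(partial)   # roughly [2.0, 200.0]: bounded for state 0, diverging for state 1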

Ergodicity

A state i is said to be ergodic if it is aperiodic and positive recurrent. If all states in a Markov chain are ergodic, then the chain is said to be ergodic.

Steady state analysis and limiting distributions

If the Markov chain is a time-homogeneous Markov chain, so that the process is described by a single, time-independent matrix pij, then the vector π is a stationary distribution if its entries πj sum to 1 and satisfy

\pi_j = \sum_{i \in S} \pi_i p_{ij}

An irreducible chain has a stationary distribution if and only if all of its states are positive-recurrent. In that case, π is unique and is related to the expected return time:

\pi_j = \frac{1}{M_j}

Further, if the chain is both irreducible and aperiodic, then for any i and j,

\lim_{n \rightarrow \infty} p_{ij}^{(n)} = \frac{1}{M_j}

Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins.
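
The relation π_j = 1/M_j can be checked empirically: compute π by repeatedly applying the one-step update, and compare it with the reciprocal of the average return time estimated from a long simulated trajectory. The two-state chain below is invented and the sketch assumes NumPy.

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.7, 0.3],          # invented irreducible, aperiodic chain
                  [0.2, 0.8]])

    # Stationary distribution by repeatedly applying pi <- pi P.
    pi = np.array([1.0, 0.0])
    for _ in range(1000):
        pi = pi @ P

    # Estimate the expected return time M_0 to state 0 by simulation.
    state, returns, last_visit = 0, [], 0
    for t in range(1, 50_000):
        state = rng.choice(2, p=P[state])
        if state == 0:
            returns.append(t - last_visit)
            last_visit = t

    print(pi[0], 1 / np.mean(returns))   # both are close to 0.4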

If a chain is not irreducible, its stationary distribution need not be unique (consider any closed communicating class in the chain: each one has its own stationary distribution, and any of these extends to a stationary distribution for the overall chain by setting the probability outside the class to zero). However, if a state j is aperiodic, then

\lim_{n \rightarrow \infty} p_{jj}^{(n)} = \frac{1}{M_j}

and for any other state i, let fij be the probability that the chain ever visits state j if it starts at i,

\lim_{n \rightarrow \infty} p_{ij}^{(n)} = \frac{f_{ij}}{M_j}

Markov chains with a finite state space

If the state space is finite, the transition probability distribution can be represented by a matrix P, called the transition matrix, with the (i, j)th element equal to

p_{ij} = \Pr(X_{n+1}=j\mid X_n=i) \,

P is a stochastic matrix. Further, when the Markov chain is time-homogeneous, so that the transition matrix P is independent of the time index n, the k-step transition probabilities can be computed as the k-th power of the transition matrix, Pk.

The stationary distribution π is a (row) vector which satisfies the equation

\pi = \pi\mathbf{P}\,

In other words, the stationary distribution π is a normalized left eigenvector of the transition matrix associated with the eigenvalue 1.

Alternatively, π can be viewed as a fixed point of the linear (hence continuous) transformation on the unit simplex associated to the matrix P. Since any continuous map of the unit simplex into itself has a fixed point (by the Brouwer fixed-point theorem), a stationary distribution always exists, but it is not guaranteed to be unique in general. However, if the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. In addition, Pk converges to a rank-one matrix in which each row is the stationary distribution π, that is,

\lim_{k\rightarrow\infty}\mathbf{P}^k=\mathbf{1}\pi

where 1 is the column vector with all entries equal to 1. This is stated by the Perron-Frobenius theorem. This means that as time goes by, the Markov chain forgets where it began (its initial distribution) and converges to its stationary distribution.
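
In the finite, irreducible, aperiodic case both statements are easy to verify numerically: the left eigenvector of P for eigenvalue 1, normalized to sum to 1, gives π, and a large power of P has every row approximately equal to π. The example matrix below is invented and the sketch assumes NumPy.

    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],    # invented 3-state stochastic matrix
                  [0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4]])

    # Left eigenvector of P for eigenvalue 1 = right eigenvector of P.T.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()                # normalize so the entries sum to 1

    print(pi)
    print(np.linalg.matrix_power(P, 50))   # every row is approximately pi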

A Markov chain is said to be reversible if there is a probability distribution π over its states such that, for all states i and j,

\pi_i p_{ij} = \pi_j p_{ji}\,

For reversible Markov chains, π is always a stationary distribution.
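
Reversibility (the detailed balance condition) is straightforward to test once π is known: check π_i p_ij = π_j p_ji for every pair of states. A minimal sketch, assuming NumPy, with an invented birth-death-style chain that does satisfy detailed balance:

    import numpy as np

    P = np.array([[0.5, 0.5, 0.0],     # invented birth-death chain on three states
                  [0.25, 0.5, 0.25],
                  [0.0, 0.5, 0.5]])
    pi = np.array([0.25, 0.5, 0.25])   # its stationary distribution

    def is_reversible(pi, P, tol=1e-12):
        """Check detailed balance: pi_i * p_ij == pi_j * p_ji for all i, j."""
        flows = pi[:, None] * P        # matrix of probability flows pi_i * p_ij
        return np.allclose(flows, flows.T, atol=tol)

    print(is_reversible(pi, P))        # True
    print(np.allclose(pi @ P, pi))     # reversibility implies pi is stationary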

A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent even of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.

Scientific applications

Markovian systems appear extensively in physics, particularly statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.

Markov chains can also be used to model various processes in queueing theory and statistics. Claude Shannon's famous 1948 paper A mathematical theory of communication, which at a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealised models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy coding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. The world's mobile telephone systems depend on the Viterbi algorithm for error-correction, while Hidden Markov models (where the Markov transition probabilities are initially unknown and must also be estimated from the data) are extensively used in speech recognition and also in bioinformatics, for instance for coding region/gene prediction. Markov chains also play an important role in reinforcement learning.

The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages: if N is the number of known webpages and page i has ki outgoing links, then the transition probability from page i is (1-q)/ki + q/N to each page that i links to, and q/N to each page it does not link to. The parameter q is taken to be about 0.15.
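
A toy version of this construction can be written directly from the description above: each row of the transition matrix distributes probability (1-q)/ki over the pages that page i links to, plus q/N over all pages, and the PageRank vector is the stationary distribution of the result. The four-page link graph below is invented for illustration; the sketch assumes NumPy.

    import numpy as np

    # Invented link graph: page -> set of pages it links to.
    links = {0: {1, 2}, 1: {2}, 2: {0}, 3: {0, 2}}
    N, q = len(links), 0.15

    P = np.full((N, N), q / N)                  # jump to a random page with probability q/N
    for i, outgoing in links.items():
        for j in outgoing:
            P[i, j] += (1 - q) / len(outgoing)  # otherwise follow one of page i's links

    # Stationary distribution by power iteration = the PageRank vector.
    rank = np.full(N, 1.0 / N)
    for _ in range(100):
        rank = rank @ P
    print(rank)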

Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first or second order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.

Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions - a process called Markov chain Monte Carlo or MCMC for short. In recent years this has revolutionised the practicability of Bayesian inference methods.

Markov chains also have many applications in biological modelling, particularly population processes, which are useful in modelling processes that are (at least) analogous to biological populations. The Leslie matrix is one such example, though some of its entries are not probabilities (they may be greater than 1).

A recent application of Markov chains is in geostatistics: Markov chains are used in two- and three-dimensional stochastic simulations of discrete variables conditional on observed data. Such an application is called "Markov chain geostatistics", analogous to kriging geostatistics. The Markov chain geostatistics method is still in development.

Markov chains can be used to model many games of chance. The children's games Chutes and Ladders and Candy Land, for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).

Music

Markov chains are employed in algorithmic music composition, particularly in software programs such as CSound or MAX. In a first-order chain, the states of the system are note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm then produces output note values based on the transition matrix weightings; these could be MIDI note values, frequencies (Hz), or any other desirable metric.

1st-order matrix (rows: current note; columns: probability of the next note)

    Note   A      C#     Eb
    A      0.1    0.6    0.3
    C#     0.25   0.05   0.7
    Eb     0.7    0.3    0

2nd-order matrix (rows: previous two notes; columns: probability of the next note)

    Note   A      D      G
    AA     0.18   0.6    0.22
    AD     0.5    0.5    0
    AG     0.15   0.75   0.1
    DD     0      0      1
    DA     0.25   0      0.75
    DG     0.9    0.1    0
    GG     0.4    0.4    0.2
    GA     0.5    0.25   0.25
    GD     1      0      0

A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, n-th order chains tend to "group" particular notes together while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system[1].
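
Using the first-order matrix above, such a generator reduces to repeatedly sampling the next note from the row belonging to the current note. A minimal sketch in Python; the note names and probabilities come from the table, while the function and its defaults are illustrative only.

    import random

    # First-order transition matrix from the table above: row = current note.
    transitions = {
        "A":  [("A", 0.1),  ("C#", 0.6),  ("Eb", 0.3)],
        "C#": [("A", 0.25), ("C#", 0.05), ("Eb", 0.7)],
        "Eb": [("A", 0.7),  ("C#", 0.3),  ("Eb", 0.0)],
    }

    def generate(start="A", length=16, rng=random.Random(1)):
        """Produce a note sequence by walking the first-order chain."""
        notes = [start]
        for _ in range(length - 1):
            choices, weights = zip(*transitions[notes[-1]])
            notes.append(rng.choices(choices, weights=weights, k=1)[0])
        return notes

    print(" ".join(generate()))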

Markov parody generators

Markov processes can also be used to generate superficially "real-looking" text given a sample document: they are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V Shaney).
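
A minimal word-level version of such a generator can be written in a few lines: build a table mapping each word (or, more generally, each n-word prefix) to the words that follow it in the sample text, then walk that table at random. The sketch below uses a single-word prefix and an invented sample sentence.

    import random
    from collections import defaultdict

    def build_table(text):
        """Map each word to the list of words that follow it in the sample."""
        words = text.split()
        table = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def babble(table, start, length=20, rng=random.Random(0)):
        out = [start]
        for _ in range(length - 1):
            followers = table.get(out[-1])
            if not followers:              # dead end: no word ever follows this one
                break
            out.append(rng.choice(followers))
        return " ".join(out)

    sample = "the cat sat on the mat and the dog sat on the log"
    print(babble(build_table(sample), "the"))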

History

Andrey Markov produced the first results (1906) for these processes. A generalization to countably infinite state spaces was given by Kolmogorov (1936). Markov chains are related to Brownian motion and the ergodic hypothesis, two topics in physics which were important in the early years of the twentieth century, but Markov appears to have pursued this out of a mathematical motivation, namely the extension of the law of large numbers to dependent events.

References

  1. ^ Curtis Roads (ed.) (1996). The Computer Music Tutorial. MIT Press. ISBN 0-252-18158-4.
  • A.A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135-156, 1906.
  • A.A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". Reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971.
  • Leo Breiman. Probability. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 7.)
  • J.L. Doob. Stochastic Processes. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.
  • Booth, Taylor L. (1967). Sequential Machines and Automata Theory, 1st ed., New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924. Extensive, wide-ranging book meant for specialists, written for both theoretical computer scientists and electrical engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines, Markov processes, and undecidability. Excellent treatment of Markov processes, pp. 449ff. Discusses Z-transforms and D-transforms in their context.
  • Kemeny, John G., Hazleton Mirkil, J. Laurie Snell, Gerald L. Thompson (1959). Finite Mathematical Structures, 1st ed., Englewood Cliffs, N.J.: Prentice-Hall, Inc. Library of Congress Card Catalog Number 59-12841. Classical text. Cf. Chapter 6, Finite Markov Chains, pp. 384ff.

External links

  • A Markov text generator generates nonsense in the style of another work, because the probability of spitting out each word depends only on the n words before it