Graphical model

In probability theory and statistics, a graphical model (GM) represents dependencies among random variables by a graph in which each random variable is a node, and the edges between the nodes represent conditional dependencies.

In the simplest case, the network structure of the model is a directed acyclic graph (DAG). The GM then represents a factorization of the joint probability distribution of all the random variables. More precisely, if the random variables are

X1, ..., Xn,

then the joint probability is equal to the product of the conditional probabilities:

P(X1, ..., Xn) = P(X1 | parents of X1) · P(X2 | parents of X2) · ... · P(Xn | parents of Xn).

In other words, the joint distribution factors into a product of conditional distributions, one per node. The graph structure encodes the direct dependencies among the random variables: each node is conditionally independent of its non-descendants given the values of its parents (the local Markov property).
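As a concrete illustration, here is a minimal sketch of this factorization in Python. The three-node network (Rain → Sprinkler, with WetGrass depending on both) and all of its conditional probability tables are invented for this example; the point is only that the full joint distribution is assembled by multiplying the per-node conditionals.

from itertools import product

# Toy network: Rain -> Sprinkler, and WetGrass has parents {Rain, Sprinkler}.
# All numbers are made up for illustration.
p_rain = {True: 0.2, False: 0.8}                        # P(Rain)
p_sprinkler = {True:  {True: 0.01, False: 0.99},        # P(Sprinkler | Rain)
               False: {True: 0.40, False: 0.60}}
p_wet_true = {(True, True): 0.99, (True, False): 0.80,  # P(WetGrass=True | Rain, Sprinkler)
              (False, True): 0.90, (False, False): 0.00}

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, WetGrass) as the product P(R) * P(S|R) * P(W|R,S)."""
    p_w = p_wet_true[(rain, sprinkler)]
    return (p_rain[rain]
            * p_sprinkler[rain][sprinkler]
            * (p_w if wet else 1.0 - p_w))

# The factorized joint must sum to 1 over all eight assignments.
total = sum(joint(r, s, w) for r, s, w in product([True, False], repeat=3))
print(f"sum over all assignments = {total:.6f}")   # 1.000000

Because each conditional table is itself normalized, the product automatically sums to one over all assignments, which the final check confirms.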

This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models such as hidden Markov models and neural networks can be regarded as special cases of Bayesian networks.

Graphical models with undirected edges are generally called Markov random fields or Markov networks.
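Concretely, the joint distribution of a Markov network is usually written as a product over the cliques C of the undirected graph,

P(X1, ..., Xn) = (1/Z) ∏_C φ_C(X_C),

where each φ_C is a non-negative potential function on the variables in clique C and Z is a normalizing constant (the partition function). Unlike the directed case, the individual factors need not be conditional probabilities.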

Applications of graphical models include modelling of gene regulatory networks, speech recognition, gene finding, computer vision and diagnosis of diseases.

A good reference for learning the basics of graphical models is Neapolitan, Learning Bayesian Networks (2004). A more advanced and statistically oriented book is Cowell, Dawid, Lauritzen and Spiegelhalter, Probabilistic Networks and Expert Systems (1999). See also belief propagation.
