Estimation of distribution algorithm
From Wikipedia, the free encyclopedia
Estimation of distribution algorithms (EDAs) are an outgrowth of genetic algorithms. In a genetic algorithm, a population of candidate solutions to a problem is maintained as part of the search for an optimum solution. This population is typically represented explicitly as an array of objects. Depending on the specifics of the GA, the objects might be bit strings, vectors of real numbers, LISP-style S-expressions, or some custom representation. In an EDA, this explicit representation of the population is replaced with a probability distribution over the choices available at each position in the vector that represents a population member.
For example, if the population is represented by bit strings of length 4, the EDA would represent the population as a single vector of four probabilities (p1, p2, p3, p4), where each p is the probability of that position being a 1. Using this probability vector it is possible to create an arbitrary number of candidate solutions.
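In Python, sampling candidate solutions from such a probability vector might look like the following sketch (the probability values here are purely illustrative):

```python
import random

def sample_candidate(p):
    """Draw one bit string by sampling each position independently:
    position i is 1 with probability p[i], else 0."""
    return [1 if random.random() < pi else 0 for pi in p]

# Illustrative probability vector for bit strings of length 4
p = [0.5, 0.9, 0.1, 0.5]
candidate = sample_candidate(p)  # e.g. [0, 1, 0, 1]
```

Because each position is sampled independently, this corresponds to the univariate case, where the model captures no interactions between variables.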
In evolutionary computation, new candidate solutions are often generated by combining and modifying existing solutions in a stochastic way. The underlying probability distribution of new solutions over the space of possible solutions is usually not specified explicitly. In EDAs, the population is instead approximated with a probability distribution, and new candidate solutions are obtained by sampling this distribution. This can have several advantages, including avoiding premature convergence and providing a more compact representation of the population.
Better-known EDAs include the Compact Genetic Algorithm, Population-based incremental learning, the Univariate Marginal Distribution Algorithm, and the Estimation of Multivariate Normal Algorithm.
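As an illustration of one of these, population-based incremental learning adapts the probability vector by shifting each entry toward the corresponding bit of the best sampled individual. A minimal sketch (learning rate and vectors chosen for illustration only):

```python
def pbil_update(p, best, lr=0.1):
    """Move each probability a fraction lr of the way toward the
    corresponding bit of the best individual found this generation."""
    return [(1 - lr) * pi + lr * b for pi, b in zip(p, best)]

# Starting from maximum uncertainty, one update toward best = [1, 1, 0, 1]
p = [0.5, 0.5, 0.5, 0.5]
p = pbil_update(p, [1, 1, 0, 1])
# each probability moves 10% of the way toward the best individual's bit
```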
The model may be fitted to an existing population or take on the role of the population entirely. Once the model is obtained, it can be sampled to produce further candidate solutions, which are in turn used to adapt or regenerate the model. EDAs are typically classified according to the level of variable interaction that their probabilistic model includes: univariate (no interactions), bivariate (interactions between pairs of variables) or multivariate (interactions among more than two variables) (Pelikan, 1999).
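The fit-sample-regenerate loop can be sketched for the univariate case as a UMDA-style iteration: select the best individuals, estimate per-position marginal frequencies, and sample a fresh population. The OneMax objective and all parameter values below are illustrative assumptions, not part of any particular published algorithm:

```python
import random

def umda_step(population, fitness, n_select, n_sample):
    """One iteration of a univariate EDA: select the fittest
    individuals, estimate each bit's marginal frequency among them,
    then sample a new population from those frequencies."""
    selected = sorted(population, key=fitness, reverse=True)[:n_select]
    length = len(selected[0])
    p = [sum(ind[i] for ind in selected) / n_select for i in range(length)]
    return [[1 if random.random() < pi else 0 for pi in p]
            for _ in range(n_sample)]

# Illustrative run on OneMax (fitness = number of ones)
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(30)]
for _ in range(20):
    pop = umda_step(pop, sum, n_select=10, n_sample=30)
```

Note that the probability model here is the population: between generations, only the marginal frequency vector carries information forward.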
References
- Larrañaga, Pedro; & Lozano, Jose A. (Eds.). Estimation of distribution algorithms: A new tool for evolutionary computation. Kluwer Academic Publishers, Boston, 2002.
- Lozano, J. A.; Larrañaga, P.; Inza, I.; & Bengoetxea, E. (Eds.). Towards a new evolutionary computation. Advances in estimation of distribution algorithms. Springer, 2006.
- Pelikan, Martin; Goldberg, David; & Lobo, Fernando. A survey of optimization by building and using probabilistic models. Illinois Genetic Algorithms Laboratory (IlliGAL), University of Illinois at Urbana-Champaign, 1999.
- Pelikan, Martin. Hierarchical Bayesian optimization algorithm: Toward a new generation of evolutionary algorithms. Springer, 2005.
- Pelikan, Martin; Sastry, Kumara; & Cantú-Paz, Erick (Eds.). Scalable optimization via probabilistic modeling: From algorithms to applications. Springer, 2006.