Markov logic network

A Markov logic network (or MLN) is a first-order knowledge base with a real number, or weight, attached to each formula, and it thereby implements a probabilistic logic. The weights associated with the formulas in an MLN jointly determine the probabilities of those formulas (and vice versa) via a log-linear model. An MLN defines a probability distribution over Herbrand interpretations (sometimes referred to as "possible worlds") and can be thought of as a template for constructing Markov networks. Inference in an MLN can be performed with standard Markov network inference techniques applied to the minimal subset of the relevant Markov network required to answer the query. These techniques include Gibbs sampling, which is effective but may be prohibitively slow for large networks; loopy belief propagation; and approximation via pseudolikelihood.
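In the standard formulation, an MLN together with a finite set of constants yields a ground Markov network in which every grounding of every formula becomes a feature. Writing w_i for the weight of formula F_i and n_i(x) for the number of true groundings of F_i in a possible world x, the probability of x is given by the log-linear model

    P(X = x) = \frac{1}{Z} \exp\!\left( \sum_{i} w_i \, n_i(x) \right),

where Z is the partition function obtained by summing the exponential term over all possible worlds. Larger weights impose stronger soft constraints; in the limit of an infinite weight, a formula behaves as a hard first-order constraint.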

Markov logic networks have been studied extensively by members of the Statistical Relational Learning group at the University of Washington.
