Talk:Markov network
Needs much more work - discussion of inference, the Hammersley-Clifford Theorem, links, etc. Will add more. Revise away. -- Soultaco, 22:50 24 Dec 2004 (UTC)
Could someone please elaborate on this sentence in the intro?: "It can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies)." —Preceding unsigned comment added by 18.243.2.30 (talk) 09:59, 2 December 2007 (UTC)
Potential functions
- What's the relationship between the potential function and the Gibbs function?
From the current version of the article:
- a set of potential functions φ_1, …, φ_m, where each φ_i maps D_{V_i} to the nonnegative real numbers, each V_i = {v_{i,1}, …, v_{i,k_i}} represents a clique in G, and D_{V_i} is the set of all possible assignments to the random variables represented by V_i. In other words, each clique has associated with it a function from possible assignments to all nodes to the nonnegative real numbers.
I find this needlessly complex and somewhat confusing. Here is how I would word it:
- a set Φ of potential functions φ_{c_1}, …, φ_{c_n}, where each c_i is a clique in G, and D_{c_i} (the domain of φ_{c_i}) is the set of all possible assignments to the elements of c_i. In other words, each clique has associated with it a function from assignments (to each element of the clique) to nonnegative real numbers.
It's not clear to me why the definition needs to mention both i's and v's, or why such schematic letters are even necessary for the definition at all. (Can't we just identify the vertices of the graph with random variables rather than invoking some "representation" relation between them?) However, I'm hesitant to just unilaterally edit the article, as I'm not very knowledgeable about these things. Thoughts? Dbtfz (talk - contribs) 02:21, 4 February 2006 (UTC)
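To make the wording above concrete, here is a minimal Python sketch (toy binary variables A, B, C and made-up potential tables, nothing taken from the article) of a clique potential as a function from assignments of the clique's variables to nonnegative reals, with the joint distribution proportional to the product of clique potentials:

from itertools import product

# Toy example: three binary variables; the graph has cliques {A, B} and {B, C}.
variables = ["A", "B", "C"]

# One potential per clique: a table mapping each assignment of the clique's
# variables to a nonnegative real number.
potentials = {
    ("A", "B"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
    ("B", "C"): {(0, 0): 2.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0},
}

def unnormalized_p(assignment):
    """Product of the clique potentials evaluated at a full assignment."""
    score = 1.0
    for clique, table in potentials.items():
        score *= table[tuple(assignment[v] for v in clique)]
    return score

# Normalizing constant Z: sum of the unnormalized scores over all assignments.
Z = sum(unnormalized_p(dict(zip(variables, values)))
        for values in product([0, 1], repeat=len(variables)))

print(unnormalized_p({"A": 0, "B": 0, "C": 0}) / Z)  # P(A=0, B=0, C=0)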
- I went ahead and revised the passage mentioned above as well as much of the rest of the article. Anyone who is interested, please review and revise as needed. Dbtfz (talk - contribs) 06:02, 10 February 2006 (UTC)
- I am not convinced of the basis and need for the requirement of nonnegative real numbers. Is there a reason (ideally, with a reference) for such a restriction, or could the restriction not rather be omitted? See for example the antiferromagnetic Ising model (an Ising model being considered as a special case of a Markov network). --Chris Howard (talk) 16:46, 19:55, 4 May 2008 (UTC)
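For what it's worth, in the usual Gibbs/Boltzmann parametrization (a textbook convention, not a claim about how the article must phrase it) the antiferromagnetic Ising model does not force negative potentials, because the possibly negative coupling enters through an exponential; a sketch:

% Ising model written as a pairwise Markov network (standard Gibbs form).
% The coupling J may be negative (antiferromagnetic), yet each edge
% potential is an exponential of the energy term and hence strictly positive.
\[
  \phi_{ij}(s_i, s_j) = \exp(\beta J \, s_i s_j) > 0,
  \qquad s_i, s_j \in \{-1, +1\},
\]
\[
  P(s) = \frac{1}{Z} \prod_{(i,j) \in E} \phi_{ij}(s_i, s_j),
  \qquad
  Z = \sum_{s} \prod_{(i,j) \in E} \phi_{ij}(s_i, s_j).
\]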
MRF vs. Bayesian Network
A Markov network is similar to a Bayesian network in its representation of dependencies, but a Markov network can represent dependencies that a Bayesian network cannot, such as cyclic dependencies.
Shouldn't it be such as cyclic independencies? Also, this sentence gives the impression that MRFs are a generalization of BNs, when BNs can actually represent independencies that an MRF cannot [1, p. 393 -- chapter available online]
[1] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. -- 139.165.16.117 at 11:28, 17 July 2007
- I've fixed the latter problem. Took 15:00, 7 August 2007 (UTC)
- Hold up there; it's true that a BN can represent conditional INdependencies that a Markov network cannot, but any Bayesian network can be converted into a Markov network representing the same dependencies (as well as new ones). --12.17.136.181 00:40, 11 August 2007 (UTC)
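For concreteness, the standard v-structure example (cf. the Bishop chapter cited above) of an independence pattern a Bayesian network encodes but no Markov network over the same three vertices can, sketched here with hypothetical variables X, Y, Z:

% Bayesian network X -> Z <- Y (a "collider" / v-structure):
\[
  P(x, y, z) = P(x)\,P(y)\,P(z \mid x, y)
  \quad\Rightarrow\quad
  X \perp\!\!\!\perp Y
  \quad\text{but}\quad
  X \not\perp\!\!\!\perp Y \mid Z .
\]
% Moralization (marry the co-parents X and Y, drop edge directions) turns any
% Bayesian network into a Markov network, but here the moral graph is the
% complete graph on {X, Y, Z}, so the marginal independence X \perp Y is no
% longer readable from the graph.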
Log-linear model
Asterion85 (talk) 22:18, 4 May 2008 (UTC)
The article states that, in practice, a Markov network is often conveniently expressed as a log-linear model, but the formula it gives is not correct. Assuming that
φ_k(x_{k}) = exp(w_k · f_k(x_{k})),
it should be
P(X = x) = (1/Z) exp(Σ_k w_k · f_k(x_{k})), where Z = Σ_x exp(Σ_k w_k · f_k(x_{k})).
For reference: http://www.cs.washington.edu/homes/pedrod/kbmn.pdf
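To check the algebra, here is a small self-contained Python sketch (toy binary variables and made-up features f_k and weights w_k, purely illustrative) verifying that the product of potentials φ_k = exp(w_k · f_k(x_{k})) coincides with the exponential of the weighted feature sum, so the two forms differ only in where the normalization Z is written:

import math
from itertools import product

# Toy binary variables and two made-up feature functions, one per clique.
variables = ["A", "B", "C"]
features = [
    lambda x: 1.0 if x["A"] == x["B"] else 0.0,   # feature on clique {A, B}
    lambda x: 1.0 if x["B"] != x["C"] else 0.0,   # feature on clique {B, C}
]
weights = [0.7, -1.2]

def product_of_potentials(x):
    """phi_k(x_{k}) = exp(w_k * f_k(x_{k})); joint is the product over cliques."""
    return math.prod(math.exp(w * f(x)) for w, f in zip(weights, features))

def log_linear(x):
    """exp of the weighted sum of features -- the log-linear form."""
    return math.exp(sum(w * f(x) for w, f in zip(weights, features)))

assignments = [dict(zip(variables, v)) for v in product([0, 1], repeat=3)]
Z = sum(log_linear(x) for x in assignments)

for x in assignments:
    # The two unnormalized forms agree; dividing by Z gives P(X = x).
    assert math.isclose(product_of_potentials(x), log_linear(x))

print("Both forms agree; Z =", Z)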