Talk:Matroid


Absurd

I don't know anything about matroids, but the definition as stated is clearly absurd:

  • the empty set is independent
  • every subset of an independent set is independent
  • if A and B are two independent sets and A has more elements than B, then there exists an element in A which is not in B and when added to B still gives an independent set

Suppose there exists a nonempty independent set A. Then by the third property with B=\emptyset, there exists an element in the empty set (!) which is not in A. --Saforrest 23:20, Jun 13, 2005 (UTC)

No ... what the third property implies is that there exists an element of A which is not in the empty set, and which, when added to the empty set gives a (now non-empty) independent set. Michael Hardy 01:34, 14 Jun 2005 (UTC)
Yes, I'm an idiot. Thanks. --Saforrest 01:01, Jun 15, 2005 (UTC)
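
A spelled-out instance of that reading, with the smallest possible nonempty A (the symbols here are just for illustration): take A = \{x\} and B = \emptyset. The third property then asserts

    \exists\, y \in A \setminus B = \{x\} \text{ such that } B \cup \{y\} = \{x\} \text{ is independent,}

and the only candidate is y = x, so what the axiom produces is the nonempty independent set \{x\}, not an element of the empty set.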

On a historical note: the current article makes it sound (to me) as if Edmonds single-handedly first showed that matroids have a connection with greedy algorithms. I'm fairly certain this is not true. For example, I think I read in Oxley (?) that Whitney's 1935 paper refers to the idea that in some sense matroids are exactly those structures for which the greedy algorithm does work. Maybe we should add a sentence to this effect (I'm not doing it myself though since I don't have any reference available to double-check exactly who should get credit first, etc.)

There is an article by Richard Rado (1957) that shows the connexion between matroids and the greedy algorithm, generalising an earlier result by Kruskal for graphs.
It certainly wasn't Whitney! Zaslav 01:59, 24 February 2007 (UTC)

Exchange property paragraph question

I have a question about this paragraph:

The first two properties are simple, but the motivation behind the third property is not obvious. The property implies the fact that given two independent sets of the same size, any element of one can be replaced by some element of the other; thus the name "exchange." The examples in the example section below will make its motivation clearer.

I will try to construct a counter-example using the tree/forest/graph example:

Imagine 2 graphs with 5 vertices and 3 edges each:

x---x             x   x
    |     and       \ |
x   x             x   x
 \                   /
  x                 x

This way we have two independent sets of the same size.

Now the above paragraph says: we can replace any element between the two forests and they should stay independent (meaning cycle-free).

But what about this exchange:

x---x             x   x
  \ |     and         |
x   x             x   x  
                   \ /
  x                 x

So now there is a cycle in the left forest --> we no longer have two independent sets of the same size, do we?

So doesn't that mean the above paragraph is wrong?!

Any hint / help / correction would be much appreciated!

Tormen 11:36, 27 January 2006 (UTC)

Right you are. An excellent counterexample. I have no idea what I was thinking when I wrote that - I guess I have no idea why it's called the exchange property after all. A better name would be like "the expansion property" or something. Deco 17:24, 27 January 2006 (UTC)

This is not a correct counterexample. The basis exchange property is valid. You are misunderstanding what it is saying. It says that if you choose an element from the first independent set, there exists an element in the second that you can swap with. It does not say that any two elements can be swapped. Lmdemasi 17:54, 28 January 2007 (UTC)

That is still not correct. Elements are not swapped. The exchange property says there is an element in the second such that, putting it into the first set and removing the original first-set element, you get an independent set. You do not expect anything nice to happen in the second set. Zaslav 02:02, 24 February 2007 (UTC)
True. My choice of the word swap was poor. I was only referring to what you have said. Lmdemasi 14:34, 9 March 2007 (UTC)
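
For anyone who wants to check the two readings mechanically, here is a brute-force sketch over the graphic matroid on 5 labelled vertices (the labels, edge encoding and helper names are mine, purely for illustration). It confirms the reading described above: removing an element from one forest, some element of the other forest can always be added back to restore independence, while a simultaneous swap between two equal-size forests can indeed create a cycle, exactly as in Tormen's pictures.

```python
from itertools import combinations

VERTICES = range(5)
EDGES = [frozenset(e) for e in combinations(VERTICES, 2)]

def is_forest(edges):
    # Acyclicity check with a simple union-find.
    parent = {v: v for v in VERTICES}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edges:
        u, w = tuple(e)
        ru, rw = find(u), find(w)
        if ru == rw:          # this edge would close a cycle
            return False
        parent[ru] = rw
    return True

# All forests = independent sets of the graphic matroid on 5 vertices.
forests = [frozenset(F) for r in range(len(EDGES) + 1)
           for F in combinations(EDGES, r) if is_forest(F)]

# Correct reading: for equal-size forests A, B and any a in A, some b in B
# makes (A - {a}) | {b} a forest again; B itself is left alone.
assert all(any(is_forest((A - {a}) | {b}) for b in B)
           for A in forests for B in forests if len(A) == len(B)
           for a in A)

# Misreading: swapping a and b between BOTH sets at once can fail.
bad = [(A, B) for A in forests for B in forests if len(A) == len(B)
       for a in A - B for b in B - A
       if not (is_forest((A - {a}) | {b}) and is_forest((B - {b}) | {a}))]
print(len(bad) > 0)   # True: such failing swaps exist
```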

non-example

The following non-example was removed as incorrect:

Two maximal three-colorings of different sizes. The one on the left cannot be enlarged because the only remaining vertex is already adjacent to all three colors.
On the other hand, consider this non-example: let E be the vertices of a graph, and let the independent sets be those that can be colored with three colors. The empty set can be three-colored, and any subset of a three-colorable set is three-colorable, but the exchange property does not hold, because it's possible to have two maximal three-colorable subgraphs of different sizes, as shown to the right.

The picture is wrong: the text speaks of colorability, but the picture shows a coloring. The left graph is colorable in 3 colors (only in a different way), hence the (1,2,3) one is not a maximal colorable set. `'mikka (t) 16:27, 30 April 2006 (UTC)

Right you are. I wrote this example and rewrote it to be correct, speaking of colorings instead of colorability. Dcoetzee 01:27, 11 July 2007 (UTC)

column matroid vs. vector matroid

The article currently (Sep 2006) claims that the column matroid of a matrix is the same thing as a vector matroid. This cannot be true. A matrix can have repeated columns, whereas a subset of a vector space cannot have repeated vectors in it. So it seems to me that there is some overlap between the notions, but they differ. Can someone familiar with matroids please fix this? Thanks.--345Kai 04:15, 19 September 2006 (UTC)

Nothing to fix. Even if the matrix in question has a whole bunch of repeated columns, it will still give rise to the same matroid as if it had no repetitions. Or, at any rate, the one based on the matrix without repetitions will be the "simple matroid" of the one with repetitions (because the repeated columns will form "parallel classes" of the ground set E), and hence they are the same for most purposes relating to matroids.
OK, I don't think the article goes into simple matroids and all that. Think of it like this. If two columns are identical, then they form two identical members of the ground set E of the matroid (they are interchangeable when building up the set I of independent sets, which of course correspond to independent vectors in the vector space). So we can effectively reduce the ground set of the matroid by dealing only with non-identical members, by picking one (arbitrary) member from each "parallel class", with a "parallel class" being a group of identical columns. So the vector space which is represented by the two matroids will be the same vector space (the isomorphism is best seen as being between the matroids, not between the matrix and the vector space).
Here's another way to think of it. Imagine each column to be a vector in the vector space (don't worry if two different columns describe the same vector, there's nothing wrong with that, as long as we realise they are describing the same vector). Now, find the maximally independent set of columns (i.e. the basis of the vector space). Take away the repeated columns. Does the vector space change? Of course not. So both matroids will be describing the same vector space, only the vector matroid will be the "simple matroid" of the column matroid (and identical if there are no repeated columns).
I do want to give the article a workover at some stage (it needs it), but thought I should allay your fears now. When I get the chance I'll add some blab about simple matroids (which are, of course, related to simple graphs) and then the troublesome statement can be meaningfully reworded. Byrgenwulf 10:59, 19 September 2006 (UTC)
This explanation is not quite right, but thank you both for bringing up the question. I added a comment to the article. The fact is that a vector matroid, strictly, does not have a repeated vector but it need not be a simple matroid because (a) it can have the 0 vector, which is a loop, and (b) it can have nonzero scalar multiples, which make 2-element circuits. Zaslav 02:22, 24 February 2007 (UTC)
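
To make the point about repetitions concrete, here is a small numerical sketch (the matrix and names are made up for illustration; it uses numpy's matrix_rank). A set of column indices is independent in the column matroid exactly when the corresponding columns are linearly independent, so a repeated column gives a parallel pair (a 2-element circuit) and an all-zero column is a loop, in line with the remarks above.

```python
import numpy as np

# Columns: a, b, a repeated, the zero vector.
M = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])

def independent(cols):
    # A set of column indices is independent iff those columns are linearly independent.
    if not cols:
        return True
    return np.linalg.matrix_rank(M[:, list(cols)]) == len(cols)

print(independent({0, 1}))   # True: two distinct, independent columns
print(independent({0, 2}))   # False: column 2 repeats column 0 (parallel pair, a 2-circuit)
print(independent({3}))      # False: the zero column is a loop
```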

Independent sets definition

The axioms given for "independent sets" are not the standard set of axioms. We have:

The empty set is independent.

Every subset of an independent set is independent; this is sometimes called the hereditary property.

If A and B are two independent sets and A has more elements than B, then there exists an element in A which is not in B and when added to B still gives an independent set; this property is called the augmentation property.

But the standard axioms are:

The empty set is independent

Every subset of an independent set is independent

For each X subset of E, every maximal independent subset of X has the same size.

The first two are clearly the same. I believe the two versions of the third axiom are logically equivalent, but I have never seen it stated the way it is written here. Perhaps someone was confusing it with part of the Bases definitions. Also, I have never heard the term "augmentation property" used. It seems like someone just decided to change to that because "Exchange" didn't fit.

The Bases Definitions are:

There is a basis.

If B_1 and B_2 are bases, and e is in B_1 \ B_2, then there exists f in B_2 \ B_1 such that (B_1 \ {e}) ∪ {f} is a basis.

The second one is called the basis exchange axiom. Lmdemasi 18:18, 28 January 2007 (UTC)

No. The standard axioms are the ones currently on the page. By "standard" I mean the ones proposed by Whitney in his original paper, and those in Oxley's book. In fact, exercise 3 of Section 1.1 of Oxley asks the reader to show that your set of axioms is equivalent to the first one. Also, Oxley mentions the name "independence augmentation property". Wandrer2 16:22, 12 February 2007 (UTC)
Let me add that any definition in terms of "maximal independent sets" is a basis definition, not an independent set definition. The equality of size of two bases is usually a theorem, not an axiom, but its equivalence to the other axioms is important. (This is one more example of how many different equivalent ways there are to define matroids.)
The term "independent set exchange" is more common than "augmentation", even if it doesn't seem to make sense to some (understandably). I will add it. [Oops, I or someone else already added it!] Zaslav 02:07, 24 February 2007 (UTC)
I have recently picked up Oxley's book, and it defines and names things differently than what I have seen before. Lmdemasi 14:38, 9 March 2007 (UTC)
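
Since the equivalence of the two versions of the third axiom keeps coming up, here is a brute-force sketch on a 3-element ground set (the names and the toy size are my choice; it is a finite sanity check, not a proof). It enumerates every hereditary family containing the empty set and confirms that the augmentation axiom holds exactly when, for every subset X of E, all maximal independent subsets of X have the same size.

```python
from itertools import combinations

E = frozenset({0, 1, 2})

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

ALL = subsets(E)

def hereditary(fam):
    # Contains the empty set and is closed under taking subsets.
    return frozenset() in fam and all(S in fam for A in fam for S in subsets(A))

def augmentation(fam):
    # If |A| > |B|, some x in A - B can be added to B.
    return all(any(B | {x} in fam for x in A - B)
               for A in fam for B in fam if len(A) > len(B))

def equal_maximal_sizes(fam):
    # For every X subset of E, all maximal members of fam inside X have the same size.
    for X in ALL:
        maximal = [A for A in fam if A <= X and not any(A < B <= X for B in fam)]
        if len({len(A) for A in maximal}) > 1:
            return False
    return True

# Enumerate every family of subsets of E and compare the two axioms on the hereditary ones.
families = (frozenset(f) for r in range(len(ALL) + 1) for f in combinations(ALL, r))
print(all(augmentation(f) == equal_maximal_sizes(f)
          for f in families if hereditary(f)))   # expected: True
```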

Definition of "coatom"

This article mentions "coatom" without defining what it is. Please fix. - 74.112.174.192 20:00, 25 February 2007 (UTC)

Thank you. It's done. Zaslav 09:12, 27 February 2007 (UTC)

Real matroids and forbidden minors

I didn't know that it wasn't possible to distinguish matroids representable over the reals by finitely many forbidden minors. Is this at all easy to see? Changbao 00:30, 23 March 2007 (UTC)

Matroid union vs Direct sum

The text states:

If E and F are disjoint, the union is the direct sum.

Should that be "the union is isomorphic to the direct sum" or some such? The Wikipedia definition of the disjoint union of two sets does not reduce to the union when the sets are disjoint, and I assume that matroids over two different underlying sets cannot be the same matroid. LachlanA 21:22, 10 July 2007 (UTC)

Minor point, in "non-examples" the picture overlaps the text slightly. Tiny point, but correcting it takes a step closer to perfection. (Unlike my grammer). —Preceding unsigned comment added by 130.216.33.81 (talk) 22:56, 29 January 2008 (UTC)

The Fano matroid

I could be wrong about this, but I'm pretty sure the circuits of the Fano matroid are the sets of three points with curves through them and their complements.

The description given appears to imply that only the sets of three points on curves are circuits. It doesn't necessarily imply this, but it does appear to at first sight.

Can someone tell me if, as I think, the complements of these sets are also circuits? If so, I'd like to edit the page for clarity.

Thanks! James mcl (talk) 16:30, 3 June 2008 (UTC)
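
In case it helps anyone checking this, here is a brute-force sketch (my own labelling of the seven points as the nonzero vectors of GF(2)^3, the usual binary representation of the Fano matroid). It lists the circuits as minimal dependent sets and tallies them by size; it should report seven 3-element circuits (the lines) and seven 4-element circuits (the complements of lines).

```python
from itertools import combinations

# The seven points of the Fano plane, labelled by the nonzero vectors of GF(2)^3.
POINTS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1) if (a, b, c) != (0, 0, 0)]

def rank_gf2(vectors):
    # Rank over GF(2) by Gaussian elimination on the vectors as rows.
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    while rank < len(rows) and col < 3:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

def dependent(S):
    return rank_gf2(S) < len(S)

# Circuits = minimal dependent sets.
circuits = []
for r in range(1, len(POINTS) + 1):
    for S in combinations(POINTS, r):
        if dependent(S) and not any(dependent(T) for k in range(1, r)
                                    for T in combinations(S, k)):
            circuits.append(S)

sizes = {}
for C in circuits:
    sizes[len(C)] = sizes.get(len(C), 0) + 1
print(sizes)   # expected: {3: 7, 4: 7}
```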