Talk:Freiling's axiom of symmetry

The page says:

Given a function f in A, and some arbitrary real numbers x and y, it is generally held that x is in f(y) with probability 0, i.e. x is not in f(y) with probability 1.

How is this probability determined? My intuition would be that x is in f(y) with chance 0.5, since my method of getting a random subset of real numbers would be to give each real number chance 0.5 of being in the set. Apparently there is another way of defining a 'random' set of real numbers being used here, but which is it? Andre Engels 14:11, 27 Jan 2004 (UTC)

f(y) is not a "random" set of real numbers at all (the article says nothing about f being random). However, it is countable; this means that f(y) covers a very small fraction of all possible real numbers and that a "random" real number will be in f(y) (or any other countable set) with probability 0. Cwitty 22:15, 27 Jan 2004 (UTC)
Oops... I missed that 'countable' in the definition. Thanks. Andre Engels 00:50, 29 Jan 2004 (UTC)
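
To spell the measure-zero point out: a countable set C = {c_1, c_2, c_3, ...} can be covered by intervals of total length at most ε, for every ε > 0, for instance

    C \subseteq \bigcup_{n=1}^{\infty} \Bigl( c_n - \tfrac{\varepsilon}{2^{n+1}},\ c_n + \tfrac{\varepsilon}{2^{n+1}} \Bigr),
    \qquad \sum_{n=1}^{\infty} \tfrac{\varepsilon}{2^{n}} = \varepsilon,

so its Lebesgue measure is 0, and a uniformly random point of [0,1] lands in it with probability 0.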

Reference?

I seem to recall that this appeared in the Journal of Symbolic Logic in about 1985. I'll add the reference if I find it, unless someone beats me to it. Michael Hardy 23:34, 23 Jan 2005 (UTC)

Stewart Davidson

The article claims the argument is based on "Stewart Davidson"'s intuition. Who is he? --Aleph4 18:04, 28 May 2006 (UTC)

Never mind. Stuart Davidson is mentioned in the abstract of Freiling's article. --Aleph4 19:13, 28 May 2006 (UTC)

smallest non-zero set

Define κ as the largest cardinal such that

  • For all sets C of cardinality less than κ, it is virtually certain that a random x is not in C. Equivalently, κ is the smallest cardinality of a set D for which the statement
a randomly selected x will be outside D
is not almost surely true.

(In traditional mathematical language this is read as "κ is the smallest size of a set which is not of measure zero". This cardinal is usually called the "uniformity of the Lebesgue null ideal", unif(null) or non(null)).
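
In symbols, and just restating the definition above in the standard notation mentioned:

    \kappa \;=\; \operatorname{non}(\mathcal{N}) \;=\; \min \bigl\{\, |D| : D \subseteq [0,1],\ D \text{ is not of Lebesgue measure zero} \,\bigr\}.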

Let B be the set of all functions mapping numbers in the unit interval [0,1] to subsets of the same interval of cardinality smaller than κ. Let A'X be the axiom stating:

For every f in B, there exist x and y such that x is not in f(y) and y is not in f(x).

Replacing "countable" in Freiling's argument by "of cardinality less than κ" now justifies the axiom A'X. From axiom A'X one can derive (using the function that assigns to each element of D the set of its predecessors, and to all other reals the empty set) that after throwing two arrows at the unit interval, it is virtually certain that not both arrows are in the set D. But as the two arrows are independent, we must be certain that both arrows land outside D. This contradicts the definition of D. -- June 10, 2006. Aleph4
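
To make the function in parentheses explicit (a sketch of how I read it: take D to be a non-null set of cardinality κ and ≺ a well-ordering of D in order type κ, so every initial segment has cardinality less than κ):

    f(z) \;=\; \begin{cases} \{\, w \in D : w \prec z \,\} & \text{if } z \in D,\\ \varnothing & \text{otherwise,} \end{cases}

which puts f in B. If both arrows x and y land in D, then x ≺ y or y ≺ x, i.e. x ∈ f(y) or y ∈ f(x), and the Freiling-style intuition behind A'X regards each of these events as virtually impossible.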

It is well known that such κ = continuum. You seem to think that a proper subset of a set of given cardinality must have smaller cardinality - it is not so for infinite sets, so your argument fails. Leocat 17:44, 22 October 2006 (UTC)
No. He thinks (correctly) that any set can be well-ordered in such a way that the set of predecessors of any element has cardinal less than the whole set. Mind, I don't follow the proof; I don't see how to go from "there exist x and y such that x is not in f(y) and y is not in f(x)" to "almost every x and y are such that x is not in f(y) and y is not in f(x)" as his argument seems to require. The latter statement does of course follow from Freiling's "natural intuition". Ben Standeven 05:49, 8 April 2007 (UTC)

Where is the contradiction with the axiom of choice?

If we replace "countable" by any other statement which implies "of Lebesgue measure zero" we will still get probability = 1 that x is not in f(y) and that y is not in f(x). I do not see any contradiction with the axiom of choice. Leocat 21:26, 21 October 2006 (UTC)

Fix a well-ordering < of the continuum, of minimal order type. Let f map its argument to the set of all smaller elements under <. Now for distinct x and y, either x is in f(y) or y is in f(x). So by AXI it must be that there is some x for which f(x) is not of zero Lebesgue measure, even though its cardinality is less than the continuum. Technically, I don't see any contradiction with the axiom of choice, though. If we assume that f(x) is always a measurable set, we can repeat the above argument on f(x) and thereby set up an infinite descending sequence of ordinals. Ben Standeven 05:37, 8 April 2007 (UTC)
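
In symbols, the f being used here is (roughly)

    f(x) \;=\; \{\, z \in [0,1] : z < x \,\},

so for any distinct x and y, either x ∈ f(y) or y ∈ f(x); if every f(x) had Lebesgue measure zero, the symmetry intuition would apply to this f and be violated, which is why some f(x) of cardinality less than the continuum must fail to have measure zero.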

Countable Additivity of a Dart-Throw Measure

Since the discussion in the section "connection with random forcing" is intuitive, like much of Freiling's argument, it isn't possible to establish properties like countable additivity from axioms. The formal way to argue in set theory is to do something equivalent to Solovay forcing. But having said that, an intuitive property implies that the dart-throw measure is countably additive.

The important thing to realize is that the dart throws can be used simultaneously to find the measures of a collection of disjoint sets and that of their disjoint union, by throwing the dart at the interval and seeing which of the sets it landed in. Either it landed in one of the sets, or else it landed in the complement, and since the sum of the probabilities of all the possibilities has to equal one, the measure of the complement of the disjoint union must equal 1 minus the sum of the measures of the disjoint sets. This is countable additivity. The probability of landing in the complement is 1 minus the probability of landing in the set.

Despite that, I just wrote a bogus argument! I believe it is easy, but I made a mistake just now. 128.84.241.136 (talk) 00:05, 8 January 2008 (UTC)

The reason that the previous argument was bogus is that the probability axioms must properly deal with cases where there are an infinite number of events, each with nonzero probability. To deal with that, you need to include some version of this: if a sequence of events has probabilities whose sum is less than epsilon, then the disjoint union of these events has probability less than epsilon.

The reason I am going to such lengths here is that the axiom that the sum of the probabilities of countably many mutually exclusive events is equal to the probability of the "or" of all of them can be viewed as tantamount to the axiom of countable additivity.

A classical proof of these things uses the compactness lemma: any countable collection of open intervals that covers [0,1] has a finite subcollection which covers [0,1].

The proof: assume not; then for each k there is an x_k which is not in any of the I_r for r < k. The x_k have an accumulation point, and this point lies in some I_s. This is a contradiction: the open interval I_s then contains infinitely many of the points x_k, in particular some with k > s, while each such x_k was chosen not to lie in any I_r with r < k, hence not in I_s.

The accumulation point sublemma can be proved by bisection: given an infinite sequence of points in [0,1], bisect the interval and choose the half which still has infinitely many points. Continue bisecting, choosing the half with infinitely many points. The limit of the bisections is an accumulation point.
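
Here is a small sketch of that bisection search in Python (illustrative only: the function name is mine, and since a program can only look at a finite prefix of the sequence, "the half with infinitely many points" is replaced by "the half with more of the remaining points"):

    # Approximate an accumulation point of a sequence of points in [0, 1]
    # by repeated bisection, keeping at each step a half-interval that still
    # contains "many" of the remaining points.
    def approximate_accumulation_point(points, iterations=50):
        lo, hi = 0.0, 1.0
        current = list(points)
        for _ in range(iterations):
            mid = (lo + hi) / 2.0
            left = [p for p in current if lo <= p <= mid]
            right = [p for p in current if mid < p <= hi]
            # For a genuinely infinite sequence, at least one half contains
            # infinitely many points; here we keep the better-populated half.
            if len(left) >= len(right):
                hi, current = mid, left
            else:
                lo, current = mid, right
            if not current:
                break
        return (lo + hi) / 2.0

    # Example: the sequence 1, 1/2, 1/3, ... accumulates at 0; with a finite
    # prefix the bisection only hones in on a point near 0.
    print(approximate_accumulation_point([1.0 / k for k in range(1, 10000)]))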

Now Lebesgue's lemma: given a countable collection of intervals whose lengths add up to less than 1, there is a point of [0,1] which is not contained in any of those intervals.

Proof: suppose, for contradiction, that the intervals cover all of [0,1]. Fatten each interval up to an open interval by adding a length which shrinks fast enough (say, add δ/2^i to the i-th interval, with δ small) so that the sum of the lengths is still less than 1 even after fattening. The union of these open intervals is still the entire interval, so the previous lemma guarantees that a finite number of them cover [0,1]; but finitely many intervals whose lengths sum to less than 1 cannot cover an interval of length 1, a contradiction.

From this it is possible to establish:

  1. No interval can be covered by a countable collection of intervals whose lengths add up to less than the length of the interval.
  2. The Lebesgue measure of an interval is its length.
  3. The Lebesgue measure of a set is well defined and unique.

None of this required uncountable choice, so there is no contradiction with a Solovay universe, in which all sets are measurable. So this is still true in a more or less objective way.

These theorems translate to a probability theorem: The probability of the "or" of countably infinitely many events is less than or equal to the sum of their probabilities.

Now given disjoint sets S_i with measures (probability for landing) p_i, finite additivity guarantees that the sum of the p_i is less than or equal to the measure p of the disjoint union.

Choosing N large enough, the sum of the p_i for i up to N is within a small residual error e of the limit sum of all the p_i. The remaining sets S_i for i > N then have probabilities summing to less than e, so by the theorem above their union has measure less than e, and hence p is at most the sum of all the p_i plus e. Since e can be made as small as we like, and finite additivity gives the reverse inequality, this proves countable additivity.
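
Written as one chain of inequalities (with μ for the dart-throw measure and p = μ of the union of all the S_i):

    \sum_{i \le N} p_i \;\le\; p \;\le\; \sum_{i \le N} p_i + \mu\Bigl(\bigcup_{i>N} S_i\Bigr) \;\le\; \sum_{i \le N} p_i + \sum_{i>N} p_i \;=\; \sum_{i=1}^{\infty} p_i,

where the first inequality is finite additivity, the second is finite subadditivity, and the third is the countable subadditivity theorem above; letting N go to infinity in the first inequality gives p = Σ p_i.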

The thing is, the proof does need to use some nonconstructive assumptions, but they seem to me to be essential to probability. If you don't assume countable additivity, it is hard to see how to prove things as trivial as that a sequence of coin flips has probability zero of eventually repeating itself in a cycle. This is equivalent to proving that a real number chosen binary digit by binary digit is almost surely irrational. Sorry for going on and on. 128.84.241.25 (talk) 02:29, 8 January 2008 (UTC)
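
For the coin-flip example: a sequence of flips that eventually repeats in a cycle is determined by a finite initial segment together with a finite repeating block, so there are only countably many such sequences, each of which individually has probability 0; countable additivity (in fact countable subadditivity is enough) then gives

    P(\text{the flips are eventually periodic}) \;\le\; \sum_{\text{countably many sequences}} 0 \;=\; 0,

which is the precise sense in which a binary-digit-by-digit random real is almost surely irrational.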