Talk:0.999.../Archive 3

From Wikipedia, the free encyclopedia

This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
← Archive 2 Archive 3 Archive 4 →

DO NOT EDIT OR POST REPLIES TO THIS PAGE. THIS PAGE IS AN ARCHIVE.

This archive page covers approximately the dates between 2005-12-07 and 2005-12-09.

Post replies to the main talk page, copying or summarizing the section you are replying to if necessary.

Please add new archivals to Talk:Proof that 0.999... equals 1/Archive04. (See Wikipedia:How to archive a talk page.)



The crux of the matter:

There is no proof that demonstrates 0.999... = 1 but there is solid proof that demonstrates 0.999... < 1. See the proof by induction if the sysops have not censored it yet. The number 0.999... is not a limit, it is an infinite sum. Even if you were to treat it as a limit, the limit is not equal to the infinite sum. Just try adding the following: 9/10 + 9/100 + 9/1000 + .... add as many terms as you like. You will notice that the terms keep getting closer to zero and the sum keeps getting closer to 1. However, no term ever becomes zero and no sum ever becomes 1. There is no such thing as an infinite sum that can be calculated. There is no definition that says the limit of an infinite sum is the infinite sum for if there were, it would be nonsense and thus untrue. This is a typical misconception.... Unlike this article by Ksmrq that is written in a trust me approach, I am challenging all thinkers and teachers to think for themselves. Examine the terminology, examine the definitions, think long and hard and if you find a flaw, discard everything and start all over again. This page contains proofs that debunk the myths stating the real number system collapses if 0.999... is not equal to 1. I am not asking you to believe me, I am asking you to think for yourselves. Please do not be fooled by arguments presented in group theory and the so-called definition of real numbers. Real numbers existed long before the concepts of groups and fields came into existence. You do not have to pass a course in real analysis or abstract algebra to figure this out. In fact, you do not need to know anything else besides high school math. Do not be intimidated by those who are able to write a lot of BS that is in the first place irrelevant and serves to confuse rather than enlighten. I have studied and passed all these courses and they are worthless. Finally, have a backbone and post your opinion.... When incorrect knowledge is propagated forcefully and the truth is rejected.... 
progress stops.

Who says it is not a limit, but an infinite sum? If I treat it as a limit, it doesn't matter whether it equals the infinite sum. You say, "There is no such thing as an infinite sum that can be calculated." That depends on what you mean by "calculated", but even if we accept that statement, there is no consistent way that any infinite sum can be considered as a real number unless it is equal to the limit of its truncated sums. So you have to choose between saying this infinite sum does not represent a real number, or it represents 1. High school maths tells me that if 0.999... is used to mean a real number, then it's greater than all numbers less than 1. High school maths also tells me that it's smaller than all numbers greater than 1, so there's nothing between it and 1. The most basic understanding of real numbers, from before group theory, analysis or anything like that, tells me that there is a real number in between any two different real numbers, so 0.999... and 1 must be equal. All that the modern definitions do is formalise the way in which we consider 0.999... to be a real number. And while you're at it, high school maths tells me that the induction above proves an infinite number of statements about finite sums, but doesn't prove anything about an infinite sum. JPD (talk) 15:34, 7 December 2005 (UTC)
0.999... is a number. It was first used in the same sense as 0.333..., 0.666..., etc. High school math does not tell you 0.999... is greater than all numbers less than 1. In fact this statement is not true. If you read the above post regarding numbers between 0.999... and 1 and understand it, you will see that both these questions are answered. There are infinitely many numbers between 0.999... and 1. Yes, induction does prove an infinite number of statements about finite sums but there is nothing else we can do: the only thing we can know of an infinite sum is what its limit is (provided it has one). To say that the number 0.999... is equal to 1 is absurd. If two numbers are equal, then their difference is zero. Even an elementary school child can tell you this. What you can say for certain is that the difference between 1 and 0.999... is greater than zero even if it is very close. We don't know how close, we cannot determine how close and frankly we do not care how close it is to zero. The same reasoning applies to 0.333... - this is not equal to 1/3 but it's used as an approximation in base 10 because 1/3 cannot be represented exactly in base 10. In metric spaces we have d(x,y) = 0 => x = y. This is also known as the identity of indiscernibles. If it is true (and it is believed to be true since the reals are classified as a metric space), then 0.999... must be the same as 1. However, it is not the same as 1. So either you discard the identity and have 0.999... = 1 or you keep the identity and note that 0.999... is not equal to 1. Therefore it must be greater or smaller. It can't be greater by the ordering of the real number system, thus it must be smaller. Hence 0.999... < 1. The only proof of equality when we represent numbers using series is that the two numbers are equal if and only if the sums of their partial series are equal term by term, with terms in the right order. How do you determine which of the following numbers is larger: 3.14159265 or 3.14159165?
In any radix system, two numbers are equal if and only if all the coefficients are equal in the polynomial representation.
If there are infinitely many numbers between .999... and 1, name one. (Heck, name .999... of them; same diff!) If the difference between .999... and 1 is greater than zero, what is it? You can go on all day about what numbers fall between them, or what the difference is, but unless you can name them, you don't really have a point. And any mathematician would tell you that, yes, .333... = 1/3 exactly, if and only if the threes repeat forever. --HeroicJay 17:03, 7 December 2005 (UTC)

In answer to your first question:

Let  X = 99
     X/10^2  +  X/10^4  +  X/10^6  + ....                     [Radix 100]
  B= .99     +  .0099   +  .000099 + ...     = .999999...
B is one such number

What is the difference? You cannot calculate the difference because you would have to compute an infinite difference. Arithmetic in radix systems works only on finitely represented numbers.... And 1/3 is not equal to 0.333... (never mind exactly).

Anon, nobody cares about these series being different. They have the same sum. Melchoir 19:21, 7 December 2005 (UTC)
In fact, scratch that, since the words "series" and "sum" are confusing. Let me try this again, and let me repeat myself for emphasis: All of these sequences have the same limit. We don't care about the sequences. We care about the limits. When we write an equals sign, it is not between the sequences. It is between the limits. When we write a less-than sign, it is not between the sequences. It is between the limits. No one cares about the sequences! Scientists have no use for the sequences. Engineers have no use for the sequences. We only care about the limits. Who is we? Everyone you disagree with. What do we care about? Not the sequences! What is it, then? The limits! What do we compare? The limits! When we write "0.999... = 1", are we talking about the sequences? No! What are we talking about? The limits! Need I go on? Melchoir 19:31, 7 December 2005 (UTC)
Ooh ooh, I've got another one! When we write 1/3 = 0.333... I must raise a question! Q: are we saying that the number 1/3 is equal to the sequence (0, 0.3, 0.33, 0.333, 0.3333, ...)? A: no we aren't! What could we possibly mean then? Why, it's not the sequence that we care about at all; it's its limit! Wait a minute, the sequence never "gets there". Oh wait, I forgot, we don't care about the sequence. We care about... its limit! Melchoir 19:36, 7 December 2005 (UTC)
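The sequence-versus-limit distinction above can be sketched numerically. A minimal illustration in plain Python with exact rational arithmetic (the helper name `partial_sum` is my own choosing, not notation from the discussion): every truncation of 0.333... falls short of 1/3, the gap shrinks at every step, and only the limit equals 1/3.

```python
from fractions import Fraction

def partial_sum(k: int) -> Fraction:
    """Exact k-digit truncation of 0.333...: sum of 3/10**i for i = 1..k."""
    return sum(Fraction(3, 10**i) for i in range(1, k + 1))

third = Fraction(1, 3)
gaps = [third - partial_sum(k) for k in range(1, 11)]
assert all(g > 0 for g in gaps)                    # every truncation is below 1/3
assert all(a > b for a, b in zip(gaps, gaps[1:]))  # the gap shrinks at every step
assert gaps[-1] == Fraction(1, 3 * 10**10)         # tiny, but never zero at a finite stage
```

No term of the sequence "gets there"; the assertions only show that the gap drops below any fixed bound, which is exactly the limit statement.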

You may be talking about limits but most people do not talk about limits. They are thinking about the actual sum. Two different things! So although it is correct to say Limit of the partial sums of 9/10;9/100;9/1000,... = 1, it is not correct to say 0.999... = 1 for then it's like saying the sequence is equal to the limit!! Oops I forgot, we don't care about the sequence?

Most people don't talk about limits because most people aren't well-versed in Calculus. --HeroicJay 20:43, 7 December 2005 (UTC)
And I feel that there's something I should say about your .999... which apparently isn't the same as .999..., but I think Melchoir hit the most important points. I mean, is 4 unequal to 4 because you can obtain the sum in two different fashions (1 + 3 or 2 + 2)? --HeroicJay 20:49, 7 December 2005 (UTC)
If "most people... are thinking about the actual sum", then they are thinking of a fiction. In mathematics there are only sequences and limits; anon, you seem to agree that sequences can't be compared with numbers, so we have no choice but to use limits. Melchoir 22:12, 7 December 2005 (UTC)

They are not thinking of fiction: Most numbers can be expressed as a sum/series and are used as approximations in calculations everywhere. You don't hear of computer programs finding the limit of .999... before it is used in a calculation. It is used as is in all calculations. When architects find the area of a circle with radius 2, they do not say it is 2*pi but 6.28 (if pi = 3.14). Limits were not around when numbers were invented and they did not have any need to use limits. Limits are useful in calculus and many other areas. However, when dealing with arithmetic, 0.999... is a number that is often used as an approximation - nothing more than this. Thus in this context it is less than 1. Anything else is absurd. It is wrong to write 0.999... = 1 when it is in fact less than 1.

A finite sum is defined recursively. An infinite sum is a fiction. You can't add up all the nines, and nobody is claiming that you can. But you just said "0.999... is a number". I thought you wanted it to be a sequence? Please make up your mind. Melchoir 23:58, 7 December 2005 (UTC)
If an architect says that the area of a circle with radius 2 is 6.28, then he is wrong. He would probably say so, and in architecture, the difference may be too small to matter, but it is still wrong mathematically. pi is greater than 3.141; we can probably all agree on that. Concerning computer programs, they either use only finite sums, or they indeed use limits. Back to the point at hand: In my calculus class, an "infinite sum" was defined to be the limit of a sequence of finite sums (if that limit exists). Now some seem to disagree with this definition of an infinite sum (and I don't mean Melchoir or HeroicJay here). Maybe I missed it, but if an infinite sum is not the limit of a sequence of finite sums, then what is it? Please give a definition. --80.128.36.128 00:09, 8 December 2005 (UTC) (sorry for being another Anon)
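The remark that programs "either use only finite sums, or they indeed use limits" can be illustrated with a short sketch (plain Python, my own wording in the comments): accumulating the terms in ordinary double precision, the running sum very quickly becomes indistinguishable from 1.0, because the true shortfall drops far below float precision.

```python
# Running partial sums of 9/10 + 9/100 + ... the way a program
# would actually accumulate them, in double precision.
s = 0.0
for i in range(1, 40):
    s += 9 / 10**i

# The true shortfall after 39 terms is 1/10**39, far below the
# resolution of a float near 1.0.  A program "using 0.999..." is
# really using either a finite truncation like this, or the limit
# itself -- and the limit is exactly 1.
assert abs(s - 1.0) < 1e-15
```

This does not prove anything about the real number 0.999...; it only shows why finite truncations and the limit are the two things a computation can actually touch.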
First of all, clearly my patience is wearing thin, and I apologize. It isn't even relevant to this discussion what anyone "wants" things to mean. The string of glyphs "0.999..." is understood to represent a real number. In particular, it is the limit of a certain sequence of rationals, and that limit is 1. The string of glyphs "0.999..." is not meant to represent a sequence of numbers.
Now, to reply to 80.128.36.128: you can certainly give a meaning to the phrase "infinite sum", but it is inevitably misleading and should be avoided. "Infinite sum" suggests an "infinite" version of a finite sum; by contrast, "sum of a series" makes no reference to infinity. Note the linguistic shift: it is the series itself that has a sum, not its terms. Since this distinction is so subtle and so easily abused, it is better to avoid the word "sum" entirely. Melchoir 00:22, 8 December 2005 (UTC)

You are rambling on about a lot of things that don't quite make sense to anyone else but you. The discussion is about what things mean to people and not what anyone wants things to mean. You started out this discussion with a very fine attitude and allowed yourself to deteriorate to the level of the author and your colleagues. I'll say this: you have an admirable command of the English language and your logic is pretty good too. Now why don't you hold onto an open mind? I have far more respect for you than I do for Hardy. I don't agree that Hardy is smarter than you. He may be more experienced and qualified but this does not mean he is necessarily that smart. That Hardy is able to correct my English (or anyone else's) means very little because this discussion is not about English - or is it? Well, if 0.999... is not defined in the same way as 0.333..., this should be made clear in your article. I still maintain that by the generally accepted understanding of 0.999... (understanding is that it is a number, a decimal, an approximation to 1 just as 0.333... is an approximation to 1/3), 0.999... is less than 1. If you want to make a confusing statement like 0.999... = 1, then you need to explain how you arrived at this. You never hear it said that 0.333... = 1/3 means the limit of the partial sums of 0.333... = 1/3. In fact you don't hear it said about any other number except 0.999... What..... Do they think that because they interpret this in a certain way, that everyone else will?...

One issue I believe people have is thinking of infinity. If atoms did not exist and things were just made of 'stuff', and we took a cake and kept dividing it into halves, we would never get a piece that had zero mass, right? But mathematicians say that we can, and that happens in an infinite time (which is practically impossible!). Which is what you have to assume when you look at this, and all its proofs. For now, I believe that 3\times 0.333...=3\times\frac{1}{3}=1 and there isn't much to disagree about on that. x42bn6 Talk 07:16, 10 December 2005 (UTC)

Recall

Many things have been written here, and most of them are sufficiently non-formal to allow any kind of diverging views.

First, I would like to recall the classical construction of the real numbers: a real number is an equivalence class of Cauchy sequences of rational numbers. According to the usual topology of this set, any Cauchy sequence actually converges, and its limit is the real representing the class. If you consider the infinite sequences (0, 0.9, 0.99, 0.999, ...) and (1, 1, 1, 1, ...), they both belong to the same class and hence represent the same real number.

Then, I would also like to recall that 0.999... is not a decimal number (because a decimal number must have a finite number of decimals). Before stating whether 0.999... = 1 or not, you have to give a meaning to the notation 0.999...

If it means the limit of the sequence (0, 0.9, 0.99, 0.999, ...) in the set of the real numbers according to the usual topology (speaking of a limit has no meaning without a ground topological space), then it is of course equal to 1. If you try to expand decimal notation to an infinite number of decimals (it has been done formally in the past), then it is usual to forbid any infinite ending sequence of 9s, as a natural condition for uniqueness of representation. This way, 0.999... is not a well-formed number.

It is also possible to consider other sets of "numbers" with different topologies, for instance non-standard sets. Anyway, until the notation 0.999... is formally given a proper meaning, I do not see what meaning could be given to the equation.

pom 00:42, 8 December 2005 (UTC)
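The uniqueness convention pom describes (forbid any infinite tail of 9s) can be sketched as a finite normalization rule: a prefix followed by recurring 9s denotes the same real as the prefix with its last kept digit position bumped by one unit. A small illustration in plain Python; the function name `normalize` and its string interface are my own invention, not standard notation.

```python
from decimal import Decimal

def normalize(prefix: str) -> str:
    """Rewrite 'prefix' followed by infinitely many 9s as the unique
    representation without the trailing-9 tail, by adding one unit in
    the last kept decimal place.  (Illustrative helper only.)"""
    places = len(prefix.partition(".")[2])        # digits after the point
    return str(Decimal(prefix) + Decimal(1).scaleb(-places))

assert normalize("0.24") == "0.25"   # 0.24999... = 0.25
assert normalize("0") == "1"         # 0.999...   = 1
assert normalize("3.9") == "4.0"     # 3.9999...  = 4 (Decimal keeps the place count)
```

Under this convention every real gets exactly one decimal representation, which is precisely why 0.999... is excluded as a "well-formed" string rather than admitted as a second name for 1.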

Thanks for the info, Taxipom, but I have a problem with your very last sentence. Yes, 0.999... has several conceivable interpretations, but for all the reasonable ones, and certainly for all the standard ones, the equation is meaningful and true.
Except for the standard infinite notation, for which 0.999... is ill-formed: writing the first 9 assumes the number is \geq 0.9 and < 1. The point is that if that is the case, no infinite ending sequence of 9s can occur. pom 10:09, 8 December 2005 (UTC)
At some point we have to agree not to purposefully misinterpret each others' equations. If I subscript the topology for every lim symbol, if I disambiguate all the less-than relations, if I write different symbols for 1 the natural number, 1 the integer, 1 the rational, and 1 the real, then no one will understand me anyway! Ultimately, it's okay to use imprecise notation as long as the ambiguities don't alter the truth of your statements.
On the other hand, it's all too easy to be cute with powerful notation. I've seen "1+1=10" (binary) and "1+1=11" (concatenation), but you know what, at the end of the day, that's not what "+" means, those equations are wrong, and 1+1=2. I can write "0.999... < 1" (comparing alphanumeric strings), but that's not what "<" means either, and that equation is wrong. Melchoir 02:45, 8 December 2005 (UTC)

An observation

You know, it occurred to me, the only reason some people go on about this is because they evoke a response. If you just ignore them, they'll soon get tired of tilting at windmills alone. I see little point in continuing dialogue with people who repeatedly demonstrate they don't understand what they're talking about. I suppose it does no inherent harm for a wikipedia talk page to descend to the level of pop math discussion lists, but it's certainly a waste of valuable time. The following fact is without dispute in the mathematical community:

  • In the complete ordered field of real numbers, assuming nonconstructive arguments (i.e. proof by contradiction), it is a true statement that (9/10) + (9/100) + ... = 1. Give me an epsilon, and I can find an N (I think it should be something around −log10 epsilon) such that all the partial sums past N lie within epsilon of 1. That is the definition of the sum of an infinite series of real numbers. To dispute this, you're either ignorant of the definitions or don't understand them.
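The ε–N recipe in the statement above can be checked mechanically. A sketch in plain Python with exact rationals; the helper names and the `+ 1` safety margin on N are my own choices, not part of the definition.

```python
import math
from fractions import Fraction

def partial_sum(n: int) -> Fraction:
    """Exact finite sum 9/10 + 9/100 + ... + 9/10**n."""
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

def find_N(epsilon) -> int:
    """An N past which all partial sums lie within epsilon of 1,
    following the estimate above: roughly -log10 of epsilon."""
    return math.ceil(-math.log10(epsilon)) + 1

for eps in (Fraction(1, 10), Fraction(1, 10**5), Fraction(1, 10**20)):
    N = find_N(eps)
    # every partial sum past N is within eps of 1
    assert all(abs(1 - partial_sum(n)) < eps for n in range(N, N + 5))
```

This is the whole content of "the sum of the series is 1": for any tolerance, however small, there is a stage beyond which every partial sum is within that tolerance of 1.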

Now, it is true that there are other ways of interpreting the equation. But, given the above context, it is a true statement. If you interpret the statement in the system of hyperreals, where infinitesimals exist, then it is not true, if you "interpret" it the right way. But, that is a different question. If it's sunny out and I ask you if it's sunny out today, and you say "No", and I say "what do you mean", and you say, "because it's raining 5,000 miles away", that doesn't prove it's not sunny, just that you didn't answer the question properly.

As for the link to the constructivist, you are entering another realm. He doesn't even accept proof by contradiction. So, again, this isn't relevant. Revolver 05:15, 8 December 2005 (UTC)

I've lost track... Which "link to the constructivist" are you referring to? Melchoir 05:37, 8 December 2005 (UTC)
Even using hyperreal numbers, one still has 9*[0.999...]=10*[0.999...]-[0.999...]=[9.999...]-[0.999...]=9, so [0.999...]=1. But one can construct 0.999...9 with an i-large number of digits, which is really smaller than 1 and larger than any standard real<1. That is, it has a positive, but i-small distance to 1.--LutzL 09:46, 8 December 2005 (UTC)
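The digit-shift algebra LutzL uses (9·[0.999...] = 10·[0.999...] − [0.999...]) can be replayed at the level of finite truncations with exact arithmetic, which also shows exactly where the finite version picks up an error term. A sketch in plain Python; the function name `x` is mine.

```python
from fractions import Fraction

def x(n: int) -> Fraction:
    """0.999...9 with exactly n nines, as an exact rational."""
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

for n in range(1, 12):
    # Multiplying by 10 shifts the digits and drops one 9 off the tail,
    # so the finite version of 10*x - x = 9 carries an error term:
    assert 10 * x(n) - x(n) == 9 - Fraction(9, 10**n)

# The error 9/10**n falls below every positive bound, which is why the
# manipulation is exact for the limit: 9*[0.999...] = 9, so [0.999...] = 1.
```

This is consistent with LutzL's second point: stopping at an i-large but fixed number of digits leaves a positive (i-small) error term, while the limit has none.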

Concerning "infinite sums" and series, I must retract my previous statements as 80.128.36.128 (before I created an account). Melchoir is right; what I defined above is indeed not usually called an "infinite sum". I made a translation error since English is not my native tongue. I shall henceforth call it the "sum of a series", following Melchoir. Nevertheless, I still believe the main problem of those doubting 0.999...=1 is confusion about the definition of 0.999... So to those who doubt the equality: Please give a definition of 0.999... For comparison: I would define it to be either the sum of the series \sum_{i=1}^{\infty}\frac{9}{10^i}, or, more formally, as the equivalence class of Cauchy sequences containing the sequence (0.9, 0.99, 0.999, 0.9999, ...); with the latter definition, the identity to 1 becomes a statement about Cauchy sequences.--Huon 10:17, 8 December 2005 (UTC)


Same discussion on other sites

This exact same discussion can be found on many sites; programming forums, everything2.com, I've even seen it discussed by Amazon.com reader reviewers. The same valid proofs are offered by the mathematicians each time, and the same erroneous suggestions are given to explain why 0.999... does not in fact equal 1. What makes it interesting is that the demeanor of the two sides is always the same as well; those who refuse to believe the truth of the statement must resort to insults and changing the subject. Several proofs are offered here, along with a lengthy discussion which I'm sure at some point brings up the fact that the biggest problem is in the notation and the connotation of the statement. Even so, many people refuse to accept these proofs. To anyone who does not believe that 0.999... = 1, I challenge you: pick any one of the proofs and give a rigorous explanation of why it is false. Monguin61 11:18, 8 December 2005 (UTC)

I hope you are happy that you have contributed your two cents worth. Your comment is very one-sided (prejudiced).... Insults have been traded both ways. I still maintain that in the general sense, 0.999... < 1 unless you clearly define it as a limit. So what do you want? Do you expect people not to question this given it is used in exactly the same way as 0.333..., 0.666... and is understood to mean 0.9 recurring? I challenge you to pick any of the opposing proofs, in particular the one by induction, and show rigorously how it is false! .....please don't try to philosophize this whole thing away by placing the blame on those who are interpreting it in the only logical way they know. Besides, real numbers were defined long before Cauchy and Weierstrass. In many respects they did everyone a disservice by imposing their erroneous ideas and methods on the mathematics community. Far from real analysis not being open to debate, it is a very shaky subject..... Personally, I would love to see real analysis scrapped for something more rigorous and consistent. --anon

You're right, that was a biased comment, and it was unnecessary. My interest is in debating the question, and nothing more, so I will leave the insults out henceforth. As for the proofs, specifically the induction proof: I believe that the problem we have is in the notation. It is true that for any positive integer k, the partial sum is less than 1. The induction proof is completely valid for all positive integers. The problem is that 0.999... does not represent a number that involves any such positive integer. The ellipsis on the end looks like a simple suggestion, but the fact is, that ellipsis has a clear, precise mathematical interpretation. The ellipsis DOES define the number as a limit. It means that the number 0.999... is defined as the limit of the partial sums as k approaches infinity - a very specific value. This is how real analysis works. What is the debate about here? Is it the merits of the specific number system that we're using? We could certainly reformulate the question in a different system of analysis, but if the debate is about the truth of the statement in standard real analysis, the problem is one of understanding notation. Out of curiosity, does anyone here, on either side of the debate, have some real credentials? I am an electrical engineering student with a strong interest in mathematics. Monguin61 21:09, 8 December 2005 (UTC)

I have "real credentials", but I don't think they're relevant to the issue, and statements of authority seem to be met with hostility from the anons anyway. Melchoir 21:35, 8 December 2005 (UTC)
Although I have no doubt about the "result", I should mention that the fraction proof is not rigorous, as the usual formal definition of multiplication is not the one used in the proof (allowing digitwise multiplication in a recurring decimal when no digit becomes greater than 9). Anyway, I persist in thinking that the main issue is that 0.999... should not be considered as a valid notation for a recurring decimal. This notation is actually derived as a limit of notations of numbers and not as the notation of a limit of numbers! pom 13:11, 8 December 2005 (UTC)
Anon, seriously, enough with the insults. You say "in the general sense, 0.999... < 1". If by "the general sense" you mean that "0.999..." would appear in a dictionary before "1", you're right. But in mathematics, we don't compare strings of symbols. "0.999..." is a name for a real number, and that real number cannot be other than 1. Melchoir 19:59, 8 December 2005 (UTC)
By the way, the "induction" argument, iirc, said that because the partial sums of a series are all less than 1, so is the sum of that series. This is wrong, and it has nothing to do with induction. It seems to me that we all agree that the sum of the series 0+.9+.09+.009+... is 1, so why are you bringing up that old mistake? Melchoir 20:02, 8 December 2005 (UTC)

Once again Melchoir, you are saying the sum is equal to 1 and strictly speaking it is not the sum but the limit of the sum. Why don't you call it what it is? --anon

Let me be very clear: the sum of a series is the limit of its sequence of partial sums. To speak of a "limit of the sum" is to speak of a limit of a limit, which makes no sense. Melchoir 07:04, 9 December 2005 (UTC)
Melchoir beat me to it, but since I already typed all this stuff, and since I give more details: Although it has been done before, the "induction proof" once more. Let me first repeat the proof, so there is no ambiguity. If you prefer me to show a gap in another proof, please give that proof first. It was said:
And yes, I am claiming that (\forall n: \sum_{i=0}^n a_i < x) \Rightarrow \sum_{i=0}^\infty a_i < x [...]
Simple Proof by induction:
We have that k is true:  \sum_i^k a_i < 1 
Is k+1 true?   Yes since   a_{k+1}+\sum_i^k a_i < 1  because no carry is possible.
Thus it follows that we can choose any k and always find that k+1 is true. Q.E.D.
Problem number one: This is not really a proof by induction, since what is used to show the statement for k+1 is more than the induction hypothesis for k (the "no carry is possible" statement is rather meaningless in the general context, where the a_i might be whatever I choose them to be, as long as the finite sums are all less than x). That's not a great difficulty; we might simply specialise to the interesting case, strengthen the induction hypothesis, and claim that \sum_{i=1}^k \frac{9}{10^i}=1-\frac{1}{10^k}<1.
Problem number two: The statement I just gave is indeed, for all natural numbers k, proved by the induction given above (more or less, and probably not rigorous enough to make a pure mathematician happy, but that pure mathematician should be able to fill in the remaining gaps himself.) Unfortunately, that was not what was claimed to be proven. The proof's author claimed to show the implication: Given that \sum_{i=1}^k \frac{9}{10^i}<1 (which we just showed by induction), then we have \sum_{i=1}^{\infty} \frac{9}{10^i}<1. Induction now looks useless, since the statement we want to prove does not even contain a k any more, or any indeterminate but the i which is used for summation only. We also cannot take k=\infty in the statement I just agreed to be true - \infty is not a natural number, and there is no k\neq\infty for which k+1=\infty holds. Thus, we did not show the relevant statement for a precursor of \infty, and induction fails.
Concerning the definition of 0.999..., I now heard it be called a "recurring decimal". To express that in a formula, if I'm not mistaken, it shall mean 0.999...=\sum_{i=1}^{\infty}\frac{9}{10^i}. But what is, to a mathematician, \sum_{i=1}^{\infty}\frac{9}{10^i}? By definition, \sum_{i=1}^{\infty}\frac{9}{10^i}:=\lim_{n\to\infty}(\sum_{i=1}^n\frac{9}{10^i}), and most anons who spoke here before agreed that limit is, indeed, 1. Thus, 0.999...=1. So if you disagree, you probably do not disagree with the proof that the limit equals 1, but either a "recurring decimal" is different for you than for me, or you use another definition for \sum_{i=1}^{\infty}\frac{9}{10^i}. Please specify which is the case, and give your alternative definition.--Huon 20:23, 8 December 2005 (UTC)

You are wrong about the induction proof. It is correct and we do not have to consider P(infinity). As for you saying that "no carry is possible" is meaningless, this is untrue. It is just as valid as any other mathematical statement. It is also false that numbers are used as the limit of their sequences/series. Real analysis may have defined numbers this way, but in no practical application are numbers seen as anything other than finite representations; besides, numbers were around long before real analysis, and they were certainly not perceived in any way as the limit of their sequences/series. 0.333... is used as 0.3333 (with a finite number of 3s behind it). pi, e, sqrt(2) and any other irrational number is used in decimal computations with a finite number of digits following the radix. The article makes an outrageous statement: "...fact that the recurring decimal 0.9999… equals 1, not approximately but exactly" Nothing is said about the limit of a sum and the recurring decimal is definitely not equal to 1 but it is less than 1. 0.999... is generally perceived the same way as 0.333... and as any other number since the time of Archimedes and before I am certain. No one thinks of a limit of anything when dealing with numbers. The limit of 9/10+9/100+9/1000+... is equal to 1 but the actual infinite sum that cannot be computed is less than 1. —Preceding unsigned comment added by 71.248.129.246 (talk • contribs) 22:48, 2005 December 8


What exactly are the practical applications of the number 0.999...? We aren't discussing what happens when you sit around adding nines on paper forever, we are discussing the definition of the string of symbols "0.999...". Additionally, you can NOT define 0.999... as having a finite number of digits, because you do not provide the actual number of finite digits. Indeed, the "..." is an explicit statement that the number of digits is not finite. The limit of the sum as the number of terms approaches infinity is equivalent to the infinite sum. That is the definition of the "infinite sum," as there is no other definition that is useful, intuitive, and consistent. You say that the infinite sum cannot be computed. The sum cannot be found by continually adding nines, but the value of the infinite sum can be found. You mention decimal computations involving numbers with a finite number of digits after the decimal point. The number we are dealing with clearly does not fall into that category, nor do pi, e, or sqrt(2). There is a framework in place for dealing with certain types of numbers that have an infinite number of digits after the decimal, and that framework tells us that 0.999... = 1. Monguin61 00:52, 9 December 2005 (UTC)


Concerning "no carry is possible": Let me give an example. Choose a_i=1/(2^i), x=3. Then definitely both the finite sums over the a_i and their limit are less than x. But now let us have a look at one of the finite sums: \sum_{i=0}^3 a_i=1.875. Now when I add a_4=0.0625, is there no carry? Now it will probably be said that I was not allowed to choose a_i and x other than a_i=9/(10^i) and x=1 - but that was not part of the statement said to be proven above, and the sums and series in the statement to be proven even start at i=0. But that's a side issue; I just mentioned it to show there had to be some unstated restrictions on the a_i in order to give meaning to the "no carry possible" argument.
Besides, my main point of critique was not addressed: the proof does not show the assertion it claims to show - an assertion in which a series appears, not only finite sums.
Now on to the real point: What is a number, and what number is 0.999...? I agree the concept of numbers is older than the precise definition of the real numbers. I also agree that approximation of a rational or irrational number by a finite decimal expansion may be useful in many, say, "real-world" problems, as small errors often do not matter (for example, the architect mentioned above probably won't care about errors of less than a tenth of a millimetre). But the statement that a recurring decimal, much less an irrational number, is thought of as a fraction of the type n/(10^m) for natural numbers n, m (which is what a finite number of digits yields), is rather strange. If 0.333... truly meant 0.3333, then why shouldn't one substitute another 3 for the three dots? That would even be shorter to type. Instead, the three dots represent not some unspecified but finite number of 3's, but infinitely many.
By the way, modern computer algebra programs are definitely able to handle (at least some) irrational numbers exactly, without resorting to finite decimal approximations. I am no expert in that field, but as an example, I would point to Mathematica.
As another aside, even the ancient Greeks knew of these problems; see Zeno's paradoxes. --Huon 00:09, 9 December 2005 (UTC)
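Huon's remark about computer algebra systems can be made concrete without Mathematica. The following Python sketch (an editorial illustration; the tiny `QSqrt2` class is invented here for the example) models numbers of the form a + b·sqrt(2) with exact rational coefficients, showing that an irrational number can be manipulated exactly, with no finite decimal truncation anywhere:

```python
# Exact arithmetic in Q[sqrt(2)]: numbers of the form a + b*sqrt(2) with
# rational a, b. This is the kind of symbolic representation computer
# algebra systems use instead of finite decimal approximations.
from fractions import Fraction

class QSqrt2:
    """A number a + b*sqrt(2), stored as exact Fractions a and b."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

sqrt2 = QSqrt2(0, 1)
assert sqrt2 * sqrt2 == QSqrt2(2)   # sqrt(2) squared is exactly 2, no rounding
```

The point is only that "irrational" does not force "approximated": the representation carries the number exactly, just as a recurring decimal carries its limit exactly.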
The "proof by induction" is neither a proof nor an induction.
What is a proof by induction: it consists of proving a statement of the form \forall n P(n) by proving P(1) and then P(k)\Rightarrow P(k+1) for k\geq 1. Here, you want to prove (\forall n: \sum_{i=0}^n a_i < x) \Rightarrow \sum_{i=0}^\infty a_i < x, which has no such form (nor is it the same as something like \forall n: (\sum_{i=0}^n a_i < x \Rightarrow \sum_{i=0}^\infty a_i < x), which is also obviously false). To make the point more intuitive: I can prove that any integer n is finite. This does not imply that \infty is finite. The statement (\forall n: \sum_{i=0}^n a_i < x) \Rightarrow \sum_{i=0}^\infty a_i < x is definitely false: let x=\sum_{i=0}^\infty a_i. If the sequence a_i consists only of strictly positive values, then for any n, \sum_{i=0}^n a_i<x, but you cannot deduce \sum_{i=0}^\infty a_i<x, i.e. x < x. In general, if you have a limit (any infinite sum is a limit), you cannot deduce a strict inequality for the limit from a strict inequality for each element of the sequence. The only statement which is true is (\forall n: \sum_{i=0}^n a_i < x) \Rightarrow \sum_{i=0}^\infty a_i \leq x.
I would not like to be discourteous, but all of this is quite well explained in any basic undergraduate mathematics course. pom 00:25, 9 December 2005 (UTC)
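pom's key point, that termwise strict inequality need not survive passage to the limit, can be checked with exact rational arithmetic. A minimal Python sketch (an editorial illustration only):

```python
# Every finite partial sum of 9/10 + 9/100 + ... equals exactly 1 - 10**-n,
# hence is strictly less than 1. Yet the limit of these partial sums is
# exactly 1: the strict "<" holds for each term but not for the limit.
from fractions import Fraction

def exact_partial(n):
    """Exact value of sum_{i=1}^{n} 9/10^i as a Fraction."""
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

for n in range(1, 8):
    assert exact_partial(n) == 1 - Fraction(1, 10**n)
    assert exact_partial(n) < 1
print(exact_partial(5))
```

This is precisely pom's concluding statement: from \forall n: \sum a_i < x one may conclude only \leq for the limit, and here the limit attains equality.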


Aside from the already given rigorous epsilon proofs, here's one that appeals to common sense once more:
If the inductive proof were correct, then it would likewise follow that
\lim_{n\to\infty}{1\over n}\ne 0, because \forall n\in\mathbb{N}\qquad {1\over n}\ne 0. Guardian of Light 02:00, 9 December 2005 (UTC)


Either prove or disprove the statement (0.999... = 1). People can talk forever about anything in math, but it means nothing. In addition, this page needs to be closed. This is like people claiming they can trisect any given angle. No, you can't, period.

Sure you can. You just need a protractor to do it. You can't trisect an angle with only a straight edge and a compass (though you can bisect it), but that's not what you said. --HeroicJay 18:39, 9 December 2005 (UTC)

Melchoir and Anon Only Please!

Please do not respond to this section if you are not Melchoir.

Fine Melchoir, I am about to concede defeat, but not without one final attempt. If 0.999... is a limit (i.e. not a recurring decimal), then why is it not written that way? Why does the Wiki article not make this clear? The article states that 0.999... is a recurring decimal, not a limit. What meaning do we give to actual sums (whether finite or infinite) as opposed to their limits? Hardy stated in one of his responses that he could not understand an infinite sum unless it were defined in terms of its partial sums (there is some sense in this, since nobody can understand or compute an infinite sum). It seems this is what you all understand. Well, let's say this is true. Why do you use terminology such as "infinite sum" and "limit" interchangeably? Don't you think this is confusing to any learner? Why not call something exactly what it is? See, originally I believe that mathematicians wrote Lim (n -> infinity) Sigma (i=0:n) 9/10^i = 1. You are saying that they then got lazy and started writing Sigma (i=0:infinity) 9/10^i = 1? And then, even lazier, started writing 0.999...? Finally, if 0.999... is a recurring decimal, then it must be less than 1; if instead it is a limit, then its value is 1.

Who are you to say who can respond? JPD (talk) 16:39, 9 December 2005 (UTC)
You are not helping, and for the very reason you answered the way you did, I asked you nicely not to respond. The same goes for LutzL - you are not telling me anything I don't know, and you have not been following the conversation between Melchoir and me. Besides, I cannot keep responding to more than one person; it clutters up this space. If you feel you must respond, please do so by creating another section. I am asking you nicely.
I've been following the discussion between everyone. I am sorry you don't find what I said helpful, but I had serious questions. You say that if it is a recurring decimal it is not a limit, yet a recurring decimal represents a limit. Melchoir has indeed put it well below, but if you want to have a conversation with him, then do it on his talk page. This page is for discussion between all of us, not conversations with whoever you want to talk to. Please do not remove people's comments. JPD (talk) 14:41, 10 December 2005 (UTC)
What's a recurring decimal, if it's not a limit? Why not introduce simpler notation? It's only confusing if there's something else it could mean, which there isn't. And finally, most mathematicians don't write 0.999... except when trying to explain about how representing some numbers as recurring decimals (which are limits) ends up being consistent with what we know about real numbers, because 0.999...=1. JPD (talk) 16:39, 9 December 2005 (UTC)

Okay JPD: A recurring decimal is an attempt to represent a given number in a radix system in which it cannot be represented exactly. It is not the same as a limit and never was. Recurring decimals existed long before limits and real analysis.

Without context, 0.999... is just some decimal beginning with three nines after the decimal point, so it could be anything inside the interval [0.999,1]. Here we talk about 0.\overline{9}=\sum_{k=1}^\infty 9\cdot 10^{-k}=\left\{\sum_{k=1}^n 9\cdot 10^{-k}\right\}_{n\in\mathbb N}=\left\{1-10^{-n}\right\}_{n\in\mathbb N}. This infinite series, which is the sequence of its partial sums, has, as does any infinite decimal representation, a limit in the real numbers, and this limit is 1. Infinite digit sequences represent the real numbers that are the limits of the corresponding series. Please look up any analysis (science, not engineering) textbook to find exactly this notation and interpretation confirmed. --LutzL 17:19, 9 December 2005 (UTC) -typo- LutzL 17:23, 9 December 2005 (UTC)
Well, I don't think we have to make a choice between two exclusive viewpoints that 0.999... is either a recurring decimal or a limit, but not both. Instead, we could also say:
  1. The string of symbols "0.999..." represents a decimal expansion whose ones digit is 0 and whose every decimal digit is 9.
  2. That decimal expansion, in turn, represents the limit of a certain sequence, and that limit is 1.
Consider, for example, the conceptually simpler example:
  1. The string of symbols "1+2+3+...+9" represents the expression "1+2+3+4+5+6+7+8+9".
  2. The expression "1+2+3+4+5+6+7+8+9", in turn, represents the result of a summation, and that result is 45.
Just because we can interpret the "..." in two steps doesn't mean we can't write 1+2+3+...+9 = 45. Likewise, whether you interpret 0.999... immediately as a limit of a sequence or you go through the middle step of a recurring decimal, ultimately it represents 1. Melchoir going through 85.195.123.22 20:42, 9 December 2005 (UTC)

Melchoir: I remain unconvinced, but thank you for your feedback. I believe we have a problem of definition. Hardy stated that a limit is treated the same as an infinite sum. I can see that you are taking the same position. Terminology is very important in my opinion, and even a non-mathematician can tell there is a difference between an infinite sum and the limit of an infinite sum. What I understand from our exchanges is that, as far as you (and the math academia) are concerned, 0.999... is defined as the limit of the partial sums of 9/10^i (from i=1 to infinity). --anon

Yes, it is. But I think it's also important that this definition as a limit is not arbitrary, and its usefulness is not limited to math academia. If you define decimal expansions as formal series, then the set of decimal expansions is not closed under any of the arithmetic operations; it lacks the capability of expressing 1/3; it treats positive and negative numbers differently; it places an unnatural emphasis on the number 10; it contains pairs of elements that can't be separated by rational numbers or by physical measurements; and I'm sure there are plenty of other terrible problems. Such "numbers" would not be useful to anyone! On the other hand, (the modern formulation of) the real numbers have all sorts of nice properties. So if we interpret decimal expansions as naming real numbers through (limits of) Cauchy sequences, then decimal expansions become useful. If you like, the definition is a matter of pragmatism. Melchoir 22:28, 11 December 2005 (UTC)