Talk:Mathematical analysis
Measure
What about the measure-theoretic foundations of probability theory? The section on probability is sorely lacking on this point!
- —The preceding unsigned comment was added by 156.40.36.34 (talk • contribs) 23:42, March 26, 2003 (UTC).
Also, the link to measure should go to measure_(mathematics).
- —The preceding unsigned comment was added by 131.215.167.105 (talk • contribs) 02:58, August 10, 2004 (UTC).
False friends
As a native French speaker who's had little exposure to English terminology, I wonder what the differences in meaning and use between "analysis" and "calculus" are. I know that "calculus" is more frequent than "analysis", while "analyse" is more frequent in French than "calcul" in the sense of "calculus", but I don't know much more. _R_ 00:23, 16 Oct 2004 (UTC)
- Briefly, calculus is beginning analysis, or analysis is advanced calculus. Calculus concentrates more on computation (primarily involving derivatives and integrals) and is taught to advanced high school students, or typically first- or second-year college students, especially engineering students. Analysis is more theoretical and concentrates more on proofs, and is taught in a series of courses following calculus, beginning typically with undergraduate math majors and continuing through graduate school. Whether these terms constitute "faux amis" I can't say; my French is très mal ;-) Paul August 02:28, Oct 16, 2004 (UTC)
Thanks! It seems then that "analysis" always translates to "analyse", and "calculus" (alone) also translates to "analyse", while "differential calculus" (resp. "integral", etc.) translates to "calcul différentiel" (resp. "intégral", etc.). The translation problem is further complicated by the fact that "calcul" usually means "computation" or sometimes "counting". Interwikis to and from fr: may become quite a mess... _R_ 14:12, 16 Oct 2004 (UTC)
EUROCENTRISM: History of Mathematical Analysis
" Historically, analysis originated in the 17th century, with the invention of calculus by Newton and Leibniz. "
This is not only POV but false.
One reference is here: [1]
- While I agree there should probably be some mention of other origins of analysis on the page, giving Madhava priority over Newton and Leibniz doesn't seem very warranted (has his influence been verified?), and I think we should take down the POV sticker: have this debate elsewhere, like in the 'Indian contributions' section of the 'History of mathematics' page. Amp
- Interesting. See this MacTutor History article for a more balanced appraisal of Madhava's work, which does seem to be impressive. Perhaps the History section should start with a reference to Madhava, followed by saying that calculus was "independently invented" by Newton and Leibniz (I assume there is no suggestion that they knew of Madhava's work?). Gandalf61 11:38, Jun 15, 2005 (UTC)
- Response: It is quite possible that it was developed independently by Newton and Leibniz, but I do not agree that Madhava should not be mentioned in the History of Mathematical Analysis at all and should be limited only to the History of Mathematics, especially when some historians of mathematics have written things like:
" ...We may consider Madhava to have been the founder of mathematical analysis. Some of his discoveries in this field show him to have possessed extraordinary intuition. [GJ, P 293] " [G Joseph]
This page gives the wrong impression that mathematical analysis started with Newton and Leibniz, while Madhava pre-dated them by 300 years. I am restoring the POV tag.
- To clarify, when I mentioned "the History section" I meant the History section within the Mathematical Analysis article itself. I don't think anyone is suggesting that Madhava should not be mentioned in this article. Anyway, in an attempt to move things along, I have added a paragraph on Greek and Indian contributions to analysis that pre-date Newton/Leibniz, including Madhava. And I've removed the POV tag. Gandalf61 15:39, Jun 16, 2005 (UTC)
Mathematical/Real Analysis
This article is biased and inaccurate. There should be no mention of Dedekind, Weierstrass or anyone else besides Archimedes. Dedekind contributed nothing with his theory of cuts. He was considered an idiot by his fellow mathematicians. As for Weierstrass, his attempts to 'rigorize' limits using epsilon-delta proofs have resulted in more confusion than rigour. First, Weierstrass dismisses infinitesimals and then in the same breath introduces terminology such as "...as close as you like...". The common epsilon-delta definition of a limit is conceptually flawed: to say that epsilon and delta can be as small as you like but not exactly zero directly implies the existence of infinitesimals and is thus contradictory. How close is 'close'? The theory does not strictly conform to 'reality', for there is a number that follows 0 which is less than every positive number even though we can't find it. If this number does not exist, it implies a discontinuity in the real interval (0,1), which is of course untrue. Thus infinitesimals exist and Weierstrass's assumptions are false. To say that numbers can be as small as you like, as long as they are not exactly zero, is neither rigorous nor precise. Archimedes did not believe in infinitesimals, yet he used the idea of infinitesimals in his method of exhaustion. Mathematics is no more rigorous today than it was in the time of Archimedes. --68.238.102.180 11:18, 29 October 2005 (UTC)
- No, not biased, not inaccurate. Weierstrass's concepts are easy to state precisely with quantifiers: for all ε there exists a δ. This has been in textbooks for well over 100 years. Charles Matthews 11:46, 29 October 2005 (UTC)
- This is wrong. The meaning of "as close as you like" always applies to sets or sequences of numbers, not to numbers themselves. The definition of limit thus isn't flawed and doesn't need infinitesimals. Samohyl Jan 11:50, 29 October 2005 (UTC)
Yes, it's been in textbooks for well over 100 years, but it's still wrong. "As close as you like" applies to numbers, not sets. Just look at how both of you responded: for one thing, you stated the definition incorrectly, for it does not say "for all epsilon there exists a delta". Read the definition carefully. Here is a common example where it fails (but there are others):
If and only if for each epsilon > 0 there exists a delta > 0 with the property that
|f(x) - L| < epsilon whenever 0 < |x - a| < delta
then
lim (x->a) f(x) = L.
This is the definition. Now consider the function f(x) = |x| and let's investigate L = 0.
Then |x - 0| < epsilon whenever 0 < |x - 0| < delta.
This implies that
lim (x->0) f(x) = 0 (choose any epsilon you like and set delta equal to it)
hence, by this nonsense Weierstrass definition, f(x) = |x| could essentially have any limit you like. The value of L in this case is quite irrelevant, since we can always find a delta and epsilon greater than zero such that the above definition is true. Now it is taught that f(x) has no derivative at zero, which is in total contradiction to the Weierstrass definition that is used! [In actual fact the derivative of this function is zero at x=0, but you cannot use Weierstrass's definition to prove it. You will require truly rigorous mathematics to show this fact.]
- You are incorrect. The definition:
- |f(x) - L| < epsilon whenever 0 < |x - a| < delta (1)
- becomes, for the absolute value function at 0, the following
- |0 - L| < epsilon whenever 0 < |x - 0| < delta
- Clearly this will fail to hold if L≠0; just choose epsilon = |L|/2. You made an elementary mistake, I would say. Oleg Alexandrov (talk) 14:33, 29 October 2005 (UTC)
Nonsense. You cannot have x=0. The above as I stated it is perfectly correct. By the definition you cannot say |0 - L| < epsilon. Seems you made an elementary mistake. -- unsigned by 68.238.102.180.
- Sorry, you are right. Let us start again:
- |f(x) - L| < epsilon whenever 0 < |x - a| < delta
- For the absolute value of x, and a=0, this becomes
- | |x| - L| < epsilon whenever 0 < |x - 0| < delta
- You say this still holds no matter what L is. Well, assume L=1. Let epsilon=0.5. Choose delta>0, and let x=min(delta/2, 0.4).
- One has 0 < |x - 0| < delta clearly, but it is false that
- | |x| - 1| < 0.5 . Oleg Alexandrov (talk) 15:11, 29 October 2005 (UTC)
- PS I will be going to bed soon. I might not see your reply for a while, but I think others will pick up this conversation. Oleg Alexandrov (talk) 15:04, 29 October 2005 (UTC)
I still see a problem with this: If you assume L=1, then you cannot let epsilon = 0.5 for then ||x| - L | is not less than epsilon. You in Europe/Russia? Good night. -- unsigned by 68.238.102.180.
- So, is it so hard to stick four tildes in there for your signature? Come on, give it a try. I am in South Korea now, and it is half an hour after midnight.
- Of course you don't have ||x| - L | less than epsilon. That's the whole point. That is not the assumption, that is the conclusion which you said above holds for any L. And now you admitted yourself it does not. Just to clarify: the definition of the limit being equal to L is:
- for any epsilon > 0 there exists delta>0 such that if 0 < |x-0| < delta then | |x| - L| < epsilon.
- So, you give me L which is not 0, say L=1, and I have the right to choose epsilon and x as I see fit. I chose them in such a way that the implication of that statement is false. That means that you can't have L=1 as you stated; actually, no nonzero L will work. Oleg Alexandrov (talk) 15:41, 29 October 2005 (UTC)
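A quick numerical illustration of the definition just stated (a minimal Python sketch; the helper name condition_holds, the sample grid, and the values epsilon = 0.5 and delta = 0.5 are arbitrary choices echoing the discussion above):

    # Test the epsilon-delta condition for lim (x->0) |x| = L at many sample points.
    def condition_holds(L, epsilon, delta, samples=1000):
        # Check | |x| - L | < epsilon for x with 0 < |x - 0| < delta.
        for i in range(1, samples + 1):
            x = delta * i / (samples + 1)   # points in (0, delta)
            for s in (x, -x):
                if not abs(abs(s) - L) < epsilon:
                    return False
        return True

    # L = 0 passes with delta = epsilon; L = 1 fails for epsilon = 0.5 (e.g. x = 0.4).
    print(condition_holds(L=0, epsilon=0.5, delta=0.5))   # True
    print(condition_holds(L=1, epsilon=0.5, delta=0.5))   # False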
- Even if you don't like epsilons and deltas, I can't imagine taking them out of an article called "Mathematical analysis". Mathematical analysis means epsilons and deltas and such. If there is any significant opposition, it should say something like "A few object to mathematical analysis on intuitionist grounds." Art LaPella 17:01, 29 October 2005 (UTC)
Not suggesting that you remove it completely but rather that you state the facts correctly: Weierstrass's attempts were to rigorize calculus. I would say he failed and that his work is mostly in error. 68.238.102.180.
- That's a good way of saying it. One cannot find logical flaws in mathematical analysis (the above attempt failed, as far as I can tell). One can object to analysis only on metaphysical/philosophical grounds. Oleg Alexandrov (talk) 01:33, 30 October 2005 (UTC)
No. You cannot choose epsilon indiscriminately. Why? Well, you have to start with an epsilon that makes the above statement true, i.e. | x - 0 | < e whenever | x - 0 | < d. Epsilon gives you some insight into the choice of delta. Your first epsilon must make the first part of the definition true, for otherwise you have a false assumption and everything else that follows is also false. See? In the above example, all you need to do to make it all true is set delta = epsilon. And then once you have done this, you will realize that you can essentially show it is *true* for any L, which is of course *false*. You start with epsilon, not delta. It is a bit confusing that it states |f(x) - L| < e whenever |x-a| < d, because you find a d using e and then turn the statement around so that it reads as it does above in the definition. There is most definitely a problem with this absolute value function when you try to use Weierstrass e-d proofs, even though this tends to work correctly with most (but not all) other functions. 68.238.102.180.
- As was stated above, the definition you use is incorrect. It is | f(x) - 0 | < e whenever | x - 0 | < d, not | x - 0 | < e whenever | x - 0 | < d. No wonder you get it wrong. Maybe you should understand the definition correctly first before stating that mathematicians are all wrong. ;-) If you really want us to help you (and Oleg did above a pretty good job), you should admit this. As for the understanding, I would recommend you to understand limit of sequence first. Samohyl Jan 14:43, 30 October 2005 (UTC)
No, you are wrong. The way I stated it is perfectly correct. You evidently have no idea what you are talking about. Let me repeat: It is true that | x - 0 | < e whenever | x - 0 | < d. This is exactly the same as: | f(x) - 0 | < e whenever | x - 0 | < d. You obviously did not think about it eh? In this case f(x) = |x| and | |x| - 0 | = | x - 0 |. Thus I have stated it perfectly correct and it is you who needs to pay attention! 68.238.102.180.
- Yes indeed. You are making mistakes which would have you fail a real analysis course, and then say that real analysis now is no better than in the time of Archimedes. :) Oleg Alexandrov (talk) 15:00, 30 October 2005 (UTC)
I passed my real analysis course a long time ago. Of course this does not mean I agreed with what was taught. The only one who is making mistakes is you! My posts here have one purpose: to *correct* you. I am not seeking your help. I know what I am talking about. I also know that I am completely correct. It is you who does not have the slightest idea of what you are talking about. You illustrated this by your first hasty response that showed you are still confused and I would bet that you passed your real analysis course a long time ago too. In fact, I would probably bet that you might even be teaching mathematics somewhere. Irony? Yes. The shame is that there are many others like you and they are too afraid to stand up against the establishment of academia who are as ignorant as they always were. 68.238.102.180
- I agree you made the mistake described by Samohyl Jan, and anyway the purpose of Wikipedia is to describe prevailing opinions, not to change them. See Wikipedia:No original research. Art LaPella 15:35, 30 October 2005 (UTC)
Never mind what you agree with. What matters is what is correct. The purpose of Wikipedia is not to describe prevailing opinions but to state subject matter *objectively*. It is both arrogant and fruitless to present topics such as this that have no absolute truth value. Again, I state with conviction that mathematics today is no more rigorous than in the time of Archimedes. 68.238.102.180
- Ok, I misunderstood. These are your words: "And then once you have done this, you will realize that you can essentially show it is *true* for any L." This is your mistake, because you in fact cannot show that, as Oleg has already shown you above. Samohyl Jan 17:19, 30 October 2005 (UTC)
Oleg did not show this. I showed that Oleg's logic is incorrect. And yes, you can show that it is true for any L as I demonstrated. Pick any L you like, then you can always find an epsilon to make |f(x) - L| < epsilon true. In this case, for the absolute value function |x| you are *done*. All you need to say then is that |x-a| < epsilon. It's true because |f(x)-L| and |x-a| are *exactly* equal in this case. Weierstrass's definition fails miserably for this function and cannot be true. Please, before you respond again, think about this carefully. 68.238.102.180
- No, |f(x)-L| and |x-a| are not equal, because a=0 is fixed (this is the point where we take the limit). So these terms are different. You are basically saying that the limit of f(x)=|x| at a can have any positive value, but the point is, only for *different* values of a (because |x| is an everywhere defined continuous function, and hence its values are its limit points). But in the definition of limit, the a is completely fixed. Samohyl Jan 22:42, 30 October 2005 (UTC)
You responded again without thinking. You do not know what you are saying. I will try again to explain this to you:
I said: Lim |x| = L (as x approaches 0)
I also said that L can be anything you want (greater than 0 of course since you have an absolute value). So, you begin with ||x|-L| < e
=> -e < |x| - L < e
<=> -e+L < |x| < e+L <=> -e+L < |x|-0 < e+L
The next line is incorrect. It would be true only if you had -(e+L) < |x|-0 < e+L. 192.67.48.22
<=> ||x|-0| < e+L
But we want ||x|-a| < d, that is: ||x|-0| < d. So all we need to do is set d = e; then
||x|-0| < d => ||x|-0| < e+L
<=> -(e+L) < |x|-0 < e+L
Same problem in the next line: If you subtract L, you have -(e+2L) < |x|-L < e, which does not lead to ||x|-L| < e. So L cannot be any positive number you like, but it does seem that 0 can be a limit if you set d=e. This would imply that f(x)=|x| is differentiable at x = 0. Hmmm? Seems the derivative must be zero at x=0 then? What do the rest of you think? 192.67.48.22
<=> -e < |x|-L < e <=> ||x|-L| < e And we are done!
You have such a hard head!!! Where did you learn anything? I don't mean to be nasty, but you are a very ignorant individual. Everything I have written is 100% correct and logical. Please, think again before you post rubbish!!! And YES, |f(x)-L| and |x-a| are in this particular case *equal* (for L=0). This is one of the reasons why Weierstrass's theory is unsound and does not cover every case. It fails because it rejects infinitesimals and then uses the same in its formulation of the definition. Again, how small is *small* and how *close* is *close*? And, pray tell, what do *really small numbers* mean in mathematics-professor parlance?? 68.238.102.180
- Oh my, why I waste my time arguing. Of course you are wrong: -(e+L) < |x|-0 <=> -e < |x|+L and not -e < |x|-L. That's the problem. Samohyl Jan 23:45, 30 October 2005 (UTC)
What are you saying guy?! You read it wrong again!! It's not -(e+L) < |x|-0 , it is: -(e+L) < |x|-0 < e+L. It would help if you could read too I suppose... 68.238.102.180
- It would be better if your knowledge of algebra were at least one millionth of your self-confidence. See Inequality#Chained notation to see what I mean. Samohyl Jan 07:38, 31 October 2005 (UTC)
You might want to seriously reexamine your knowledge. There is *nothing* wrong with what I have written. Your 'chained/inequality' link is irrelevant. If you have something you can disprove, then do it, otherwise you are just taking up space and showing everyone exactly what a klutz you are. 68.238.102.180
Once again, what matters isn't what is "correct" according to someone who thinks standard math books are wrong (I doubt if he'll believe us any more than he believes Weierstrass.) What matters is Wikipedia policy, including Wikipedia:No original research. Please read it. Art LaPella 04:43, 31 October 2005 (UTC)
It's not what is *correct* but what is presented *objectively*. I have proved to you very clearly and without error that in this instance, e-d proofs do not work. In fact, they introduce a serious contradiction, since most academics agree that no derivative exists at x=0 for this function (if there is doubt about lim |x|, there will be doubt about the derivative). If you can find fault with what I write, prove it. A lot of words are just useless wind. This article should include the example I stated, because anyone else who reads this may think it is 'divine law' when they get confused. The problem is probably not with their intelligence but rather with the establishment. Wikipedia's policy on original research is a contradiction of itself - all human knowledge and theory starts out as *original research*. To publish 'knowledge' that has the backing of the masses (ignorant in most cases) is fruitless. All human knowledge should always be subject to continuous correction, reproof and scrutiny. There is no such thing as knowledge that is set in stone. Real analysis has failed miserably to do anything but contradict, confuse and mislead. It is high time to search for something better. 68.238.102.180
- Your best moment of understanding Wikipedia policy was when you said "To publish 'knowledge' that has the backing of the masses...". Not exactly, but that's closer to the policy than to suggest we publish whatever you believe to be correct, objective, clear, or without error, or what you somehow think we haven't disproven, or that we would reject mainstream opinion because it STARTED as original research. If a 21st century Einstein tried to publish on Wikipedia he would be rejected. Hopefully, he would publish in scientific journals like the real Einstein. That's where "continuous correction, reproof and scrutiny" would occur. Only if he were generally accepted, would he then belong on Wikipedia. That is the policy, and it is enforced.
- You have also suggested that the policy be changed. A better home for that thought would be Wikipedia:Village pump, probably Wikipedia:Village pump (perennial proposals). I would prefer to keep the present policy. If Wikipedia weren't restricted to publishing mainstream opinion, it would theoretically publish my son's opinion that he doesn't have to go to the bathroom when his mommy says so. He seems to consider his opinion to be as objective, clear, without error, and undisproven as you do, despite his frequent "accidents".
- Finally, I don't always agree with the establishment. I consider my education in real analysis, and higher math in general, to be largely wasted. I know how to prove that no calculator can show the exact value of the square root of 2, but I can't remember a situation where 1.4142135 wasn't close enough. But I haven't published that heresy on a main Wikipedia article. The purpose of Wikipedia is to publish mainstream thought. Art LaPella 21:23, 31 October 2005 (UTC)
You are correct. Like all other encyclopedic content, I shall expect no more from Wikipedia. What was I thinking? What makes Wikipedia different from any of the other sources? Nothing. Generally that which is mainstream is controlled by those in authority. One of Wiki's claims is that it is a people's encyclopedia - evidently this is untrue. It is controlled by the few editors, sysops and their puppets. In the end it will be no better than World Book, Encyclopedia Britannica or the lecture room of some ignorant mathematics professor. It's a real shame. 68.238.102.180
- OK. I think that's the best resolution we can realistically hope for. Art LaPella 23:27, 31 October 2005 (UTC)
Welcome to wikipedia 68.238.102.180
Hey there, 68.238.102.180, welcome to wikipedia. It's always nice to have people with unconventional views around. I haven't been following too closely. Could you please clarify? It sounds like you are claiming that according to Weierstrass's so-called "definition" of limits, the limit of |x| as x approaches 0 can actually be any (non-negative) number I like. Is my understanding of your position correct? Dmharvey Talk 02:39, 1 November 2005 (UTC)
I first stated that there is a limit as x approaches 0 and this limit is 0. However, as someone pointed out above, I made a mistake (twice) in thinking that the limit can be any positive real number. 68.238.102.180
Seems like 68.238.102.180 made an arithmetical booboo. However, the e-d proof in this case seems to imply a derivative exists at x=0? See also his postings on proof of 0.999... = 1 page. 192.67.48.22
- Just a comment on this: since f(x)=|x| is well-defined and continuous for all real x, lim (x->0) |x| = |0| = 0.
- The discussion above seems to be confusing limits and derivatives. The usual definition of differentiability is that a function f(x) is differentiable at x if and only if
[f(x+h) - f(x)]/h has a limit for h->0. That is, if and only if lim (h->0) [f(x+h) - f(x)]/h exists.
- So using f(x)=|x| and examining the function at 0, we must look at the limit of [|0+h| - |0|]/h = |h|/h. But that is the sign function, which is 1 for positive h and -1 for negative h. Using the ε-δ method, we now have to decide whether we can find a limit L, so that for every ε>0 we can produce a δ>0 so that ||h|/h - L| < ε whenever 0 < |h - 0| < δ.
- Let us prove that no such limit L can exist: assume one does exist. We now choose ε=1. Assume we can find an appropriate δ. In that case we choose x first as δ/2 and next as -δ/2. Obviously | x − 0 | < δ in both cases. But |x|/x = 1 in the first case and |x|/x = -1 in the second. Looking at the first expression, |1-L|<ε=1 only if 0<L<2, while looking at the second expression, |-1-L|<ε=1 only if -2<L<0. But no L can fulfill both inequalities, so we have a contradiction, and f(x)=|x| is not differentiable at x=0.
- Rasmus (talk) 14:03, 15 November 2005 (UTC)
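Rasmus's conclusion is easy to check numerically as well (a rough Python sketch; the shrinking h values are an arbitrary choice):

    # One-sided difference quotients of f(x) = |x| at 0: (|0+h| - |0|)/h = |h|/h.
    f = abs
    for h in (0.1, 0.01, 0.001):
        print(h, (f(0 + h) - f(0)) / h, (f(0 - h) - f(0)) / (-h))
    # The right-hand quotient is always 1 and the left-hand quotient is always -1,
    # so no single L can satisfy the epsilon-delta condition with epsilon = 1.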
- The only confused person is the anon who was claiming that the epsilon-delta definition implies that the limit of |x| at x=0 is any number. He also claimed that |x| is differentiable at zero. Seems we were dealing with a person whose ego and beliefs were much stronger than the actual grasp of mathematical concepts. Oleg Alexandrov (talk) 18:55, 15 November 2005 (UTC)
- It is not that I couldn't see that everybody else clearly understood epsilon-delta proofs. It was just that nobody had corrected 68.238.102.180 in claiming that f'(a) = lim (x->a) f(x), and that it looked as if at least 192.67.48.22 was being confused by this. Rasmus (talk) 20:07, 15 November 2005 (UTC)
- Well, I did not correct 68.238.102.180 about the derivative of the absolute value, since we could not even agree on continuity of this function. Thank you for the clarification though. Nothing will make 68.238.102.180 happy, but at least 192.67.48.22 will not be confused. :) Oleg Alexandrov (talk) 00:33, 16 November 2005 (UTC)
Yes, 68.238.102.180 was confusing limits with derivatives. However, the example he mentioned is interesting (f(x)=|x|), because even though the RH limit and LH limit are different, f'(0) is equal to 0 if f'(x) is defined as follows:
f'(x) = lim (h->0) [f(x+h) - f(x-h)]/2h
This is every bit as valid as the classical definition. IMO it is more representative and meaningful to use this as the definition rather than
f'(x) = lim (h->0) [f(x+h) - f(x)]/h
So, I still think that f(x) = |x| has a derivative at x=0:
[|0+h| - |0-h|]/2h => [|h|-|-h|]/2h => 0/2h => 0.
192.67.48.22
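For what it's worth, this computation can be checked numerically (a small Python sketch; the h values are arbitrary):

    # Symmetric difference quotient of f(x) = |x| at 0: [f(0+h) - f(0-h)]/2h.
    f = abs
    for h in (0.1, 0.01, 0.001):
        print(h, (f(0 + h) - f(0 - h)) / (2 * h))   # prints 0.0 every time
    # [|h| - |-h|]/2h = 0 for every h != 0, so this symmetric limit is 0,
    # even though the one-sided quotients of the ordinary definition are +1 and -1.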
- While it is true that the second definition is valid too, you are a bit too quick in concluding that [f(0+h) - f(0-h)]/2h -> 0.
- It is true that if h>0, [|0+h| - |0-h|]/2h = 0, but for h<0 I get -2. So the limit for h->0+ is 0, while the limit for h->0- is -2. Thus there is no limit for h->0. Rasmus (talk) 15:52, 22 November 2005 (UTC)
Poor Rasmus - you made a mistake again: |-h| = h *always*. You are probably confusing yourself (again) with the result that states |h| = -h iff h < 0. Your arithmetic in the second formula is wrong!! |0-h|-|0--h| = 0 and not 2h!! Just stay with the 0.999... topic. You can hardly cope with this and now you are trying to multitask? Unlike you Rasmus, I am in my late forties and I don't have a beautiful wife. Guess this would make me a lesbian if I did. Ha, ha. But I do have a lot of hot young guys. :-) Some like to call me a 'bitch from hell' but I just pretend they do it affectionately. 192.67.48.22
- Sigh, I wondered if this was you, but since you sounded polite and curious in phrasing your question above, I gave you the benefit of the doubt.
- By definition, |x| = x if x ≥ 0 and |x| = -x if x < 0.
- So selecting h=-1 we see that |-h|=|-(-1)|=1 ≠ h. And by selecting h=1 we see that |0-h|-|0-(-h)|=|-h|-|h|=|-1|-|1|=2. Rasmus (talk) 17:01, 22 November 2005 (UTC)
You would be funny if you were not so pathetic. By the way, how is |-1|-|1|=2 ?? Okay, let's do this slowly:
Step A:
|-1| = 1
Step B:
|1| = 1
Step C:
|-1|-|1| = 1 - 1 = ? DoHHHHH!
I guess you are quite angry with me now and you are responding as a typical male would: like the proverbial bull in a china shop? 192.67.48.22
What my dear Rasmus, have I shaken up your world? You have been beaten into submission by an older woman? 192.67.48.22
- Sorry, bad argument. While it is untrue that |h| = h, obviously |-h| = |h|, so it is correct that [|0+h| - |0-h|]/2h = 0 for every h ≠ 0. The conclusion obviously is that the two definitions of differentiable aren't equivalent after all. Obviously ordinary differentiability implies that the symmetric limit exists (and the two agree), while the other implication doesn't hold (as f(x)=|x| is a counterexample). A bit of investigation shows 'your' definition to be what is called the symmetric derivative (we could use an article). Since this is a weaker definition than the ordinary derivative, many results that are true for the ordinary derivative probably don't hold for the symmetric derivative. Does anyone know which? Kudos for bringing this up, and I apologize for mistakenly contradicting you. I will probably write a small stub; please go ahead and chip in. Rasmus (talk) 18:50, 22 November 2005 (UTC)
On defining the derivative
I agree that if one defines the derivative as
f'(x) = lim (h->0) [f(x+h) - f(x-h)]/2h
then it follows that |x| has a derivative at 0. But this definition is only a marginal improvement; for instance, the function
- f(x) = x for x > 0 and f(x) = -1.2x for x < 0
would still not be differentiable. And the above definition of the derivative is not useful: it would follow that even a discontinuous function is differentiable, like f(x) = |x| for x ≠ 0 and f(0) = 5. Oleg Alexandrov (talk) 19:41, 22 November 2005 (UTC)
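Oleg's objection can be made concrete: the symmetric quotient never evaluates f at the centre point, so it is blind to the value f(0). A minimal Python sketch of the discontinuous example just given (the h values are arbitrary):

    # f(x) = |x| for x != 0, but f(0) = 5: not continuous at 0.
    def f(x):
        return abs(x) if x != 0 else 5

    for h in (0.1, 0.01, 0.001):
        print(h, (f(0 + h) - f(0 - h)) / (2 * h))   # 0.0; f(0) = 5 is never used
    # The symmetric limit exists (it is 0), yet f is discontinuous at 0.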
- You are mistaken about several things:
- Your understanding of 'continuous' and 'differentiable' is based on the notion that the classic definition of the derivative is correct. It is in fact only partly correct. Continuity and differentiability have never been properly defined. Real analysis is to blame for you thinking the way you do. Based on the above definition (which Rasmus calls 'symmetric'), differentiability would not imply continuity. The 'symmetric' defn describes the derivative as an average sum. See:
http://www.geocities.com/john_gabriel/avsum.jpg - I think this article describes fairly well what is meant by average sum. 192.67.48.22
- I think the classical definitions of continuity and the derivative make perfect sense. I'm not sure what you mean when you say that the definitions are only partly correct. How can a definition be correct or incorrect? I can see how a theorem or proposition could be incorrect, but not a definition. Do you mean perhaps that the classical definitions are not useful? Dmharvey 22:47, 22 November 2005 (UTC)
I think neither definition is that good, and the 'symm' defn (actually it is better to call this a central difference, not a symmetric defn as Rasmus does) has some advantage in that one can calculate derivatives without an 'infinitesimal' factor. Example: f(x) = x^2
[(x+h)^2 - (x-h)^2 ]/2h = [x^2 + 2xh + h^2 -x^2 + 2xh -h^2]/2h = 4xh/2h = 2x
Notice that it is not even required to take a limit here because there are no terms with h in them. Thus in this respect it is more precise (in fact it is *exact*). The classic form is only an approximation.
Differentiability is based on the classic form, i.e. if the classic limit exists. Had we started off with the central difference defn, we might have had a different definition of differentiability, i.e. provided the central difference limit exists, then the fn is differentiable. Furthermore, because the difference is central, there is no need to consider limits from 'both sides' but only one limit. Gabriel illustrates this by defining the limit in terms of w/n partitions and considering what happens when n approaches infinity.
Continuity is *not* well defined. Using the current definition one can state that a function is continuous at *one* point and discontinuous everywhere else. For continuity, lh limit = rh limit and f(a) = L for some x. Example: f(x) = 1 for x = 3 and f(x) = 0 for all other x. Is f(x) continuous at x = 3? Yes. Why? lh limit = rh limit and f(3) is defined, i.e. f(3) = 1. The function is continuous at that point (i.e. at that instant) even though f(3) is not equal to 0. In other words, what is continuity at a point exactly? It is based on the notion that the lh limit = rh limit and f(x) = lh limit = rh limit, which in turn is based on the understanding of the classical definition.
One more example: going back to f(x) = |x|. Is f(x) continuous at x = 0? Using the classical definition, the lh limit is not equal to the rh limit, but f(0) = 0. Do you then conclude that f(x) is not continuous at x = 0? This is what 68.238.102.180 demonstrated with his example of epsilon-delta proofs, even though he was confusing it with differentiability. 192.67.48.22
- 192.67.48.22, I agree with you that the "symmetric definition" of continuity is no good. That's what I was trying to say. The classical definition of derivative works just fine though, and it implies continuity. Oleg Alexandrov (talk) 02:15, 23 November 2005 (UTC)
- Again, your central difference scheme is not better; it is only advantageous for even functions, and has no effect for other functions. Not worth the trouble. Oleg Alexandrov (talk) 20:00, 23 November 2005 (UTC)
- Yes, Oleg is right. For example, Gabriel's definition still requires a limit when the function is, say, x^3. (Or even when it's just x itself.) In other words, in this case, the symmetric form is still "only" an approximation. (Are you Gabriel, by the way?) Dmharvey 23:15, 23 November 2005 (UTC)
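A quick symbolic check of this point (a sketch assuming the sympy library is available):

    # Symmetric quotient of x^3: an h^2 term survives, so a limit is still required;
    # for x^2 the h's cancel exactly, which is the special case discussed above.
    import sympy as sp

    x, h = sp.symbols('x h')
    print(sp.expand(((x + h)**3 - (x - h)**3) / (2 * h)))   # h**2 + 3*x**2
    print(sp.expand(((x + h)**2 - (x - h)**2) / (2 * h)))   # 2*x, exact with no limit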
Yes, it's true for even functions only. No, I am not Gabriel. I don't agree however that the classical defn is better because it proves continuity. I believe the central difference version demonstrates continuity better. --unsigned by anon
- I'm confused about what you said earlier about "Example: f(x) = 1 for x = 3 and f(x) = 0 for all other x." Are you saying this function should be continuous? Are you saying that this function is continuous according to the mainstream definition of continuity? Please explain in more detail. Thanks. Dmharvey 14:51, 24 November 2005 (UTC)
- Anon, you are wrong. The central difference definition does not prove continuity. See f(x)=1 for x≠0 and f(0)=100. By your central difference definition, this would be differentiable at 0, but it is not continuous there. Oleg Alexandrov (talk) 18:53, 24 November 2005 (UTC)
Interesting discussion. Anon may have a point here: I don't think he is trying to say the central difference (c.d) definition proves continuity (are you Anon?). Perhaps he is saying that the definition of continuity and differentiability would be different if the c.d definition was used. Well, the example of f(x) = |x| is puzzling - It is continuous at x=0 but not differentiable according to classical/traditional definitions/methods. Using traditional methods, you can show that it is not continuous, i.e. LH limit is not equal to RH limit is not equal to f(0). Oleg: how do you explain this? By definition of continuity, f(x) = |x| should not be continuous at x=0 ...—The preceding unsigned comment was added by 71.248.130.218 (talk • contribs) .
- The function |x| is continuous at 0. Just look at its graph, at absolute value. It is very easy to prove that it is continuous at 0 according to the classical definition. Oleg Alexandrov (talk) 15:50, 25 November 2005 (UTC)
You are wrong. How do you reach this conclusion? The LH limit is not equal to the RH limit. Is this not a requirement for continuity in the classical definition?
- I think Oleg is correct. The LH limit of |x|, as x approaches zero from the left, is zero. The RH limit is also zero. They are both equal to value of the function |x| at x = 0, which is also zero. So it is continuous at 0 according to the classical definition. Dmharvey 19:17, 25 November 2005 (UTC)
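For the record, the classical proof is one line (a sketch in standard notation):

    \text{Given } \varepsilon > 0, \text{ take } \delta = \varepsilon. \text{ Then }
    |x - 0| < \delta \implies \bigl| |x| - 0 \bigr| = |x| < \delta = \varepsilon,
    \text{ so } \lim_{x \to 0} |x| = 0 = |0| = f(0).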
- Dmharvey, just because you say that I am correct, and I say that you are correct, THIS DOES NOT MAKE THE THING CORRECT, BECAUSE WE ARE BRAIN DAMAGED BY THE WAY REAL ANALYSIS IS TAUGHT IN SCHOOLS !!?!!?!! Oleg Alexandrov (talk) 19:34, 25 November 2005 (UTC)
- Yes, and the saddest thing is that right now I am teaching a college calculus class. If all goes according to my evil plan, my students may yet become brain damaged too. Dmharvey 19:49, 25 November 2005 (UTC)
Sorry, you are correct. I got muddled up reading the stuff about the derivative. The derivative's LH limit is not equal to its RH limit. He was using the derivative limit incorrectly to find the limit of f(x) = |x| at x = 0. Guess I didn't study this page long enough. —The preceding unsigned comment was added by 71.248.136.206 (talk • contribs) .
- Anon, how about signing your posts? Use four tildes for that, like this: ~~~~
- By the way, I am tired chasing after you as you change IP address each time. What if you make an account? Oleg Alexandrov (talk) 22:06, 25 November 2005 (UTC)
There is nothing in the guidelines that says to sign comments with tildes so I was unaware of this. I just started using Wikipedia. Will try to remember if I make more posts. Don't know if I want to make an account. Prefer to post anonymously. 71.248.136.206 00:44, 26 November 2005 (UTC)
- Actually, you don't lose any anonymity by creating an account. All you need is a username and a password -- no email address, or real name, or anything like that. It just makes it easier for us to work out whether two posts come from the same person. Dmharvey 00:53, 26 November 2005 (UTC)
Subdivisions
Among the subdivisions, shouldn't classical analysis of metric spaces be listed? Daphne A 15:43, 5 February 2006 (UTC)
- Don't know. Is there indeed very serious analysis done on metric spaces? All I know is that people do the minimum necessary for studying topology. Anybody willing to write an article about analysis on metric spaces? Then we will see better. :) Oleg Alexandrov (talk) 16:37, 5 February 2006 (UTC)
- Aren't there some simple fixed point theorems (whose names I have forgotten) that apply to metric spaces in general, which are used for things like proving existence of solutions of DE's? Dmharvey 17:07, 5 February 2006 (UTC)
- Analysis of metric spaces contains important theorems that have wide applicability (a common text is Marsden's Elementary Classical Analysis). For the nonce, I've amended the opening sentence to indicate that the list of subdivisions is not complete. Hope that's okay— Daphne A 11:31, 6 February 2006 (UTC)
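The fixed point results Dmharvey mentions are presumably the Banach fixed-point theorem for contraction mappings on complete metric spaces, with Picard iteration for the existence of ODE solutions as the standard application. A rough Python sketch of the idea (the helper picard_step, the test problem y' = y with y(0) = 1, and all numerical parameters are arbitrary choices):

    # Picard iteration: y_{n+1}(t) = y(0) + integral from 0 to t of f(s, y_n(s)) ds.
    # On a short enough interval the integral operator is a contraction in the
    # sup metric, so Banach's theorem gives a unique fixed point: the solution.
    def picard_step(y, ts, f):
        out, acc = [1.0], 0.0
        for i in range(1, len(ts)):
            acc += f(ts[i - 1], y[i - 1]) * (ts[i] - ts[i - 1])   # crude quadrature
            out.append(1.0 + acc)
        return out

    n = 1000
    ts = [i / n for i in range(n + 1)]
    y = [1.0] * len(ts)                     # initial guess y_0(t) = 1
    for _ in range(20):
        y = picard_step(y, ts, lambda t, v: v)
    print(y[-1])   # close to e = 2.71828..., up to quadrature error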
Mathematical analysis removed from Wikipedia:Good articles
This article was formerly listed as a good article, but was removed from the listing because it lacks references, unfortunately. —The preceding unsigned comment was added by Worldtraveller (talk • contribs) 18:17, May 22, 2006 (UTC).
A branch or any branch?
The intro states: "Analysis is the generic name given to any branch of mathematics that depends upon the concepts of limits and convergence." Isn't it simpler to say that it is a branch that is based on these concepts, and that it has several subfields? --LambiamTalk 08:30, 18 June 2006 (UTC)
- I'm not sure I like the description in the article, but referring to it as "a branch of mathematics" is inaccurate, too.
- While this doesn't concern people so much these days, attempts have been made to separate the entirety of mathematics into analysis, algebra, and geometry (or a similar small number of superbranches). Today, while many mathematicians use little or no analysis, I'd wager a guess that most mathematicians are in fact familiar with it, and occasionally make use of it.
- Put slightly differently: it's not like a third of maths uses analysis, a third uses geometry, and a third uses algebra. It's more like three quarters use analysis and three quarters use algebra, with most of current mathematics falling in the overlap.
- RandomP 18:31, 18 June 2006 (UTC)
But is it in any way more inaccurate? You too use "analysis" as if it is a defined entity, rather than a generic designation like "religion" is. One doesn't say that someone "belongs to religion"; one says that they "belong to a religion". But one doesn't say that mathematicians "use an analysis", meaning it's harmonic analysis, or functional analysis, or X analysis for some other X. Of course most work in any present-day branch of maths uses concepts and results from other branches, but that does not imply such branches can't be usefully identified as such. --LambiamTalk 12:37, 25 June 2006 (UTC)
- True. I'd suggest going with "a branch" for now, until someone comes up with a better way to express this. RandomP 13:24, 25 June 2006 (UTC)
Indian primacy claim
What is going on here? Did India invent the calculus? Wow. That is great. But I think it is also inaccurate. While India is to be thanked for the invention of zero, no one can take seriously the assertions made in this article about the invention of the calculus. I implore our Indian friends not to falsify history and to be content in knowing that while they have done many great things, the calculus is not one of them.—The preceding unsigned comment was added by 70.80.248.67 (talk • contribs) 08:48, June 25, 2006 (UTC).
- See Yuktibhasa for references on this. -- thunderbolt a.k.a. Deepu Joseph | TALK 04:44, 12 August 2006 (UTC)
- Are there any modern references? Texts on the history of mathematics? Thenub314 12:58, 27 September 2006 (UTC)
- There is an Easter egg in the sentence ending: "... with the possibly independent invention of calculus by Newton and Leibniz." The phrase "possibly independent" links to Kerala School#Possible transmission of Keralese mathematics to Europe. --Jtir 20:10, 29 September 2006 (UTC)
- I took a look at that section, but most of the references I trust claim much less than is claimed here. Thenub314 00:26, 30 September 2006 (UTC)
- Further, the word possibly is a weasel word. --Jtir 13:01, 30 September 2006 (UTC)
- This edit introduced possibly and this one hid the Easter egg. --Jtir 13:30, 30 September 2006 (UTC)
The first reference cited in Kerala School begins: "It is without doubt that ...". With a lead like that, why read further? --Jtir 15:37, 30 September 2006 (UTC)
Maybe, but even with such a lead, the article does not claim the things claimed here. Yes, that article has a clear perspective on the issue. And the language it uses is "the mathematicians of Kerala had anticipated some of the results of the Europeans on the calculus", which is not to say they had known calculus. Thenub314 16:20, 30 September 2006 (UTC)
- Good find. That is nicely nuanced, although "anticipated" is a bit vague. Thanks for adding it here. --Jtir 16:42, 30 September 2006 (UTC)
mentioning "infinitesimal analysis" in the history section
There is reason to mention the term "infinitesimal analysis" in the history section.
- "Of the new or infinitesimal analysis, we are to consider Sir Isaac Newton as the first inventor, Leibnitz, a German philosopher, as the second; ..."
- "The fluxionary and differential calculus may be considered two modifications [in the matter of notation] of one general method, aptly distinguished by the name of the infinitesimal analysis."
- Professor Playfair's "Dissertation on the Progress of Mathematical and Physical Science"
- as quoted by John Spare, The Differential Calculus, Bradley, Dayton and Co., 1865. [2]
- Infinitesimal analysis is "an archaic term for calculus." [3]
- "The name "mathematical analysis" is a short version of the old name of this part of mathematics, "infinitesimal analysis" ; the latter more fully describes the content, but even it is an abbreviation (the name "analysis by means of infinitesimals" would characterize the subject more precisely)."[4]
--Jtir 16:18, 29 September 2006 (UTC)
definition of numerical analysis
The definition read:
- "Numerical analysis, the study of algorithms for approximating the problems of continuous mathematics."
It now reads:
- "Numerical analysis solves problems of continuous mathematics by iteratively computing approximations until they converge to a numerical solution."
My comments:
- It is now a declarative sentence.
- I didn't understand the phrase "approximating the problems", so I replaced it with the "solving problems" formulation.
- It now identifies both the computational and numerical aspects of the subject.
- The word "algorithm" got dropped. Maybe it could be worked in.
- As I read somewhere on WP, some numerical analysts do not actually write or run computational programs themselves. However, running programs is the ultimate objective of the subject. My definition doesn't quite capture these nuances. Maybe saying "analyzes and solves problems" would be broad enough.
- The definition is longer.
--Jtir 17:20, 29 September 2006 (UTC)
- The new definition is wrong. Numerical analysis is not merely about iterative methods. Fredrik Johansson 17:39, 29 September 2006 (UTC)
- Thanks for pointing this out. The definition does not account for direct methods. I reverted. --Jtir 19:01, 29 September 2006 (UTC)
- Please stick to the formulation at Numerical analysis: Numerical analysis [is] the study of approximate methods for the problems of continuous mathematics. Concerning the use of the word "algorithm": Take for example the Runge–Kutta methods. It is strange and in any case unconventional to call these methods "algorithms". (And, while R-K methods are iterative, it is downright wrong to state that they iteratively compute approximations until they converge.) Likewise, interpolation and extrapolation are methods, not by themselves algorithms.
- —The preceding unsigned comment was added by Lambiam (talk • contribs) 18:57, 29 September 2006 (UTC)
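To make Lambiam's distinction concrete: a Runge-Kutta method prescribes a fixed recipe for advancing from t to t+h, and nothing in it iterates until convergence. A minimal Python sketch (the helper rk4_step and the test equation y' = y are arbitrary choices; the coefficients are those of the classical RK4 method):

    # One classical RK4 step for y' = f(t, y): a method, not an iterate-to-convergence loop.
    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    # March y' = y from y(0) = 1 to t = 1 in ten fixed steps; no convergence test anywhere.
    y, t, h = 1.0, 0.0, 0.1
    for _ in range(10):
        y = rk4_step(lambda t, y: y, t, y, h)
        t = t + h
    print(y)   # about 2.7182818, approximating e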
(I have copied this discussion to Talk:Numerical analysis) --Jtir 19:34, 29 September 2006 (UTC)