Talk:Fréchet derivative
From Wikipedia, the free encyclopedia
holomorphic
Hi Linas. I was hoping to get more comfortable with the idea of the Frechet derivative being called holomorphic, and you have a section here which purports to explain it. But as of now, I can't understand it. For starters, your notation seems off. x and y are members of different spaces, but you're adding them. By what right can you do that?
- Have x, y ∈ V as in previous para, so x + y ∈ V. Will try to clarify.
What are the domain and codomain of f?
- Domain is U (hhmm, ok, I guess domain has to be V, see below) and codomain is W, as per first paragraph.
- Ohh, I see the problem. Will need to ponder, as this is an important distinction, it would seem. I think the intent of what I was reading was that f is defined on V but might only be differentiable on U. Unfortunately, this distinction is, ahem, blurred. ... linas 18:49, 25 September 2005 (UTC)
Is L an element of W*, or an element of W**? The language "functional on W*" suggests the latter, but that doesn't make sense.
- Right, sorry, meant to say L is elt of W*. Will clean up.
I guess the basic idea is clear though: if all linear functionals acting on the function yield holomorphic maps, then the function itself is called holomorphic, right? -Lethe | Talk 10:05, 25 September 2005 (UTC)
- Yes, and I assumed this to be the historical origin of the term, from what I can tell. However, it's not entirely clear to me just how strong the relationship is between h(z) being holomorphic and f being analytic. That is, if h(z) is holomorphic for all functionals L, does this imply that higher derivatives (in fact, all derivatives) of f exist? This would make a good homework problem. One would need to work through radius-of-convergence arguments, as a minimum. linas 18:28, 25 September 2005 (UTC)
- Err, I mean, that is essentially the content of the "Taylor's theorem" section. The homework problem is then to really go through the proof at the next level of detail, and convince oneself that it all really does work. I haven't gone through that level of detail; I'm taking this on faith. Thus, while it would be great to add to the article a sentence stating that if h(z) is holomorphic for all functionals L, then higher derivatives (in fact, all derivatives) of f exist, I have not yet worked through this enough to be confident in making this statement. I suspect, though, that this is probably a "well-known" result of the theory. linas 18:39, 25 September 2005 (UTC)
- I am also skeptical. I'll try to think of a counterexample. Best, Silly rabbit 21:50, 16 November 2005 (UTC)
Nice article
Linas, thanks. I noticed a long time ago this article was missing, and that is now fixed! Hopefully it will help me understand the difference between Frechet and Gateaux derivative, for now they are all French to me. Oleg Alexandrov 22:57, 25 September 2005 (UTC)
- Thanks for the compliment. Not sure it will help you understand the difference, since they amount to the same thing on Banach spaces. They differ on other spaces, I guess, but I have no examples. The upshot seems to be that Banach spaces are fairly "nice" and "well-behaved" as compared to general topological vector spaces.
- There are many examples in nonlinear analysis, the calculus of variations, and partial differential equations. Those that I know of relate to the Nash-Moser inverse function theorem. These can be found in the paper:
- Hamilton, R.S., The inverse function theorem of Nash and Moser, Bull. Amer. Math. Soc. 7 (1982), no. 1, 65-222.
- Don't worry, the paper is actually easy to read and is replete with examples. I highly recommend it for anyone interested in this topic. (And that means both of you, Oleg Alexandrov and linas.) Cheers, Silly rabbit 19:08, 15 November 2005 (UTC)
- What if you write a nice paragraph or two about that? Enough of running around chasing green grass and telling others what to do. :) Oleg Alexandrov (talk) 19:33, 15 November 2005 (UTC)
- Thanks for the invitation. I would love to except for the fact that: (a) I haven't read the paper in quite some time, (b) I'm not much of a non-linear analyst, and (c) the paper really is one of the classics of mathematical literature and so deserves to be read by a wider audience. Besides, I already have a full plate as it is. :) Silly rabbit 23:19, 15 November 2005 (UTC)
- Will I have to go to the math library to find this paper? I couldn't find a copy online. -lethe talk 19:57, 16 November 2005 (UTC)
- Nope. The Bull. Amer. Math. Soc. is only online from 1996 onwards, so no luck. Now, my butt hurts because of sitting on the chair all this morning, but that is not enough of a motivation to go up to the library. :( The famous Silly rabbit will probably be shocked at such laziness from the young generation. Oleg Alexandrov (talk) 20:04, 16 November 2005 (UTC)
- Oh yes, I should have mentioned that little problem as well. You would think that the AMS would be pioneering the digitization movement. Sadly, they aren't. Otherwise I would gladly do a little writeup of it but for the two facts mentioned above. In addition, I'm probably as lazy as Oleg when it comes to such things. I do have a reprint in my possession, but it's buried deep in a storage container in a disused cleaning closet in the basement. The stairs are missing. The key is broken off in the lock. And there's a sign saying "Beware of the leopard" on the door. On the upside, at least my stored papers are all carefully alphabetized and chronologically sorted. No one is ever likely to see them again until armageddon, but at least I'll be wearing clean underwear when judgement day arrives (so to speak). :-D Silly rabbit 21:28, 16 November 2005 (UTC)
- My observation is that generic (physics) academia is not aware of the differences. When I was in school, 99% of the cases in physics were "well behaved", e.g. Hilbert spaces with trace class representations and so on. And the students get told "never mind the math, you don't need all that math crap, just these few rules are enough". And then, invariably, there's some exceptional case, at which point the professor throws up his hands and says "this is called the anomaly and so-and-so has researched it but no one knows why, it's weird and different and doesn't follow the rules". Well, now that I'm digging around, I'm starting to recognize these various exceptions as examples of things that were not in the "well-behaved" categories, but instead are part of the generalizations. E.g. my thesis (ostensibly about QCD) had an operator whose trace depended on the basis (which is an insane notion for a Hilbert space, and so was very confusing). I had no clue (nor did any of my advisors) that mathematicians had already named and studied such things. Knowing this might have eased my pain (esp. since I had a paper rejected by a referee who didn't know this either), and might have changed the results of the research. linas 01:30, 26 September 2005 (UTC)
- My apologies. The cited examples of the difference between Fréchet and Gateaux derivative may or may not be in the Hamilton paper. Nevertheless, it is still a fundamental paper for those interested in such functional-analytic notions of differentiation. It later goes into great detail about the inverse function theorem. The theorems and proofs of this paper may be of more interest to mathematicians than they are to physicists, but the examples keep the paper moving, even if you can't quite follow all of the complicated conditions in the theorems. So it truly is a paper "for the people" (those that are mathematically literate at least), despite its intense rigour in some places. Anyway, it does give a number of precise conditions such that the "hand-waving" of your physicist colleagues is unnecessary. Best Regards, Silly rabbit 21:48, 16 November 2005 (UTC)
Nice article
It would be nice to talk about the link between the Fréchet derivative and partial derivatives for functions defined on R^n. A few weeks ago, I was tired of the different notations for derivatives of functions on vector spaces (such as the derivative of the inverse of a matrix with respect to a matrix), so I was looking for a mathematical definition of the derivative. Finding this article was not easy, and writing a part on partial derivatives would bring more people to this page, I think. I would do it myself, but even if I have a strong interest in maths, I didn't specifically study maths above undergraduate level. Ashigabou 02:32, 1 February 2006 (UTC)
- The importance of the Frechet derivative is that it defines a derivative on infinite-dimensional spaces. Now, of course, a Banach space can be finite-dimensional as well, and in this case, the Frechet derivative should be identical to the partial derivative. The homework exercise for the reader trying to learn this is to convince themselves that they really are the same thing. linas 03:03, 1 February 2006 (UTC)
- I think this needs to be made a bit more precise, because in some cases the link is not that obvious. The link between partial derivatives and the Frechet derivative is obvious in theory, but people who know only about partial derivatives won't be able to grasp a concept such as the derivative of |A| or A^{-1} when A is a matrix; at least, I had a hard time reconciling all the different notations. The notations often used for functions of matrices are awful: for example, the derivative of tr(A) with respect to A is defined as I (identity matrix) everywhere I looked, but using the Frechet derivative, the derivative at a point A0 is the linear map tr itself. In the first case, I is a somewhat ambiguous notation for vec(I), which is the representation matrix of tr in the canonical basis of the square matrices. There should be an article somewhere linking the usual notation in physics, statistics, etc... and the definition here. As I did the work to understand this myself, I am willing to write the article, but I would need someone to check it (also, I am not a native English speaker, and as my field is not math but signal processing, I don't know the precise vocabulary). Ashigabou 06:33, 1 February 2006 (UTC)
- It's not really just partial derivatives that this generalizes. It's directional derivatives. You have to take the derivative of a matrix function in the direction of some other matrix. The restriction to the finite-dimensional case is then straightforward. Note that tr(A) is constant if you change the off-diagonal entries, so the derivative of tr(A) in the direction of an off-diagonal entry is zero. The derivative in the direction of a diagonal entry is 1. -lethe talk + 07:09, 1 February 2006 (UTC)
- But directional derivatives are much less common than partial derivatives in many fields, such as statistics. The point I made before should be taken in the context of applying this definition to the derivatives of common formulas found in statistics (such as the determinant and the inverse of the covariance matrix, etc...), and other fields. Right now, I don't see any link in Wikipedia between the Fréchet derivative, partial derivatives and classical formulas such as dA^{-1}/dA = -A^{-t} ⊗ A^{-1} (⊗ being the Kronecker product). I started working on that in the matrix calculus article, but I am still not sure where to put what. Ashigabou 09:44, 1 February 2006 (UTC)
- Alright. I take your point. We need a more down-to-earth place for matrix calculus. I'm interested to know how those matrix derivatives fit in with the more abstract derivatives. I don't think that formula you gave with the Kronecker product holds for the Fréchet derivative. So I guess those derivatives aren't finite dim cases of Fréchet. That's interesting. Anyway, if you need help on that article, you might ask at Wikipedia talk:WikiProject Mathematics. It sounds like an interesting topic. Maybe I'll see if I can find it in some books tomorrow. -lethe talk + 11:21, 1 February 2006 (UTC)
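A quick numerical sanity check of the Kronecker-product formula above (a minimal sketch only; the test matrix, the direction H and the column-stacking vec convention are arbitrary choices): the Fréchet derivative of matrix inversion is H ↦ -A^{-1} H A^{-1}, and its vec form is the Kronecker formula.

```python
# Sketch: compare a finite-difference directional derivative of A |-> A^{-1}
# with the candidate Fréchet derivative H |-> -A^{-1} H A^{-1}, and check that
# its vec form matches -(A^{-t} kron A^{-1}) under column-stacking vec.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)     # a well-conditioned test matrix
H = rng.normal(size=(n, n))                     # an arbitrary direction
Ainv = np.linalg.inv(A)

DinvH = -Ainv @ H @ Ainv                        # candidate derivative applied to H

t = 1e-6
fd = (np.linalg.inv(A + t * H) - Ainv) / t      # finite-difference approximation
print(np.max(np.abs(fd - DinvH)))               # small, of order t

vec = lambda M: M.reshape(-1, order="F")        # column-stacking vec
K = -np.kron(Ainv.T, Ainv)                      # the Kronecker-product form
print(np.max(np.abs(K @ vec(H) - vec(DinvH))))  # agrees up to rounding error
```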
- Actually, the formula I gave above is slightly wrong (I corrected it), but I think it coincides with the Fréchet derivative. The inverse matrix case is awkward to write, but take for example the map f(A) = A^t A which is found in the same formula manuals (see for example the links at the end of Matrix calculus). Then the Fréchet derivative at A is the linear map which associates A^t H + H^t A to H (H and A matrices), which is exactly the formula given in the above manuals. I think this is true for all the other ones (I didn't check them all). Ashigabou 12:46, 1 February 2006 (UTC)
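For what it's worth, the f(A) = A^t A case is easy to check numerically as well; the sketch below assumes the reading H ↦ A^t H + H^t A of the derivative given above, with an arbitrary small test matrix.

```python
# Sketch: finite-difference check that the Fréchet derivative of f(A) = A^t A
# at A is (read as) the linear map H |-> A^t H + H^t A.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
H = rng.normal(size=(3, 3))

f = lambda M: M.T @ M
DfA = lambda H: A.T @ H + H.T @ A     # candidate Fréchet derivative at A

t = 1e-6
fd = (f(A + t * H) - f(A)) / t
print(np.max(np.abs(fd - DfA(H))))    # exactly t * H^t H, so of order 1e-6
```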
Hmm. Well, Frechet derivatives are not about derivatives with respect to matrices, anyway. I think that there seems to be considerable confusion over this point. Let me try to clarify, if I can. Let f : R^m → R^n be a function from real m-dimensional to real n-dimensional space. It can be thought of as n individual functions f_i with the index i running from 1 to n. To take the derivatives of this f, one takes the partial derivatives with respect to each direction x_j where j runs from 1 to m. That is, the derivative of f is given by ∂f_i/∂x_j. Since this thing has two indices, i and j, it can be thought of as a matrix. Indeed, the matrix A of this article is nothing more and nothing less than that: it is the matrix with matrix elements A_ij = ∂f_i/∂x_j. In particular, to understand this article, try using this A in the definition given in the article, and remember that h is a vector, so that A(h) is actually a matrix times a vector. Does this make sense now? Hopefully, it's now obvious that Frechet derivatives are not "matrix derivatives", right? linas 01:47, 2 February 2006 (UTC)
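To illustrate the point with something concrete (a small sketch with a made-up map f : R^2 → R^3, not taken from the article): the matrix A of partial derivatives is exactly what makes ||f(x+h) - f(x) - A h|| go to zero faster than ||h||.

```python
# Sketch: in finite dimensions the Fréchet derivative is the Jacobian matrix
# A with entries A[i, j] = ∂f_i/∂x_j, acting on h by a matrix-vector product.
import numpy as np

def f(x):                                   # a made-up map from R^2 to R^3
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def jacobian(x):                            # its matrix of partial derivatives
    return np.array([[x[1],         x[0]    ],
                     [np.cos(x[0]), 0.0     ],
                     [0.0,          2 * x[1]]])

x = np.array([0.7, -1.2])
h = 1e-5 * np.array([0.3, 0.8])             # a small displacement

remainder = f(x + h) - f(x) - jacobian(x) @ h
# ||remainder|| / ||h|| should be small (it is O(||h||)), i.e. it goes to 0 with h
print(np.linalg.norm(remainder) / np.linalg.norm(h))
```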
- But linas, you're ignoring the obvious fact that matrices are vectors and indeed the space of matrices has a norm, so it's a Banach space. Thus it makes sense to ask whether the matrix derivative is an example of a Frechet derivative. The answer is no, but it's not obvious, and it still makes sense to ask. -lethe talk + 02:11, 2 February 2006 (UTC)
- I still don't understand why you say that matrix derivatives are not Frechet derivatives; I am now sure that they actually are, after having proved all the formulas in Matrix calculus with the Frechet derivative and the product/composition rules available in the Fréchet context.
Fréchet space
According to the article functional derivative, Fréchet derivatives are defined on Fréchet space. I suppose we could use the translation invariant metric to define a derivative. Would that qualify as a Fréchet derivative? -lethe talk + 01:36, 2 February 2006 (UTC)
- I've opened an article on Differentiation in Fréchet spaces. The metric doesn't come into play much, since a metric isn't really part of the data specifying a Fréchet space (which only needs to be metrizable -- an important distinction). Actually, the derivative resembles the Gâteaux derivative more than the Fréchet derivative. This is probably the weakest notion of differentiation available which satisfies many of the familiar properties from calculus. Silly rabbit 18:07, 8 June 2006 (UTC)
contradiction? and conditions on partial derivatives
I'm confused (on the relation to the Gateaux derivative):
First it is stated that if a function has a Gateaux derivative which is linear and bounded, then the function is Frechet differentiable. Also, in a finite-dimensional space, all linear maps are bounded.
But then an example is given of a function whose Gateaux derivative is zero (thus linear and bounded), but the function is not Frechet differentiable.
What's going on? Thanks. -erlend
By the way, what are necessary and sufficient (especially necessary) conditions on the partial derivatives to imply differentiability? This would seem to be answered by the above, except for the inconsistency. But also, what I had in mind is probably some weak version of continuity of the partial derivatives.
Further still, while we are at it, it would be nice to know necessary and sufficient (especially necessary) conditions for the mixed partials to commute. :-) --unsigned
- Thanks! That was a mistake. If the Gateaux derivative exists and is a linear operator, that is not enough to guarantee Frechet differentiability, and the appropriate counterexample is indeed in the text.
- Now, to answer your other question, I think that if the partial derivatives are continuous, the function is indeed Frechet differentiable. This is sufficient, but I think not necessary (I don't know an example though). Oleg Alexandrov (talk) 01:00, 8 February 2006 (UTC)
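For the record, a standard function exhibiting the behaviour discussed in the first paragraph (not necessarily the counterexample used in the article) is f(x, y) = x³y/(x⁶ + y²) with f(0,0) = 0: every directional derivative at 0 is zero, so the Gateaux derivative there is the zero map (linear and bounded), yet f is not even continuous at 0, hence not Frechet differentiable. A small numerical sketch:

```python
# Sketch: f has Gateaux derivative 0 at the origin (all directional derivatives
# vanish), but along the curve y = x^3 its value stays at 1/2, so it is not
# continuous at 0 and therefore not Frechet differentiable there.
def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x**3 * y / (x**6 + y**2)

t = 1e-4
for (a, b) in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -3.0)]:
    print((a, b), f(t * a, t * b) / t)   # difference quotients, all close to 0

for x in [1e-1, 1e-2, 1e-3]:
    print(x, f(x, x**3))                 # equals 0.5 along y = x^3
```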
- This actually is necessary and sufficient. The necessary part is actually pretty trivial to prove; that's why the linear operator has to be bounded (thus continuous). After having checked several references, I am now sure that the Frechet derivative is the usual derivative to consider in vector spaces of finite dimension (see Analysis on Manifolds by Munkres, for example). I will expand this article a bit with partial derivatives. Ashigabou 01:49, 9 February 2006 (UTC)
- If you are interested in a proof of the equivalence between continuous partial derivatives and Frechet differentiability, see http://www.probability.net/PRTjacobian.pdf. Ashigabou 03:01, 9 February 2006 (UTC)
- Could you please give a more exact reference? That is an 80-page document. Similarly, in the external links, you shouldn't link to probability.net as it is not clear at all where to go from there to find out something about Frechet differentiability.
- I find it very hard to believe that continuous partial derivatives and Frechet differentiable are equivalent. Indeed, in R, continuous partial derivatives means that the function is continuously differentiable, and Frechet differentiability means that the function is differentiable, and surely these are not equivalent. (By the way, is there an article somewhere with a counterexample?). -- Jitse Niesen (talk) 17:40, 9 February 2006 (UTC)
- Agree with Jitse. Continuity of partial derivatives may not be necessary for the function to be Frechet differentiable (it is sufficient though, at least for finite-dimensional spaces). I think it should be rather easy to tweak one of the existing examples in the article to find a Frechet differentiable function which does not have continuous partials. Oleg Alexandrov (talk) 23:19, 9 February 2006 (UTC)
- Actually, I was wrong, sorry for my mistake... I will try to find a counterexample. I corrected the relevant sections in partial derivative, and added the C1 definition to make the link between partial derivatives and the differential clearer. Ashigabou 10:48, 10 February 2006 (UTC)
- An example which proves I was wrong: f(x, y) = (x² + y²) sin(1/(x² + y²)) (you extend the function by 0 for (x, y) = (0, 0) by continuity; the partial derivatives exist and are continuous at 0, the function does not have a Fréchet derivative at 0). This is just an adaptation of the only example I know of a function which has a derivative everywhere but whose derivative is not continuous everywhere (it would be interesting to see other examples like this; I always wondered about this kind of thing in undergraduate courses, and the examples are generally not trivial, especially for functions continuous everywhere which do not have a derivative at any point). Ashigabou 10:58, 10 February 2006 (UTC)
From what I see, this function does have a Frechet derivative at 0, and it is zero. I mean, just divide the above by the norm of (x, y), which is √(x² + y²), and you will get zero. Oleg Alexandrov (talk) 16:00, 10 February 2006 (UTC)
- You are of course right, and this is why this function works... This is meant to be an example of Frechet derivative at 0 without continuity at 0 of the partial derivative. I wrote the contrary of what I meant, sorry Ashigabou 16:40, 10 February 2006 (UTC)
- So, are you still sure that continuity of partial derivatives implies Frechet differentiability? Can you (or Oleg) give a reference? I think I was taught this in my real analysis course, but you might need continuity of partial derivatives in a neighbourhood. I guess it is not true in infinite dimensions? -- Jitse Niesen (talk) 19:01, 10 February 2006 (UTC)
- Actually, here's a simple example which shows that continuity of the partial derivatives is not sufficient to guarantee the existence of the Frechet derivative: let except at 0, where we define f(0,0) = 1. The partial derivatives exist, are continuous and equal to 0 at the origin, but there is no linear approximation to the function there. The problem in this case of course is that the function is not continuous at 0. I'm not sure if adding the restriction that f be continuous is sufficient to imply the existence of the Frechet derivative or not. C42f 01:45, 22 June 2006 (UTC)
- I meant to say continuity of partial derivatives in a neighbourhood. Oleg Alexandrov (talk) 01:55, 22 June 2006 (UTC)
- Oops, these partial derivatives exist, but are of course not continuous at the origin. My apologies. C42f 02:09, 22 June 2006 (UTC)
- That was me who was not being specific. Something about that should find its way into the article if it is not there already. Oleg Alexandrov (talk) 02:47, 22 June 2006 (UTC)
- I am not sure I understand the definition of partial derivative in infinite dimensions: the relation between one partial derivative and df that I gave in the article still makes sense I guess (any vector space has a basis, right? Unfortunately, in undergraduate courses we mostly deal with finite-dimensional vector spaces...), but for the relation between df and the sum of partial derivatives, you have a problem of convergence. In finite-dimensional spaces, it looks like you don't need continuity in a neighborhood: http://www.probability.net/PRTjacobian.pdf (page 7; in this document partial derivatives are only defined when the domain is a subset of R^n). The partial derivatives are supposed to exist on a neighborhood, though; you have equivalence between being Frechet differentiable of class C1 and having continuous partial derivatives, which is very useful in practical cases. Ashigabou 02:08, 11 February 2006 (UTC)
Yes, in infinite-dimensional spaces the notion of total derivative, as df = Σᵢ (∂f/∂xᵢ) dxᵢ, does not make sense. A partial derivative is then just the derivative in a given direction, see Gateaux derivative. The Frechet derivative applied to a vector is the partial derivative in that direction. Oleg Alexandrov (talk) 02:52, 11 February 2006 (UTC)
- I rewrote the section on partial derivatives and retitled it "Finite dimensions", as that seems to be what it is about. -- Jitse Niesen (talk) 14:28, 11 February 2006 (UTC)
- Should I add the example I gave above of a function which has a Frechet derivative without continuous partial derivatives? Ashigabou 15:36, 11 February 2006 (UTC)
- There is also a small reason why I prefer "partial derivatives" to "finite dimensions" as a title: I looked for the generalization of the derivative and a motivation for the definition of the gradient and the Jacobian, and I had a hard time finding it (I found it only recently by accident). Everybody knows partial derivatives, and this article should be clear for people who want to understand the definitions of the gradient and the Jacobian, and matrix calculus (I am still not satisfied with the general presentation of this last article, because of the lack of motivation). Ashigabou 15:44, 11 February 2006 (UTC)
- I think x² sin(1/x²) is a better example. I changed the "partial derivatives" title because partial derivatives correspond to Gateaux while the Jacobi matrix corresponds to Frechet. If you dislike "finite dimensions" as a title, then how about something mentioning the Jacobi matrix? I don't see how you can use Frechet as a motivation for the definition of the Jacobi matrix; indeed, I'd say it is the other way around: the Jacobi matrix is the easier concept and this is generalized by the Frechet derivative in infinite dimensions. -- Jitse Niesen (talk) 16:19, 11 February 2006 (UTC)
- x² sin(1/x²) is one-dimensional, but of course it also works... There should be an article on 'weird' functions anyway (functions continuous everywhere but without a derivative anywhere, etc.). For the link between Frechet and Jacobi: the definition of the Jacobi matrix is really ad hoc for me. Just defining the matrix of partial derivatives, you then have the problems: what does it mean, why is it useful, how come it is a generalization of the derivative for functions from R to R? For a long time, I didn't understand anything about the Jacobian, the gradient, derivatives of matrices with respect to matrices; I just used formulas I didn't understand. And a few weeks ago, I found the pdf mentioned above, which was enlightening for me. Then, the link was clear: Frechet differentiable functions are continuous, they are linear approximations, you have composition and product rules (proving these with partial derivatives is anything but elegant), etc. The article matrix calculus is a mess right now, in my POV: the product rule comes from nowhere, whereas if we agreed on using the Frechet context, the rules would become really clear (it is simply composition of linear maps, etc.). In the book I am reading right now to get an idea about manifolds (Analysis on Manifolds, Munkres), the Frechet derivative is defined, even if the only spaces considered afterwards are finite-dimensional. Ashigabou 01:47, 12 February 2006 (UTC)
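As a small numerical illustration of the one-dimensional example mentioned above (a sketch only): g(x) = x² sin(1/x²) with g(0) = 0 is differentiable at 0 with g'(0) = 0, while g'(x) = 2x sin(1/x²) - (2/x) cos(1/x²) is unbounded near 0, so the derivative exists everywhere but is not continuous at 0.

```python
# Sketch: difference quotients of g at 0 tend to 0, while g' oscillates wildly
# (it is unbounded) as x -> 0.
import math

def g(x):
    return 0.0 if x == 0.0 else x**2 * math.sin(1.0 / x**2)

def g_prime(x):   # the derivative away from 0
    return 2 * x * math.sin(1.0 / x**2) - (2.0 / x) * math.cos(1.0 / x**2)

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x, g(x) / x, g_prime(x))   # quotient -> 0; g' oscillates, unbounded
```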
I would like to restate a question that was already asked earlier in this section (but not answered). Do the partial derivatives still commute in the infinite-dimensional case? Or in other words (I think), if you identify a second order Frechet derivative with a bilinear map, is that map always symmetric? marwie 23 May 2006
- Yes. They commute. Silly rabbit 11:54, 22 June 2006 (UTC)
- More precisely, the k-th order "partial derivatives" (regarded as directional derivatives) of a Ck function are symmetric. Silly rabbit 17:36, 22 June 2006 (UTC)
A confusing non sequitur
- FTA: If all partial derivatives of f exist and are continuous, then f is Fréchet differentiable. The converse is not true: a function may be Fréchet differentiable and yet fail to have continuous partial derivatives.
Well, of course not! You haven't imposed any requirements on the continuity of the Fréchet derivative. If you further add the condition that the function is Fréchet C1, then the partial derivatives are certainly continuous. Perhaps this resolves some of the confusion in the preceding discussion. Anyone reading this statement without a thorough knowledge of total differentiability versus partial differentiability would certainly walk away with a rather skewed view of the true picture. Silly rabbit 18:37, 22 June 2006 (UTC)
- Well, I don't agree with your point at all, but I kind of gave up trying to make a point about the link between the Frechet derivative, matrix calculus and partial derivatives on Wikipedia, since most editors don't agree with me... That said, I don't understand what is confusing with the remark, though: it just gives one condition for which partial and total derivatives coincide in finite dimensions. My wording may not be the best, since I am neither a mathematician nor a native English speaker, but the goal of the point you are referring to is to show that you can interpret partial derivatives as the coordinates of the linear approximation from the Frechet definition when the Frechet derivative exists, and that it gives an intuitive interpretation of the Jacobian, gradient, etc... when it makes sense. You can check the Munkres and the website http://www.probability.net (Jacobi formula) for a better explanation of my point (references given in this article). Ashigabou 13:00, 6 July 2006 (UTC)
- But the statement from the article does not illustrate this point. It is correct as stated, but to a casual reader, it seems to suggest that Fréchet differentiability is a weaker condition than partial differentiability. So I would say it should be fleshed out a bit. For instance,
- If all partial derivatives of f exist and are continuous at a point, then f is Fréchet differentiable at the point.
- However, if the partial derivatives of f exist at a point, but fail to be continuous, then f may fail to be Fréchet differentiable. (Note: This seems to illustrate your point better than the statement from the article.)
- If the Fréchet derivative exists at a point (but is not necessarily continuous in any neighborhood of the point), then all partial derivatives exist at the point (but may fail to be continuous).
- Furthermore, the Fréchet derivative exists and is continuous in an open set if and only if the partial derivatives exist and are continuous in that set.
- All of these statements apply, of course, only to the case of finite dimensions. Silly rabbit 13:45, 6 July 2006 (UTC)
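As a concrete illustration of the second point in the list above (a standard textbook example, added here as a sketch): h(x, y) = xy/(x² + y²) with h(0,0) = 0 has both partial derivatives at the origin (both zero, since h vanishes on the axes), but it is not continuous at the origin, so it cannot be Fréchet differentiable there.

```python
# Sketch: partial derivatives of h exist at the origin, yet h is discontinuous
# there (it equals 1/2 along the diagonal), so no Fréchet derivative exists.
def h(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x * y / (x**2 + y**2)

t = 1e-6
print((h(t, 0.0) - h(0.0, 0.0)) / t,    # partial derivative in x at 0: 0.0
      (h(0.0, t) - h(0.0, 0.0)) / t)    # partial derivative in y at 0: 0.0

for s in [1e-1, 1e-3, 1e-6]:
    print(h(s, s))                       # 0.5 along the diagonal, not -> h(0,0)
```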
Gâteaux versus Fréchet
I think we need to come to some closure on the issue of Gâteaux versus Fréchet differentiability. I vaguely remember that, under some relatively weak auxiliary assumptions, Gâteaux differentiability implies Fréchet differentiability. Anyway, here is a quick and dirty, off-the-cuff, exercise-for-the-reader sort of theorem:
- Theorem. Let f : U → Y be a mapping of a convex open subset U of a Banach space X into a Banach space Y. Suppose that f is C2 in the Gâteaux sense. Then f is (at least) C1 in the Fréchet sense.
It's definitely possible to do better than this. Silly rabbit 03:35, 23 June 2006 (UTC)
- I have a text which states that a function has a Gâteaux derivative which is linear and continuous iff it is Fréchet differentiable. There was some disagreement about this, and it didn't stay around. I've been meaning to sit down and figure out exactly what the theorem is and when it holds, and restore it.
(Unsigned comment from User:Lethe)
- Sounds about right to me (the theorem I stated was just off the top of my head). On a Banach space, Gâteaux C1 (on an open set) already implies linearity of the Gâteaux derivative, so you don't need the additional requirement. Silly rabbit 12:40, 23 June 2006 (UTC)
- The theorem was removed with the third example in the Relation to the Gâteaux derivative cited as a counterexample. That example does indeed show that a function can have a continuous linear Gâteaux derivative and yet fail to have a Fréchet derivative. And yet I did read that theorem in a book. I'm going to try to remember what book I read the theorem from. At the moment, if I had to guess, I would say the problem is that the book assumes differentiability in the whole domain, while the article is talking about differentiability at a single point. So in the mean time, am I supposed to be proving your theorem for homework? -lethe talk + 15:05, 23 June 2006 (UTC)
- Neither of the two examples in the Relation... section is Gâteaux C1 in any neighborhood of 0. The theorem above applies, of course, when f is continuously differentiable in the neighborhood U over which it is defined. I thought that was sort of implicit in the statement. As for homework, no. I will try to post a proof, but I've been working on other things at the moment. Silly rabbit 15:56, 23 June 2006 (UTC)
- Aha...! You said that the theorem was that a function which is Gateaux differentiable, with a continuous and linear derivative, is Frechet differentiable. I think maybe the theorem you're digging for is that a continuous function which is Gateaux differentiable, with a linear derivative, is Frechet differentiable. I'm not sure if this putative theorem is an actual theorem. One or both requirements of continuity and linear differentiability should probably hold in a domain. I'm also familiar with a theorem of this sort, due to A. E. Taylor, but I believe his version only holds in the complex case. Silly rabbit 19:44, 23 June 2006 (UTC)
- I'm pretty sure there were both those criteria. Maybe then the theorem was if the function has a linear and bounded Gâteaux derivative then it is continuously Fréchet differentiable? -lethe talk + 20:04, 23 June 2006 (UTC)
- I'm not disagreeing with you. I'm just aware of a theorem in which the continuity requirement applies to the function itself, not its derivative. For all I know, either version is true... or false for that matter. ;-) Silly rabbit 20:11, 23 June 2006 (UTC)
I'm starting the proof here:
- Lemma 1. Let X and Y be Banach spaces, and U ⊂ X an open set. Let Q : U × X × X → Y be a mapping which is continuous in the product topology and linear in its last two arguments. Then for every x0 ∈ U, there is an ε > 0 and C > 0 such that
- ||Q(x, h, k)|| ≤ C ||h|| ||k|| whenever ||x - x0|| < ε,
- for all h, k ∈ X.
- Proof. By continuity of Q, there is a neighborhood of x0 on which ||Q(x,h,k)|| ≤ 1 provided ||h|| ≤ δ and ||k|| ≤ δ (for some δ). If h and k are any given nonzero elements of X, let h* = δ h/||h|| and k* = δ k/||k||. Then h* and k* are always equal to δ in norm. So ||Q(x,h*,k*)|| ≤ 1. By bilinearity,
- ||Q(x, h, k)|| ≤ ||h|| ||k|| / δ²
- for all h and k in X, and all x within a neighborhood of x0.
- Lemma 2. Let f : U → Y be C2 in the Gâteaux sense, where U is a convex open set in a Banach space, and Y is a Banach space. Then, for x and x + h in U,
- f(x + h) - f(x) - Df(x)h = ∫₀¹ (1 - t) D2f(x + th)(h⊗h) dt.
- Proof. This is essentially proven in the standard way, using the fundamental theorem of calculus (which holds for Gâteaux continuously differentiable functions).
- Proof of theorem. Let x0 ∈ U. Suppose that the segment [x0,x0 + h] lies completely within U (note that U is convex). It suffices to show that there exists ε > 0 and C > 0 such that
- ||f(x0 + h) - f(x0) - Df(x0)h|| ≤ C ||h||² whenever ||h|| ≤ ε.
- By Lemma 2,
- f(x0 + h) - f(x0) - Df(x0)h = ∫₀¹ (1 - t) D2f(x0 + th)(h⊗h) dt.   (1)
- For fixed t, let x = x0 + t h. Let Q(x, h, k) = D2f(x)(h⊗k). Choose ε and C so that Q satisfies the estimate guaranteed by Lemma 1. Then if ||h|| ≤ ε,
- ||∫₀¹ (1 - t) D2f(x0 + th)(h⊗h) dt|| ≤ ∫₀¹ (1 - t) C ||h||² dt = (C/2) ||h||².   (2)
- Comparing (2) with (1) gives the desired result. Silly rabbit 02:27, 24 June 2006 (UTC)
I doubt this proof belongs in the article though. Oleg Alexandrov (talk) 19:04, 24 June 2006 (UTC)
- Certainly not. It does illustrate that there are relatively weak conditions ensuring the equality of the two sorts of derivative. There ought to be some sort of theorem(s) in the article. We might consider restoring Lethe's theorem. Cheers, Silly rabbit 22:56, 24 June 2006 (UTC)
Inverse of Frechet derivative?
Hi, my question is: is there an analogue of 'Frechet integration' as an inverse operator to the Frechet derivative? It feels a bit strange to me that there is no analogue of Gateaux or Frechet integrals. --Karl-H 09:36, 21 January 2007 (UTC)
- That's a hard question to answer. Things are very simple on the real line, where the derivative of a function f : R → R is another function f' : R → R, so it is easy to conceptualize and define the inverse operation, which is indefinite integration.
- In the case of the Frechet derivative, even in the case f : R^n → R^m, the derivative of f is a map Df : R^n → L(R^n, R^m), so the question of an inverse is much more complicated, and the inverse, if it exists, won't look as simple as just an integral.
- In short, the world beyond the real line looks too different to be able to extend the analogy of the integral easily. Oleg Alexandrov (talk) 16:18, 21 January 2007 (UTC)
New section with the idea?
What do you think of my section User:Saippuakauppias/Fréchet? (In fact, I saw this article only after having written mine.) I think we could copy the explanation part into this article ("Idea"). What do you think? - Please post your comments to the article's talk page. Thanks for your comments! --Saippuakauppias ⇄ 19:27, 28 April 2008 (UTC)
- Maybe one could add half a sentence under "see also", saying that the gradient is the Frechet derivative in the case V=R^n and W=R. ... Hm, thinking about it, maybe it would really help someone who knows the gradient if he understood the difference. ... But I don't think it really gives a complete section, rather something like two sentences. -- JanCK (talk) 07:52, 29 April 2008 (UTC)