Talk:Sobolev space
Proofreading
I'm checking the article. I was reading the "Examples" section, and it was defining H^k(R) but using Fourier series, which isn't quite right. I hadn't noticed that the previous example involved periodic functions, so I changed it to Fourier transforms. I was going to change it back, but I think this way it gives two different examples (periodic functions and functions of R) so it's better. Loisel 22:53, 6 Sep 2004 (UTC)
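The two parallel definitions being discussed, on the circle via Fourier series and on the line via the Fourier transform, can be written out (standard formulations, my notation):

```latex
% On the unit circle, via Fourier series:
\|f\|_{H^k(\mathbb{T})}^2
  = \sum_{n \in \mathbb{Z}} (1 + n^2)^{k} \, |\widehat{f}(n)|^2 ,
% and on the real line, via the Fourier transform:
\|f\|_{H^k(\mathbb{R})}^2
  = \int_{-\infty}^{\infty} (1 + \xi^2)^{k} \, |\widehat{f}(\xi)|^2 \, d\xi .
```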
- 1) After Charles Matthews's remark I did a little "survey" and realized that lots of non-analysts prefer Fourier series to the Fourier transform. So I started the technical text with "We start by introducing Sobolev spaces in the simplest setting, the one-dimensional case on the unit circle." Give it a second thought.
- 2) I also noticed that you added another piece of text at the bottom. You quote two theorems: the second is extremely interesting, and I would love to have some intuition. The first seems formal: why is it interesting/important? Is it a prerequisite of the second? I assume "half-integer" includes the integers, right? Finally, wouldn't it be better as a subsection of the "Extension operator" section, rather than having the traces section in the middle?
- 3) Oh, and what is the "interpolation inequality" that "still holds" in the "extension operators" section?
- 4) One last question, while I'm in the mood: do complex interpolation and differentiation of fractional order really give the same spaces even for p different from 2? Gadykozma 03:33, 7 Sep 2004 (UTC)
I numbered your paragraphs for easy reference.
1) Okay, you can change it to Fourier series.
2) The first theorem isn't necessarily obvious. Once you have that the trace map is continuous, and the definition of H^s_0 as the closure of C^∞_c in H^s, you can write an arbitrary element of H^s_0 as the limit of compactly supported functions. Continuity lets you switch limits, getting that the trace is zero. However, the converse isn't necessarily easy to show, at least when s isn't an integer. You have to show that if Pu=0, then u is in H^s_0; in other words, that it is the limit in H^s of functions in C^∞_c. I'm in the middle of moving to Geneva and all my books are away and I don't remember how to prove it, but Lions & Magenes has that theorem, as well as the second one. Regarding the second theorem, I have no intuition; I never looked carefully enough at the proof (which is also in Lions & Magenes); I guess the fact that e is discontinuous when s is a half-integer is the curious bit. By half-integer, I mean n+0.5, with n an integer.
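The easy direction described above can be sketched in two lines (my notation, with P the trace map as in the article):

```latex
% If $u \in H^s_0$, pick $u_n \in C^\infty_c$ with
u_n \longrightarrow u \quad \text{in } H^s .
% Each $u_n$ vanishes near the boundary, so $P u_n = 0$; continuity of
% the trace map $P$ on $H^s$ then gives
P u = \lim_{n \to \infty} P u_n = 0 .
```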
3) I thought I had written it, but it's

‖Lu‖_{[A,B]_s} ≤ C ‖L‖_{X→A}^{1−s} ‖L‖_{Y→B}^{s} ‖u‖_{[X,Y]_s}.

Here, L is a linear operator continuous from X+Y to A+B, where {X,Y} and {A,B} are interpolation pairs, such that L:X→A and L:Y→B are continuous. C is independent of L and s. This inequality is crucial for proving, for instance, that the trace map is continuous on H^s, starting from the fact that it is continuous on the H^m spaces. It's also often a tight estimate of the H^s norm, which is otherwise hard to compute. I've tried computing the H^s([0,1]) norm of a function: first I found an extension operator, then I calculated the extension of my function, then I tried to compute its Fourier transform and lastly the H^s norm. It didn't work. But the interpolation inequality turned out to be good enough for me.
4) I think so, that is why complex interpolation is used to give the W^{s,p} spaces. To make sure, I'd check in Adams & Fournier, but it's in a box somewhere. There's also real interpolation, which is used for obtaining the trace spaces. The trace spaces are contained in W^{s-1/p,p} or something, but they are not all of W^{s-1/p,p}. To obtain the exact trace spaces, you need real interpolation. In the special case of H^s spaces, it just happens that the trace spaces are exactly H^{s-1/2}.
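For reference, the H^s special case mentioned at the end can be stated precisely (standard formulation, e.g. in Lions & Magenes):

```latex
% For $s > 1/2$, the trace map
P \colon H^s(\mathbb{R}^n) \to H^{s-1/2}(\mathbb{R}^{n-1}),
\qquad (Pu)(x') = u(x', 0),
% is continuous and surjective, so the trace space of $H^s$
% is exactly $H^{s-1/2}$.
```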
Sorry for switching the s and p again.
Loisel 10:04, 7 Sep 2004 (UTC)
- Ah... September. I just moved to Princeton myself and all my books are in boxes ;-) Would you like me to do those various corrections and clarifications that we discussed now, or would you prefer to do them yourself? Gadykozma 13:57, 7 Sep 2004 (UTC)
- Oh, and two other things: why is there a constant in the interpolation inequality? At least in Riesz-Thorin this constant is 1. And is it possible to do complex interpolation over p, like in Riesz-Thorin, or only over k? Gadykozma 04:12, 8 Sep 2004 (UTC)
I'll let you do the changes. If I do it, I'll wait to get my books back first. I believe that the complex interpolation inequality has a constant, unlike the Riesz-Thorin theorem, but I'd double-check. Loisel 18:55, 10 Sep 2004 (UTC)
- OK, I'm done, your ball. Gadykozma 23:48, 16 Sep 2004 (UTC)
L^p versus L_p
Mat, while I agree that normally the notation would be L^p, the notation L_p is also acceptable. In the case of the article about Sobolev spaces, we must adopt the less standard notation due to consistency issues: using L^p and L_p in the same article is confusing. I don't have time to revert your change right now (and I want to make more massive changes to this article anyway), but I will at some point, unless you convince me otherwise.
--
The notation I use is not generally W^k_p but usually W^{k,p}. I wrote a large chunk with the W^k_p notation because the initial stub used that notation, but I must say I prefer to have both p and k in the superscript. Loisel 06:08, 19 Aug 2004 (UTC)
If nobody minds, we can switch to this notation. It will also save us many LaTeX formulas, which don't really look very good due to font size problems. Gadykozma 06:25, 19 Aug 2004 (UTC)
Folks, I am strongly in favor of the W^{k,p} notation, since it avoids mixing it up with H^k_0, sometimes even written W^{k,p}_0, the spaces with zero traces. -- 84.177.140.127 00:43, 1 November 2005 (UTC)
Was it really necessary to remove the unit circle example?
Charles Matthews 20:00, 18 Aug 2004 (UTC)
- Well, em, no, of course, but since the whole article is about the line, I thought it would be more clear if the examples were on the line too. So I changed sum to integral, basically. Or did I miss some subtlety? Gadykozma 22:04, 18 Aug 2004 (UTC)
I just think it's harder to think about Fourier transforms and integrals. I'm not a 'professional' when it comes to analysis - my view might be shared by others.
Charles Matthews 08:34, 19 Aug 2004 (UTC)
- I am actually a "circle person" myself (out of perhaps 8 papers I have in analysis, only one is on the line). Maybe it's worth adding a paragraph there about Sobolev spaces on the circle? What do you think? Should such a paragraph appear before or after the discussion of R? Gadykozma 13:38, 19 Aug 2004 (UTC)
I cannot recommend writing a definition by example. Actually, what do we gain? Integer order Sobolev spaces can be defined without reference to Fourier transform or series at all. --- 84.177.140.127 00:43, 1 November 2005 (UTC)
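The Fourier-free definition alluded to here is the standard one via weak derivatives (my notation):

```latex
% Integer-order Sobolev space, no Fourier analysis needed:
W^{k,p}(\Omega) = \{\, u \in L^p(\Omega) :
    D^\alpha u \in L^p(\Omega) \text{ for all } |\alpha| \le k \,\},
% with the norm
\|u\|_{W^{k,p}} = \Big( \sum_{|\alpha| \le k}
    \|D^\alpha u\|_{L^p}^p \Big)^{1/p},
% where $D^\alpha u$ denotes the weak (distributional) derivative.
```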
Fractional Calculus
Hi Sobolev guys. Could you take an interest in fractional calculus - see the talk page?
Charles Matthews 21:42, 24 Sep 2004 (UTC)
I just did, and it looks right. I think the most general definition is the one involving spectral calculus (it applies to subdomains of R^n, where the Fourier transform doesn't work). It would be good if it were expanded a bit with some examples (but I'm not that good at spectral calculus). I'll think about it.
Loisel 17:04, 5 Oct 2004 (UTC)
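A sketch of the spectral-calculus definition mentioned above, for a bounded domain with, say, the Dirichlet Laplacian (my notation; one of several equivalent normalizations):

```latex
% Let $(\lambda_n, \varphi_n)$ be the Dirichlet eigenvalues and
% eigenfunctions of $-\Delta$ on a bounded domain $\Omega$.
% Fractional powers are defined spectrally:
(-\Delta)^{s/2} u = \sum_{n} \lambda_n^{s/2}
    \langle u, \varphi_n \rangle \, \varphi_n ,
% and the corresponding Sobolev norm is
\|u\|_{H^s}^2 \simeq \sum_{n} (1 + \lambda_n)^{s}
    |\langle u, \varphi_n \rangle|^2 .
```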
Actually, there are a few typos, I'll fix them later.
Loisel 17:22, 5 Oct 2004 (UTC)
Traces then extensions
I've reorganized for increased logical structure. Traces are required to state some of the theorems about extension by zero. In a previous version, the order was extension operators, then traces, then extension by zero. This made sense because extension operators can be used to define H^s (although we use complex interpolation first in this article). The definition using extension operators is more tangible (at least to me), so it makes sense to do it first. Traces must precede extension by zero because some of the theorems about extension by zero require traces to be understood. (In particular, we need to have H^s_0, and one crucial theorem about H^s_0 is that it is the kernel of the trace operator.)
Someone moved extension by zero under extension operators, which makes sense (although I think extension by zero is a more advanced subject) but if we want to keep that organization, traces have to precede extension operators, which is how I made it now. The disadvantage is that the reader must wait some more before reading about the "more natural" definition of H^s involving the extension operators.
Loisel 17:22, 5 Oct 2004 (UTC)
- I was the one who put "extension by zero" into the "extension" part. I missed the point that it depends on the trace part. If you want, you can return it to the original order (extension, then trace, then extension by zero). Actually, I probably prefer that order over the current one.
- Thanks. Gadykozma 01:02, 6 Oct 2004 (UTC)
Text removed from introduction
This has been in the article a long time, so it occurred to me after I deleted it that I should copy it here. The problem is it is completely ahistorical. Sobolev spaces were invented to solve PDEs, and that is still their major application today. Of course things like stability and error estimates were and are important, but that is not the same thing as 'the butterfly effect'. Brian Tvedt 02:24, 11 January 2006 (UTC)
Many physical problems, such as weather prediction or microwave oven design, are modelled by partial differential equations. In such problems, there are some data (such as today's weather, or the shape and water distribution of the food in the microwave oven) and there is a prediction (such as tomorrow's weather, or the time required to cook the food in the microwave). In some cases, it is difficult to do an accurate simulation. The butterfly effect makes long-term weather predictions extremely difficult. Scientists need to be able to estimate the accuracy of their simulations. This can be turned into a mathematical question of sorts:
- If the initial data and/or the model are slightly wrong, how wrong can my prediction be?
By turning to this question, mathematicians eventually gave precise descriptions of "slightly wrong data" and "wrong prediction". In so doing, it became apparent that the natural space of C^1 functions was inadequate. As mathematicians found what the meaning of "slightly wrong data" and "wrong prediction" ought to be, it became obvious that sometimes the "predictions" would not be C^1. This required a careful investigation of the meaning of a differential equation when the solution is not even differentiable. The Sobolev spaces are the modern replacement for the space C^1 of solutions of partial differential equations. In these spaces, we can estimate the size of the butterfly effect or, if it cannot be estimated, we can often prove that the butterfly effect is too strong to be controlled.