Talk:Differentiation under the integral sign


WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: Start-Class, Mid-priority. Field: Analysis


Beginning Quote

Is there really a point to having the Feynman quote at the top of the page? If this were not a reference work but instead a more literary medium (like a math book), it would be acceptable, but as it stands, I believe it is horribly out of place.


Consider the function defined by f(x,t) = 0 if t = 0 and by

 f(x,t) = \frac{1}{\sqrt{t}} \sin\frac{x}{\sqrt{t}}

if 0 < t \le 1. This is integrable with respect to t for t \in [0,1], so the function

 F(x) = \int_0^1 f(x,t) dt

is at least defined. But \frac{\partial f}{\partial x}(x,t) = \frac{1}{t} \cos\frac{x}{\sqrt{t}} isn't integrable (at x = 0, for example, it equals \frac{1}{t}), so one of the integrals appearing in the result is undefined. And in fact F is not differentiable.


So at the least we must add the hypothesis that F be differentiable. But I doubt that this will be enough to make the result true. If we allow t to vary over an infinite interval then there is the following counterexample. Let

f(x,t) = x^3 \exp(-x^2 t)

for all x and for  t \in [0,\infty). Then

 F(x) = \int_0^\infty f(x,t) dt = x,

so F is differentiable, with F'(0) = 1. But

 \int_0^\infty  \frac{\partial f}{\partial x}(0,t)  dt = \int_0^\infty 0 dt = 0

so the derivative of F at 0 is not given by differentiating under the integral sign.

88.105.188.214 05:23, 1 December 2006 (UTC)
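
For readers who want the intermediate step in the second counterexample above: for x \neq 0,

 \int_0^\infty x^3 e^{-x^2 t}\, dt = x^3 \left[ -\frac{e^{-x^2 t}}{x^2} \right]_{t=0}^{\infty} = \frac{x^3}{x^2} = x,

and at x = 0 the integrand is identically zero, so F(0) = 0. Hence F(x) = x everywhere, as claimed, while the integral of \frac{\partial f}{\partial x}(0,t) vanishes as computed above.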

need some conditions

I agree with the previous anonymous comment that we need some conditions other than mere differentiability. Here's a really, really simple example: let f \equiv 1; then \frac{d}{dt} \int_{-\infty}^{\infty} f dx = \frac{d}{dt} \infty = ??

while

\int_{-\infty}^{\infty}\frac{d}{dt}f dx = \int_{-\infty}^{\infty} 0 \,dx = 0

so we need the integral to converge. Anyone know the exact necessary conditions? --Lavaka 20:25, 9 May 2007 (UTC)

http://planetmath.org/encyclopedia/DifferentiationUnderIntegralSign.html lists several versions of the theorem, including ones where the compact interval $[a(x),b(x)]$ is replaced by a measure space $\Omega$, of which $(-\infty , \infty )$ is an example. However, at least Theorem 2 and Theorem 3 require, in your example, that \int_{-\infty}^{\infty} f dx converges, so I don't know whether you are happy with these theorems.
Or one could consider a and b as further parameters. In that case one has to decide whether \int_a^b f dx converges to \int_{-\infty}^\infty f dx, separately in a and b, and in C^1 locally around t, hasn't one? -- JanCK (talk) 14:54, 18 March 2008 (UTC)
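
For reference, the sufficient condition in the measure-theoretic versions is a dominated-convergence statement; paraphrasing from memory (so the exact phrasing may differ from the PlanetMath theorems cited above): if t \mapsto f(x,t) is integrable on \Omega for every x near x_0, \frac{\partial f}{\partial x}(x,t) exists for those x and almost every t, and there is an integrable g on \Omega with

 \left|\frac{\partial f}{\partial x}(x,t)\right| \le g(t)

for all such x and almost every t, then

 \frac{d}{dx}\int_\Omega f(x,t)\,dt = \int_\Omega \frac{\partial f}{\partial x}(x,t)\,dt

at x_0. In the f \equiv 1 example the integrability of f itself already fails; in the x^3 \exp(-x^2 t) counterexample above, no integrable dominating g exists on any neighbourhood of x = 0, which is why the conclusion fails there.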

mean value theorem error?

I think this might be an error. Shouldn't this text:

actually be:

ssepp(talk) 14:10, 1 October 2007 (UTC)

I made the change. ssepp(talk) 18:31, 4 October 2007 (UTC)

Simple example earlier in the article

I would suggest that the example of integration with limits not depending on x (i.e. fixed real limits) be treated at the beginning. The present theorem is too general, I believe, for the opening section. Perhaps the article could describe the different formulations in increasing generality? Ulner 20:28, 10 November 2007 (UTC)

Lightning Bolt or Ninjas Missing for the Integral Step in Example 1?

I have a degree in physics with a minor in math, and I'm trying to learn this technique. I'm trying to do Example 1, and I follow it up to the integral sign, where it suddenly integrates in one step. I'm not seeing how that step follows; sorry, maybe I'm just slow, but I really don't get how that step actually worked. Did the person use a table? Mathematica? Is there some technique being used that I'm not seeing? Is there a lightning bolt or a ninja sneaking in and doing the work to finish the integral? —Preceding unsigned comment added by 24.117.47.160 (talk) 22:43, 26 December 2007 (UTC)


If you are talking about what I think you are talking about, you are stuck on the part where the differentiation and integration take place on basically one line. I can indeed confirm that the derivative of


\frac{\alpha}{x^2+\alpha^2}

is


\frac{x^2 - \alpha^2}{(x^2 + \alpha^2)^2}.

The differentiation can be done with the usual product rule and some simplification. The integral of it can be done by the method of partial fractions, namely that 
\frac{x^2 - \alpha^2}{(x^2 + \alpha^2)^2} = \frac{1}{\alpha^2 + x^2} - \frac{2\alpha^2}{(x^2+\alpha^2)^2},
which can be integrated with the substitution u = x / α. Then make the additional substitution of an arctangent function... but actually now that I do the integration, I have run into the same problem! I have no idea how the integration is done in one step! When I do it, I get several arctangent functions and the polynomial (albeit with an absolute value in the denominator...) that the article states. Perhaps someone else can shed light on this subject. Dchristle (talk) 23:46, 22 February 2008 (UTC)
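
For what it's worth, the integrand above has a one-step antiderivative that can be checked simply by differentiating, which may be the "lightning bolt" used in the article:

 \int \frac{x^2 - \alpha^2}{(x^2 + \alpha^2)^2}\, dx = -\frac{x}{x^2 + \alpha^2} + C,

since \frac{d}{dx}\left(-\frac{x}{x^2+\alpha^2}\right) = -\frac{(x^2+\alpha^2) - 2x^2}{(x^2+\alpha^2)^2} = \frac{x^2 - \alpha^2}{(x^2+\alpha^2)^2}. The arctangent terms produced by the partial-fraction route cancel, leaving only this rational term.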

Lead Definition

I am not familiar with the subject, but I was trying to make a careful read of the definition and I don't understand the constant use of x_0\leq x\leq x_1. I suspect that is supposed to be a(x)\leq x \leq b(x)? Is that right? 66.216.172.3 (talk) 15:22, 25 March 2008 (UTC)