Talk:Convolution theorem



Where do these 2pi's come from? As far as I know, it's just F(f * g) = F(f)F(g), F(fg) = F(f) * F(g)

I believe the 2pi's are applicable depending on which school of thought you come from. Mathematicians use Fourier transforms with a 'frequency' term k. Engineers prefer to use the symbol omega for angular frequency, but call it k anyway. I'm not entirely sure about this (which is why it's in discussion and not on the page) and I don't feel like looking it up. But check out the wikipedia page on Fourier transforms.


The 2pis probably come from which definition of the Fourier transform you choose to use. Wikipedia likes to put a 1/sqrt(2pi) in front of both the transform and the inverse transform, whereas others choose to put a 1/2pi in front of the inverse transform alone. This is probably the same thing as the person above mentioned, but I'm not sure. I believe that the latter is the form that leads to a convolution theorem with no 2pis.
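To make the dependence on the convention explicit, here is a sketch using the angular-frequency variable \omega. With the non-unitary convention

F(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt, \qquad f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega t}\,d\omega,

the theorem reads \mathcal{F}\lbrace f*g\rbrace = F(\omega)\,G(\omega) and \mathcal{F}\lbrace fg\rbrace = \frac{1}{2\pi}(F*G)(\omega), so a 2\pi only appears in the multiplication-to-convolution direction. With the unitary convention that puts \frac{1}{\sqrt{2\pi}} in front of both the transform and its inverse, the same calculation gives \mathcal{F}\lbrace f*g\rbrace = \sqrt{2\pi}\,F(\omega)\,G(\omega) and \mathcal{F}\lbrace fg\rbrace = \frac{1}{\sqrt{2\pi}}(F*G)(\omega), which is presumably where the 2pi's in the article come from.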

Is there any chance of getting a derivation of the theorem on here so that we can see what's happening?

--Zapateria 14:43, 7 May 2006 (UTC)

Most authors try to be consistent in how they choose their constants. So if you choose to use a 1/sqrt(2pi) factor in the definitions of the Fourier transform and its inverse, then people often like to put the same factor in front of the definition of convolution as well, to avoid the constant cropping up in this theorem. 128.135.197.2 18:21, 17 July 2006 (UTC)
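For example (a sketch with the unitary \frac{1}{\sqrt{2\pi}} convention mentioned above): if the convolution is defined with the matching factor,

(f*g)(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau,

then the stray \sqrt{2\pi} cancels and the theorem again reads \mathcal{F}\lbrace f*g\rbrace = F(\omega)\,G(\omega) with no constant in front.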

One possible derivation starts from an exponential Fourier series of an arbitrary periodic function with period T, and then by a limiting process finds the formula for the coefficients as T \rightarrow \infty and the function becomes aperiodic. The Fourier series expression then becomes the inverse transform, and the coefficients become the Fourier integral. If you use t, \,\omega as the "variables" then the coefficient in front of the inverse should turn out as \frac{1}{2\pi} and the coefficient in front of the direct transform should turn out as 1. This makes the convolution theorem turn out to be
\mathcal{F}\lbrace (f*g)(t)\rbrace = F(\omega)\,G(\omega).
This heuristic derivation would be more suited for the main article on the Fourier transform than for the article on the Convolution theorem. DivisionByZer0 19:29, 14 June 2007 (UTC)
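For what it's worth, the theorem itself also has a short direct derivation (a sketch with the same convention, i.e. coefficient 1 on the direct transform, and ignoring questions of integrability): swapping the order of integration (Fubini) and substituting u = t - \tau in the inner integral gives

\mathcal{F}\lbrace (f*g)(t)\rbrace = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau\right) e^{-i\omega t}\,dt = \int_{-\infty}^{\infty} f(\tau)\,e^{-i\omega\tau}\left(\int_{-\infty}^{\infty} g(u)\,e^{-i\omega u}\,du\right) d\tau = F(\omega)\,G(\omega).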

Integration over \mathbb{R}^n?

Why are the integrations shown here as being carried out over \mathbb{R}^n? I see no indication of more than one dimension; so, shouldn't they just be over \mathbb{R}?

If I recall correctly, the multidimensional FT should look something like F(\mathbf{k}) = \int_{\mathbb{R}^n}f(\mathbf{x})\,e^{-i(\mathbf{x}\cdot\mathbf{k})}\,d\mathbf{x}. DivisionByZer0 20:00, 14 June 2007 (UTC)
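For reference, with that definition the theorem carries over verbatim to n dimensions (a sketch, same non-unitary convention): if

(f*g)(\mathbf{x}) = \int_{\mathbb{R}^n} f(\mathbf{y})\,g(\mathbf{x}-\mathbf{y})\,d\mathbf{y},

then \mathcal{F}\lbrace f*g\rbrace(\mathbf{k}) = F(\mathbf{k})\,G(\mathbf{k}), so the integrals over \mathbb{R}^n in the article include the one-dimensional case as n = 1.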

My mistake: I now see the inner products in the argument of the exponentials.

[edit] "Diagonalization"?

Would it be accurate to say that, in a way, the convolution theorem says that convolution is a diagonal operation in a Fourier basis? —Ben FrantzDale (talk) 22:15, 17 January 2008 (UTC)

Whilst I know what you're getting at, I'm not sure that "diagonal operation" is a commonly-used term (at least in this sense). I could well be wrong though. Oli Filth(talk) 22:24, 17 January 2008 (UTC)
I'm glad you know what I'm getting at. I didn't think that was conventional lingo (hence the quotes). If there is a word for a generalization of the idea of a diagonal matrix, I would like to know it. —Ben FrantzDale (talk) 01:25, 18 January 2008 (UTC)
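One way to make the "diagonal" intuition precise in the finite-dimensional setting (a sketch stated for the discrete Fourier transform rather than the continuous one): circular convolution with a fixed vector g is multiplication by the circulant matrix C_g, and every circulant matrix is diagonalized by the DFT matrix W, with the DFT of g on the diagonal:

C_g = W^{-1}\operatorname{diag}(Wg)\,W, \qquad W_{jk} = e^{-2\pi i jk/N}.

So in the Fourier basis the convolution operator literally is a diagonal matrix. The continuous analogue is that the complex exponentials e^{i\omega t} are eigenfunctions of every convolution operator, with eigenvalue G(\omega).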