Wikipedia:Reference desk archive/Mathematics/2006 August 21



Transforms

I'm having trouble understanding transforms. Take the Fourier Transform as an example. I understand the Fourier series, that a periodic signal can be constructed or approximated from a set of sinusoids at different amplitudes and frequencies. I also understand that the frequency components of signals can be determined through the use of a Fourier Transform. But how does this work?

Say I have a real-valued time-domain function, s(t), with a range between -1 and 1. I could plug s(t) into the Fourier transform, but what would I receive? A complex number? How would I interpret or handle the data so I could determine the frequency components of the signal?

What about Laplace transforms? They look similar – how are they related?

82.46.89.15 01:49, 21 August 2006 (UTC)

The answer to this really depends on your background, and how rigorous an answer you need. You could think of Laplace transforms as giving something like a bunch of damped sinusoids, whereas Fourier transforms give you pure sinusoids. There is no "one way" of describing how these look, because there are different physical interpretations you can assign to these transformations. But here's an attempt...
For me, I interpret s(t) as a windowed function: the function s(t) times a rectangular pulse that goes from -1 to 1. The Fourier transform of this would be the convolution of S(ω) and a sinc function. As for what this would look like – for simplicity's sake, let's assume s(t) is a constant. The Fourier transform then tells you where most of the frequency components are located; in this case, within a prominent hump centered around the origin.
I've rarely seen the Laplace transform of a function actually plotted anywhere. Very often, the function you take the Laplace transform of can be rewritten as a sum of polynomial fractions, and the poles and zeros of these tell you whether the system you are attempting to describe is stable or not. The Laplace transform is very useful for solving differential equations with initial value conditions, and especially fundamental in control theory.
The z-transform is essentially a discrete version of the Laplace transform. I encourage you to read more about these transforms... extremely useful and beautiful mathematics relates all of them together. --HappyCamper 01:59, 21 August 2006 (UTC)
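As a minimal sketch of the poles-and-zeros idea (assuming NumPy and SciPy are available; the transfer function H(s) = (s + 3)/(s^2 + 3s + 2) is invented purely for illustration), one can factor a rational Laplace-domain expression and check stability directly:
 import numpy as np
 from scipy import signal
 
 b = [1, 3]      # numerator coefficients:   s + 3
 a = [1, 3, 2]   # denominator coefficients: s^2 + 3s + 2
 
 zeros, poles, gain = signal.tf2zpk(b, a)
 print("zeros:", zeros)   # [-3.]
 print("poles:", poles)   # [-2. -1.]
 
 # A continuous-time LTI system is stable iff every pole has negative real part
 print("stable:", bool(np.all(poles.real < 0)))   # True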
A function maps values to values; a transform maps functions to functions. Beyond that, we need to get into the details of the specific transform, just as it's difficult to say much about "functions" in general.
One very simple transform maps the function f(x) to the function βf(x). Another maps f(x) to |f(x)|^β.
Many of the commonly used transforms are linear, where the space of all possible input (and output) functions is given the structure of a vector space, usually a Hilbert space. The first example qualifies; the second does not.
Two of the most fundamental linear transforms are already familiar: the derivative and the antiderivative.
The Fourier transform is also linear. If we use Dirac delta functions (technically, distributions rather than functions) as the basis vectors, the Fourier transform is merely a change of basis. The new basis uses (complex) sinusoids rather than delta functions. Therefore the input function is analyzed as a weighted sum of sinusoids, rather than as its output values at each input value t. This point of view is especially transparent with the discrete Fourier transform, where the vector spaces are finite-dimensional.
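As a small illustration of this change-of-basis view (a sketch assuming NumPy; the vector length is arbitrary), the discrete Fourier transform is literally multiplication by a fixed matrix whose rows are sampled complex sinusoids:
 import numpy as np
 
 N = 8
 n = np.arange(N)
 # DFT matrix: entry (k, n) is the sinusoid e^{-2*pi*i*k*n/N}
 W = np.exp(-2j * np.pi * np.outer(n, n) / N)
 
 x = np.random.rand(N)                      # an arbitrary real signal
 print(np.allclose(W @ x, np.fft.fft(x)))   # True: the FFT is this change of basis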
There is a broader view of Fourier transforms in terms of harmonic analysis (a fancy technical term), but at this point in your learning that's probably an unhelpful distraction.
As for the Laplace transform, in this view it is quite similar to the Fourier transform. The hidden magic in both cases has to do with the eigenvectors — or properly the eigenfunctions — of the derivative. Recall a familiar fact:
 \frac{d}{dt} e^{\beta t} = \beta e^{\beta t} .
In other words, the output function is just the input function, e^{βt}, scaled by a constant, β. So if we can express an input function in terms of an exponential basis, operations involving calculus become trivial. The connection to sinusoids is, of course, Euler's formula:
 e^{i z} = \cos z + i \sin z . \,\!
However, the "change of basis" view applies to many transforms, where in each case the specific basis is chosen for its utility. --KSmrqT 07:26, 21 August 2006 (UTC)
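(A one-line sketch of why the exponential basis makes calculus trivial, using the inverse-transform convention written out further down this thread: differentiating under the integral sign turns d/dt into multiplication by iω,
 \frac{d}{dt}\, x(t) = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^{\infty} X(\omega)\, \frac{d}{dt} e^{i \omega t} \, d\omega = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^{\infty} (i \omega)\, X(\omega)\, e^{i \omega t} \, d\omega ,
so the transform of the derivative x′(t) is simply iω X(ω).)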
You ask what you get out when you put something into a Fourier transform. That is a very good question – the first that should be asked. Let me answer it for you. The continuous Fourier transform is the one applicable to the situation you describe. Into that one, you put a complex-valued function of one real variable, and out you get another complex-valued function of another real variable. That is, in goes x(t), out comes X(ω). The new function X(ω) contains exactly the same information as x(t) does, only represented in another way.
You then wonder how to interpret the result to know the frequency content. That is also a very good question. When dealing with Fourier series, what comes out is a complex-valued function of one integer variable. Each of those integer values represents a frequency, and the corresponding complex value tells you the amplitude and phase of the sine with that frequency. The situation with the continuous Fourier transform is similar; the difference is that we have all sorts of frequencies (represented by the real number ω), not just some discrete ones (represented by the integer variable in the output of a Fourier series). So, a frequency ω is in your signal x(t) if X(ω) is non-zero.
Often, as you mention, the input x(t) is real-valued (but X(ω) is still complex). When making plots of X(ω), the absolute value is usually what you want to see (because it tells you the amplitude of the different frequencies). Have a look at some examples where that has been done for some periodic signals (that is, using Fourier series). For your viewing pleasure, also see the animation that shows what happens when you sum more and more sines (of the right amplitudes and phases) to construct a sawtooth wave:
Image:Synthesis_sawtooth.gif
Bromskloss 08:23, 21 August 2006 (UTC)
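As a concrete sketch of reading frequency content off |X(ω)| (assuming NumPy; the 50 Hz test tone and 1000 Hz sampling rate below are invented for illustration):
 import numpy as np
 
 fs = 1000                                # sampling rate, Hz
 t = np.arange(fs) / fs                   # one second of samples
 x = np.sin(2 * np.pi * 50 * t)           # real-valued 50 Hz test signal
 
 X = np.fft.rfft(x)                       # transform of a real signal
 freqs = np.fft.rfftfreq(len(x), d=1/fs)  # frequency axis, Hz
 
 # The magnitude spectrum |X| peaks at the signal's frequency
 print(freqs[np.argmax(np.abs(X))])       # 50.0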


Fantastic replies, thank you guys. So with my function X(ω), if I plug the right number in, I'll receive a complex number that tells me the amplitude and phase of a sinusoidal component, or 0 if that frequency isn't there.

Am I right in *assuming* I can then infer amplitude and phase by taking the complex number in exponential form and, using Euler's formula, working out the real and imaginary parts, which would represent these two values of amplitude and phase in some way? Or are those spectrum plots Argand planes? My understanding of mathematics is pretty poor, but I'm interested in this topic from an audio signal analysis viewpoint.

Perhaps you could describe or point me to a very simple example transformation that doesn't involve complex numbers and vectors? -- 82.46.89.15 10:39, 21 August 2006 (UTC)

Regarding your first paragraph – yes, it is precisely as you describe. As for your second paragraph, I am not sure what you mean. Perhaps something will become clearer if I say that \left| X(\omega) \right| is the amplitude and \arg(X(\omega)) is the phase.
Now I'll try to give the example you requested. We will not be able to avoid complex numbers – they are even present in the definition of the transform – but we can use a real-valued signal. In fact, in most practical situations the signals are real-valued. There is no need to talk about vectors (even though that is elegant). So, we take x(t) to be the rectangular function, i.e., x(t) is one when t is in the interval [-1/2, 1/2] and zero otherwise (just look at the image in the article to see what it is like). By definition, the Fourier transform is
 X(\omega) = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^\infty x(t) \mathrm{e}^{- \mathrm{i}\omega t}\mathrm{d}t .
Since x(t) is zero when t < -1/2 or t > 1/2, we need only integrate over the interval [-1/2, 1/2]. In that interval, x(t) is simply one. Now we can do the integration:
 X(\omega) = \frac{1}{\sqrt{2\pi}} \int\limits_{-1/2}^{1/2} \mathrm{e}^{- \mathrm{i}\omega t}\mathrm{d}t = \frac{1}{\sqrt{2\pi}} \left[\frac{\mathrm{e}^{- \mathrm{i}\omega t}}{-\mathrm{i}\omega}\right]_{t=-1/2}^{1/2} = \frac{1}{\sqrt{2\pi}} \frac{1}{-\mathrm{i}\omega} \left(\mathrm{e}^{-\mathrm{i}\omega /2} - \mathrm{e}^{ \mathrm{i}\omega /2}\right)
With the help of Euler's formula we can rewrite the expression in parentheses in a simpler way:
\mathrm{e}^{-\mathrm{i}\omega /2} - \mathrm{e}^{ \mathrm{i}\omega /2} = 2\mathrm{i}\sin(-\omega/2) = -2\mathrm{i}\sin(\omega/2) \,\!
Using this simplification, we get
X(\omega)=\frac{1}{\sqrt{2\pi}} \frac{1}{-\mathrm{i}\omega}(-2\mathrm{i}\sin(\omega/2)) =\frac{1}{\sqrt{2\pi}} \frac{2\sin(\omega/2)}{\omega}
and that is the answer. Just let us know if you want something explained in more detail. —Bromskloss 12:14, 21 August 2006 (UTC)
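For the numerically inclined, here is a quick check of that answer – a sketch assuming NumPy, approximating the Fourier integral by a Riemann sum and comparing it with the closed form derived above:
 import numpy as np
 
 def X_numeric(w, n=200001):
     t = np.linspace(-0.5, 0.5, n)        # x(t) = 1 here, 0 elsewhere
     dt = t[1] - t[0]
     return np.sum(np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)
 
 def X_closed(w):
     return 2 * np.sin(w / 2) / (np.sqrt(2 * np.pi) * w)
 
 for w in [0.5, 3.0, 500.0]:
     print(w, X_numeric(w).real, X_closed(w))   # the two columns agree
 # np.abs(X) gives the amplitude and np.angle(X) the phase at frequency w.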

I appreciate the process of simplification, but I don't understand what the final result represents.

1) Is ω real or complex?

2) Is X(ω) a vector, with | X(ω) | representing the magnitude (modulus)?

3) The rectangular function isn't periodic. By taking the integral between [-1/2, 1/2], we're dealing with a constant. How can a series of sinusoids make up this constant?

I don't think I know enough maths at this stage to get my head round this concept. Either that or I've got some fundamentals wrong. For instance, if I assign ω = 500, I get a very small negative value. If I assign it 0.5, I get a small positive value. I was under the assumption that ω represents frequency. I know that a square wave needs a lot of terms to even look like a square wave; I was wondering if this is the case for our particular function. -- 82.46.89.15 12:59, 21 August 2006 (UTC)

Tempting as it may be, I don't think we should occupy the reference desk with a tutorial on Fourier transforms, especially when there are both online tutorials and excellent texts. The Digital Signal Processing group at Rice University has a variety of course materials online. At Stanford University, Professor Julius Smith has provided a wealth of lectures online for his course "Introduction to Digital Audio Signal Processing". Also, these links may help. Since you ask what the result of a Fourier transform "looks like", you may appreciate these applets for Java at Brown University. We can still answer focused questions if you get stuck, but try to use these other resources as much as possible. --KSmrqT 13:41, 21 August 2006 (UTC)
  1. Real
  2. Don't bother about vectors, X(ω) is just a complex number, and | X(ω) | is its modulus (or absolute value, which is the same thing).
  3. No, it's not periodic; there's no need for it to be. As for sines making up a constant – well, they just can. That's the nice thing: they can build up many different functions. The case of the rectangular function is not very different from that of the square wave, which also has many constant parts.
If you plot X(ω) against ω (which is indeed the frequency) you get a graph like these, only stretched differently (both horizontally and vertically). Actually, both graphs in the image are the same, just stretched differently. Here is the square wave you talked about:
Image:Synthesis_square.gif
In the case of our rectangular function, however, we weren't dealing with a Fourier series (because the rectangular function isn't periodic). That means that to reconstruct x(t) from X(ω) we do not sum over the frequencies – we integrate over them. In a Fourier series, only certain discrete frequencies are involved (so we can sum over them), but in the Fourier transform, every real number is a frequency (so we cannot sum over them). The so-called inverse transform (which takes us from X(ω) back to x(t)) is
 x(t) = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^{\infty} X(\omega) \mathrm{e}^{ \mathrm{i}\omega t}\mathrm{d}\omega.
If we want to construct a similar animation for our rectangular function as there is for the square wave and others, we could perhaps first do the inverse transform partly by integrating over only a small interval and then extending it more and more. The result would probably be a curve that looks more and more like the original rectangular function. —Bromskloss 13:46, 21 August 2006 (UTC)
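Here is a sketch of that partial reconstruction (assuming NumPy; the cutoff frequencies W below are arbitrary), integrating only over |ω| ≤ W:
 import numpy as np
 
 def X(w):
     safe = np.where(w == 0, 1.0, w)      # avoid 0/0 at w = 0
     return np.where(w == 0, 1 / np.sqrt(2 * np.pi),
                     2 * np.sin(safe / 2) / (np.sqrt(2 * np.pi) * safe))
 
 def x_partial(t, W, n=40001):            # integrate over |w| <= W only
     w = np.linspace(-W, W, n)
     dw = w[1] - w[0]
     return np.sum(X(w) * np.exp(1j * w * t)).real * dw / np.sqrt(2 * np.pi)
 
 for W in (10, 100, 1000):
     print(W, x_partial(0.0, W), x_partial(2.0, W))
 # First column tends to 1 (inside the pulse), second to 0 (outside).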

Factorial Congruence

I've been looking for a proof that k+1 is prime iff (k+1) | (k!+1), or in other words, k is prime iff (k−1)! ≡ −1 (mod k), but I can't find one. Any suggestions? Black Carrot 03:57, 21 August 2006 (UTC)

That looks rather like Wilson's theorem :-) --HappyCamper 04:19, 21 August 2006 (UTC)
That's the one. Thanks. Black Carrot 05:09, 22 August 2006 (UTC)
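For anyone who wants to see the theorem in action, here is a small sketch in Python checking both directions for small k:
 import math
 
 def is_prime(k):
     return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
 
 for k in range(2, 30):
     wilson = math.factorial(k - 1) % k == k - 1   # -1 mod k equals k - 1
     assert wilson == is_prime(k)
 print("Wilson's theorem checks out for 2 <= k < 30")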

How to generate a probability distribution?

How can random samples from a given probability distribution be generated? Is the method the same for both discrete and continuous distributions? Is there any built-in function in GNU Octave or Matlab to do this?

I have a method:

  1. Take the probability density—this has domain (-∞, ∞) and range [0, ∞)
  2. Integrate it to get the cumulative density—this has domain (-∞, ∞) and range [0, 1]
  3. Find the inverse function of the cumulative—this has domain [0, 1] and range (-∞, ∞)
  4. Generate a random number with uniform distribution on [0, 1] and apply the inverse of the cumulative to it

Will this method work? Does it have a name?

Masatran 14:06, 21 August 2006 (UTC)

Hi, I think you'd definitely be interested in the Inverse transform sampling method article. This method is nice mathematically but, IIRC, can sometimes be a problem in practice, since computing that inverse may be computationally difficult. A way around this has been found for generating normal variables; see the Box–Muller transform. -- Deville (Talk) 15:17, 21 August 2006 (UTC)
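A minimal sketch of inverse transform sampling (assuming NumPy), for the exponential distribution – one of the cases where the cdf F(x) = 1 − e^(−λx) inverts in closed form:
 import numpy as np
 
 rng = np.random.default_rng(0)
 lam = 2.0                          # rate parameter, chosen arbitrarily
 
 u = rng.random(100_000)            # step 4 above: uniform samples on [0, 1)
 x = -np.log(1 - u) / lam           # apply F^{-1}(u) = -ln(1 - u)/lam
 
 print(x.mean())                    # close to the theoretical mean 1/lam = 0.5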
What you call "cumulative density" is usually called the "cumulative distribution function". If a real-valued random variable has a probability density function, you obtain the cdf by integrating it, but discrete distributions, and mixtures of discrete and continuous distributions, do not have a density in the usual sense. For the method to work for such distributions as well, you need to modify it slightly. Take the cdf as the point of departure. Since it is in general not continuous, it need not have an inverse, but we can define an "almost inverse" as follows. For cdf F, define the function G by G(r) = inf {x | r ≤ F(x)}. If F is continuous, then this is its inverse. The crucial relationship is that G(r) ≤ x if and only if r ≤ F(x). (See also Galois connection.) Now you can apply G to a uniformly distributed random variable as in Step 4. A final remark. Most methods for generating pseudo-random numbers give you a rational value in [0,1), and, for R denoting the (pseudo-)random variable, Pr(R ≤ x) ≠ Pr(R < x) = x. Since Pr(1 − R ≤ x) = x, the complement 1 − R is for this purpose actually a (very slightly) better approximation. --LambiamTalk 16:24, 21 August 2006 (UTC)
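And a sketch (assuming NumPy) of that "almost inverse" G(r) = inf {x | r ≤ F(x)} for a discrete distribution – the support and probabilities below are made up for illustration:
 import numpy as np
 
 values = np.array([0, 1, 2])          # hypothetical support
 probs = np.array([0.2, 0.5, 0.3])     # hypothetical probabilities
 cdf = np.cumsum(probs)                # F at the jump points: 0.2, 0.7, 1.0
 
 rng = np.random.default_rng(0)
 r = rng.random(100_000)
 # searchsorted(..., side='left') returns the first index i with r <= cdf[i],
 # which is exactly G(r)
 samples = values[np.searchsorted(cdf, r, side='left')]
 
 print(np.bincount(samples) / len(samples))   # close to [0.2, 0.5, 0.3]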