Talk:Hilbert transform
Practical Uses
The article should say what the Hilbert transform is used for; currently it does not, and this is a significant omission. I suggest a section be added entitled 'Applications of the Hilbert transform' which gives practical examples of what the transform can be used for. —Preceding unsigned comment added by 81.168.113.121 (talk) 08:53, 27 February 2008 (UTC)
Introduction
It is not true that the integral defining the Fourier transform of h diverges. It is not even possible to consider h as a tempered distribution and thereby get the result. On the other hand, it is possible to define a tempered distribution out of h by cutting off near 0 and taking a limit. But this is closely related to the fact that we need a principal value. And it is correct to say that, as an operator, the Hilbert transform is the multiplier operator with multiplier −i sgn(ω). We may want to find a way to rephrase this part.
Hat notation
The hat notation should be removed. It is not standard across mathematics and signal processing, and it conflicts with the more standard notation for the Fourier transform. Seeing how the Hilbert transform almost always involves a discussion of the Fourier transform, I believe this to be a poor choice of notation.
- Please sign your entries with "~~~~".
- Regarding your suggestion, the argument that it is "not standard across mathematics and signal processing" is not a strong argument, since very few notations of any kind enjoy that unique status. The hat notation is quite common in signal processing, at least. It's no accident that it appears here. Furthermore, I for one have seen quite a few Fourier transforms, and I don't recall any that used the hat notation. I am not saying it doesn't exist, but it is certainly far from being the standard you claim it to be. --Bob K 05:21, 14 July 2006 (UTC)
- Regarding my suggestion. Here is a list of references I might mention that use a hat to denote the Fourier transform.
M. Pinsky "Introduction to Fourier Analysis and wavelets"
B. Hubbard "The world according to wavelets"
C. Blatter "Wavelets: A primer"
W. Rudin "Real and Complex Analysis"
Conte and de Boor "Elementary Numerical Analysis: An Algorithmic Approach"
E. Stein and R. Shakarchi "Fourier Analysis: An Introduction"
And the list continues, see also papers published by R. and C. Fefferman, T. Tao, E. Stein, etc.
Now I don't doubt that there is plenty of evidence of this notation being used for the Hilbert transform. But the notation definitely conflicts, and we can agree that H(f) is a reasonable way to denote it. We can definitely add comments about different notations in different fields.
Thenub314 20:24, 17 July 2006 (UTC)
-
-
- That's a good list. (Thanks.) It's not an encyclopedia's job to suppress any of the common notations. On the contrary, its job is to try to describe the real world in all its ugly complexity and inconsistency. The really hard part, of course, is trying to maximize its own internal consistency at the same time. Maybe some day each reader will have a "profile", of his own choosing... and articles will be automatically translated through that profile to create the view he wants to see. --Bob K 03:41, 18 July 2006 (UTC)
-
MathWorld & Matlab
MathWorld has the plus and minus reversed in
The way it is now coincides with the MATLAB function when plotted, and also coincides with the diagram in my book (going from −∞ to +∞ during a positive side of the square wave), and the equation in my book for a pulse of width τ delayed by τ/2, so I'm leaving it this way. - Omegatron July 2, 2005 17:03 (UTC)
Very strange. On MathWorld we see that the Hilbert transform is the convolution with the function −1/(πt), not 1/(πt). But the table of Hilbert transforms is nearly the same. Somebody must have just written it down without thinking.
I will correct it as soon as I create an account.
--83.25.155.136 16:41, 18 September 2005 (UTC)
Discrete HT
I'm trying to figure out what is going on with the discrete HT. That section now says that there is an ideal discrete HT, so and so, but this operation cannot be realized in the signal domain. Then, it presents a filter which seems to do the job, derived from the DFT. This seems contradictory. --KYN 21:41, 10 November 2005 (UTC)
- I think it's not contradictory, because it is non-realizable, as written. But maybe what you are saying is that n does not go to ±∞, because it is just a DFT. Therefore it can be made realizable, with sufficient delay. And the "ideal" filter would not have that characteristic. Is that the problem? --Bob K 09:30, 7 December 2005 (UTC)
I guess what is said is that from the ideal filter in the Z-domain it is not possible to derive a filter in the signal domain by means of the inverse Z-transform? Maybe then it is better not to involve the Z-domain in the discussion? Is it possible to present it in the following way? First define a "Hilbert filter" in the Fourier domain as
H(u)= -i for even integer < u < odd integer
H(u)= +i for odd integer < u < even integer
- Please clarify this definition, if you can. I'm not getting it yet. --Bob K 20:36, 7 December 2005 (UTC)
- Hopefully it's all OBE now, since I just re-wrote that section of the article. --Bob K 01:52, 8 December 2005 (UTC)
i.e. an oscillating square wave. The inverse DFT of this function will be precisely the discrete filter presented in the article. Then maybe continue to say that there is an ideal version of this filter in the Z-domain, with the given expression, but there is no formal relation to the discrete filter via the Z-transform or its inverse.
KYN 21:41, 10 November 2005 (UTC)
About the discrete algorithm, I read a trick: it works better with a 0 for the first point. So, thanks to the fast Fourier transform, the algorithm may be written
X(f) = FFT( x(t) )
if f == 0 : H(f) := 0
if f > 0  : H(f) := -i * X(f)
if f < 0  : H(f) := +i * X(f)
h(t) = iFFT( H(f) )
but with almost all implementations of the FFT, the spectrum unfolding (the negative part is stored after the positive one) implies one more 0, as you can see in this quick-and-dirty MATLAB script:
len = length(wave_in);
fft_in = fft(wave_in);
fft_quad = [0; -1i * fft_in(2 : len/2); 0; 1i * fft_in(len/2 + 2 : len)];
wave_out = real(ifft(fft_quad));
By the way, this is very useful to get the envelope: envelope(t) = sqrt( x(t)^2 + h(t)^2 ) for all t.
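The same recipe can be sketched in Python with NumPy. This is only an illustrative translation of the script above, not code from the article: the function name hilbert_fft is mine, and the envelope uses a "+" under the square root, which is the correct formula.

```python
import numpy as np

def hilbert_fft(x):
    """Discrete Hilbert transform of a real signal via the FFT.

    Leaves the DC bin (and, for even lengths, the Nyquist bin) at zero,
    multiplies positive-frequency bins by -i and negative ones by +i,
    mirroring the pseudocode above.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    H = np.zeros(n, dtype=complex)
    H[1:(n + 1) // 2] = -1j * X[1:(n + 1) // 2]   # positive frequencies
    H[(n // 2) + 1:] = 1j * X[(n // 2) + 1:]      # negative frequencies
    return np.fft.ifft(H).real

# Envelope of a pure tone, using envelope = sqrt(x^2 + h^2):
t = np.arange(256) / 256.0
x = np.cos(2 * np.pi * 8 * t)
h = hilbert_fft(x)                  # approximately sin(2*pi*8*t)
envelope = np.sqrt(x**2 + h**2)     # close to 1 everywhere for a pure tone
```

For a cosine input, h comes out as the corresponding sine, so the envelope is flat at 1, which is the behaviour the comment above describes.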
Practical implementations
Forgot to write something in the "comment" field, but I noticed the comment about the discrete ideal filter being non-causal, and realized that this is a general property of the HT, both continuous and discrete. I tried to formulate something about this.
Discrete HT again
I don't really have any experience with the DHT and therefore I must ask the following questions, some of which hopefully can work their way into the article to make it more understandable.
- Exactly how is the DHT related to the continuous HT? I can see that there are similarities, but not that the DHT follows straightforwardly from the CHT. The CHT is defined as it is simply because it makes sense in a number of applications. I guess that it can be shown that the DHT also has this property. Examples?
- >>>Probably its main use is to convert real-valued discrete signals into the analytic representation. --Bob K 06:21, 4 January 2006 (UTC)
- I understand that there is a problem in defining the discrete equivalent of the filter h(t) since it would have to be infinite at n=0. OK, let's set h[0]=0. But why should all the other even samples of h[n] also vanish? Why is it not a valid approximation to define h[n]=1/(π n) for n ≠ 0 and h[0]=0? For example, the CHT of a bandlimited signal is again bandlimited. Consequently, the computation of the CHT could be made in terms of an operation on a discrete version of the signal, if appropriately sampled.
- >>>But the discrete version of the signal is no longer band-limited. The DTFT is periodically extended (to infinity). A CHT would shift all the positive freq components by -90 deg (and all the negative freq components by +90 deg). But (as the article states) what's actually needed is to shift (0,π] by -90, (π,2π] by +90, (2π,3π] by -90, etc. The derivation of the corresponding h[n] sequence is indicated at slide 21. Note that the function takes on alternating 1,0,1,0,1,0,... values for odd and even values of n. --Bob K 06:21, 4 January 2006 (UTC)
- Would the corresponding operation be a discrete convolution by the h[n]-filter as it is defined in this section?
- >>>Yes, except for the fact that it has to be approximated by something with finite duration. --Bob K 06:21, 4 January 2006 (UTC)
- If so, this fact is worth mentioning since it makes the definition of h[n] much clearer. If not, I am lost.
- What exactly does "usual filter design tradeoffs" refer to? I can guess, but some readers will not understand this. Either be a little more specific or expand the subject on a page of its own.
- >>>Filter-order vs. frequency-response and latency is always a tradeoff. Another issue is truncation vs. numerous methods of gradual tapering off to zero-valued coefficients. These topics are of general enough interest that they should be treated elsewhere and only referenced here. --Bob K 06:21, 4 January 2006 (UTC)
- There is a filter design article. Are the tradeoffs described sufficiently well there so that it is reasonable to use a link?
- >>>Sorry, I haven't investigated that myself. --Bob K 06:21, 4 January 2006 (UTC)
- >>>I just took a peek, and the treatment is indeed brief. But filter-design is a well-studied and dauntingly large subject to cover in detail. And excellent tools are widely available so that most practitioners don't really have to understand all the issues. So I doubt that wikipedia will take on that challenge anytime soon, if ever. All most readers will need to know is that there are tradeoffs, so they should seek the help of a decent tool if they want to do a good job. It's like telling people that their car needs its oil changed, rather than detailing how to change the oil in every kind of car. I think that is valid. --Bob K 06:55, 4 January 2006 (UTC)
- What exactly does "DFT approximation of it" mean? "it" seems to refer to h[n]. In what sense are we dealing with an approximation? Of what?
- >>>Perhaps a second reading would help. But here is the short version: Instead of the convolution, people often do a DFT, modify the coefficients in the obvious way, and then do an inverse DFT. That is equivalent to a circular convolution with the approximation shown in the article. --Bob K 06:21, 4 January 2006 (UTC)
- The operation which is referred to as "fast convolution" is not really described in sufficient detail to make it understandable. I cannot see how this operation is implemented in practice. What is the relation between fast convolution and cyclic convolution? Maybe fast convolution needs a page of its own, rather than having to be described in detail in this article?
- >>>Yes. Due to the efficiency possible with the FFT algorithm, the DFT/IDFT approach is often faster than simple convolution, even when actual multiplications (by other than ±1 and 0) are required. Again, that topic is of sufficiently general interest that this is not the right place for it. And again, I have not searched for the reference. If you have looked already, then you might try looking for the names overlap-save and overlap-add, which describe specific techniques for piecing together the outputs of block-processing, an essential part of the FFT approach with streaming data. --Bob K 06:21, 4 January 2006 (UTC)
--KYN 23:51, 3 January 2006 (UTC)
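For what it's worth, the ideal filter discussed above — h[n] = 2/(πn) for odd n and 0 for even n (which is where the alternating 1,0,1,0,... pattern comes from) — can be truncated and checked against the desired frequency response. A rough NumPy sketch (the tap count N is chosen arbitrarily; truncation is exactly the "approximation by something with finite duration" mentioned above):

```python
import numpy as np

# Ideal discrete Hilbert impulse response: h[n] = 2/(pi*n) for odd n,
# 0 for even n (including n = 0), truncated to N taps.
N = 2001
n = np.arange(-(N // 2), N // 2 + 1)
h = np.zeros(N)
odd = (n % 2) != 0
h[odd] = 2.0 / (np.pi * n[odd])

# The DTFT of the full (infinite) sequence is -i on (0, pi) and +i on
# (-pi, 0); the truncated version approximates that value.
w = np.pi / 2
Hw = np.sum(h * np.exp(-1j * w * n))   # close to -1j
```

The truncation error at ω = π/2 behaves like the tail of the Leibniz series, so it shrinks only slowly with N, which is why windowing/tapering (the "usual filter design tradeoffs") matters in practice.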
Now, I am still a bit concerned about the rest of that section. To me, it appears to discuss rather general principles of filter design: how to truncate and shift an infinitely extended filter, and how to implement the convolution operation in the frequency domain instead of the signal domain. The first part of this is more or less identical to the discussion for the continuous HT, isn't it? The latter is something that relates to any discrete convolution operation, not just the DHT. In that case, it could be moved to somewhere where signal-domain convolution can be compared to "fast convolution" on a more general level. --KYN 13:48, 4 January 2006 (UTC)
- The article does contain a link to circular convolution, which goes into more depth about fast convolution. But normally, one uses design techniques to choose a finite filter order. Then one transforms that design into the frequency domain. For some reason, when it comes to the DHT, there seems to be a propensity to skip that step and just use the idealized frequency response, which makes circular convolution unavoidable. Seems to me that it's like using an ideal rectangle for a lowpass filter, which nobody does. Since this seems to be endemic to the DHT, I thought it was worth a little space at the expense of some redundancy. --Bob K 23:28, 4 January 2006 (UTC)
-
- Ok, I will look at the fast convolution section in the circular convolution articles which you pointed out, and try to make sense of all this. For some reason this page does not appear when I try to search the wikipedia for "fast convolution". Any idea why?--KYN 00:42, 5 January 2006 (UTC)
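The equivalence discussed above — DFT, modify the coefficients, inverse DFT is the same as a circular convolution with the inverse DFT of the multiplier — can be checked numerically. This is only an illustrative NumPy sketch (the signal length and seed are arbitrary), not code from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)

# Multiplier: -i on positive bins, +i on negative bins, 0 at DC and Nyquist.
M = np.zeros(n, dtype=complex)
M[1:n // 2] = -1j
M[n // 2 + 1:] = 1j

# Route 1: DFT, modify coefficients "in the obvious way", inverse DFT.
y1 = np.fft.ifft(M * np.fft.fft(x)).real

# Route 2: circular convolution with h = IDFT of the multiplier
# (the "DFT approximation" of the ideal filter), computed directly.
h = np.fft.ifft(M).real
y2 = np.array([sum(h[j] * x[(m - j) % n] for j in range(n)) for m in range(n)])
# y1 and y2 agree, by the circular convolution theorem.
```

Fast convolution of streaming data then amounts to doing route 1 on blocks and stitching the blocks together (overlap-save or overlap-add, as Bob K notes above).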
Why named after David Hilbert?
Did he develop and define this concept? Whaa? 21:31, 23 April 2006 (UTC)
- Excellent question. I don't exactly know myself; I suspect that it came up during his study of integral equations. But none of the biographies I have read explain exactly what he was thinking about in this respect. As far as the history of the Hilbert transform goes, the first real theorems I know of that were proven about it go back to M. Riesz. It is also the prototypical example of a singular integral operator, and the Hilbert transform was what motivated Zygmund to study such operators. 75.3.32.251 12:14, 18 July 2006 (UTC)
(h*s)(t) or h(t)*s(t) ?
This has undoubtedly been discussed somewhere else, so here we go again. If it comes to a vote, I prefer this convention:
to this one:
--Bob K 14:50, 17 October 2006 (UTC)
If it comes to a vote I prefer
Thenub314 16:15, 17 October 2006 (UTC)
- I agree with Bob K. First Harmonic 19:39, 17 October 2006 (UTC)
One thought on the subject is that the current notation is consistent with the Convolution article. Thenub314 23:36, 18 October 2006 (UTC)
- Clearly, both notations are common in the literature, and on WP. The notation that I prefer is commonly used in signal processing and communication systems engineering, whereas the notation preferred by Thenub314 is more common (I think) in mathematics. So you could argue that either notation (or both) is acceptable. That being said, it comes down to a question of taste and familiarity. I prefer the one notation over the other not only because it is what I am familiar with, but also because I think it is easier to understand, clearly represents what is what, and is less ambiguous than the other. First Harmonic 16:01, 26 October 2006 (UTC)
PLEASE DEFINE YOUR VARIABLES!!!!!
I searched in vain through each of several articles linked from this article, including convolution, Fourier transform, signal processing, etc., etc., and found that the one thing all of these articles have in common is that not one defines ALL of the variables used. That is perfectly fine for those who are already experts, but makes it basically useless to those who are unfamiliar with the subject. If the unfamiliar reader isn't the targeted audience then what is the point of an encyclopedia anyway? Surely not just a bookmark for all those equations you regularly use?
You don't need to break into the text, just add a section below where each variable is defined such as t =, omega =, theta=, etc.. I think that is not too much to ask. Drillerguy 14:42, 3 September 2007 (UTC)
Too much to ask?
What may be too much to ask is to describe the subject understandably to the general public in the introductory paragraph, before diving into the spaghetti of equations. These readers may be very interested and very intelligent but still not familiar with the subject. Only the best technical writing achieves that, and it may be too much to ask of Wiki, but I hope it is the goal. In the articles I have contributed to, I approach it as trying to explain the subject to my neighbor. Drillerguy 14:44, 3 September 2007 (UTC)
- The way it makes sense to me is to start with the frequency domain and ask the question: "What happens to x(t) if I eliminate the negative frequency components of a symmetrical X(f); i.e. what is the inverse transform of X(f)·U(f)?"
- X(f) = symmetrical implies x(t) = real. The inverse transform is x(t) plus an imaginary part, which is the Hilbert transform of x(t). That makes it interesting enough to look at the mathematics, IMO.
- If I had written the article, that's what I would have done. So "no", I do not think it is too much to ask.
- --Bob K 16:45, 3 September 2007 (UTC)
- Actually Analytic signal already does what I suggested. No need to do that again. I think it is sufficient to mention that article in the intro.
- --Bob K 13:07, 5 September 2007 (UTC)
-
- That clarified it a lot to me. I may add your explanation to this page. —Ben FrantzDale (talk) 15:58, 16 January 2008 (UTC)
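Bob K's frequency-domain description above (the inverse transform of X(f)·U(f) is x(t) plus i times its Hilbert transform) can be demonstrated in a few lines of NumPy. The factor-of-2 weighting of the positive bins is the standard discrete way of realizing X(f)·U(f) so that the real part comes back unchanged; this sketch is illustrative, not from the article:

```python
import numpy as np

t = np.arange(128) / 128.0
x = np.cos(2 * np.pi * 5 * t)      # a real test signal

# Eliminate the negative-frequency half of the spectrum:
# keep DC and Nyquist as-is, double the positive bins, zero the rest.
X = np.fft.fft(x)
u = np.zeros(128)
u[0] = u[64] = 1.0
u[1:64] = 2.0
z = np.fft.ifft(X * u)             # the analytic signal

# z.real recovers x; z.imag is the Hilbert transform of x (here, a sine).
```

For the cosine input, z is simply the complex exponential e^{i 2π 5 t}, which is exactly the "x(t) plus an imaginary part" picture described above.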
Issues with the Definition
The Hilbert transform is not defined by convolution. The integral stated does not converge for reasonable functions s(t) (say ). I invite comments before I make any changes. Thenub314 (talk) 17:27, 15 April 2008 (UTC)
- FYI, both Mathworld and "Digital communications" by J.G.Proakis introduce the Hilbert transform using the aforementioned convolution. What would you say the definition should be? Oli Filth(talk) 19:54, 15 April 2008 (UTC)
-
- Well, it is a very tempting definition; it contains the right intuitive idea. This article used to mention the Cauchy principal value as part of the definition. I missed the footnote, which I just noticed now, so I feel slightly better. Nonetheless I think this is not a footnote to be added to the definition, but in fact the key part of the definition. While it has little impact on its practical applications in signal processing, it is very important mathematically. I do not mean this just because that is the rigorous definition. According to his colleagues, it was Riesz's proof of the boundedness of the Hilbert transform that fascinated Zygmund, leading him and Calderón to study other singular integral operators, which has been some of the most fundamental work in analysis in the past 100 years. To comment on your references, MathWorld defines it correctly, with PV appearing in front of the integral. I don't have the other reference to look at, but like I said, for practical applications it is probably harmless to overlook it. Thenub314 (talk) 02:49, 17 April 2008 (UTC)
As a PS to my comments about definitions, why does the introductory sentence demand the function be real valued?Thenub314 (talk) 02:54, 17 April 2008 (UTC)
- "demand" is an exaggeration. It is simply a valid statement of an important fact. It is not every possible fact or the most general statement possible. Nor does it claim to be. Why not add a complementary statement regarding complex functions and what their value might be under Hilbert transform?
- --Bob K (talk) 16:39, 17 April 2008 (UTC)
-
- Well, maybe demand is too strong a word. But neither the definition given here nor the changes to the definition I suggest require the function to be real valued. I was just a bit surprised the very first sentence made an issue of it. How do people feel about the sentence: "In mathematics and in signal processing, the Hilbert transform is an operator that takes a function to another function on the same domain."? Is it possible real-valued was referring to the parameter t? It would make sense in this article, as we do not mention Hilbert transforms on the circle, or Riesz transforms in higher dimensions. Thenub314 (talk) 00:26, 18 April 2008 (UTC)
- I can authoritatively (since I was the author) attest that "real-valued" does not refer to t. My thinking is this:
- The only thing I have ever seen the Hilbert transform used for is to create a function that can be added to the original to cancel all its negative frequencies while preserving all its positive ones.
- The only time I have seen people do that is when it is a reversible operation... no "information" is lost.
- One class of functions whose negative frequency components can be reconstructed from its positive frequency ones is the class of real valued functions.
- We can invent other classes, but AFAIK they don't occur naturally in practice.
- If there is a whole nuther realm of Hilbert transform applications, we should of course include it in the article. Until I know what it is, I can't confidently judge whether the article should start with one general intro paragraph or with two more specific ones.
- --Bob K (talk) 01:43, 18 April 2008 (UTC)
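The third bullet above — that a real-valued function's negative-frequency components can be reconstructed from its positive-frequency ones — is easy to verify numerically. A small NumPy check (signal length and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)        # any real-valued signal
X = np.fft.fft(x)

# For real x, bin (n - k) is the complex conjugate of bin k, so the
# negative-frequency half of the spectrum is redundant: it can be
# rebuilt from the positive half with no loss of information.
sym = np.allclose(X[-1:-16:-1], np.conj(X[1:16]))
```

This conjugate (Hermitian) symmetry is exactly why discarding the negative frequencies of a real signal, as in the analytic representation, is a reversible operation.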
-
- Those are good reasons. I hope I did not offend. The Hilbert transform is also an important object to pure mathematics, where one does not necessarily assume the functions it operates on are real-valued. For example it is useful when describing Hardy spaces on the real line. It is the first basic example that motivated the study of Singular Integrals. I would be happy to add a few paragraphs about this part of the theory. Thenub314 (talk) 02:25, 18 April 2008 (UTC)
- No problem. Happy to have your contributions. When I wrote the intro, I expected something like this, sooner or later. As I recall, I am also the one who demoted your Cauchy Principal Value to a footnote. It was all in response to complaints from other readers... the usual encyclopedia vs. math textbook debate. My own bias is to divide and conquer, rather than try to please everyone in one all-encompassing article. The exciting thing about the Wikipedia format is the ease with which several articles can be linked together, either vertically (hierarchically) or horizontally (web-like).
- --Bob K (talk) 10:58, 18 April 2008 (UTC)
Last sentence in the intro
I have some issues with the sentence "Except for the DC component, which is lost..." For the discrete Hilbert transform it is certainly true. But in the case we are considering it doesn't follow. Thenub314 (talk) 13:53, 20 April 2008 (UTC)
- I don't agree. Can you explain why you think it doesn't follow?
- --Bob K (talk) 14:19, 20 April 2008 (UTC)
Sure, let's take an example. Let's denote the Fourier transform by . We have , and if we calculate . But then .
More generally, if f(ω) = g(ω) for all ω ≠ 0, then their inverse transforms agree for all x, because changing the integrand at one point doesn't change the value of the integral. Thenub314 (talk) 16:29, 20 April 2008 (UTC)
- Now I see what you mean. But on the other hand, the transform does not preserve the value of the DC component.
- --Bob K (talk) 18:21, 20 April 2008 (UTC)
True, but you only run into this trouble when the DC component is infinite. (In fact, that is not really enough; you really need to be dealing with something like the delta function.) It is a theorem that the square of the Hilbert transform is the negative of the identity on the space of L2 functions. (Yuck, what an ugly sentence. I just mean on L²(ℝ).) So it is not the case that the DC component is necessarily lost. Thenub314 (talk) 19:34, 20 April 2008 (UTC)
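The theorem that the square of the Hilbert transform is minus the identity can be illustrated on a finite signal, provided the DC and Nyquist components (which the FFT-based discrete transform zeroes out) are removed first. A NumPy sketch; hilbert_fft is my own helper name, not a standard API:

```python
import numpy as np

def hilbert_fft(x):
    """FFT-based discrete Hilbert transform (DC and Nyquist bins zeroed)."""
    n = len(x)
    M = np.zeros(n, dtype=complex)
    M[1:n // 2] = -1j
    M[n // 2 + 1:] = 1j
    return np.fft.ifft(M * np.fft.fft(x)).real

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
# Project out the DC and Nyquist components, which the discrete
# transform annihilates:
X = np.fft.fft(x)
X[0] = X[32] = 0.0
x = np.fft.ifft(X).real

hh = hilbert_fft(hilbert_fft(x))   # applying H twice gives -x
```

On the remaining bins the multiplier squares to (−i)² = (+i)² = −1, so applying the transform twice returns the negated signal, mirroring H² = −I on L².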
- I think you make a good point, regardless of what happens to the DC component. I attempted to fix the offending statement.
- --Bob K (talk) 19:55, 20 April 2008 (UTC)
I like the change. Just what I would have done. Thenub314 (talk) 20:29, 20 April 2008 (UTC)
Notation.
The recent edits by 69.247.68.82 are good information to have in the article. Unfortunately they call attention to a bit of a conflict in notation. In this article (as is common in many applications) the Hilbert transform of s(t) is denoted by a hat. This conflicts with the notation used for the Fourier transform by most professional mathematicians. I suggest we denote the Hilbert transform by H(s), which is recognized by both groups, and appears in the definition. Thenub314 (talk) 15:12, 29 April 2008 (UTC)
- FWIW, other than:
- which I don't even understand, I like the current usage. I would not like to see the hat represent the Fourier transform. So it sounds like we should simply avoid using it for anything of such a general nature as a Hilbert or Fourier transform. Just define it locally, as needed, for sundry things.
- --Bob K (talk) 18:54, 29 April 2008 (UTC)
- Sorry, it means "For What It's Worth". It's the opinion of a person who does not claim to be a "professional mathematician".
- --Bob K (talk) 23:00, 29 April 2008 (UTC)
-
- :) FWIW, I don't claim to be a professional mathematician. I am just familiar with their notation. I would just like the article to be readable to everyone. I know notation issues are small, but they can be confusing when you're starting out. That is why I thought the neutral ground of using H(u) would be good. Thenub314 (talk) 23:38, 29 April 2008 (UTC)
- I have noticed that Bob K has switched the notation back to the hat for the Hilbert transform. I have two objections to this. First of all, it cannot be typeset inline, so the text becomes littered with ugly forced-PNG renderings of mathematical expressions. More distressing, however, is the fact that this notation is more commonly used to denote the Fourier transform of a function. I submit that the article should standardize on the ambiguity-free notation H(u). There is ample precedent for this in the literature. siℓℓy rabbit (talk) 13:40, 3 June 2008 (UTC)
-
- Hold on a sec... I only switched in the "signal processing" section, and the article previously acknowledges: "In signal processing the Hilbert transform of u(t) is commonly denoted by " I also defined what I was doing, for clarity. We can use , if you insist. My reason for shortening the notation is to avoid obfuscating the main points of the signal processing application of Hilbert transform.
- --Bob K (talk) 15:08, 3 June 2008 (UTC)
-
-
- Ok. I've looked at it again, and it's not as bad as I thought. I would still prefer to keep the notation consistent in the article, but I don't feel so strongly about it anymore. siℓℓy rabbit (talk) 15:19, 3 June 2008 (UTC)
-
Major Revision.
I did some reading about the history and tried to include some references. I hope people like this version; there are admittedly some omissions (generalizations of the H.T., for example) and places that are very awkward (discussion of other types of discrete Hilbert transforms). But I think the info should make a nice addition. Thenub314 (talk) 15:36, 12 May 2008 (UTC)
- Sorry... I like the previous version much better.
- --Bob K (talk) 13:43, 13 May 2008 (UTC)
- Your notation is strange to me, as perhaps mine is to you. And I like starting off with the frequency domain definition, because it is easy to understand, and IMO, it is the whole reason most of us care about the Hilbert transform at all.
- --Bob K (talk) 15:01, 13 May 2008 (UTC)
-
- I'll have to display my ignorance for a moment. What does IMO stand for? I understand issues with notation; I tried to be careful to avoid using a hat anywhere for a Fourier transform, because I recognize this would make the article very difficult to read for anyone who is used to that notation for the Hilbert transform. As far as the frequency response definition, I do think it is important, but I think starting with the definition via integration has two advantages. First, it requires slightly less background. Second, it is often how the transform is introduced. Thenub314 (talk) 16:09, 13 May 2008 (UTC)
- IMO = "in my opinion". I do appreciate that we're not using a hat for a Fourier transform. That is very helpful. Why introduce ξ for frequency? Everywhere else we use ω and f, which is bad enough. f is best, because there is only one common form for the transform definition (not to mention that everyone understands "cycles per second"). With ω there are two common forms to worry about. And with ξ, all three are in play.
- And I think this is worth writing explicitly:
- And we should stick to for the transform operator and for the transfer function.
-
- I changed the notation of the transform to H for typographical reasons, since the page suffered from far too much inline PNG. I'm not married to any particular system of notation, but I think we should use one that is at the very least easy to typeset. For the symbol of the operator, I have used the notation σH, which is standard in pseudodifferential operators, although admittedly there is no universal standard for this. I did not have a chance to visit the section on the discrete transform until just now. The ξ's should, I agree, be changed into ω's (or vice versa, I don't really care). silly rabbit (talk) 20:50, 13 May 2008 (UTC)
-
-
- I was actually referring to other articles (that use and ), not to the discrete transform section. And you are confusing me. H is (or was) the "symbol of the operator". So according to the paragraph above, you now have H and σH representing the same thing. I don't like either one of them, and I'm surprised others aren't complaining with me.
- --Bob K (talk) 22:53, 13 May 2008 (UTC)
-
-
-
-
- Sorry, I meant "symbol" in the technical sense of Fourier multiplier (in analysis), or transfer function in signal processing. Hope that clarifies things. silly rabbit (talk) 23:00, 13 May 2008 (UTC)
-
-
- I have changed ξ to ω for the moment. I do not like to denote frequency variables by f. The letter f is so commonly used for a function that it can be distracting if you're not used to using it to denote frequency. Thenub314 (talk) 21:31, 13 May 2008 (UTC)
- Please don't take this as criticism, but I don't understand the attachment to f. I worked in signal processing for a few years, and the rule of thumb about notation was that nothing was consistent. I was just checking some of my more engineering-slanted books that I still own, just to make sure grad school hasn't warped my memory. In none of the books I have come across does the author use this notation. I suppose in the end this part of the discussion belongs on the Fourier transform page, though. The reason I don't like it (beyond the fact that I am not used to it) is that the students I teach, and the lay person in general, are probably more accustomed to f as a function than f as a variable. It is those readers I hope not to distract. Thenub314 (talk) 02:28, 14 May 2008 (UTC)
-
- Lots of signal processing books represent frequency (in hertz) with the variable f. One that I happen to have handy at the moment is:
-
-
- Ambardar, Ashok (1995). Analog and Digital Signal Processing. Boston: PWS Pub. ISBN 0-534-94086-2.
-
-
- In particular, I would draw your attention to chapter 6 (Fourier Transforms), chapter 11 (Sampling and the DTFT), and chapter 12 (The DFT and FFT). And in chapter 14 (Digital Filters), he uses to represent the ratio where is the sample-rate. No doubt you (or I) could also go web-surfing and find lots of examples. There is no Wikipedia standard, and Wikipedia is not interested in choosing one, as far as I can tell. As an encyclopedia, its job (ideally) is to document our world the way it really is, not the way we wish it was. As evidenced by the Fourier transform article, people here seem to agree, in principle, with the concept of documenting multiple standards, when no single one exists. However, no consistent way of doing that has been established. It's a hard job, and I don't think the will exists. So we are left to figure it out for ourselves, knowing that someone will probably come along later and rewrite the whole thing anyway.
- --Bob K (talk) 04:06, 14 May 2008 (UTC)
-
- Another book with plenty of examples of the variable f (for frequency) is:
-
-
- Harris, Fredric J. (2004). Multirate Signal Processing for Communication Systems. Upper Saddle River, NJ: Prentice Hall PTR. ISBN 0-13-146511-2.
- --Bob K (talk) 04:47, 14 May 2008 (UTC)
-
-
-
- I didn't mean to imply there are no books that use this notation. I have seen it before, it just didn't seem to me any more common than any other system. My experience is that some authors use ω, but it is measured in hertz. Others use f, but it is an angular frequency; many authors use neither of these letters. (If you'd like specific references I would be happy to find some.) Your comment about not using f for a function is correct. If we want to use f as a frequency we should avoid using it as a function. But that struck me as odd for the purposes of this article. Thenub314 (talk) 12:25, 14 May 2008 (UTC)
-
-
- I cannot claim that f is the most common notation for frequency. What I do claim is that whenever I have seen it, I could safely infer that the underlying transform definition is:
- <math>\hat{u}(f) = \int_{-\infty}^{\infty} u(t)\, e^{-i 2\pi f t}\, dt.</math>
- Of course I didn't read your book where "Others use f but it is an angular frequency", which makes no sense to me.
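A quick numerical sketch of the ordinary-frequency convention described above (my own illustration, not part of the thread; the grid sizes are arbitrary). With this convention, the Gaussian exp(−πt²) is its own transform, which makes it a convenient test function:

```python
import numpy as np

# Ordinary-frequency convention:  u_hat(f) = integral u(t) exp(-i 2 pi f t) dt.
# Under it, u(t) = exp(-pi t^2) satisfies u_hat(f) = exp(-pi f^2).
t = np.linspace(-10.0, 10.0, 40001)
dt = t[1] - t[0]
u = np.exp(-np.pi * t**2)

def u_hat(f):
    # plain Riemann sum; u decays fast enough that truncation is negligible
    return np.sum(u * np.exp(-2j * np.pi * f * t)) * dt

# worst-case deviation from the known closed form at a few frequencies
worst = max(abs(u_hat(f) - np.exp(-np.pi * f**2)) for f in (0.0, 0.5, 1.0))
```

The same check with the angular-frequency conventions would need the 1/√(2π) or 1/(2π) normalizations discussed below, which is exactly the bookkeeping the thread is arguing about.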
-
-
- It is an interesting point that this article doesn't make clear which choice of the Fourier transform it is using. I don't feel confident that just putting f for the variable would clarify the matter. I have grown to realize that I always need to check what notation the author is using when reading a book or article that invokes the Fourier transform. To clarify I included a note. Thenub314 (talk) 00:38, 15 May 2008 (UTC)
-
-
-
-
- I'm ambivalent about the note. First of all, it's the "wrong" Fourier transform (i.e., the non-unitary one), but that is more of a personal preference. Secondly, it doesn't actually matter which transform we use, since the Hilbert transform takes place entirely in the time domain. So I'm not sure if it's a good thing or a bad thing to indicate explicitly which transform is used. It could be a potential source of confusion. silly rabbit (talk) 12:35, 15 May 2008 (UTC)
- I thought the same thing about a note being unnecessary until I started calculating symbols. If the Fourier transform is defined as <math>\hat{u}(\omega) = \tfrac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} u(t)\, e^{-i\omega t}\, dt</math>, then the symbol of the Hilbert transform should change by a factor of <math>\tfrac{1}{\sqrt{2\pi}}</math>. The fastest way to see this must be true is that with this definition we get <math>-\tfrac{i}{\sqrt{2\pi}}\,\operatorname{sgn}(\omega)</math>, instead of just <math>-i\,\operatorname{sgn}(\omega)</math>. Since we are claiming something specific about the symbol, we are at least not choosing this transform. The other common choices would have been fine. For Bob's sanity (and perhaps many others) I decided I should put the non-unitary version, or use the unitary one and change the frequency variable to ξ. It was faster to just put the non-unitary version, but I also agree that it is the "wrong" one. Thenub314 (talk) 13:33, 15 May 2008 (UTC)
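As a sanity check on the signs being discussed, the multiplier −i·sgn(ω) is easy to test on a periodic grid; this sketch (mine, not from the discussion; the grid size is arbitrary) applies it as a DFT multiplier and recovers H(cos) = sin:

```python
import numpy as np

# Discrete Hilbert transform as a Fourier multiplier: multiply the DFT by
# -i*sgn(omega) (zero at DC), then invert.  For cos this should give sin.
N = 1024
t = 2 * np.pi * np.arange(N) / N
x = np.cos(t)

multiplier = -1j * np.sign(np.fft.fftfreq(N))   # -i*sgn, with sgn(0) = 0
Hx = np.real(np.fft.ifft(multiplier * np.fft.fft(x)))

err = np.max(np.abs(Hx - np.sin(t)))
```

Since cos lives on exactly two DFT bins, the computation is exact up to rounding, which makes it a clean way to pin down the sign convention.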
- I don't think the symbol changes. If we are writing
- <math>\widehat{Hu}(\omega) = \sigma(\omega)\,\hat{u}(\omega),</math>
- then surely σ doesn't notice if we multiply by a constant and divide by the same constant. To see this with convolution, suppose that <math>\mathcal{F}</math> is the unitary transform and <math>\mathcal{F}_k = k\,\mathcal{F}</math> is some other transform (k is a constant). Then
- <math>\mathcal{F}_k(h*u) = k\sqrt{2\pi}\,(\mathcal{F}h)(\mathcal{F}u) = \frac{\sqrt{2\pi}}{k}\,(\mathcal{F}_k h)(\mathcal{F}_k u),</math>
- so that the symbol of <math>u \mapsto h*u</math> is
- <math>\sigma = \frac{\sqrt{2\pi}}{k}\,\mathcal{F}_k h = \sqrt{2\pi}\,\mathcal{F}h,</math>
- which is independent of the normalizing constant k. (The symbol will of course depend on how the convolution is defined, but this is a separate matter.) silly rabbit (talk) 13:54, 16 May 2008 (UTC)
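The normalization-independence argument can be illustrated numerically; in this sketch (my own; the rescaling constant 7.0 and the random sequences are arbitrary choices), rescaling the transform by k leaves the ratio F_k(h∗u)/F_k(u) unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
h = rng.standard_normal(N)
u = rng.standard_normal(N)

# circular convolution computed via the DFT
conv = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(u)))

def symbol(k):
    # F_k = k * F is a renormalized transform; the symbol of u -> h*u
    # is the ratio F_k(h*u) / F_k(u), and the k's cancel
    Fk = lambda x: k * np.fft.fft(x)
    return Fk(conv) / Fk(u)

diff = np.max(np.abs(symbol(1.0) - symbol(7.0)))
```

The constant divides out exactly, which is the discrete analogue of the cancellation above.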
- You're absolutely correct. My own comment should have shown me the mistake. The Fourier transform of h changes by a factor of <math>\tfrac{1}{\sqrt{2\pi}}</math>, but exactly because of the convolution theorem, the symbol of a convolution operator under this definition of the Fourier transform is not just the Fourier transform of the function you convolve with. Thanks for staying sharp. Thenub314 (talk) 16:10, 16 May 2008 (UTC)
- The point being made above is that the symbol does not depend on which Fourier transform is being used. You seem to have neglected the fact that in defining the symbol both the inverse and forward transforms are used. When they both come in, any arbitrary normalizations will cancel. silly rabbit (talk) 15:09, 17 May 2008 (UTC)
- OK, I agree with that. Another way to look at it is that the convolution theorem includes the <math>\sqrt{2\pi}</math> factor in:
- <math>\widehat{f*g}(\omega) = \sqrt{2\pi}\,\hat{f}(\omega)\,\hat{g}(\omega)</math>
- (from Fourier_transform#Some_Fourier_transform_properties). But many people will make the mistake of applying it again. It is an unnecessary opportunity for mistakes, which is bad design. And two of us have fallen into the trap right here. We wouldn't even be having this conversation if someone hadn't changed ordinary frequency to radian frequency in this article. And I don't agree with the minimalist approach of leaving out transform definitions. If a formula applies to all three conventions, just say so. Another minimalist example was when this:
- "the negative and positive frequency components of u(t) are shifted by +180° and −180°, respectively. The result is <math>-u(t)</math>. Or in other words:
- <math>H(H(u)) = -u</math>"
- was truncated to just this:
- "the phase of the negative and positive frequency components of u(t) are shifted by +180° and −180°, respectively."
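For readers following the convolution-theorem point, the √(2π) factor carried by the unitary angular-frequency convention can be verified directly; a numerical sketch (mine; the Gaussian test function and the evaluation point ω = 0.7 are arbitrary choices):

```python
import numpy as np

L, n = 20.0, 4001
t = np.linspace(-L, L, n)
dt = t[1] - t[0]
g = np.exp(-t**2 / 2)

def ft(x, tt, w):
    # unitary angular-frequency transform: (1/sqrt(2 pi)) * integral x(t) e^{-i w t} dt
    return np.sum(x * np.exp(-1j * w * tt)) * (tt[1] - tt[0]) / np.sqrt(2 * np.pi)

conv = np.convolve(g, g) * dt                  # (g*g) sampled on a doubled grid
tc = np.linspace(-2 * L, 2 * L, 2 * n - 1)

w = 0.7
lhs = ft(conv, tc, w)                          # transform of the convolution
rhs = np.sqrt(2 * np.pi) * ft(g, t, w)**2      # sqrt(2 pi) times the product
gap = abs(lhs - rhs)
```

Dropping the √(2π) on the right-hand side makes the two sides visibly disagree, which is exactly the "forgotten factor" failure mode under discussion.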
- Well, I don't like the characterization that I fell into a trap. I did a calculation with a form of the Fourier transform I was not used to, and I forgot a factor of 2π. It is perhaps notable that it is not a general property of multiplier operators that their symbol doesn't depend on which Fourier transform you're using (though the comments about multiplying and dividing still apply). It has to do with the fact that the kernel is homogeneous of degree −1. For example, if we take the "ordinary frequency" definition then the symbol of Δ is −4π²ν², whereas if we take one of the "angular frequency" definitions then the symbol would be −ω². But I can't decide if this thought deserves comment. Thenub314 (talk) 00:49, 18 May 2008 (UTC)
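The −4π²ν² claim for the ordinary-frequency convention is easy to check numerically; a sketch (my own; the Gaussian test function, computed derivative, and frequency ν = 0.3 are arbitrary choices):

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
u = np.exp(-t**2)
u2 = (4 * t**2 - 2) * np.exp(-t**2)      # u''(t), differentiated by hand

def hat(x, nu):
    # ordinary-frequency convention: integral x(t) exp(-i 2 pi nu t) dt
    return np.sum(x * np.exp(-2j * np.pi * nu * t)) * dt

nu = 0.3
lhs = hat(u2, nu)                         # transform of the second derivative
rhs = -4 * np.pi**2 * nu**2 * hat(u, nu)  # symbol of the Laplacian times u_hat
gap = abs(lhs - rhs)
```

With an angular-frequency convention the same test would instead match −ω², illustrating that the symbol of Δ, unlike that of the Hilbert transform, depends on the convention.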
- I apologize for offending you. Call it what you wish. My point is that "forgetting a factor" of 2π or √(2π) is a fairly common error when working with angular frequency definitions. You did it, and so did I. Others will do it too.
- I have no idea what your "symbol of Δ" is, but I doubt that it will help the article.
- --Bob K (talk) 02:22, 18 May 2008 (UTC)
-
-
-
-
[edit] Iterated Hilbert transform.
The exact same text seems to have been added in the page Gianfelici Transform. This article has been nominated for deletion because it is not clear if this transform is notable. Maybe we should make sure this section belongs here. There doesn't seem to be a lot that comes up about it on Google besides the reference given. Thenub314 (talk) 17:33, 12 May 2008 (UTC)
[edit] Improper Integration
Does the phrase "Improper Integration" imply that we are dealing with something other than the Lebesgue integral? Being an encyclopedia article, I don't want to get too technical. Though if you follow the link to the Improper integration page it says something like: "For the Lebesgue integral, deals differently with unbounded domains and unbounded functions, and one does not distinguish an 'improper Lebesgue integral'...", but then it follows this with something that (I think) is false, so I don't know what to make of that statement. Anyways, it is a fine expression to have; maybe we could write the limit directly? Thenub314 (talk) 12:36, 16 May 2008 (UTC)
- Whether it is a Lebesgue integral or not, it still needs to be improper. I have corrected the example at Improper integral: it now reads
- <math>\int_0^\infty \frac{\sin x}{x}\,dx.</math>
- Obviously, by the divergence of the harmonic series, in order to make sense of this integral as a Lebesgue integral, it needs to be regarded as an improper one. I'll add the limits in to the article (in particular, it is the limit at zero which is important, not the one at infinity). silly rabbit (talk) 12:51, 16 May 2008 (UTC)
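Taking the classic example ∫₀^∞ (sin x)/x dx as the one under discussion (my assumption; it is the standard improper-but-not-Lebesgue example), the two behaviors can be seen numerically: the signed partial integrals settle near π/2, while the absolute integral keeps growing like a harmonic series:

```python
import numpy as np

# Riemann sums up to the cutoff 200*pi (cutoff and grid are arbitrary choices)
x = np.linspace(1e-9, 200 * np.pi, 2_000_000)
dx = x[1] - x[0]

improper = np.sum(np.sin(x) / x) * dx          # converges (conditionally) toward pi/2
absolute = np.sum(np.abs(np.sin(x)) / x) * dx  # grows without bound, ~ (2/pi) log of the cutoff
```

Each hump of |sin x|/x on [kπ, (k+1)π] contributes at least 2/(π(k+1)), so the absolute integral dominates a multiple of the harmonic series, which is the divergence argument quoted above.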
Thanks for fixing that example, I didn't quite have enough time before I left for work. It was mostly the phrase "improper integral" I was asking about. I thought it simply wasn't used when discussing the Lebesgue integral. I am not sure why; it seems natural enough. This is what I thought the page improper integral meant by "...one does not distinguish...". Thenub314 (talk) 16:42, 16 May 2008 (UTC)
[edit] Comments about Hilbert transform table.
It is not quite true that "The Hilbert transform of the sin and cos functions are defined using the periodic form of the transform, since the integral defining them otherwise fails to converge absolutely." At some point I was bothered by these two being in the table as well, but when I start with the definition (instead of thinking about it as an operator) this is what I get; if there are any mistakes please let me know.
- <math>H(\cos)(t) = \sin(t), \qquad H(\sin)(t) = -\cos(t).</math>
As an operator it does map L∞ to BMO, so the statement could also make sense in that way. But I feel the above calculation justifies the entries in the table. Thenub314 (talk) 00:22, 18 May 2008 (UTC)
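The principal-value calculation behind those table entries can be reproduced numerically with a symmetric cutoff, which is exactly the point under discussion; a sketch (the function name, cutoff A, and grid are my own choices):

```python
import numpy as np

def hilbert_pv(u, t, A=500.0, n=500_000):
    # H(u)(t) = (1/pi) p.v. integral u(tau)/(t - tau) dtau.  With s = t - tau and
    # a symmetric cutoff |s| <= A this becomes
    #     (1/pi) * integral_0^A (u(t - s) - u(t + s)) / s ds,
    # whose integrand is bounded near s = 0.  Midpoint nodes avoid s = 0 exactly.
    s = (np.arange(n) + 0.5) * (A / n)
    return np.sum((u(t - s) - u(t + s)) / s) * (A / n) / np.pi

val = hilbert_pv(np.cos, 0.7)   # expect a value close to sin(0.7)
```

For u = cos the integrand reduces to 2·sin(t)·sin(s)/s, so the symmetric cutoff converges (to sin(t)) even though the integral is not absolutely convergent, matching the calculation above.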
- I think it depends on whether you take the principal value at infinity as well. I'm inclined to think of the integral as an ordinary Lebesgue integral "at infinity", with the improper part at zero. This is certainly how things are done by the old folks like Riesz, Titchmarsh, and Zygmund. (If an integral was improper at infinity, they indicated this explicitly.) Anyway, I am more comfortable indicating that the integral may be problematic, rather than take for granted that the calculation works. silly rabbit (talk) 00:34, 18 May 2008 (UTC)
- L∞ and BMO would be a nice addition. But this is even more problematic from a practical point of view since BMO is not (exactly) a function space, so one needs to modify the definition of the transform. silly rabbit (talk) 01:09, 18 May 2008 (UTC)
-
- Well, I think the general point of view back then regarding Cauchy principal values was to cut symmetrically around whatever singularities there were, but it was a little before my time so it is tough to say. I can say that Calderón and Zygmund in "Singular integrals and periodic functions" cut off basically the way I did in order to calculate the Fourier transform of the kernel. (Of course this paper was dealing with a whole class of kernels.) They also have a very nice result that the Fourier series of the periodic kernel is the Fourier transform of the corresponding kernel sampled at the integers. Maybe I will add something about L∞ and BMO, but I would like to get the Riesz transforms mentioned here first. Thenub314 (talk) 02:19, 18 May 2008 (UTC)
-
-
- I've changed the wording slightly. In case the conditional convergence bothers someone (like me), the periodic kernel is available to do it properly. I have also added a section on BMO in one-dimension. The Hilbert transform in several variables is probably a better addition than the Riesz transform, which is easily important enough to have its own article. silly rabbit (talk) 13:22, 18 May 2008 (UTC)
-
-
-
-
- I will take a look at the wording. I am ok with it, it would bother me too, except that it occasionally comes up, particularly in calculating the Fourier transform, so I suppose I have gotten used to it. What definition of the Hilbert transform in several variables did you have in mind? The only definitions I am familiar with are the directional Hilbert transform and the Riesz transforms. Thenub314 (talk) 13:45, 18 May 2008 (UTC)
- I think the standard generalization is the naive one, which basically takes the Hilbert transform with respect to each variable separately:
- <math>H(u)(x) = \frac{1}{\pi^n}\,\mathrm{p.v.}\!\int_{\mathbb{R}^n} \frac{u(y)}{\prod_{j=1}^n (x_j - y_j)}\,dy.</math>
- silly rabbit (talk) 15:05, 18 May 2008 (UTC)
- I see, I have not had an opportunity to give any deep thought to this operator. It seems it is a product of Hilbert transforms acting in each variable separately. I mentioned the Riesz transforms because they are the operators that generalize the connection between the Hilbert transform and conjugate functions. Thenub314 (talk) 19:28, 21 May 2008 (UTC)
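Since the operator factors as a Hilbert transform in each variable, it can be sketched on a periodic grid by applying the one-dimensional multiplier along each axis in turn (my own illustration; the grid size and test function are arbitrary). For u = cos(x)·cos(y) the result should be sin(x)·sin(y):

```python
import numpy as np

def hilbert_axis(u, axis):
    # 1D Hilbert transform (multiplier -i*sgn) applied along one axis of a grid
    m = -1j * np.sign(np.fft.fftfreq(u.shape[axis]))
    shape = [1] * u.ndim
    shape[axis] = -1
    return np.real(np.fft.ifft(m.reshape(shape) * np.fft.fft(u, axis=axis), axis=axis))

N = 256
t = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(t, t, indexing='ij')
u = np.cos(X) * np.cos(Y)

# the "naive" several-variable transform: one 1D transform per variable
Hu = hilbert_axis(hilbert_axis(u, 0), 1)
err = np.max(np.abs(Hu - np.sin(X) * np.sin(Y)))
```

The two axis-wise transforms commute, mirroring the product structure of the kernel.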
- It's basically the "Cauchy kernel" in several variables, making it seem likely that it might arise in studying edges of wedges. (Though I don't know much about the role, if any, it plays here.) I think the article should address in more detail the connection with the Riemann-Hilbert problem (1D edge-of-the-wedge), as well as the connection with analytic functions. For my own part, I'm still considering the proper way to go about this. siℓℓy rabbit (talk) 03:50, 22 May 2008 (UTC)
-
-
-
[edit] Convolutions.
Just to make sure I am correct, consider the sentence... "However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported distributions are dense in Lp."
Shouldn't this be functions of compact support? Distributions of compact support do not sit inside Lp; to speak of density you'd first want to intersect with Lp. Thenub314 (talk) 13:53, 18 May 2008 (UTC)
- Yes, of course that is what I mean. The discussion is somewhat informal, but I will change it to make it more clear. silly rabbit (talk) 14:01, 18 May 2008 (UTC)
Ok, cool, you can allow Schwartz functions if you really want, as you point out in the line above that is a tempered distribution. Thenub314 (talk) 14:06, 18 May 2008 (UTC)
- I think compact support is needed in the next paragraph where I use the commutativity of convolution. <s>The Schwartz space is not stable under convolution with tempered distributions, whereas compactly supported distributions are.</s> silly rabbit (talk) 14:15, 18 May 2008 (UTC)
-
- Comment: I struck out the obviously false statement above. silly rabbit (talk) 14:30, 18 May 2008 (UTC)
- Correction: What I mean is that as long as all but one of the distributions in the convolution are compactly supported, then the convolution is a commutative and associative operation, so that formal manipulations can be performed with it. silly rabbit (talk) 14:33, 18 May 2008 (UTC)
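The commutativity and associativity being relied on here can at least be sanity-checked in the discrete setting, where finite sequences play the role of compactly supported distributions (my own illustration; the sequence lengths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# finite sequences as stand-ins for compactly supported distributions
a = rng.standard_normal(5)
b = rng.standard_normal(7)
c = rng.standard_normal(9)

# commutativity:  a * b = b * a
comm = np.max(np.abs(np.convolve(a, b) - np.convolve(b, a)))

# associativity:  (a * b) * c = a * (b * c)
assoc = np.max(np.abs(np.convolve(np.convolve(a, b), c)
                      - np.convolve(a, np.convolve(b, c))))
```

In the continuous setting these identities are exactly what fails in general once more than one factor loses compact support, which is the point of the correction above.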