Talk:Orthogonality


Those trained in computer science think they invented everything known before computers existed: integrals, mathematical induction, orthogonality, etc. I've left the page a bit of a messy hodge-podge, but far better than what was here. Michael Hardy 02:27, 13 Jan 2004 (UTC)

If non-orthodox is "heterodox", is "heterogonal" non-orthogonal? (Google has one hit for that word, in an unmaths context.) 142.177.126.230 21:05, 5 Aug 2004 (UTC)

It is a needless complication of the definition of orthogonality to bring in the subscripts i and j when one is only trying to define what it means to say two functions are orthogonal. And it is incorrect unless one has first given them some meaning. Michael Hardy 01:45, 6 Sep 2004 (UTC)


Examples

i'd like an example of two simple functions that are orthogonal. - Omegatron 16:22, Sep 29, 2004 (UTC)

Take two orthogonal vectors and then change basis to {1, t, t^2, ..., t^n}?

Dysprosia 22:32, 29 Sep 2004 (UTC)

No, that won't work until you specify a measure (or "weight function") with respect to which those are orthogonal. See for example Chebyshev polynomials and Legendre polynomials and Hermite polynomials (all exceptions to the rule that it is better to use singulars as Wikipedia article titles). Those are examples. Also, see Bessel function. Michael Hardy 00:32, 30 Sep 2004 (UTC)
Well, it does depend on the inner product you use to determine orthogonality, though. But yes, if you use the inner product defined in the article, it won't work. Dysprosia 01:48, 30 Sep 2004 (UTC)
some of us don't know what that means... aren't sin and cosine orthogonal? and certain pulse trains? - Omegatron 22:53, Sep 29, 2004 (UTC)
If you use the inner product from the article, and take the integral from -a to a with weight 1, sin x and cos x are indeed orthogonal (calculate it for yourself). Dysprosia 01:48, 30 Sep 2004 (UTC)

And explain why the integral is a to b instead of -∞ to +∞? - Omegatron 16:24, Sep 29, 2004 (UTC)

No reason, though you can define another inner product with those bounds and then consider orthogonality with respect to that inner product. Dysprosia 22:32, 29 Sep 2004 (UTC)
i see that the a and b are used in the inner product article, too. - Omegatron 22:53, Sep 29, 2004 (UTC)

It is important to realize that functions are orthogonal only with respect to a specified interval. In other words, sin(x) and cos(x) are not orthogonal, generally speaking. For example, they are orthogonal on an interval [a, b] when b − a = nπ for a nonzero integer n, and also on any symmetric interval [−a, a], as noted above. This is also why the inner product is defined on [a, b] and not from −∞ to +∞. Severoon 22:41, 1 May 2006 (UTC)
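
For anyone who wants to verify the interval-dependence discussed above, here is a minimal numerical sketch using SciPy (the interval endpoints are purely illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a, b):
    """Inner product <f, g> = integral of f(x) g(x) over [a, b], unit weight."""
    val, _ = quad(lambda x: f(x) * g(x), a, b)
    return val

# Orthogonal on [0, pi] (interval length is an integer multiple of pi) ...
print(inner(np.sin, np.cos, 0.0, np.pi))   # ~0
# ... and on a symmetric interval [-1, 1] (the integrand is odd) ...
print(inner(np.sin, np.cos, -1.0, 1.0))    # ~0
# ... but not on an arbitrary interval such as [0, 1].
print(inner(np.sin, np.cos, 0.0, 1.0))     # ~0.354, not orthogonal
```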

Missing bracket

There is a missing opening square bracket on the integration example image, I believe. --anon

Fixed now. I think that bracket was left out on purpose. But I agree with you that things look better with the bracket in. Oleg Alexandrov 18:25, 15 May 2005 (UTC)

Vectors

For some positive integer a, and for 1 ≤ k ≤ a−1, these vectors are orthogonal; for example, (1,0,0,1,0,0,1,0)^T, (0,1,0,0,1,0,0,1)^T, (0,0,1,0,0,1,0,0)^T are orthogonal.

interesting. so this is where discretely sampled signals like
...0,0,1,0,0,1,1...
...1,0,0,0,1,0,0...
...0,1,0,1,0,0,0...
come from? and these signals are orthogonal too, according to another site I saw. can we extrapolate the signal processing version from the many dimensional vector version? maybe graphs? - Omegatron 13:41, Sep 30, 2004 (UTC)

They appear to be. Calculate the dot product of these "signals", so to speak, across each triplet. If they sum to 0 for all the bit triplets over your time period they are orthogonal. I don't understand what you mean about "extrapolate the signal processing version from the many dimensional vector version". Dysprosia 14:04, 30 Sep 2004 (UTC)
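
A minimal NumPy sketch of that dot-product check, using the three example vectors from the top of this section:

```python
import numpy as np

# One period of each binary "signal" from the example above.
v1 = np.array([1, 0, 0, 1, 0, 0, 1, 0])
v2 = np.array([0, 1, 0, 0, 1, 0, 0, 1])
v3 = np.array([0, 0, 1, 0, 0, 1, 0, 0])

# Pairwise dot products: all zero, so the vectors are mutually orthogonal.
print(np.dot(v1, v2), np.dot(v1, v3), np.dot(v2, v3))  # 0 0 0
```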

the difference being that this is a discrete function instead of a vector,

function f[n] = ...,0,1,0,0,4,0,0,−1,0,2,...

vector \mathbf{a} = (...,0,1,0,0,4,0,0,-1,0,2,...)

but i guess they can be seen as the same thing from different perspectives? can you have infinite-dimensional vectors? the discrete-"time" function can be "converted" to a continuous-time function (think sampling), though, which can also be orthogonal to another similar function if they have the same "shape" relationship... - Omegatron 14:40, Sep 30, 2004 (UTC)

heh. lots of "quotes". i can explain better later. i will draw some pictures... - Omegatron 14:41, Sep 30, 2004 (UTC)
Yes, you can have vectors of infinite dimension. You know there is in fact nothing really special about any of these definitions of orthogonality - the important thing is the inner product, which determines whether two vectors in a vector space are orthogonal, and also determines a notion of "length". Change the inner product, and these definitions change also. Dysprosia 14:49, 30 Sep 2004 (UTC)
Not sure I understand what you're trying to say. So you could define your own "inner product" for which a cat is orthogonal to a dog? - Omegatron 19:55, Sep 30, 2004 (UTC)
Metaphorically, yes, as long as the inner product you define is in fact an inner product. There are some requirements on this, see inner product. Literally, you have to define what you mean by a cat and dog first before you can say they are orthogonal to each other... ;) Dysprosia 01:07, 1 Oct 2004 (UTC)
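
To make the point concrete, here is a small sketch of two vectors that are orthogonal under the standard dot product but not under a weighted inner product (the weights are an arbitrary illustrative choice):

```python
import numpy as np

u = np.array([1.0, 1.0])
v = np.array([1.0, -1.0])

# Standard inner product: <u, v> = sum of u_i v_i.
print(np.dot(u, v))  # 0.0 -> orthogonal

# A different (weighted) inner product <u, v>_w = sum of w_i u_i v_i,
# with illustrative positive weights w = (1, 2). Still a valid inner
# product, but u and v are no longer orthogonal with respect to it.
w = np.array([1.0, 2.0])
print(np.sum(w * u * v))  # -1.0 -> not orthogonal
```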
can you have infinite-dimensional vectors?

Except that it's the space that is infinite-dimensional, rather than the vectors themselves. The two most well-known infinite-dimensional vector spaces are \ell^2, which is the set of all sequences of scalars such that the sum of the squares of their norms is finite (for example (1, 1/2, 1/3, ...) is such a vector because 1^2 + (1/2)^2 + (1/3)^2 + ... is finite) and L^2, the set of all functions f such that

\int_\mathrm{whatever\ space}\left|f\right|^2 < \infty.

("Whatever space" could be for example the interval from 0 to 2π, or could be the whole real line, or could be something else.) Michael Hardy 19:30, 30 Sep 2004 (UTC)

Yes. So what is the connection between the discrete function with an infinite number of points ..., f[-1], f[0], f[1], ... and a vector with an infinite number of dimensions (..., x_{-1}, x_0, x_1, ...)? Are these the same concept said in two different ways or are there subtle differences? For instance, in MATLAB or GNU Octave you use vectors or matrices for everything, and use them to represent strings of sampled data or two-dimensional arrays of data, both of which could also be thought of as functions of the vector or matrix coordinates.
Not that this is a site for teaching people math, but it could point out things that need to be included in various articles. :-) - Omegatron 19:55, Sep 30, 2004 (UTC)
Let x_i = f(i)? Dysprosia 01:07, 1 Oct 2004 (UTC)

Orthogonal curves

This article does not mention orthogonal curves or explain what it means that two circles are orthogonal to each other. Hyperbolic geometry mentions orthogonal circles, but I had to look up the exact meaning elsewhere (more precisely, on MathWorld).

My question is, should orthogonal curves and circles be covered in this article, or do they qualify as a "related topic"? Fredrik | talk 03:16, 21 Oct 2004 (UTC)

The concept's not really that different, though Mathworld's geometric treatment may merit a separate page. One could perhaps say generally that two curves defined implicitly by functions f and g are orthogonal if, where they intersect, ∇f·∇g = 0, though I'm not sure that's a decent, established, or useful definition... Dysprosia 08:14, 21 Oct 2004 (UTC)
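
A numerical sketch of that tentative definition, using two implicitly defined circles that are known to intersect at right angles (two unit circles whose centers are √2 apart, so that d² = r₁² + r₂²):

```python
import numpy as np

# Implicit curves f = 0 and g = 0: unit circles centered at 0 and sqrt(2).
f = lambda x, y: x**2 + y**2 - 1
g = lambda x, y: (x - np.sqrt(2))**2 + y**2 - 1

grad_f = lambda x, y: np.array([2 * x, 2 * y])
grad_g = lambda x, y: np.array([2 * (x - np.sqrt(2)), 2 * y])

# An intersection point, found by solving f = g = 0 by hand.
x0, y0 = 1 / np.sqrt(2), 1 / np.sqrt(2)
print(f(x0, y0), g(x0, y0))                     # ~0, ~0 (on both curves)
print(np.dot(grad_f(x0, y0), grad_g(x0, y0)))   # ~0 -> curves are orthogonal
```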

Quantum mechanics

The article states that

In quantum mechanics, two wavefunctions ψ_m and ψ_n are orthogonal unless they are identical, i.e. m = n. This means, in Dirac notation, that ⟨ψ_m | ψ_n⟩ = 0 unless m = n, in which case ⟨ψ_m | ψ_n⟩ = 1. The fact that ⟨ψ_m | ψ_n⟩ = 1 is because wavefunctions are normalized.

This is wrong in the general case. The author probably assumed that ψ_m and ψ_n are eigenstates of the same observable corresponding to two different eigenvalues, in which case it is trivially true. The definition of orthogonality in quantum mechanics is the same as in the L^2 space in mathematics, so this qualification can be removed without losing anything.--82.66.238.66 20:11, 16 April 2006 (UTC)
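
For concreteness, the eigenstate case is easy to check numerically; a sketch using the particle-in-a-box eigenfunctions ψ_n(x) = √2 sin(nπx) on [0, 1], which are all eigenstates of the same Hamiltonian:

```python
import numpy as np
from scipy.integrate import quad

def psi(n):
    """Particle-in-a-box eigenfunction psi_n(x) = sqrt(2) sin(n pi x) on [0, 1]."""
    return lambda x: np.sqrt(2) * np.sin(n * np.pi * x)

def braket(m, n):
    """<psi_m | psi_n> on [0, 1]; the wavefunctions are real, so no conjugate."""
    val, _ = quad(lambda x: psi(m)(x) * psi(n)(x), 0.0, 1.0)
    return val

print(braket(1, 2))  # ~0 -> orthogonal (m != n)
print(braket(2, 2))  # ~1 -> normalized (m == n)
```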

It's "trivial"? Not for everyone! I'm not an expert in quantum mechanics--my specialisation is in complex systems--so I will not defend my original statement down to the last letter. However, I do feel strongly that the comments on quantum mechanics should be modified, not removed.
The reason I added the paragraph in question is because when I was studying for my last quantum mechanics class, I found that Wikipedia did not answer the questions I had about orthogonality. If you simply take out the stuff on quantum mechanics, then other people will likely come along with the same queries as me--and they'll be unsatisfied too. If you want to clarify that it's for the two eigenvalues of the same observable, that's fine. But just because it's not the most general case doesn't mean it's not an important one. Ckerr 16:12, 19 April 2006 (UTC)
Since there has been no reply, I'm going to reinstate the part on QM. Please correct it if it needs correcting, but please don't just axe it! Ckerr 09:04, 25 April 2006 (UTC)

Weight Function?

Why is there mention of a weight function w(x) in the definition of the inner product? Its presence plays no role whatsoever in the definition of the inner product of f and g, so why not remove it? (I understand the role of a weight function in PDEs like the heat eqn, but isn't it unnecessary and extraneous in a page on orthogonality?) Severoon 22:45, 1 May 2006 (UTC)

Well, I suppose weight functions aren't truly essential to the notion being discussed, but they make it much more accessible. We could just say "Given an inner product \langle f, g \rangle, f and g are orthogonal if ....". But the use of weight functions gives a good motivation for the construction of inner products, and for the notion that one can construct different inner products, and hence different notions of orthogonality, on the same underlying set of objects (e.g. polynomials.)
On second thought, I see your point. The section isn't very clear. I'll fix it. William Ackerman 15:46, 12 May 2006 (UTC)
I agree that the section is not clear. In fact, it's so unclear it seems to have led to confusion right in the examples section: "These functions are orthogonal with respect to a unit weight function on the interval from −1 to 1." (see the third example) In fact, the functions in the example are not "orthogonal wrt a unit weight function"...they're orthogonal to each other on the specified interval!
This definitely needs to be changed. The introduction of a weight function should be brought up in the context of a physical example, something like the heat equation on a 1D conductive rod of nonuniform density. Short of an explicit physical application, it just seems to be confusing things. Severoon 23:34, 12 May 2006 (UTC)
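
One way to see that the weight function is not extraneous: the Chebyshev polynomials T_2 and T_4 are orthogonal on [−1, 1] with respect to the weight w(x) = 1/√(1 − x²), but not with respect to the unit weight. A sketch:

```python
import numpy as np
from scipy.integrate import quad

T2 = lambda x: 2 * x**2 - 1            # Chebyshev polynomial T_2
T4 = lambda x: 8 * x**4 - 8 * x**2 + 1 # Chebyshev polynomial T_4
w  = lambda x: 1.0 / np.sqrt(1.0 - x**2)

# With the Chebyshev weight: orthogonal.
val, _ = quad(lambda x: T2(x) * T4(x) * w(x), -1.0, 1.0)
print(val)  # ~0

# With unit weight on the same interval: not orthogonal.
val, _ = quad(lambda x: T2(x) * T4(x), -1.0, 1.0)
print(val)  # ~ -0.362 (exactly -38/105)
```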

Emergency fix

I have just put in an emergency fix for the question raised by 66.91.134.99, and left a note on his talk page. This was a proof that an orthogonal set is a linearly independent set.

It's not at all clear that putting in this proof is the right thing for the article as a whole -- I just needed a quick fix. (It's not even clear that this is the best proof. It was off the top of my head. And it's definitely not formatted well.) Maybe the linear independence is truly obvious, and saying anything about it is just inappropriate for the level of the discussion. Maybe the proof/discussion should be elsewhere.

If/when someone has the time to look over the whole article, and think about the context of the orthogonality/independence issue, and figure out the right way to deal with all this, it would be a big help.

William Ackerman 16:08, 21 July 2006 (UTC)

Thanks for the proof. But I tend to agree with your doubt; the proof is probably not the right thing for the article as a whole, especially that early in the article. Proofs are not really encyclopedic to start with (see also Wikipedia:WikiProject Mathematics/Proofs). I removed the proof for now. Oleg Alexandrov (talk) 08:33, 22 July 2006 (UTC)
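
For readers who come here looking for it, the standard argument (a sketch, not necessarily the proof that was removed) is only a few lines:

```latex
\textbf{Claim.} Pairwise orthogonal nonzero vectors $v_1, \dots, v_n$ are
linearly independent.

\textbf{Proof sketch.} Suppose $c_1 v_1 + \cdots + c_n v_n = 0$. Taking the
inner product of both sides with $v_j$ and using
$\langle v_i, v_j \rangle = 0$ for $i \neq j$ gives
\[
  0 = \Big\langle \sum_i c_i v_i,\; v_j \Big\rangle
    = \sum_i c_i \langle v_i, v_j \rangle
    = c_j \langle v_j, v_j \rangle .
\]
Since $v_j \neq 0$, we have $\langle v_j, v_j \rangle > 0$, so $c_j = 0$ for
every $j$, which is linear independence. \qed
```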

On radio communications

The radio communications subsection claims that TDMA and FDMA are non-orthogonal transmission methods. However, in the theoretically ideal situation, this is not the case. For FDMA, note the orthogonality of sinusoids of different frequencies; thus, restricting users to a certain frequency range IS orthogonal so long as the frequency ranges are non-overlapping.

This is similarly true for the TDMA case. Assume that each user is restricted to transmit in a specific, non-overlapping time interval, i.e.,

f_1(x) = 0 \; \forall x \notin [a,b]

and

f_2(x) = 0 \; \forall x \notin [b,c],

so that the inner product

\int_{-\infty}^{\infty} f_1(x)f_2^*(x) dx = 0.
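
Both claims are easy to check numerically; a sketch, with all signal choices purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

# TDMA: users transmit on disjoint time intervals [0, 1] and [1, 2].
f1 = lambda x: np.sin(5 * x) if 0 <= x <= 1 else 0.0
f2 = lambda x: np.cos(3 * x) if 1 <= x <= 2 else 0.0
val, _ = quad(lambda x: f1(x) * f2(x), 0.0, 2.0)
print(val)  # 0 -> orthogonal (supports overlap only at the single point x = 1)

# FDMA: sinusoids at distinct integer frequencies over a common period.
g1 = lambda x: np.sin(2 * np.pi * 3 * x)  # "user 1" at 3 Hz
g2 = lambda x: np.sin(2 * np.pi * 7 * x)  # "user 2" at 7 Hz
val, _ = quad(lambda x: g1(x) * g2(x), 0.0, 1.0)
print(val)  # ~0 -> orthogonal
```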

Radio Communications

I agree with the comment already present on this page. The sentence "An example of an orthogonal scheme is Code Division Multiple Access, CDMA. Examples of non-orthogonal schemes are TDMA and FDMA." is wrong and should be deleted. All in all the section on Radio Communications is not satisfactory as it is. I would delete it and replace it with something such as the following text, or similar: "Ideally FDMA (Frequency Division Multiple Access) and TDMA (Time Division Multiple Access) are both orthogonal multiple access techniques, achieving orthogonality in the frequency domain and in the time domain, respectively. In practice all orthogonal techniques are subject to impairments, which however can be controlled to any desired level with appropriate design. In the case of FDMA the loss of orthogonality arises from imperfect spectrum shaping, and it can be combated with appropriate guard bands. In the case of TDMA, the loss of orthogonality is the result of imperfect system synchronization. One can ask whether there are other "domains" in which orthogonality can be imposed, and the answer is that a third domain is the so-called "code domain". This leads to CDMA (Code Division Multiple Access), a technique which impresses a codeword on top of the digital signal. If the set of codewords is chosen appropriately (e.g. Walsh-Hadamard codes), and some further conditions are assumed on the signal and on the channel, CDMA can be orthogonal. However, in many conditions, guaranteeing near-ideal orthogonality in CDMA implementations is more difficult. In packet communications, with uncoordinated terminals, other MA techniques are used, for example the Aloha technique originally invented for computer communications via satellite. Since the terminals transmit as soon as they have a packet ready, in an uncoordinated manner, packets can collide at the receiver, producing interference. Therefore Aloha is an example of a non-orthogonal MA technique, even under ideal operational conditions." 213.230.129.21 09:55, 1 October 2006 (UTC)
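
Since the proposed text mentions Walsh-Hadamard codes, here is a small sketch of their mutual orthogonality, using SciPy's Hadamard-matrix constructor:

```python
import numpy as np
from scipy.linalg import hadamard

# 8 x 8 Walsh-Hadamard matrix: each row is one user's +/-1 spreading code.
H = hadamard(8)

# Gram matrix of the rows is 8 * I: distinct codes have zero inner
# product, i.e. the codewords are mutually orthogonal.
print(H @ H.T)
```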

Discrete function orthogonality?

If someone thinks it's appropriate, could they add the definition of orthogonality for discrete functions? For example, the kernel of the DFT. Thanks.
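
For reference, the usual statement is that the DFT kernel vectors e_k[n] = exp(2πi·kn/N), for n = 0, ..., N−1, satisfy \sum_n e_k[n] \overline{e_l[n]} = N when k = l and 0 otherwise. A numerical sketch:

```python
import numpy as np

N = 8
n = np.arange(N)
e = lambda k: np.exp(2j * np.pi * k * n / N)  # k-th DFT kernel vector

# Discrete inner product <e_k, e_l> = sum_n conj(e_k[n]) * e_l[n]
# (np.vdot conjugates its first argument).
print(np.round(np.vdot(e(3), e(5)).real, 12))  # 0   -> orthogonal for k != l
print(np.vdot(e(3), e(3)).real)                # 8.0 -> equals N for k == l
```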