Talk:Dot product

From Wikipedia, the free encyclopedia

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: Start Class Mid Priority  Field: Geometry
One of the 500 most frequently viewed mathematics articles.

[edit] Circular proof?

In the Law of Cosines article, it is proved using vector dot products. And over here in dot products, a dot b = a b cos theta is proved using the Law of Cosines. Shouldn't somebody fix this? --Orborde 07:47, 9 September 2005 (UTC)

Yes, not here though. Proving Law of Cosines using dot products is silly. All of vector calculus depends on basic trigonometry, not the other way around.

I don't know what you're talking about: the Law of Cosines is just the definition of the dot product! (someone didn't understand their high school lessons)

Of course, they're basically the same thing. The issue is the circularity of the proofs. Pfalstad 19:01, 4 February 2006 (UTC)

[edit] Merge?

This needs some major merging and clarification of terminology with the article inner product space. For example, this article never defines what a dot product is, esp. over an arbitrary vector space. Then it defines the dot product in terms of the "angle" between two vectors, which is never defined. This is a circular definition. The typical way is to define the cosine between two vectors in terms of the inner product. Then all of what follows here is just a special case when V is R^n with the usual dot product. Personally, I think there could be two articles, the inner product space one giving the abstract algebraic formulation, and a more concrete geometric one focusing just on R^n or C^n, aimed more at people who might just use it for calculus or physics classes. Revolver 00:00, 1 Apr 2004 (UTC)

I agree. Markus Schmaus 15:37, 27 July 2005 (UTC)
A quote from the main article between the horizontal rules:...

[edit] Properties

The definition has the following consequences:

  • the dot product is commutative, i.e. a·b = b·a.
  • two non-zero vectors a and b are perpendicular if and only if a·b = 0
  • the dot product is bilinear, i.e. a·(rb + c) = r (a·b) + (a·c)

From these it follows directly that the dot product of two vectors a = [a1 a2 a3] and b = [b1 b2 b3] given in coordinates can be computed particularly easily:

a·b = a1b1 + a2b2 + a3b3

I'm afraid that I don't see how "it follows directly".

It can be easily shown with 2D vectors that this is true using "cos(A-B)=cosA.cosB+sinA.sinB", but I'm not sure how to extend this to 3D (or more D) vectors.

Is it worth expanding the article to show this/these derivation(s)? -- SGBailey 21:56, 2003 Nov 16 (UTC)
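One way to see that the geometric and component formulas agree, at least numerically, is to construct two vectors with a known angle between them. A Python sketch (the angle and lengths below are arbitrary illustrative choices, not from the article):

```python
import math

# Place a along the x-axis and b at a known angle theta in the xy-plane,
# so the angle between a and b is theta by construction.
theta = 0.7            # radians (arbitrary)
la, lb = 3.0, 5.0      # |a| and |b| (arbitrary)
a = [la, 0.0, 0.0]
b = [lb * math.cos(theta), lb * math.sin(theta), 0.0]

component = sum(x * y for x, y in zip(a, b))   # a1*b1 + a2*b2 + a3*b3
geometric = la * lb * math.cos(theta)          # |a| |b| cos(theta)

print(abs(component - geometric) < 1e-12)      # True
```

The two formulas agree to floating-point precision; the full derivation for arbitrary orientations is what the discussion below works out.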

Given a basis of perpendicular unit vectors, it does follow at once.

Charles Matthews 09:53, 19 Nov 2003 (UTC)

I accept that it works. I just don't see how the three consequences "directly" cause the ax·bx + ay·by + az·bz construct. I used to know this stuff 30 years ago - sigh. -- SGBailey 2003-11-18

I dislike the first couple of paragraphs now. They are diving into too much depth without giving a general overview first. However, I'm not able to edit it without losing much of the content of those paragraphs. The equations for the dot product and its description want to come before the stuff about vector spaces and fields. -- SGB 2004-03-24


I would like to point out that most of this article is related to *real* vector spaces; when the field is not the reals, things are different. For example, when the characteristic is nonzero, say Z/5Z, there are vectors in the 2-dimensional vector space (Z/5Z)² such that <x,x>=0, for example x=(1,2), without having <x,y>=0 for all y. And there is no chance to define the notion of norm or angles between lines (there are 6 of them). All the things about angles should be left to the real case only. -- Christian Mercat 2004-04-01


I agree with some of the concerns mentioned above. First, as far as I know, the term dot product is only used in real vector spaces. Over an arbitrary field, the commonly used terminology is bilinear form. Even over the complex numbers, dot product sounds a little too conversational; in this case I would instead say inner product, and refer to it as a Hermitian form.

Apart from terminology, the article currently features considerable confusion between the real and arbitrary field cases. Dmharvey Image:User_dmharvey_sig.png Talk 21:58, 3 Jun 2005 (UTC)

[edit] Fork?

I just wonder, why was the subpage dot product/Temp created? Making forks is not considered a good idea on Wikipedia. Do you plan to eventually overwrite the dot product article? Maybe you should have started by listing your objections on the talk page. Either way, to me it looks like stealth editing, and in this context I am not sure that is necessary. Oleg Alexandrov 15:39, 27 July 2005 (UTC)

I originally created the page as a proposal for a re-write in response to the discussion at Talk:Inner product space#Separate_inner_product_page.3F but I haven't had much time to do the re-write. Anyways, it looks like someone just re-wrote everything anyways so dot product/Temp is pointless now. Don't worry, no ulterior motives. --Dan Granahan 18:00, 27 July 2005 (UTC)

[edit] Target group

I believe the target group of this article is not mathematicians, but mainly engineers and high school students.

I used length instead of norm, as it is a more down-to-earth concept and I knew about (spatial) vectors and their length long before I heard about norms. Similarly, I first gave the definition for three-dimensional vectors, as I believe many people looking for this article will be only interested in this case and have never seen a Σ sum before. Markus Schmaus 11:57, 29 July 2005 (UTC)

I very much agree with Markus. People, please try to keep things simple. This is a general purpose encyclopedia, and the most frequent complaint is that mathematicians write things here only for themselves, thus shutting everybody out. Thanks. Oleg Alexandrov 15:19, 29 July 2005 (UTC)

I'm curious what level of "dumbed-down" simplicity we are trying to achieve. For instance, switching from norm to length in the Geometric interpretation section seems unnecessary given we have terms such as Euclidean space and projection in the same paragraph. In my opinion, keeping phrasing such as

||a|| and ||b|| denote the norm (or length) of a and b

was reasonable and in no further need of generalization or simplification. After all, I've seen the notation ||a|| for the length of a vector since 8th grade or whenever kids are first taught about vectors. I just want to avoid excessive simplifying. --Dan 20:35, 10 August 2005 (UTC)

I don't see any harm in excessive simplification. Dot product is a pretty simple, basic concept, and if someone is looking it up, it's likely they are very mathematically unsophisticated. They might not even be in 8th grade yet; maybe they saw dot product on a computer graphics site somewhere. The ||a|| notation is still there, it's just further down under "Generalization". And the link to inner product space contains even more rigor and fancy notation for those who are so inclined. Pfalstad 20:56, 10 August 2005 (UTC)

The person writing the proof of the geometric interpretation didn't use ||a|| but a as the notation for the length of a vector, and I think it is the more basic notation. If you convince me that high school students are more familiar with the other, I have no problem with it. It will be much harder to convince me that using norm instead of length in the geometry section is a good idea. Norm generalizes length, and the norm of a Euclidean vector is its length. Markus Schmaus 22:49, 10 August 2005 (UTC)

Fair enough. You make a good argument that norm shouldn't be used in that section. As for the notation for length, I guess that's really just trivial as long as it's consistent. Personally, I'm used to the ||a|| notation instead of a, but I suppose they're both just as acceptable. The only thing that might be worth changing is the picture, which uses a different notation than the text. --Dan 03:23, 11 August 2005 (UTC)

[edit] Distributive law

If you know that the distributive law holds for dot product, "it follows directly."

I'll prove why.

a·b = (x1 i + y1 j + z1 k) · (x2 i + y2 j + z2 k) =

(x1*x2)i·i + (x1*y2)i·j + (x1*z2)i·k +

(y1*x2)j·i + (y1*y2)j·j + (y1*z2)j·k +

(z1*x2)k·i + (z1*y2)k·j + (z1*z2)k·k

Since i · i = cos 0 = 1, i · j = cos 90 = 0, i · k = cos 90 = 0,

j · j = cos 0 = 1, j · k = cos 90 = 0,

k · k = cos 0 = 1,

We can simplify the above equation as

(x1*x2)(1) + (x1*y2)(0) + (x1*z2)(0) +

(y1*x2)(0) + (y1*y2)(1) + (y1*z2)(0) +

(z1*x2)(0) + (z1*y2)(0) + (z1*z2)(1)

= (x1*x2) + (y1*y2) + (z1*z2)

but my question is... how do we prove that the distributive law for dot product holds??? The preceding unsigned comment was added by 129.97.235.130 (talk • contribs) .

[edit] Distributive law

I'll show you babe! Grrr...

We are trying to prove this: \vec{c}\cdot(\vec{u}+\vec{v})=\vec{c}\cdot\vec{u}+\vec{c}\cdot\vec{v}
Now let: \vec{c}=\displaystyle{x_c\choose y_c} , \vec{u}=\displaystyle{x_u\choose y_u} , \vec{v}=\displaystyle{x_v\choose y_v}

and so \vec{u}+\vec{v}=\displaystyle{x_u+x_v\choose y_u+y_v}

now to the juicy bit:

\vec{c}\cdot(\vec{u}+\vec{v})= x_c(x_u+x_v)+y_c(y_u+y_v)=x_cx_u+y_cy_u+x_cx_v+y_cy_v=\vec{c}\cdot\vec{u}+\vec{c}\cdot\vec{v}

QUESTION: Didn't you just use the distributive law while proving distributive law above?

Yes, we used the distributive law for real numbers to prove the distributive law for dot products. Nothing wrong with that. Pfalstad 18:01, 11 August 2006 (UTC)
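For the skeptical reader, the 2D identity proved above is easy to spot-check numerically. A Python sketch (the sample vectors are arbitrary choices, not from the discussion):

```python
# Arbitrary 2-D sample vectors c, u, v
c = [2.0, -1.0]
u = [3.0, 4.0]
v = [-5.0, 0.5]

def dot(p, q):
    """Component formula: x_p*x_q + y_p*y_q."""
    return sum(x * y for x, y in zip(p, q))

lhs = dot(c, [x + y for x, y in zip(u, v)])  # c . (u + v)
rhs = dot(c, u) + dot(c, v)                  # c.u + c.v

print(lhs, rhs)  # -8.5 -8.5
```

Both sides come out equal, as the algebra above guarantees.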

[edit] TRANSPOSE MATRIX?

a^T b — and yet vector "a" was not modified; only the transpose of the vector "b" was implemented instead of the vector "a", or am I missing something? --anon

a^T b means transpose of a times b. Oleg Alexandrov (talk) 23:33, 22 June 2006 (UTC)

so why was the transpose of "b" taken instead of "a"? thanx —The preceding unsigned comment was added by 209.176.23.253 (talk • contribs) .

The transpose matrix thing shouldn't appear in the intro. I dunno where else it can go, but it's confusing to matrix newbs and needs to be moved. Fresheneesz 19:53, 10 August 2006 (UTC)

My question is whether the correct expression should be (a,b) = (b^t)a, because otherwise the multiplication will yield a matrix instead of a number. a^t being an n×1 vector times b being a 1×n vector will yield an n×n matrix. Can you verify it? 130.54.130.229 09:16, 21 December 2006 (UTC)
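A shape check may settle this: if a is stored as an n×1 column vector, then a^T is 1×n, so a^T b is (1×n)(n×1) = 1×1, a scalar, not an n×n matrix. A NumPy sketch (the sample values are the article's example vectors):

```python
import numpy as np

# Column vectors (n x 1), using the article's example values
a = np.array([[1], [3], [-5]])
b = np.array([[4], [-2], [-1]])

result = a.T @ b          # (1 x n) @ (n x 1) -> a 1 x 1 matrix
print(result.shape)       # (1, 1)
print(result[0, 0])       # 1*4 + 3*(-2) + (-5)*(-1) = 3
```

An n×n matrix (the outer product) would result from a @ b.T instead, which is the confusion above with the shapes swapped.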

[edit] Complex dot product and more general definition

The definition of a dot product for complex vectors is more general, and I'm thinking it wouldn't be a bad idea to simply make that the main definition at the top. Most people learning about the dot product wouldn't be confused if a note about the complex conjugate were given: "for real numbers, the complex conjugate of a number is equal to that number, and so that operation may be ignored in the case of real numbers". Comments? Fresheneesz 20:17, 11 August 2006 (UTC)

It would be a very bad idea. The whole article is about the real dot product, and all the applications in physics and geometric intuition are about that. The generalization to complex numbers would be sterile and confusing. The fully general dot product is described in the inner product article. Oleg Alexandrov (talk) 20:39, 11 August 2006 (UTC)
Agree with Oleg. Pfalstad 21:32, 11 August 2006 (UTC)

I would like to add that the definition as it is right now includes complex vectors, but a lot of the rest of the article is about real vectors only, without explicitly stating so. This is misleading, as for example the dot product of complex vectors is not commutative. —Preceding unsigned comment added by 130.237.43.57 (talk) 12:02, 14 April 2008 (UTC)

Perhaps we could add a note that most of the article refers to real-valued vectors (since this is what most enquirers will wish to learn about), but with a link to vector spaces for a more general mathematical treatment of vectors? dbfirs 12:21, 14 April 2008 (UTC)
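The non-commutativity mentioned above is easy to demonstrate. A NumPy sketch with arbitrary complex sample vectors, using the convention that the first argument is conjugated:

```python
import numpy as np

# Arbitrary complex sample vectors
a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 0 + 1j])

# np.vdot conjugates its first argument: <a, b> = sum(conj(a_i) * b_i)
ab = np.vdot(a, b)
ba = np.vdot(b, a)

print(ab == ba)           # False: the complex dot product is not commutative
print(ab == np.conj(ba))  # True: it is conjugate (Hermitian) symmetric
```

So swapping the arguments conjugates the result; for real vectors the conjugation does nothing and commutativity is recovered.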

[edit] Move proof to separate page

The proof is a bit lengthy and unwieldy in this article. Also, I doubt most readers care about it. I think it would be a lot more concise and easy to read if a link to the proof was given in the section on the geometric interpretation. Comments? Fresheneesz 01:07, 13 August 2006 (UTC)

I wouldn't move it.. People don't have to read that far if they aren't interested. It's not a very long article. Pfalstad 02:19, 13 August 2006 (UTC)

[edit] Error in image

The label below the image shows |A|cos(theta) . It is missing the |B| factor —The preceding unsigned comment was added by 200.117.138.199 (talk • contribs) 09:52, 8 September 2006.

That label is for the scalar projection of A on B, not A • B. I've added a caption that hopefully clarifies this. --Mrwojo 07:46, 5 March 2007 (UTC)

[edit] error in included gif?

In the /*Geometric Interpretation*/ section, the equation included as a .gif uses sin(theta) where it should use cos(theta) - I don't know how to edit/fix that. —The preceding unsigned comment was added by 24.61.195.42 (talk) 09:20, 22 February 2007 (UTC).

This was apparently vandalism that was later fixed. --Mrwojo 20:50, 5 March 2007 (UTC)

[edit] History

Since the dot product is a relatively simple mathematical tool, I think the article could be improved further by adding a short History section. Who described the dot product first? Who has made contributions to it? And so on. Sorry for the bad English - I'm Danish --Bilgrau 18:06, 11 March 2007 (UTC)

Good idea. For prospective researchers, the term inner product apparently comes from Grassmann's Ausdehnungslehre (1844). [1] Dot product and dot notation appear to be from Vector Analysis (Edwin Bidwell Wilson 1902). [2][3] --Mrwojo 20:31, 11 March 2007 (UTC)

[edit] Um, "4", right?

It seems to me that the answer to the example is '4', not '3' (1)(4) + (3)(-2) + (-5)(-1) =

    4 +      -6 +        6 = 4

—The preceding unsigned comment was added by 63.226.32.16 (talk) 18:04, 3 May 2007 (UTC).

No, since (-5)(-1)=5 Kuteni 19:00, 15 August 2007 (UTC)
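A quick Python check of the example's arithmetic confirms Kuteni's correction:

```python
a = [1, 3, -5]
b = [4, -2, -1]
terms = [x * y for x, y in zip(a, b)]
print(terms)       # [4, -6, 5] -- note (-5)*(-1) = +5, not +6
print(sum(terms))  # 3
```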

[edit] The dot product is not a binary operation

Unless I'm very mistaken, the dot product can't be a binary operation on vectors, because it is not closed. I've removed the words from the opening sentence. MrHumperdink 21:14, 6 July 2007 (UTC)

I don't believe closedness is a necessary characteristic of binary operations. For instance, Wolfram MathWorld distinguishes between a Binary Operation (any function that applies to two quantities) and a "Binary Operation on a set A" (a function f : AxA -> A), which they also call a Binary Operator.
The wikipedia article you link to also supports this interpretation, suggesting that whether this is a requirement or not is not universally agreed upon.
I'd support restoring the sentence. JulesH 16:13, 27 October 2007 (UTC)
Rereading, this of course means it can be described as a binary operation, but not a binary operation on vectors. The phrasing "a binary operation from vectors to scalars" might be appropriate. JulesH 16:16, 27 October 2007 (UTC)

[edit] Proof

Shouldn't the proof include a proof of the distributivity of the dot product, seeing as it relies on this property, and it isn't proven anywhere else in the article? JulesH 16:13, 27 October 2007 (UTC)

[edit] Omission in the image (?)

In the image the dotted line is at right angles to the vector B isn't it? Shouldn't the image indicate this? stib (talk) 01:55, 19 February 2008 (UTC)

[edit] Clarification on the dot product example

I have a small clarification to make in the article on Dot Product (Mathematics). It is shown that the dot product is the matrix multiplication of the transpose of a by b, but in the example the transpose of b is taken.

Can you please clarify this. 195.229.236.247 (talk) 09:04, 17 March 2008 (UTC) Aravind


[edit] Vector vs row or column vector

I don't think we need to distinguish between row and column vectors here. That distinction only needs to be made when considering vectors as degenerate forms of matrices. If vectors were always considered as degenerate matrices, we'd simply do matrix multiplication and forget about defining it as a vector operation.

Instead, we have separate operators (the dot or the Cartesian x) so that the vectors can remain vectors (as in vector-space), not specifically row or column vectors. The result is equivalent to considering the first vector as a row-vector and the second as a column-vector and applying normal matrix multiplication, but matrix multiplication is not commutative.

I believe the generalization (neither considered as row nor as column) makes dot product truly commutative. (additional operations such as transposition should not be artificially required by forcing row or column orientation on your model).

_-T 德 —Preceding unsigned comment added by 129.115.13.107 (talk) 16:29, 21 March 2008 (UTC)
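NumPy's treatment of 1-D arrays illustrates the point above: a plain 1-D vector is neither row nor column, and its dot product is symmetric, while forcing a 2-D row/column representation brings back the asymmetry of matrix multiplication. A sketch with arbitrary sample values:

```python
import numpy as np

# Plain 1-D vectors: neither row nor column (arbitrary sample values)
a = np.array([1.0, 3.0, -5.0])
b = np.array([4.0, -2.0, -1.0])

print(np.dot(a, b) == np.dot(b, a))  # True: symmetric, no orientation needed

# Forcing row/column orientation makes the order matter:
col = a.reshape(3, 1)
row = a.reshape(1, 3)
print((row @ col).shape)  # (1, 1) -- the dot product, wrapped in a matrix
print((col @ row).shape)  # (3, 3) -- an outer product instead
```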

[edit] Ambiguity in the introduction-definition

Is the dot product valid for "non-orthonormal vector spaces"? For readers who don't know the exact definition of the Euclidean space (i.e. most readers, in my opinion), the introduction does not answer this question explicitly. It just states that the dot product "is the standard inner product of the Euclidean space". Readers appreciate a concise and generic introduction, but they are not supposed to know whether the Euclidean space is required to be orthonormal or not. The definition section creates or accentuates this doubt. Here is the first sentence of that section:

The dot product of two vectors (from an orthonormal vector space) a = [a1, a2, … , an] and b = [b1, b2, … , bn] is by definition:

The phrase within parentheses puzzles me and may also confuse other readers. Why did the author use parentheses?

First hypothesis. Did the author mean that the dot product is only defined in orthonormal vector spaces (and only its generalization, the inner product, can be used for non-orthonormal vector spaces)? In this case, the parentheses might indicate that the writer regards the information as redundant. However, this is not true for all readers, because the information is not given explicitly in the previous paragraphs.

Opposite hypothesis. Did the author mean, on the contrary, that a more general and complex definition of the dot product exists (coinciding with the definition of the inner product given below), valid for non-orthonormal vector spaces? Did the author choose to give first a commonly used simplified version of that definition? In this case, the definition section should be completed with the warning that a more general definition exists in the literature, and it coincides with the definition of the inner product.

What is the correct hypothesis? Do you agree that the phrase is an important part of the definition and should be emphasized, rather than confined within parentheses? Paolo.dL (talk) 12:57, 19 April 2008 (UTC)

Here's what I think: the parenthesis was unhelpful (I've removed it). I think the dot product is naturally defined just using the formula that's given, and so we don't require an assumption that the basis of the vector space is orthonormal. I suppose if you took the geometric interpretation as the definition, then you'd need this qualification, but that's not the way the article is written, so the qualification is just noise here. That's my opinion! Ezrakilty (talk) 18:37, 12 May 2008 (UTC)

I believe that it is important to warn the reader that the definition of the dot product is not valid for non-orthogonal Euclidean vector spaces. Not valid means that it does not compute what it is meant to compute (a number which can be used to define length and angle). I agree with Ezrakilty that this "validity" is related to the geometrical interpretation of the dot product, but IMO the geometrical interpretation is inseparable from the mathematical definition

\mathbf{a}\cdot \mathbf{b} = \sum{a_i \overline{b_i}} .

Moreover, if we accept Ezrakilty's decision to remove from the definition section the sentence about restriction to orthogonal vector spaces, we should remove it everywhere in the article, including the beginning of section "Conversion to matrix multiplication". Also, notice that at the end of the section "Generalization", the inner product formula is shown to simplify into the dot product formula when the basis set is orthogonal. This information is precious in this context.

Being not sure about what's the most common or most appropriate definition of the dot product, I undid Ezrakilty's edit, just to restore consistency between sections. I hope that the discussion on this topic will continue. Paolo.dL (talk) 14:44, 13 May 2008 (UTC)
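The simplification mentioned above (inner product formula reducing to the dot product formula for an orthonormal basis) can be sketched numerically: in coordinates relative to a general basis, the inner product goes through a Gram matrix G, and the plain component formula agrees only when G is the identity. A Python sketch with a hypothetical basis (all values are arbitrary illustrative choices):

```python
import numpy as np

# Hypothetical non-orthonormal basis of R^2 (columns are the basis vectors)
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
G = B.T @ B                # Gram matrix of pairwise inner products

# Arbitrary coordinate tuples with respect to the basis B
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

inner = x @ G @ y          # general inner product in these coordinates
naive = x @ y              # plain component (dot product) formula

print(np.allclose(G, np.eye(2)))  # False: this basis is not orthonormal
print(inner == naive)             # False: the component formula disagrees

# The Gram-matrix value matches the true dot product in standard coordinates:
print(np.isclose(inner, (B @ x) @ (B @ y)))  # True
```

With an orthonormal basis, G would be the identity and the two formulas would coincide, which is the restriction the discussion above is about.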

Hi Paolo: I hear what you're saying. You're right to strive for precision, of course. On the other hand, I feel that this article presently mixes the narrow definition (the right one for many readers) with the more precise and general one (right for other readers). The opening is a case in point: it describes the dot product as an operation on "vectors over R" (a sensible statement for the narrow interpretation) but then takes a different tone, calling it "the standard inner product of the orthonormal Euclidean space." (Is that description even meaningful if we take vectors as simply tuples of reals?) I'd like to see the article separate the two viewpoints: it should first treat the dot product in the narrow sense and generalize only later. Then casual readers won't be scared off by the more abstract treatment. Ezrakilty (talk) 14:01, 15 May 2008 (UTC)

Your point is reasonable. Indeed, in physics textbooks, where I learned the definition of the dot product, the operation is described in R3 and is not even extended to n-dimensional spaces.

I guess that you accept my "first hypothesis". I still have a doubt. Can we exclude that some authoritative author might have endorsed, in the literature, my "opposite hypothesis" (see previous section)? In other words, does any mathematician call dot product the inner product for non-orthogonal vector spaces? I hope not. And I hope that some expert mathematician will give a final answer to this question either by posting a comment here or by editing the article. Paolo.dL (talk) 15:35, 15 May 2008 (UTC)

[edit] Removed ambiguous sentence

We still don't have an answer, but in the meantime I removed the ambiguous sentence. The last sentence of the definition section presents non-ambiguously a similar concept, hedged by the adverb "typically". Even if my "opposite hypothesis" were true (and I hope it's not), the ambiguous sentence would not be a good way to inform the readers. Paolo.dL (talk) 09:00, 30 May 2008 (UTC)

[edit] Decomposition and rotation

In the section "Properties," there's a spurious comment: "Decomposing vectors is often useful for conveniently adding them, e.g. in the calculation of net force in mechanics." Decomposition of vectors isn't defined anywhere in the article; but this is an interesting topic and perhaps it deserves treatment. Would anyone like to move this line to a new section and expand on the idea? Ezrakilty (talk) 18:46, 12 May 2008 (UTC)

People are supposed to know vectors before studying vector multiplications, such as the dot product. Decomposition is just how the scalar components (or direction cosines, or coordinates) of a vector are determined. It is useful for millions of reasons, not just for adding. And the topic is discussed in detail elsewhere. An enlightening example in this context is its application to vector or basis rotation. See if you like my edits. I moved in a separate subsection of "Geometric interpretation" the existing text about scalar projection, and created a new subsection called "Rotation". Paolo.dL (talk) 21:14, 1 June 2008 (UTC)