Talk:Tensor product


On a different point, the top of the article mentions "degrees of freedom"... what is that?!? Can a function of 7 real variables have 12 degrees of freedom? How, and why, please explain this mystery!

No. A function of 7 real variables has at most 7 degrees of freedom. That is, if by "function of 7 variables" you mean z = f(x1,x2,x3,x4,x5,x6,x7), and not 0 (or a constant) = f(x1,x2,x3,x4,x5,x6,x7) or x7 = f(x1,x2,x3,x4,x5,x6), which have a maximum of 6 degrees of freedom. If x1 and x2 are necessarily correlated, then the function really has one less degree of freedom than the number of variables. The number of degrees of freedom is another name for the number of "linearly independent" variables. Kevin Baas (talk) 03:30, July 31, 2005 (UTC)

Fair enough... why then does the article state "...resultant dimension = 12" when the displayed tensor product has 7 variables (see http://en.wikipedia.org/wiki/Tensor_product ; the left hand side has variables a1,a2,a3,b1,b2,b3,b4) ?!?

It lies in a space of arrays having 12 components. Charles Matthews 07:17, 2 August 2005 (UTC)


Something needs to be adjusted... as things stand (i.e., as the page is actually written):

   12 = dimension of ambient space = resultant dimension = count of degrees of freedom = number of variables = 7
The page is OK, really. The question of what the image of the tensor product is, which is what this discussion is circling round, is really something different (comes under Segre embedding, for example) and non-linear. Sticking with the linear theory, there are 12 dimensions in the 'array' space, into which 7 linear dimensions are mapped. Since the image is not closed under addition, there is no 'paradox'. Charles Matthews 15:28, 2 August 2005 (UTC)


Connect the dots ... on one side is the number 12, and on the other side is the number 7... I don't think 12 = 7, so which equality in the chain is wrong?!?

The third = sign is questionable; there are constraints. Charles Matthews 21:03, 13 October 2005 (UTC)
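To make the "constraints" concrete: every outer product a \otimes b has all of its 2 x 2 minors equal to zero, which is why its 12 array components carry far fewer degrees of freedom, and a sum of two outer products generally violates these constraints, so the image is not closed under addition. A small C sketch of my own (not from the article) demonstrating both facts:

    #include <stdio.h>

    /* t[i][j] = a[i] * b[j]: 12 components built from 7 parameters
       (6 after the rescaling a (x) b = (c*a) (x) (b/c)). */
    void outer(const double a[3], const double b[4], double t[3][4])
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                t[i][j] = a[i] * b[j];
    }

    /* The top-left 2x2 minor; it vanishes on every outer product,
       since (a0*b0)*(a1*b1) - (a0*b1)*(a1*b0) = 0. */
    double minor00(double t[3][4])
    {
        return t[0][0] * t[1][1] - t[0][1] * t[1][0];
    }

    int main(void)
    {
        double a[3] = {1, 2, 3}, b[4] = {4, 5, 6, 7};
        double c[3] = {1, 0, 0}, d[4] = {0, 1, 0, 0};
        double s[3][4], u[3][4];

        outer(a, b, s);
        printf("minor of a (x) b:  %g\n", minor00(s));  /* prints 0 */

        outer(c, d, u);
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                s[i][j] += u[i][j];   /* s is now a sum of two outer products */
        printf("minor of the sum: %g\n", minor00(s));   /* prints -8 */
        return 0;
    }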


Left exact?

The section on the abstract construction states that the tensor product is left exact. That seems a bit strange. The functor - \otimes_R N is right exact (indeed, if I have a surjection M -> M' then the comment on generating sets implies that M \otimes_R N surjects onto M' \otimes_R N). The fact that this functor is not in general exact might imply what we want here, though it might be easier to see it directly: Z/2Z \otimes_Z Z/3Z = 0.

Correct: see Tor functor for confirmation. Charles Matthews 21:00, 13 October 2005 (UTC)
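For the record, the standard computation behind that example: in Z/2Z \otimes_Z Z/3Z we have 2x = 0 and 3y = 0 for every x and y, so every generator vanishes:

x \otimes y = 3(x \otimes y) - 2(x \otimes y) = x \otimes (3y) - (2x) \otimes y = x \otimes 0 - 0 \otimes y = 0

(Injectivity can indeed fail: tensoring the inclusion 2Z \subset Z with Z/2Z gives the zero map Z/2Z \to Z/2Z, since the image of the generator is 2 \otimes 1 = 1 \otimes 2 = 0.)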

Programming

I don't see why we need so much on the programming. For array programming, OK: these things are arrays, though not just arrays. But why does this page have to teach lower-level stuff like filling up arrays with numbers from other arrays? Charles Matthews 09:25, 25 October 2005 (UTC)

I agree. Anyone who knows C could implement it based on the opening example. The SQL example is interesting because it relates OUTER JOIN with the tensor product, but it doesn't need two examples. The article could also do with mention of the tensor product of functions. —BenFrantzDale 12:48, 25 October 2005 (UTC)

I'm pulling one of the two SQL examples. RaulMiller 13:01, 25 October 2005 (UTC)
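On the tensor product of functions mentioned above, the computational content is one line: (f \otimes g)(x, y) = f(x) g(y). A minimal C sketch of my own (the names are mine; link with -lm):

    #include <math.h>
    #include <stdio.h>

    typedef double (*fn)(double);

    /* The tensor product of two one-variable functions is the
       two-variable function (f (x) g)(x, y) = f(x) * g(y). */
    double tensor_apply(fn f, fn g, double x, double y)
    {
        return f(x) * g(y);
    }

    int main(void)
    {
        /* (sin (x) exp)(1, 2) = sin(1) * exp(2) */
        printf("%f\n", tensor_apply(sin, exp, 1.0, 2.0));
        return 0;
    }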

Tensor product of two tensors?

I'm a bit dubious about this line:

\mathrm{dim}( U \otimes V )=\mathrm{dim}(U) \cdot \mathrm{dim}(V)

In the traditions I'm familiar with, dimensions are represented as vectors, and \cdot represents the inner product between two vectors. However, the dimension of a tensor product is the concatenation of the dimensions of its arguments. I think either (a) better notation should be used, or (b) an explicitly labeled link to a description of this notation should be used. RaulMiller 13:01, 25 October 2005 (UTC)

No, dimensions are numbers and this is just the dot standing for ordinary product of integers. Charles Matthews 13:08, 25 October 2005 (UTC)

Ok, then this is an ambiguity. A tensor of rank 5 could be said to have five numbers describing its dimension -- perhaps <2,3,5,7,11> -- or it could be said to have a single number describing its dimension -- perhaps the product (2)(3)(5)(7)(11). I don't want to belabor this point, but I think the entry could use some phrase indicating the latter usage of the term. (I'll update the page if I think up something good before someone else does.) RaulMiller 15:53, 25 October 2005 (UTC)
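To illustrate the two usages being distinguished here, dimension as a shape vector versus dimension as a single number, a small C sketch of my own:

    #include <stdio.h>

    int main(void)
    {
        /* "Dimension" as a shape vector: one extent per axis of the array. */
        int shape[5] = {2, 3, 5, 7, 11};

        /* "Dimension" as a single number: the dimension of the underlying
           vector space, i.e. the product of the extents. */
        long dim = 1;
        for (int k = 0; k < 5; k++)
            dim *= shape[k];
        printf("%ld\n", dim); /* prints 2310 */
        return 0;
    }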

Right, I see that the initial example was in unhelpful notation and I've clarified that. I've taken out the \cdot also. Charles Matthews 16:37, 25 October 2005 (UTC)

Tensor product of vector spaces

I'm not sure about the formulation "take the vector space generated by V x W and factor out the subspace generated by the following relations". Maybe it's just that the word "generated" is ambiguous, but when I think of V x W as a vector space, I usually have an addition like (v1,w1)+(v2,w2)=(v1+v2,w1+w2) in mind. In this space, you surely can't factor out a subspace to get the tensor product, which is in general bigger than V x W. Should the addition in this case be defined in a different way? I think this should be mentioned. SaschaR 15:28, 4 June 2006 (UTC)

It means the vector space with basis V x W. I've modified the article to state this explicitly. --Zundark 16:40, 4 June 2006 (UTC)
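For concreteness, the relations factored out are the bilinearity relations. Writing e_{(v,w)} for the basis element indexed by the pair (v,w), the subspace is spanned by all elements of the form

e_{(v_1+v_2,w)} - e_{(v_1,w)} - e_{(v_2,w)}, \quad e_{(v,w_1+w_2)} - e_{(v,w_1)} - e_{(v,w_2)}, \quad e_{(cv,w)} - c\,e_{(v,w)}, \quad e_{(v,cw)} - c\,e_{(v,w)}

and the image of e_{(v,w)} in the quotient is what is written v \otimes w.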
Hm. Let V = W = R^2. Now V x W as a vector space can be represented isomorphically as R^4 with the usual basis {(1,0,0,0), (0,1,0,0), ...}.
The representation by the Kronecker product sends all these basis vectors to 0. Thus the "factoring out" cannot be the linear factoring of a vector space by a subspace, but only a factoring of sets (the grouping of elements within the same set-theoretic equivalence relation).
Or have I misunderstood the term "with basis V x W"? 84.160.205.130 07:20, 24 October 2006 (UTC)
I think you've misunderstood it. V x W = R^2 x R^2 is the basis of the vector space, and the elements of the vector space are formal linear combinations of basis elements. You should ignore the structure on R^2 when forming this vector space - the basis elements are all linearly independent, by definition. --Zundark 08:24, 24 October 2006 (UTC)
So the whole of V x W is a basis, where all elements (uncountably many) are linearly independent by definition? That is to say, the set V x W really is an index set for the basis vectors of the pre-factored vector space? If so, wouldn't it be clearer to start with the space Hom(VxW, R)? 84.160.205.130 18:18, 24 October 2006 (UTC)
Yes, you can think of V x W as an index set for the basis vectors. In fact, I would define the vector space as \bigoplus_{p\in V\times W}\!\!\!K_p, where each Kp is a copy of the base field. This is the set of functions with finite support from V x W to the base field (with the obvious operations). It's not the same as Hom(VxW, R). --Zundark 19:52, 24 October 2006 (UTC)
I see Hom(VxW, K) is obviously not the same as \bigoplus_{p\in V\times W}\!\!\!K_p. Further, for finite dimensions, Hom(VxW, K) has the same dimension as VxW and V\otimes W. So, on the way to V\otimes W, there is nothing much left to be factored out. Trivially, Hom(VxW, K) is isomorphic to V\otimes W the way any two vector spaces with the same dimensions are isomorphic.
Now all useful information for a human brain is packed into the way in which those isomorphisms can be chosen.
Let v_1 ... v_N be a basis of V and w_1 ... w_M be a basis of W. Would \{\varphi_{kl} | \varphi_{kl}((v_\kappa,w_\lambda))=\delta_{k\kappa}\delta_{l\lambda}\} be a basis of V\otimes W in a canonical way? (Note the extra parentheses, marking (v_\kappa, w_\lambda) as a single element of V x W.)
84.160.237.162 20:32, 26 October 2006 (UTC)
If you are considering V x W as a vector space, then its dimension is the sum of the dimensions of V and W. But the dimension of V\otimes W is the product of the dimensions of V and W. So they are not usually isomorphic when the dimensions are finite. --Zundark 08:51, 27 October 2006 (UTC)
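A concrete case where the two counts differ:

\dim(\mathbf{R}^2 \times \mathbf{R}^3) = 2 + 3 = 5, \qquad \dim(\mathbf{R}^2 \otimes \mathbf{R}^3) = 2 \cdot 3 = 6

(The example V = W = R^2 above happens to be the borderline case, since 2 + 2 = 2 \cdot 2.)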
Ouch! Shame on me! I should have seen that. 84.160.238.49 18:33, 27 October 2006 (UTC)

Why B times A?

It is unclear why you write


\mathbf{b} \otimes \mathbf{a}
\rightarrow
\begin{bmatrix}b_1 \\ b_2 \\ b_3 \\ b_4\end{bmatrix}  
\begin{bmatrix}a_1 & a_2 & a_3\end{bmatrix} = 
\begin{bmatrix}a_1b_1 & a_2b_1 & a_3b_1 \\ a_1b_2 & a_2b_2 & a_3b_2 \\ a_1b_3 & a_2b_3 & a_3b_3 \\ a_1b_4 & a_2b_4 & a_3b_4\end{bmatrix}

rather than


\mathbf{a} \otimes \mathbf{b}
\rightarrow
\begin{bmatrix}a_1 \\ a_2 \\ a_3 \\ a_4\end{bmatrix}  
\begin{bmatrix}b_1 & b_2 & b_3\end{bmatrix} = ...

Paolo.dL 12:02, 26 June 2007 (UTC)
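For what it's worth, the two conventions carry the same information. Identifying a \otimes b with the matrix (a_i b_j), one is simply the transpose of the other:

\mathbf{b} \otimes \mathbf{a} = (\mathbf{a} \otimes \mathbf{b})^{\mathrm{T}}

so the choice is only a convention about whether the first factor indexes rows or columns.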

Tensor product of elements of Hilbert spaces

In the definition

 \langle\phi_1\otimes\phi_2,\psi_1\otimes\psi_2\rangle = \langle\phi_1,\psi_1\rangle_1 \, \langle\phi_2,\psi_2\rangle_2 \quad \mbox{for all } \phi_i,\psi_i \in H_i

I wondered what could be meant by φ⊗ψ for elements φ and ψ in a (general) Hilbert space. Maybe it should be mentioned that the above line defines both an inner and a tensor product (of elements in a Hilbert space). —Preceding unsigned comment added by Paux (talk • contribs) 08:20, 24 September 2007 (UTC)
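A concrete instance that may help here is the standard L^2 case, where the tensor product of functions is (\phi_1 \otimes \phi_2)(x,y) = \phi_1(x)\,\phi_2(y), and the definition above is just the factorization of the double integral:

\langle \phi_1\otimes\phi_2, \psi_1\otimes\psi_2 \rangle = \iint \phi_1(x)\phi_2(y)\,\overline{\psi_1(x)\psi_2(y)}\,dx\,dy = \langle\phi_1,\psi_1\rangle \, \langle\phi_2,\psi_2\rangle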

Splitting the Hilbert spaces section

I propose splitting off the section on the tensor product of Hilbert spaces and merging it with the existing article Tensor product of Hilbert spaces. Currently the main article is shorter than the section here. The other option is to delete the main article (Tensor product of Hilbert spaces) and merge its contents here. Either way, the way the content fork is currently set up is not a good idea. Are there any objections or suggestions? Silly rabbit (talk) 15:55, 8 January 2008 (UTC)

Question for editors: Should this be merged with Topological tensor product (currently Tensor product of Hilbert spaces redirects there), or should a new article be created? silly rabbit (talk) 19:47, 25 April 2008 (UTC)

Main example is misleading

It looks too much like matrix multiplication. I'm gonna go ahead and replace it with one of the examples from Kronecker product in a few hours/days if there are no complaints. —Preceding unsigned comment added by Thric3 (talk • contribs) 09:04, 7 February 2008 (UTC)

k edited. -Thric3 (talk) —Preceding comment was added at 06:16, 8 February 2008 (UTC)

Most general bilinear operation

The lede says the tensor product is always the most general bilinear operation. While I think I know what is meant by that (something close to being a universal object), it is a fairly cryptic formulation, and I think it is more likely to be confusing than helpful at this point. Besides, the word "product" mostly refers to the result of an operation, rather than to the operation itself. I'm tempted to take the phrase out, does anyone object? – Marc van Leeuwen (talk) 10:02, 24 April 2008 (UTC)

Well, the result of the operation is the universal object, so the operation itself is the most general one... Mct mht (talk) 08:29, 25 April 2008 (UTC)
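For reference, the universal property behind the phrase, stated here for vector spaces: every bilinear map out of V \times W factors uniquely through \otimes,

h : V \times W \to Z \mbox{ bilinear} \quad \Longrightarrow \quad \exists!\ \tilde{h} : V \otimes W \to Z \mbox{ linear with } h(v,w) = \tilde{h}(v \otimes w)

In this sense v \otimes w is the "most general" bilinear product of v and w: any other bilinear product of them is a linear function of it.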