Wikipedia:Reference desk/Archives/Mathematics/2008 May 3

From Wikipedia, the free encyclopedia

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 3

Involution

From the article on Involution: In linear algebra, an involution is a linear operator T such that T^2 = I. Except in characteristic 2, such operators are diagonalizable with 1's and -1's on the diagonal. Where can I find a proof? (Or how to prove it.) Thanks.--Shahab (talk) 05:59, 3 May 2008 (UTC)

Try looking at T in a canonical / normal form. John Z (talk) 10:09, 3 May 2008 (UTC)
I believe you also need to be in a field to always get that result. I think the Jordan normal form could be particularly constructive. I would investigate what the eigenvalues of the matrix are, and how many linearly independent eigenvectors there are. GromXXVII (talk) 11:35, 3 May 2008 (UTC)
I'm not sure you need a field; it may be enough for 2 to have an inverse, so that (I + T) / 2 is defined (and, by construction, idempotent). —Ilmari Karonen (talk) 12:52, 3 May 2008 (UTC)
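For completeness, the idempotence is a one-line computation, using only T^2 = I and the invertibility of 2: \left(\frac{I+T}{2}\right)^2 = \frac{I + 2T + T^2}{4} = \frac{2I + 2T}{4} = \frac{I+T}{2}.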
(ec)(I think linear algebra is usually assumed to be about vector spaces over a field unless otherwise stated) All you really need is that every vector is a sum of eigenvectors. v=\frac{(I-T)v}{2}+\frac{(I+T)v}{2}. This result (and the idea of its proof) generalises: an endomorphism is diagonable iff its minimal polynomial splits into distinct linear factors. Algebraist 12:55, 3 May 2008 (UTC)
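For anyone who wants to see the decomposition numerically, here is a minimal NumPy sketch; the particular involution T below is just an arbitrary illustrative choice, not taken from the discussion above.

import numpy as np

# An involution on R^2: reflection across the line y = x, so T @ T = I.
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(T @ T, np.eye(2))

# Split an arbitrary vector v into eigenvectors, as in the formula above:
# T((I - T)v/2) = -(I - T)v/2  and  T((I + T)v/2) = +(I + T)v/2.
v = np.array([3.0, -1.0])
minus_part = (v - T @ v) / 2   # eigenvalue -1 component
plus_part = (v + T @ v) / 2    # eigenvalue +1 component
assert np.allclose(T @ minus_part, -minus_part)
assert np.allclose(T @ plus_part, plus_part)
assert np.allclose(minus_part + plus_part, v)

# Diagonalising T gives only +1 and -1 on the diagonal.
eigenvalues, _ = np.linalg.eig(T)
print(sorted(eigenvalues))   # approximately [-1.0, 1.0]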

Preservations in group homomorphisms

I’m trying to find some examples of properties of groups preserved by group homomorphisms. I can’t seem to find, well, any. It seems like without knowing that the homomorphism is surjective, I can’t say too much, because, for instance, any group G can be embedded in S_G (Cayley's theorem), which need not share G's properties. Any idea where I could go from there, or if there are any properties that are actually preserved?

Also, what if I allow myself an epimorphism? Then there are the obvious first examples of abelian and cyclic. I believe solvable, finitely generated, finite, and countable are also preserved. Any ideas for more interesting properties, though? GromXXVII (talk) 13:13, 3 May 2008 (UTC)

In my experience, one normally talks about properties being preserved by quotients rather than epimorphisms (it's the same thing, of course). Our article on quotients gives nilpotence as another preserved property. I think my group theory course had a few more examples (I'll check my notes), but the only one I can remember is the following more general theorem (generalising the example of soluble groups): if P is a property of groups preserved under quotients and subgroups, then the property poly-P is preserved by subgroups, quotients and extensions. Thus for example the properties of being polyabelian (=soluble) and polycyclic are both preserved by quotients. Algebraist 14:37, 3 May 2008 (UTC)
Along similar lines, if P is a property preserved by quotients, then so is virtually-P (where G is virtually P if it has a finite index subgroup which is P). Thus for example the properties of being virtually abelian, virtually soluble, and virtually polycyclic are preserved by quotients. Algebraist 14:43, 3 May 2008 (UTC)
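For the record, the argument for the virtually-P claim is short: if H has finite index in G and has property P, and φ: G → Q is onto, then φ(H) is a quotient of H (so it has P, since P is quotient-closed), and its index in Q is at most [G : H], since cosets of H map onto cosets of φ(H).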
Likewise, if P is preserved by quotients, then so is locally-P (where G is locally-P if every finitely generated subgroup of G is P). Thus for example the properties of being locally finite and locally cyclic are preserved by quotients. To finally answer your original question, for any groups G and H there is a homomorphism from G to H, so there are no interesting preserved properties. We could restrict ourselves to monomorphisms instead of epimorphisms. In this case we find that property P is preserved by monomorphisms iff not-P is preserved under taking subgroups, so we're not really doing anything new. Algebraist 17:48, 5 May 2008 (UTC)
People who study these sorts of questions usually call these "classes of groups". Some more examples are imperfect and perfect groups: the former is defined as a group with no nontrivial perfect quotients, and the latter as a group with no nontrivial abelian quotients. If P is a property, then qP is quotient closed, where qP is the property of being a quotient of a P group. The properties of having a normal Sylow p-subgroup (p-closed) and of having a normal Hall p'-subgroup (p-nilpotent) are subgroup and quotient closed, so if P is their union, then poly-P (p-soluble) is quotient closed. Another nice property for P to have is: if M, N are normal subgroups of G, then G/M and G/N being P-groups implies that G/(M ∩ N) is a P-group. Such a property is called (finitely) residually closed. Properties that are quotient and residually closed are called "formations", and the study of formations was the beginning of a new phase of research into soluble groups, begun around 1960. JackSchmidt (talk) 14:13, 7 May 2008 (UTC)
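As a concrete sanity check of the first examples above (abelian and cyclic being preserved by epimorphisms), here is a small Python sketch using Z_6 mapped onto Z_2 by reduction mod 2; the particular groups and map are just an illustrative choice.

# Z_6 and Z_2 as additive groups of integers mod n.
Z6, Z2 = list(range(6)), list(range(2))
add6 = lambda a, b: (a + b) % 6
add2 = lambda a, b: (a + b) % 2
phi = lambda x: x % 2          # the map Z_6 -> Z_2

# phi is a surjective homomorphism.
assert all(phi(add6(a, b)) == add2(phi(a), phi(b)) for a in Z6 for b in Z6)
assert {phi(a) for a in Z6} == set(Z2)

# The image inherits commutativity from Z_6 ...
assert all(add2(x, y) == add2(y, x) for x in Z2 for y in Z2)

# ... and is generated by the image of a generator of Z_6 (here 1), so it is cyclic.
g = phi(1)
assert {(k * g) % 2 for k in range(len(Z2))} == set(Z2)
print("the epimorphic image Z_2 is abelian and cyclic, as expected")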

Infinity

Is infinity plus a positive number, and infinity times a positive number, equal to infinity itself? —Preceding unsigned comment added by 116.68.68.200 (talk) 13:16, 3 May 2008 (UTC)

That depends on what you mean by "infinity". There are various meanings of the word. If you're talking about having an infinite number of things, then adding a finite number of things, or multiplying the number by a finite number, won't change it - such infinities are called (infinite) cardinal numbers. You can also talk about infinitely long sequences; in that case, adding more elements to the end (although, interestingly, not the beginning) does make a difference (not so much to the length of the sequence, but rather to the position of the last element - the "infinity-plus-one-th" position is distinct from the "infinity-th" position). That kind of infinity is called an (infinite) ordinal number. Those articles will give you more details. Feel free to ask more questions if there's anything you still don't understand (the concepts are very counter-intuitive and difficult to get your head around). (There are yet more meanings of "infinity", for example the infinity you come across in analysis in phrases like "the limit as x tends to infinity", but that's more a direction than a position, so adding one isn't very meaningful.) --Tango (talk) 13:27, 3 May 2008 (UTC)
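To put the two behaviours side by side (these are just the standard facts of cardinal and ordinal arithmetic): for cardinals, \aleph_0 + 1 = 1 + \aleph_0 = \aleph_0 and 2 \cdot \aleph_0 = \aleph_0, whereas for ordinals 1 + \omega = \omega but \omega + 1 \neq \omega.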
Infinity would be an obvious link. If you are asking about the kind of infinity found on the extended real number line, the answer is yes. -- Meni Rosenfeld (talk) 22:35, 3 May 2008 (UTC)
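On that extended-real-line reading, IEEE floating-point arithmetic follows the same convention; Python's float('inf') gives a quick, purely illustrative demonstration:

>>> inf = float('inf')
>>> inf + 5
inf
>>> inf * 2
inf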

Sorry I couldn't respond sooner; thanks for answering my question. I've learnt about cardinality, but not ordinality. From what I read, is ordinality the position of an element, as in for the nth element the ordinal number is n? Is it necessary for sets to have an order? Can't the elements be jumbled?

Change of basis for square matrices

Can I confirm that if I have vectors that are not column vectors, represented by, for example, a 2x2 matrix, then in order to create a change of basis or linear transformation matrix for such vectors I can simply convert the cells a(12) and a(22), for example, to a(31) and a(41), so that they become column vectors? Are there any caveats that I should be aware of when I do this? John Riemann Soong (talk) 22:08, 3 May 2008 (UTC)

Please clarify the question. -- Meni Rosenfeld (talk) 22:32, 3 May 2008 (UTC)
I don't understand how you can have a vector represented as a 2x2 matrix... what's the context? --Tango (talk) 22:50, 3 May 2008 (UTC)
Oh, just homework and preparation for my exam... one time I remember being asked about forming a basis that spanned the R^(2x2) vector space, and the standard basis vectors were naturally \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. So I assume that even if this isn't a strict vector space, it behaves sort of similarly, and the textbook has been treating it as such, along with the polynomial "vector" spaces and so forth. The question is: when I'm creating linear transformations and change of basis matrices, is there any qualification or caveat to the method of converting a 2x2 matrix into, say, a 4x1 column matrix, for such purposes, or at least for simple tasks like evaluating linear independence? Have mercy on my ignorance! I'm just a high school senior. John Riemann Soong (talk) 23:59, 3 May 2008 (UTC)
As vector spaces there is no real difference. R^(2x2) is isomorphic to R^(4x1), meaning they have identical structure.
There is of course a logical difference, because if, say, v=\begin{pmatrix} a & b \\ c & d \end{pmatrix}, then v is in your space, but [v]_\beta=\begin{pmatrix} a \\ c \\ b \\ d \end{pmatrix} is not in your space. (Here I am using β as the standard basis for R^(4x1).) So typically what one will do with a space like that is convert everything into column vectors, do all the work, and convert back. GromXXVII (talk) 00:13, 4 May 2008 (UTC)
\mathbb{R}^{2\times2} - with the operations of addition and multiplication by a scalar (but not matrix multiplication) - is a vector space. It satisfies all the necessary requirements. This vector space is isomorphic to \mathbb{R}^4, thus it is the same for all purposes. Remember that some familiar concepts, like treating a linear transformation as a matrix, apply to the coordinate vectors (which always reside in \mathbb{R}^n) and not to the vectors themselves. So the only caveat to treating 2x2 matrices as 4x1 vectors is that you lose the ability to multiply one 2x2 matrix by another. -- Meni Rosenfeld (talk) 10:11, 4 May 2008 (UTC)
See also vectorization. --Tardis (talk) 20:19, 5 May 2008 (UTC)
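Here is a small NumPy sketch of the "convert to column vectors, do the work, convert back" recipe above; the transpose map X → X^T is just an illustrative choice of linear transformation on R^(2x2), and order='F' makes reshape stack columns, matching the a, c, b, d ordering used above.

import numpy as np

def vec(M):
    """Stack the columns of a matrix into one long vector (column-major order)."""
    return M.reshape(-1, order='F')

def unvec(v):
    """Inverse of vec: reshape a length-4 vector back into a 2x2 matrix."""
    return v.reshape(2, 2, order='F')

# The standard basis of R^(2x2), listed in the same column-major order vec uses.
basis = [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [1, 0]]),
         np.array([[0, 1], [0, 0]]), np.array([[0, 0], [0, 1]])]

# Linear independence check: vectorise the basis matrices and compute the rank.
assert np.linalg.matrix_rank(np.column_stack([vec(E) for E in basis])) == 4

# Build the 4x4 matrix of the transpose map by vectorising the images of the basis.
A = np.column_stack([vec(E.T) for E in basis])

# Apply the recipe to an arbitrary 2x2 matrix: vectorise, multiply, convert back.
X = np.array([[1, 2], [3, 4]])
assert np.array_equal(unvec(A @ vec(X)), X.T)
print(A)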