Talk:Polynomial ring
Problems
Problem with this article: it confuses "a" ring R[X] with "the" ring R[X]. There are many polynomial rings, of which R[X] is just one example.
- Dear Linas, please sign your comments. For that, you need to use four tildes, like this: ~~~~.
- Also, please put a period at the end of the sentence. I mean your addition of "See also".
- Now, about your question above. I think all polynomial rings R[X] are isomorphic to one another, so you can think of the ring as unique. Am I getting something wrong? Oleg Alexandrov 22:35, 8 Jan 2005 (UTC)
-
- My objection is that "the polynomial ring" assumes that polynomial multiplication comes from ring multiplication. But there are other ways to define multiplication, in which case one gets other rings which are not isomorphic. For example, the Frobenius polynomial ring defines multiplication by composition; in this case, one gets a different kind of ring. It is NOT isomorphic, because composition just works differently!
-
- The correct thing to do for this article would be to fix it so that it defines what it means to multiply polynomials together. linas 06:44, 17 Jan 2005 (UTC)
Linas, I don't think you are correct. The Frobenius polynomials form a ring under addition and composition, for sure, but that makes them a "ring of polynomials", not a "polynomial ring". I think that "polynomial ring" means R[X] and its multi-variable analogs, and nothing else. --Zero 11:10, 17 Jan 2005 (UTC)
- I agree with that. In the same way, there is only one field of real numbers. There are many weird ways of defining addition and multiplication for real numbers or for a subset of it, but those are not the field of real numbers anymore. Oleg Alexandrov 16:15, 17 Jan 2005 (UTC)
-
- Well, then at minimum there needs to be some sort of disambiguation. Books on number theory abound with rings of polynomials of all shapes and sizes, and none of them are isomorphic or homomorphic to that thing you are calling "the polynomial ring". It is a subtle slip of the English language to equate "the ring of Frobenius polynomials" with "the Frobenius polynomial ring". But "the Frobenius polynomial ring" is "a polynomial ring" which is not isomorphic to "the polynomial ring". See the problem?
-
- The definition of "the polynomial ring" needs to say that the multiplication is not just any old multiplication, but specifically "Cauchy multiplication" (ref article Cauchy product ??), and one needs to verify that the Cauchy product is commutative, distributive over addition, etc.
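(Aside, for concreteness: a minimal sketch of what the Cauchy product amounts to when a polynomial is stored as a list of coefficients, a[i] being the coefficient of X^i. The list representation and the function name here are just illustrative choices of mine, not anything taken from the article.)

    # polynomials as coefficient lists: a[i] is the coefficient of X^i
    def cauchy_product(a, b):
        """Multiply two polynomials given as coefficient lists (the Cauchy product)."""
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] += ai * bj   # the coefficient of X^(i+j) collects a_i * b_j
        return c

    # example: (1 + X) * (1 - X) = 1 - X^2
    print(cauchy_product([1, 1], [1, -1]))   # [1, 0, -1]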
-
- Finally, if you want to use the word "the" instead of the word "a", there needs to be a proof of uniqueness. There must be some "XYZ's Theorem" that states that all such rings are the same thing. It's far from obvious. For example, it is "Abel's Theorem" that all abelian groups are isomorphic to modular arithmetic, but this theorem is not trivial. I assume the ring theorem is non-trivial as well, right? Or am I missing something? linas 16:17, 18 Jan 2005 (UTC)
-
-
- Very trivial. First, a polynomial ring R[X] in one variable is, by the abstract definition, the ring obtained from R by adding a symbol X which commutes with all elements of R, and such that the quantities 1, X, X^2, X^3, ... are independent. Can anybody confirm this definition? Under these conditions, given two polynomial rings R[X] and R[Y], there exists a unique R-algebra morphism between them mapping X to Y. This is quite simple to prove. Oleg Alexandrov 22:02, 18 Jan 2005 (UTC)
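(For reference, the map being invoked can be written out explicitly; this is just the standard substitution homomorphism, nothing beyond what is claimed above:

    \varphi : R[X] \to R[Y], \qquad \varphi\!\left(\sum_i a_i X^i\right) = \sum_i a_i Y^i .

It is a ring homomorphism fixing R, and the analogous map sending Y to X is its inverse, so R[X] \cong R[Y] as R-algebras.)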
-
- The operation that replaces every X by a Y is an isomorphism from R[X] to R[Y]. That is trivial to prove and too obvious to mention. No other uniqueness result is needed. --Zero 09:14, 19 Jan 2005 (UTC)
-
-
Trivial proofs?
OK, it may be trivial to you, but it is anything but to me. Let's try to work out a specific example. Let's consider the ring of all polynomials with integer coefficients: I think you'd call it Z[X]. Now consider the set S = {Z[X] mod (x+1)}.
- Is the set S a ring? I think so. Its additive unit is 0, it has additive inverses, and it is closed under addition. It has a multiplicative unit (x+1): every polynomial P(x) times (x+1) equals P(x) mod (x+1). It's closed under multiplication. That makes S a ring. Or am I missing something?
- Is the set S a ring of polynomials? Yes, trivially so, by definition.
- According to the "Zero-Oleg trivial theorem", there exists some ring R such that S is isomorphic to R[X]. What is that ring R? I dunno, and it is not trivially obvious to me at this time.
- How many isomorphisms exist between S and R[X] ? Again, not obvious to me. One? More than one? Some formula involving Euler's totient? What is that formula?
linas 19:45, 19 Jan 2005 (UTC)
- You have a point. But the problem is, you are missing the point of this discussion. You say you hack computers; let me put it in that language.
- There is only one default addition for the int type in C++. Only one. Now, the addition operator can be overloaded, redefined, whatever. But there is only one default operation.
- Now, the polynomial ring, the default polynomial ring, is the one where the addition and multiplication are defined the way you learned in high school, or earlier. Now, you can redefine multiplication, but then that is not the polynomial ring anymore, OK? Please understand this. That's another ring, still made up of polynomials, but not the polynomial ring. Oleg Alexandrov 20:50, 19 Jan 2005 (UTC)
-
- Yes, but ... in the above example, the concept of multiplication is not redefined, that's why I picked this example. It is exactly the same multiplication that Z[X] uses. Is it "the" polynomial ring for some value of R?
-
- I know from group theory that it can be very hard to figure out whether two groups are the same group or not, that is, whether they are isomorphic to each other. This was the whole point behind abelian groups: you've got this big overarching theorem that says abelian groups are always just modular arithmetic, and if you have an abelian group, no matter how complex it may seem to be, you will eventually succeed in identifying which abelian group it is.
-
- Here, you and Zero seem to be implying that it's somehow "easy" to figure out when a polynomial ring is "the" polynomial ring R[X] for some R. Is there some theorem I can use to find out if/when "a" polynomial ring is isomorphic to "the" polynomial ring for some R?
-
- Let's assume I never try to redefine multiplication, say that I stick to standard Cauchy product multiplication. That is, I want to consider only those polynomial rings that are generated by modulo arithmetic. Is there ever a situation where R[X] cannot be found, or is it always possible to identify R? Is there an algorithm for computing R? What is that algorithm? linas 23:31, 19 Jan 2005 (UTC)
-
-
- Your S is not the polynomial ring R[X] for any ring R. This is because in a ring R[X] one never has 1, X, X^2, etc. linearly dependent. Your S is the quotient of the polynomial ring Z[X] by the ideal which makes that identification modulo X+1 possible. Oleg Alexandrov 23:40, 19 Jan 2005 (UTC)
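(A small sketch of this point, again with polynomials stored as coefficient lists; this is my own illustration, not part of the article. Reducing modulo X+1 amounts to evaluating at X = -1, so the classes of 1, X, X^2, ... are 1, -1, 1, ... rather than independent elements, and the quotient behaves like the integers.)

    # hypothetical sketch: the class of a polynomial in Z[X]/(X+1) is its value at X = -1
    def reduce_mod_x_plus_1(coeffs):
        """coeffs[i] is the coefficient of X^i; return the class of the polynomial in Z[X]/(X+1)."""
        return sum(c * (-1) ** i for i, c in enumerate(coeffs))

    p = [3, 0, 1]   # 3 + X^2
    q = [1, 1]      # 1 + X, a generator of the ideal
    print(reduce_mod_x_plus_1(p))   # 4
    print(reduce_mod_x_plus_1(q))   # 0, as it must be for an element of the ideal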
-
- OK, well, I didn't know that the x^n have to be linearly independent. That wasn't an obvious assumption, but OK, my mistake. Now that I think about it, I guess that would make S a module (mathematics). So that makes sense. But still, common use is not to say "consider the module blah blah blah"; it's always "consider the polynomial blah blah blah". Certainly, the engineering specs call them polynomials, and not modules. I'll bet that even most math books call them "polynomials" even when they really mean "modules"; it just seems to be a fairly common usage. linas 05:17, 20 Jan 2005 (UTC)
-
-
Dang, you've got me running in circles now. I re-read Ideal (ring theory), as suggested, and it seems to be saying that the quotient of a ring mod an ideal is a ring, not just a module. So that means that the quotient of R[X] modulo p(x) **is** a ring, not just a module. So that means that the set S is a ring ... of polynomials ... it's just not "the" ring of polynomials. So the original example stands: S is "a" ring of polynomials that is *not* isomorphic to "the" ring of polynomials, and so Zero's "trivial" theorem is in fact not a theorem at all, it's just plain false! There is no such theorem!
I really dislike being disabused by you guys, pulling out shit like "theorems that are too trivial to prove", when in fact they are bullshit, and just plain old incorrect! linas 05:17, 20 Jan 2005 (UTC)
Frobenius Ring
-
- Oleg, I think you are totally unrealistic. There is nothing elementary about polynomials; it is a big, complex topic. The article just barely scratches the surface. And as to calling the Frobenius polynomial ring exotic: it is defined on PAGE 3 of David Goss's 400-page book "Basic structures of function field arithmetic" ... page 3 ... there is nothing exotic about it, it's polynomials 101. linas 16:19, 18 Jan 2005 (UTC)
- Well, I know who David Goss is, I know what he works on, and why. And I didn't know this topic. So I beg to differ. Charles Matthews 17:16, 18 Jan 2005 (UTC)
-
- Well, I *don't* know him. Never heard of David Goss until I bought his book, and I still haven't heard of him in any other context. I was merely trying to solve some unrelated problems, and it seemed that this particular book might be vaguely related to the topic I was working on. On pages 1 & 2 he defines these polynomials, and on page three he says "some authors call this the ring of frobenius polynomials", and the rest of the book seems devoted to their study. I assumed that things on pages 1,2,3 of a fat book were not considered to be "esoteric". Maybe you are thinking of a different David Goss? linas 19:45, 19 Jan 2005 (UTC)
-
- What the isomorphism between R[X] and the Frobenius ring is, I have no clue. It is not trivial to me, a novice. Goss doesn't seem to mention any isomorphism; maybe it's so trivial that it's assumed everyone knows it, but ... well, I don't get that from reading the first chapter. linas 19:45, 19 Jan 2005 (UTC)
Philosophy of Wikipedia
Dear Linas, please do not think we are just a bunch of people here who want to dumb down math. The aim of writing Wikipedia articles on math is not to have an encyclopedia of all things known to mankind. The purpose is to have very gentle, very elementary articles giving some insight into what math is about to the 99% of people who are not specialists. Now, there is room in Wikipedia for complicated articles too. But there should be a clear separation between simpler articles and more complicated articles. Just because I know certain things about, say, "Optimization" does not mean I need to go to that page and bring it "up to date" as far as the science or art of it is concerned. Oleg Alexandrov 18:16, 18 Jan 2005 (UTC)
- Well, I'm one of the 99% who are not specialists. I am not a mathematician. My day job is to hack computers. This is a recreational, spare-time hobby activity for me. A year ago, I didn't know number theory from a hole in the ground. So as I follow my nose, I have to hope that what I find on mathworld and on wikipedia is reasonably accurate and complete.
- When I read, I take notes: that this is an X and that is a Y, basic definitions and stuff, nothing complex. Frankly, I am not that smart. Till now, my notes have always been on scraps of paper. But suddenly there's this possibility that my notes could be in wikipedia, and so when things aren't clear, or they're vague, or the thing I'm taking notes on is clearly incomplete, then someone else will come along and fill in the empty spots in my notes. That would really help me.
- Nothing I've added to Wikipedia seems to be "cutting edge"; from what I can tell, everything I'm working on now seems to have been figured out in the 19th century. It's all named after people who died 100 years ago ... I don't know how to be more "dumbed down" and "aimed at the generalist" than that. Polynomial rings seem to be one of the 100-200 year old topics that are wildly relevant today. Each global positioning system (GPS) satellite broadcasts a unique polynomial; it's at the core of how they work. Polynomial rings are used for error-correcting codes in computers and in cell phones, etc. So it's definitely neat stuff, but ... hey, it's *not* cutting edge. I had absolutely no effing clue that all these polynomials were uniquely isomorphic to some R[X] for some ring R, and I don't think any other novice reading this article will know that either. (The GPS spec makes no mention of this, nor do the radio communications books I've read.) Omitting this kind of proof, no matter how trivial it may seem, is a disservice.
-
- Dear Linas. I have spent countless hours talking to you in the last several weeks. Let me put in short language what I think the heart of the problem is. It is not so much whether the material is complicated or not. The problems with your contributions are:
-
-
- You do not think long enough before you insert your contributions.
-
- You do not read carefully the article as a whole after you insert your contributions.
-
- You do not have a good taste about what a good Wikipedia article is.
-
-
- In the Polynomial ring article you made a change (removed the commutativity assumption), but you did not read the paragraph right below, which dealt with the noncommutative case. Had you read it, you would have realized your addition was not appropriate.
-
- In the modular arithmetic article you made childish mistakes, and after I fixed one, you reverted it.
-
- There are other examples, like my earlier deletions of your stuff, when you restored it and told me not to mess with what you write because I don't know what I am doing. Eventually you deleted that stuff yourself.
-
- In short, at least in several cases your contributions actually degraded articles. Please take a while to think of the three items above. Oleg Alexandrov 21:06, 19 Jan 2005 (UTC)
-
-
- OK, so I make childish mistakes. Most recently, I accidentally confused a polynomial ring with a module. Sorry. They are alike in so many ways. But at this point, I wouldn't know how to edit the article on polynomial rings to say "hey duude, that thing you think is a polynomial ring? Well, duude, it's probably not, it's a module".
-
- And similarly, the article module (mathematics) doesn't ever say that (for example) "the ring R[X] modulo a polynomial p(x) is a module". I'm not sure, but isn't *every* R[X] modulo p(x) a module? How should I ask the authors of the article on polynomial rings to state this? How do I ask the author of the article on modules to add this info? linas 01:15, 20 Jan 2005 (UTC)
-
- It would not make a lot of sense to write articles guarding against all possible misconceptions (I know you had only one, but others can have other ones). I would suggest that you read the article Ideal (ring theory), and especially the part about the quotient ring in there. You are very right, a hell of a lot of modules are just quotients by some ideal. Oleg Alexandrov 01:54, 20 Jan 2005 (UTC)
-
- You have a point about the polynomial ring article not being very complete. For example, something must be said about the nature of a polynomial in abstract algebra (as opposed to real analysis, see polynomial). I will get to this sometime soon.
-
- But you should also not make the mistake of considering Wikipedia a serious reference about math (or trying to make it one). The encyclopedia format has its limitations (and they are good limitations). Oleg Alexandrov 04:07, 20 Jan 2005 (UTC)
-
Why not? Mathworld takes itself as a serious reference, and for a while was widely celebrated, until the great debacle. Should I be applying for a mathworld editorship instead?
The only other possible reference, the follow-on to Abramowitz and Stegun, is a decade late, and presumably will never be finished. I see no reason why Wikipedia couldn't/shouldn't be a replacement for Abramowitz & Stegun. Although, clearly, the idea of "anybody can edit anything" is trouble when accuracy is important... linas 05:25, 20 Jan 2005 (UTC)
See Wikipedia:WikiProject Mathematics and its talk page, where you can ask the hard questions. Just think carefully of what you ask :) Oleg Alexandrov | talk 10:38, 20 Jan 2005 (UTC)
Definition of polynomials
Coming back to this after two days away, I think that the definition still needs work. The problem is that people will think of polynomials as representing functions, such as X^2+X representing the function f from R to R defined by f(x) = x^2+x. However, thinking of R[X] as a set of functions from R to R is a mistake for several reasons. The main one is that several different-looking polynomials can be the same function; for example, X^2+X and 0 are equal as functions if R is GF(2), but they are different elements of R[X]. We can also note that in the case of a finite field R, every function from R to R can be written as a polynomial. I think the right way to think of R[X] is the one given here: http://planetmath.org/encyclopedia/PolynomialRing.html . That is, the polynomials are just convenient ways to write down sequences. That definition also makes it clear that R[X] and R[Y] are not just isomorphic but identical. --Zero 12:06, 21 Jan 2005 (UTC)
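(A tiny check of the GF(2) example above, added only as an illustration: the polynomial X^2 + X is the zero function on GF(2), yet its coefficient sequence is not the zero sequence.)

    # evaluate x^2 + x over GF(2), i.e. modulo 2
    f = lambda x: (x * x + x) % 2
    print([f(x) for x in (0, 1)])            # [0, 0]: the zero function on GF(2)
    coeffs_x2_plus_x = [0, 1, 1]             # coefficients of X^2 + X (constant, X, X^2 terms)
    coeffs_zero = [0]                        # coefficients of the zero polynomial
    print(coeffs_x2_plus_x == coeffs_zero)   # False: different elements of R[X]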
- You are certainly right. I alluded to this problem above. I will soon get to explaining what exactly a polynomial is. Oleg Alexandrov | talk 16:26, 21 Jan 2005 (UTC)
I was ready to work more on this article, but when I checked out the polynomial page, I found an excellent section there about polynomials in abstract algebra; see Polynomial#Abstract algebra. So what should we do? Shorten the thing over there and move most of the stuff here? Do nothing? Make a copy of that stuff? The only thing missing on that page is the formal construction of the polynomial ring, by means of sequences of finite length, as Zero says above. But I am not even sure that is necessary. Help! Oleg Alexandrov | talk 01:25, 23 Jan 2005 (UTC)
- I hadn't noticed that. Actually there is another thing missing from Polynomial#Abstract algebra: the multivariate case. I think the simplest formal way to define R[X,Y] is that it means (R[X])[Y]; whether that is the easiest definition to understand, I'm not sure. As to what to do with this article, I think we should fix our own definitions to be consistent, then add a "for more information see" link from polynomial to here. I don't think we should reduce the material at polynomial. --Zero 02:27, 23 Jan 2005 (UTC)
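(A concrete instance of the (R[X])[Y] reading, added purely as illustration:

    X^2Y^2 + 3XY^2 + XY + 5 \;=\; (X^2 + 3X)\,Y^2 + X\,Y + 5 ,

that is, a polynomial in Y whose coefficients X^2 + 3X, X and 5 lie in R[X].)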
-
- Well, fixing the definitions to be consistent probably means copying that thing over, and then working from there. Other ideas? By the way, what is written in Polynomial#Abstract algebra is indeed neat; I could not have written it so well myself. Oleg Alexandrov | talk 02:30, 23 Jan 2005 (UTC)
-
-
- Go for it. --Zero 06:08, 23 Jan 2005 (UTC)
-
explicit multiplication signs
Quote from the article:
- P(X) = X^2 + X = X(X+1)
Here the expression P(X) means the function P taken at the argument X, while the expression X(X+1) does not mean the function X taken at the argument X+1, but rather the product of X by (X+1). Why not use explicit multiplication signs to avoid this ambiguity, here and elsewhere?
- P(X) = X^2 + X = X·(X+1)
Bo Jacoby 20:53, 25 October 2007 (UTC)
Use of N perhaps confusing
The variable N is used in at least two ways, allowing for some confusion. In the first sense, N is the set over which exponents of variables are drawn (and it is used right next to n, which is the number of variables, rather than, say, using k). Later N is used for k[x]/(x^2).
I believe something along these lines made Oleg Alexandrov and me interpret the article differently, as he reverted what was probably a valid correction (though for my money, the sentence remained too confusing to be "correct"). At any rate, to explain my insertion of the material in a new way: the set N is the set from which the i in the expression can be drawn (and k is a positive integer from 1 to n). The reversion seemed to indicate that you were considering n instead.
The only reason to need 0 in N is that the definition of multiplication requires that i+j work out nicely, and so one needs to have an x^0 to act as 1. For instance y=x^0*y^1 needs 0.
The section on the "alternate definition" itself is a bit confusing. Would it be better instead to take this opportunity to define more general exponents? Ring theorists love taking exponents to be subsets of rational numbers and the like, and as long as the exponents are taken from a commutative monoid N (like the nonnegative integers), things work about as expected, and it is just the free monoid ring on N. If N is a group, it is a group ring. If N is the p-adic integers, then you get rings associated to endomorphisms of p-groups.
Basically, the current section seems like easy mathematics made hard, so it might as well be used as a gateway to moderately complex mathematics (still made hard, hehe). JackSchmidt (talk) 04:53, 18 February 2008 (UTC)
multiplication sign
Oleg reverted my edit.
The section says: "the set of all polynomials with coefficients in the ring R, together with the addition + and the multiplication mentioned above, forms itself a ring, the polynomial ring over R, which is denoted by R[X]". So the article itself requires the dot for multiplication.
Secondly the formula for addition does not tell how to add polynomials of different degrees.
Thirdly, the use of both Latin and Greek letters for indices is unneeded.
Oleg, I repeat, discuss FIRST and revert after the discussion if that is the result. You are not the God of WP.
Bo Jacoby (talk) 10:56, 21 February 2008 (UTC).
- I am sorry that I had to do a wholesale revert. And I am surely not the god of WP, and I hope other people will comment here too. However, Bo, your efforts to place dots where they don't belong have been discussed at length at talk:integral and talk:derivative, and you refuse to listen. So let me try one more time: please use established notation. A polynomial is denoted aX rather than a·X, although people do write the dot between numerals, as such products can't be written otherwise. Please find established references to the contrary. Otherwise, your edits are in violation of Wikipedia policies of following established practices. Thanks. Oleg Alexandrov (talk) 15:46, 21 February 2008 (UTC)
-
- The points made here are mostly valid, but the implementation was suboptimal. I made some minor changes which I think addressed the points while following standard conventions of Wikipedia and mathematics. In particular, I only used the cdot to denote the multiplication of polynomials, and this is often used for clarity, though it could also be omitted. I think it is better to use it here, simply because one is defining cdot, and it is easier to define it if it is explicitly involved in the definition. The use of Greek letters struck me as odd, and the standard indices i, j were still open. The problem with degree was handled strangely in Bo's edit (simply removing the bounds on the sums). I just stuck a sentence after the definition to explicitly say that the article does not disallow zero summands. I mean x^2+1 has a_1 = 0, and it can be quite convenient to allow 0x^3+x^2+1 as well. I also removed the "one can check that", which was not terrible, but still was the sort of "order the reader around" language that is discouraged by WP:MSM. JackSchmidt (talk) 16:02, 21 February 2008 (UTC)
- Jack, thank you for taking the time to study carefully the issue and for your edit. I do agree that the multiplication sign needs to be used the first time, when the multiplication is defined. Oleg Alexandrov (talk) 03:52, 22 February 2008 (UTC)
Thank you gentlemen. The edits made improved the article.
- The addition formula would, in my opinion, be clarified by inserting (formally redundant) parentheses in order to stress the analogy with the multiplication formula.
- The opposition against the explicit multiplication sign at talk:integral arose because some editors did not consider the operation between f(x) and dx in the integral to be a multiplication (!). In the case here there is no such argument: aX is supposed to mean a·X, so I still find it correct to include the multiplication sign, especially when the article explicitly states that the multiplication of the ring is denoted "·". But using juxtaposition for multiplication in R and "·" for multiplication in R[X] is perhaps an acceptable compromise.
- In the addition formula the summation indices of all three polynomials involved are all called i. There is no reason why the indices in the multiplication formula should be called j and k.
- The expression \sum_{j+k=i} a_j b_k is unnecessarily confusing. Use \sum_j a_j b_{i-j} instead.
- The problem of different degrees is solved simply by remarking that the series has but a finite number of nonzero terms and so it is actually a finite sum.
Bo Jacoby (talk) 12:27, 22 February 2008 (UTC).
- I agree with 1; I don't agree with 2, 3, or 4: the current notation looks better to me. It would not make sense at all to use your proposal 5; formal series are a much more complex thing than polynomials, and I'd rather not invoke them. Oleg Alexandrov (talk) 15:24, 22 February 2008 (UTC)
- 1 sounds good. It will increase the symmetry and make it clearer which notion of addition is being defined. The summation signs themselves are basically just a formal way of writing down the sequence, but we want to avoid forcing the reader to realize that too early.
- 2 is hard for me to understand. I think the current amount of cdots is good if not optimal. Do you feel strongly about the use of cdot? I think objectively it cannot be that important (on some screens the dot will not even render, on year old printouts the dot will already have rubbed off), but the prolonged conflict about it cannot be good for the encyclopedia. I think the current level is a good compromise, and in fact is superior to both the previous version with none and the previous version with many, so I hope we can agree the current is at least "good enough to agree on".
- 3 could go either way here, but let me explain why I think the current way is better. I will assume you meant "no reason to use i and j on the left hand side", since the right hand side is a double sum so needs at least two indices. There is a reason to use a_i and b_j on the left side, but it is merely expository. In the addition formula's right hand side, the coefficients are a_i and b_i, so we do the same on the left hand side. Similarly, the coefficients appearing on the right hand side of the multiplication definition are a_i and b_j, so we do the same on the left, even though we could have used a_i and b_i on the left. Note this symmetry is part of my opinion on 4.
- 4 could also go either way in this article, but let me explain why I think the current way is better. In fact, I would have agreed with you a year ago, but I've seen lots of summations used by my combinatorialist friends. Using extra indices and describing the geometric set from which the indices are chosen under the summation sign is much more readable than what is in effect parameterising the geometric set with an arbitrary coordinate system. Now {(i,j):i+j=k} is just a line, so there is not a huge problem here (hence in this article it does not matter as much), but even {(i,j,m):i+j+m=k} begins to get harder to read. Once you have some slopes other than 1, and 4 or 5 dimensions, the single index convention no longer works at all (for people with bad eyesight, for people who receive photocopies of the article, etc.). Basically it is a question of typography, and it is easier to read the conditions on i,j under the summation sign than it is to read them smooshed together in the a_i b_{k-i}. A different reason to disagree with 4 is related to my "N" comment above. If we do want to do monoids for N, then we cannot subtract, only restrict to indices that add up correctly. However, no one has said it was a good idea, so perhaps it is irrelevant to this article. Convolution is more naturally phrased in terms of subtraction anyways, especially when it is integration with respect to Haar measure on a group.
- 5 makes the presentation seem too abstract, I think. I would say the problem of different degrees is already solved. Completely removing the bounds on the sums would make them harder for younger students to read. Your particular previous implementation of this required defining a_i = b_i = 0 for i<0, so there will need to be an extra sentence anyways. However, 5 might make a good contribution to the formal definition below. You could explain how "\sum_i a_i X^i" is a convenient expression for the sequence a:i \mapsto a_i, and how interpreting the sum formally and applying the distributive law formally produces the Cauchy/convolution definition of multiplication, etc. In the early part though I think it will be too barren.
- In summary then, 1 seems like a good idea, 2 is hopefully addressed, and 3,4,5 potentially degrade the exposition. JackSchmidt (talk) 17:43, 22 February 2008 (UTC)
Gentlemen:
- Everyone agrees.
- Agreed. No, I do not feel strongly about the use of the dot, except where the missing dot leads to misunderstanding or confusion. "X(X+1)" may be a function value or a product; when it is a product, the dot helps: "X·(X+1)". One example of omitting the dot is in integration, and I was amused to learn that other editors do not consider "f(x)dx" to be a formal product. So the omission of the dot has really confused people. The trained mathematician is used to omitting the dot, and math books are read and written by trained mathematicians, but WP readers, not being trained mathematicians, are confused by the editor first explaining that the multiplication sign is "·" and afterwards not using "·" for multiplication. Omitting the dot is polite to the writer and rude to the reader. I prefer clarity to sloppy conventions in an encyclopedia. Also, programming languages require explicit multiplication signs, so we've got to get used to them anyway.
- I merely suggest that i is consistently the exponent of X, as in \left(\sum_{i=0}^n a_iX^i\right) \cdot \left(\sum_{i=0}^m b_iX^i\right) = \sum_{i=0}^{m+n}\left(\sum_{j+k=i}a_j b_k\right)X^i.
- The expression \sum_i a_i b_{k-i} strangely does not tell whether i goes from 1 through k or from minus infinity to infinity. Luckily it makes no difference, but that doesn't go without saying.
- The simpler version is \left(\sum_i a_iX^i\right)\left(\sum_i b_iX^i\right) = \sum_i\left(\sum_{j+k=i}a_j b_k\right)X^i, noting that the coefficients a_i and b_i are nonzero only for a finite number of nonnegative integer values of i. I prefer explicit multiplication signs, with a "·" between the factors, but I understand that you guys don't. That's OK.
The explanation "One can think of the ring R[X] as arising from R by adding one new element X to R and only requiring that X commute with all elements of R. In order for R[X] to form a ring, all sums of powers of X have to be included as well" should be moved upwards to become the second sentence. It is more understandable than the formulas. And R[X] should be written in one way rather than in three different typographical ways (as a LaTeX image, in italics, and as plain text).
Bo Jacoby (talk) 19:35, 23 February 2008 (UTC).
-
- Done.
- All agreed, no action need here?
- I am almost convinced. It is either make a_i b_j on both sides, or make X^i all three times; both seem reasonable. Either the current, or your suggested version \left(\sum_{i=0}^n a_iX^i\right) \cdot \left(\sum_{i=0}^m b_iX^i\right) =\sum_{i=0}^{m+n}\left(\sum_{j+k = i}a_j b_k\right)X^i.
- I agree that the current notation is deficient, and basically in the same way as your long-ago suggested notation. I still prefer the current version, but I am open to suggestions.
- I still think it would be better to tone down the abstractness of the section "Formal definition". I'll write the monoid section, since it will give us a place to expand the article, rather than worrying so long about indices.
- (the location of the sentence) I like the explanatory sentence as the second sentence (Bo's version). (Done Bo Jacoby (talk) 15:24, 25 February 2008 (UTC))
- I definitely agree, a single notation should be used for just R[X] in running text. I'll switch it to R[X] when it is in running text (small images slow down the page load, and make it jittery for everyone, and I cannot actually read the latex images on wikipedia, too teeny). JackSchmidt (talk) 00:05, 24 February 2008 (UTC)
Thanks. Remaining issues: 3, 4, 5. The subsection on generalized exponents says that "the formulas for addition and multiplication are the familiar" ones, "where the latter sum is taken over all i, j in N that sum to n". I like that there are no explicit limits on the summations, but it should be explained that the apparently infinite sums are actually finite sums, because only a finite number of terms are nonzero. Bo Jacoby (talk) 15:24, 25 February 2008 (UTC).
Unicodify
I would like to convert a number of the inline math tags to simple text. I say "unicodify" when in fact the characters are just ASCII. Images do not resize automatically, and make the articles hard for visually impaired readers to understand. I would leave all sections with \sum or \rightarrow as those do not translate well to plain text. I believe that would mean there would be 5 images on the page, and the rest would be resizable text. JackSchmidt (talk) 00:20, 24 February 2008 (UTC)
noncommuting variables
The example YX−XY = 1 is important in quantum mechanics. X is considered multiplication by an independent variable x: X = (f → (x → x·f(x))), and Y is considered differentiation with respect to this variable x: Y = (f → df/dx). Then the Leibniz product rule gives (YX)f = d(x·f)/dx = x·(df/dx) + (dx/dx)·f = (XY+1)f, implying that YX−XY = 1.
The exponential function t → e^(at) is an eigenfunction of the differential operator d/dt with eigenvalue a, because d(e^(at))/dt = a·e^(at). The frequency ν = a/(2πi) is an eigenvalue of the operator (1/(2πi))·d/dt because (1/(2πi))·d(e^(2πiνt))/dt = ν·e^(2πiνt). The energy E = hν (where h is the Planck constant) is an eigenvalue of the operator (h/(2πi))·d/dt because (h/(2πi))·d(e^(2πiEt/h))/dt = E·e^(2πiEt/h). So in quantum mechanics the energy E and the time t are related by the commutation relationship Et−tE = h/(2πi).
This should be explained somewhere in WP. Perhaps there should be a link from here to commutator and to canonical commutation relation.
Bo Jacoby (talk) 05:25, 25 February 2008 (UTC).
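(A quick check of the relation derived above, done on coefficient lists; this is only a sketch of mine, with Y acting as d/dx and X as multiplication by x.)

    def Y(p):   # differentiate: sum a_i x^i  ->  sum i*a_i x^(i-1)
        return [i * a for i, a in enumerate(p)][1:] or [0]

    def X(p):   # multiply by x: shift every coefficient up by one degree
        return [0] + p

    p = [5, -2, 0, 7]                 # 5 - 2x + 7x^3
    lhs, rhs = Y(X(p)), X(Y(p))
    rhs = rhs + [0] * (len(lhs) - len(rhs))
    print([a - b for a, b in zip(lhs, rhs)] == p)   # True: (YX - XY)p = p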
- I'm not sure why that was not included already. I had actually written out the little derivation of the relation, explaining how it acts on polynomials, etc. I decided it was better just to cite a reference since the calculations are not hard, but it helps to see them written down once, and then to write them down yourself. However, Wikipedia is not a textbook and this is not an article on Weyl algebras, so I figured it was better to stick with the cite. Somehow the final result was deleted too. At any rate, I added it in. JackSchmidt (talk) 16:26, 25 February 2008 (UTC)
The polynomial ring R[X].
Quote 1: "only requiring that X commute with all elements of R"
Question 1: Is it really required that X commute with all elements of the ring in order for the polynomial ring to be defined? I think not. Consider (Z[X])[Y]. Do you require that YX = XY?
Quote 2: "If R is commutative, then R[X] is an algebra over R".
Question 2: Is this information appropriate here? I am more confused than enlightened.
Bo Jacoby (talk) 10:50, 26 February 2008 (UTC).
-
- Yes, it is required. In (Z[X])[Y] it is required that YX=XY.
- Yes, it is one of the main ways of defining polynomial rings over commutative rings. R[{x: x in X}] is the free unital, commutative, associative R-algebra with R-algebra basis of cardinality |X|, and is unique up to R-algebra isomorphism, the construction is natural in R and X, etc. In plain English, R[X] is the most general ring in which it makes sense to evaluate its elements in R-algebras.
- JackSchmidt (talk) 14:51, 26 February 2008 (UTC)
- Thank you. Normally the adjective restricts the meaning of the noun: every black dog is a dog, and so on. So a reader might get confused to learn that a 'non-commutative polynomial ring' is not a 'polynomial ring'. He might prefer the term 'commutative polynomial ring' to avoid misunderstandings. Bo Jacoby (talk) 10:30, 27 February 2008 (UTC).
Sum and product formula revert
Oleg wrote: (rv the new sum and product formula. Bringing in infinite sums and formal series is just poor judgement, why invoke something complex to explain something simple? No comment on other changes)
replacing the limit-free formula

    (\sum_i a_iX^i) \cdot (\sum_i b_iX^i) = \sum_i (\sum_{j+k=i} a_j b_k) X^i,

where only a finite number of terms are nonzero in these formally infinite sums, with the formula carrying explicit limits,

    (\sum_{i=0}^n a_iX^i) \cdot (\sum_{i=0}^m b_iX^i) = \sum_{i=0}^{m+n} (\sum_{j+k=i} a_j b_k) X^i.

Answer: The limits n and m are undefined and unexplained. Take for example n = m = 1.
In the formula with limits, the left hand side is (a_0 + a_1X) \cdot (b_0 + b_1X),
and the right hand side is a_0b_0 + (a_0b_1 + a_1b_0)X + (a_0b_2 + a_1b_1 + a_2b_0)X^2.
So the formula with limits is incorrect: the left hand side does not involve a_2 or b_2, but the right hand side does.
In the formula without limits, the left hand side is (\sum_i a_iX^i) \cdot (\sum_i b_iX^i),
and the right hand side is \sum_i (\sum_{j+k=i} a_j b_k) X^i, and these involve exactly the same coefficients.
The formula without limits is correct.
Note also that the expression \sum_{j+k=i} a_j b_k, which is used in both formulas, is itself formally an infinite series, unlike \sum_{j=0}^{n} a_j b_{i-j}, which, however, assumes that the coefficients are defined for negative indexes.
The sum of two terms can be written a+b or a+0+b+0+0. Any number of zeroes can be included or excluded from a sum without changing the value. An infinite sum with only a finite number of nonzero terms is a handy expression for a finite sum with an unknown number of terms. The fact that the theory of sums of an infinite number of nonzero terms - a series - is 'something complex' does not imply that the theory of sums of a finite number of nonzero terms is complex at all.
Summarizing: The formulas without limits are simpler. The limits complicate matters and actually made the formulas incorrect. It is not sufficient that a formula is found in books (and taught in American universities for years by Oleg); it must also be correct in order to qualify for WP.
Bo Jacoby (talk) 14:01, 26 February 2008 (UTC).
- "It is not sufficient that a formula is found in books, ... it must also be correct in order to qualify for WP" directly violates wikipedia policies. WP:V is "an official English Wikipedia policy", and it states "The threshold for inclusion in Wikipedia is verifiability, not truth." (emphasis theirs).
- It is not clear to me that some of your recent edits are being constructive, and they are going against the consensus process on this page. We are having a discussion here to work out what would be best on the page.
- I am happy to work towards consensus, but please work out the experiments on the talk page, not on the article, as there have been entirely too many reverts on the article page. Please see WP:REVERT for guidelines on reverting.
- Note that Oleg and I have both explicitly said the notation without limits is a poor choice for the section you have added it to. We have said this more than once, and you have added it more than once, against consensus. However, I have not been trying to "stalemate" the discussion, but have suggested your sums without limits might be more appropriate in the "Formal definition" section. While Oleg has not explicitly agreed, his argument that the notation is too technical would need to be revised since that section is already quite technical. At any rate, it would be a constructive edit to restore simple formulas to the simple section, and try your limitless expressions in the formal definition. JackSchmidt (talk) 14:51, 26 February 2008 (UTC)
- After a very thorough explanation I undid Oleg's illegitimate revert, which was made without explanation on the talk page. The "verifiability, not truth" principle does not mean that an incorrect formula should be included, but that nonverifiable formulas should be excluded. If you dislike the limitless notation then you should avoid it in all cases. By now, one of the four sums in the multiplication formula is without limits, and that makes the formula incorrect. I too am happy to work towards consensus, and I look forward to seeing your suggested formula that has limits and is correct. Bo Jacoby (talk) 15:14, 26 February 2008 (UTC).
-
-
- I raised the matter at Wikipedia talk:WikiProject Mathematics. Oleg Alexandrov (talk) 15:55, 26 February 2008 (UTC)
-
I prefer the limit-free version, since it more easily generalizes to the case of a free associative algebra on a set, where it may be inconvenient to be too strict in specifying the allowed index sets. It is much easier to say that only finitely many terms are non-zero than to try (needlessly) to pin down which terms are nonzero, particularly in the general case. I should also note that both versions are common, but the limit-free version is the one advanced by van der Waerden as well as by Bourbaki. Silly rabbit (talk) 16:08, 26 February 2008 (UTC)
- However, we should focus on the needs of the reader who is trying to learn these things, not on the needs of the expert. Also note that the index-free formula is already in the generalizations section. Oleg Alexandrov (talk) 18:33, 26 February 2008 (UTC)
-
- I would like to add that I find the current edit unacceptably pedantic, and would prefer to see the indexed version restored. I do not agree with Arthur Rubin's assertion that the sum of an infinite number of zero terms is potentially problematic, although as an analyst myself I can certainly appreciate the queasiness he feels in having an unqualified summation. Silly rabbit (talk) 18:56, 26 February 2008 (UTC)
-
-
- I'd prefer having the indexed form, myself. I just think that, if this is the definition of addition and multiplication, it needs to be formally correct. — Arthur Rubin | (talk) 22:11, 26 February 2008 (UTC)
-
I'm going to go ahead and revert to the version by Oleg. Of the three possibilities so far, I find it to be the least controversial. I'd like to continue the discussion about how polynomials ought to be defined, but with the lesser of three evils in the article. Silly rabbit (talk) 22:25, 26 February 2008 (UTC)
- Well, I must say that the formulas now in the article, with the index conditions spelled out in logical notation under the summation signs, are very intimidating to a new reader. It is much simpler to put in the original simple formulas with a finite sum and explain a bit how the indices are handled than to have this. Polynomial multiplication is a simple thing, just using the distributive law; why make things complicated? Oleg Alexandrov (talk) 04:33, 27 February 2008 (UTC)
-
-
- I prefer the notation using the limits. The problem pointed out by Bo Jacoby, that certain undefined coefficients show up in the product formula, is a minor one, and it is rather formal. It can and should be resolved by saying in words below the formula that coefficients a_{-1} etc. are interpreted to be zero. Using elements of logical notation under the summation is IMO obfuscating the story. Striving for simplicity is a good thing, but I think introducing infinity here and then restricting back to finitely many coefficients does not help a beginner. Jakob.scholbach (talk) 12:28, 27 February 2008 (UTC)
-
Larger problems with this article
Without getting into the notation debate, let me point out that this article has more serious flaws, and with direct bearing on novices who may try to read it.
- Definition of a polynomial: If this article aims at someone not well versed in algebra, I suggest starting with polynomials over a (commutative) field, and only in a later section mentioning that the coefficients may be taken from a more general ring. The case of K[X] is by far the most useful in applications, and it has quite distinguishing features: it's a domain, and moreover a principal ideal domain, PID (in fact, even a Euclidean domain), with all the usual corollaries for the multiplicative structure, the classification of modules and ideals, and the corresponding homological properties. The motivation for treating powers of X in a formal way can also be smoothed out by starting with the fields Q and R and only then doing things in complete generality. The only advantage of using an arbitrary ring R of coefficients seems to be that Hilbert's finiteness theorem can be stated in one line, but that seems to be far outweighed by the disadvantages of working in unnecessary generality and constantly worrying about the extra hypotheses needed to state even the most basic properties of polynomial rings (such as being a domain, or factoriality).
- The polynomial ring in several variables: the one-sentence description does not do justice to the subject. It does not even acknowledge the inherent symmetry between the variables ((R[X])[Y] = (R[Y])[X]), let alone point out that polynomial rings in several variables are fundamentally more complex objects: think of the Serre conjecture, for example. "Alternative definition" in the same section is weirdly out of place. What is its function, actually? Almost no one (in algebra) thinks of the polynomial ring in n variables as the semigroup ring of the free commutative cancellative semigroup on n generators. But this is, in effect, the point of view that this subsection (along with a later subsection on "monoid rings" under Generalizations) is trying to promote.
- Properties: this is a fairly dense and somewhat random list, which mixes fundamental properties with unnecessary abstract nonsense (thank you, Lang!)
- Some uses of polynomial rings: ditto, and a lot more eclectic at that. Wanting explanations, the same items may also be dubbed "uses of factorization by ideal" or pretty much anything else with algebraic flavor. Needs a lot of work.
- To summarize: beyond the definition of the ring structure, which, in my view, is given in unnecessary generality, there is hardly any connected English text explaining what polynomial rings are and how they are used.
Arcfrk (talk) 04:11, 1 March 2008 (UTC)
- You're very welcome to work on the article, as long as the topic is kept accessible and more complex issues are treated further down the article. Oleg Alexandrov (talk) 06:39, 1 March 2008 (UTC)
- I agree the article should focus on the importance of the polynomial rings, not their definitions. If working with Q[x], R[x], and C[x,y] lets us get to the good parts quicker, then I say leave the formal definition to be "a more complex issue treated further down the article".
- One purpose of the generalizations section was to indicate how the fundamentally important idea of polynomial rings has informed ring theory, and so subtly address the problem of "what are these polynomial rings for?". Another subtle purpose was to have the monoid definition give a simple, formal definition which generalized easily to Union k[x^(1/n!)], k<x,y>, kG, etc. Rather than "use complex ideas to explain simple things", I hope I include the "complex ideas" at the end, giving one-paragraph summaries of "complex things", rather than multi-paragraph explanations of "simple things".
- This mitigates any real need to include formal definition sections, and hopefully encourages a more grounded approach in the earlier sections of the article. I like the idea of beginning with Q[x] and R[x], then perhaps (Z/pZ)[x] to emphasize the difference between polynomial functions and formal polynomial rings. I would also be fine with maintaining such explicit examples for a several variables section, perhaps using C[x,y] as an example.
- Note that my text was not meant as an endorsement of the monoid approach, merely recording the fact that it existed. I personally tend to lean in the "they are polynomials, they form a ring, what's there to say?" camp as far as the definition goes. I would very much like the article to concentrate on their importance, not their formal definition. The generalizations section is meant to address their importance, by describing their "children". JackSchmidt (talk) 06:41, 1 March 2008 (UTC)
-
- Just to make it clear: I like the "Generalization" section and think that it is by far the best part of this article. But without the foundation to build upon, it's hard to expect non-experts getting much out of it, or even reading that far. Also I have no problem with treating monoid algebras there, especially, in summary style, it was the convolution thingy in "the polynomial rings in several variables" that got me all worked up (and by extension, my axe fell upon the innocent head — brrr!) Arcfrk (talk) 08:47, 1 March 2008 (UTC)
field and ring
Arcfrk made improvements. Thank you. The ring K[X] is defined now when K is a field, but R[X] is used when R is a ring. Bo Jacoby (talk) 11:51, 3 March 2008 (UTC).
- I noticed that too. By the way, I don't see much value in starting with a field, rather than a ring, as it was before. It is better in my view to use the field assumption only when actually stating specific properties for which a field is needed. Oleg Alexandrov (talk) 03:57, 4 March 2008 (UTC)
-
- I do, and for both historical and pedagogical reasons. By the way, I've just checked Lang's Algebra (3rd ed), Chapter IV, where Basic properties for polynomials in one variable starts with the ring of polynomials A[X] for arbitrary commutative ring A, and it turned out that every single statement save the very first one, from Theorem 1.2 to Proposition 1.12, actually assumes that the ring of coefficients is a field! (Exception? You'll be amused: Theorem 1.1 (preceded by "We start with the Euclidean algorithm") deals with polynomial division f = gq + r in the special case when the leading coefficient of the divisor g is a unit in A. Needless to say, the Euclidean algorithm does not follow, and so Theorem 1.1 is not mentioned again in this section.) Of course, it will warm my heart as algebraist to define an algebraic variety as a separated scheme of finite type over the spectrum of a strictly henselian ring, but would that be a wise course for starting a wikipedia article "Algebraic variety"? Arcfrk (talk) 05:16, 4 March 2008 (UTC)
-
-
- Please note though, you define the polynomial ring only over fields, but then later the text deals with polynomials over a ring, without that having been defined. Pedagogically, that is not very good. Oleg Alexandrov (talk) 05:31, 4 March 2008 (UTC)
-
Some more picky points (or, about writing p(X) and about non-commutative rings)
I was just sucked into reading this article. It actually looks pretty good, but I have some questions.
- The definition says a polynomial is an expression of the form p(X) = a_0 + a_1X + ... + a_nX^n. Being pedantic, one could object that it's not the equation but its right hand side that is a polynomial (so one should write only that RHS and say that such an expression is usually denoted p(X) or some such). But I was actually more interested in the left hand side. So if p(X) is a polynomial, then what is p? Possible answers: (1) you are not allowed to write p without parentheses following it, so the question makes no sense; (2) it is a macro that gobbles a symbol as argument and produces the RHS with the argument inserted in place of X (a slight precision of the previous answer; I think many people unconsciously think of it like this, but it is not very mathematical); (3) it denotes the polynomial function associated to the polynomial (really? then what is its domain, and why is X allowed?); (4) p is the same as p(X), since the stuff in parentheses is to be substituted for X in the polynomial p, and substituting X for X changes nothing (don't laugh, some authors seriously maintain this point of view); (5) p is the same as p(X), but the X is a warning to the reader that there are Xes hiding in the expression; (6) (your answer here). In any case people do write p(X), and I'm not objecting to using it at this point in the discussion. But also very few people really keep up the effort of writing all polynomials as p(X), and the current state of this article proves the point (I just replaced a p by p(X) in the line immediately following the definition, but in "Properties" the (X)es are mostly dropped). Except that in "Further properties" we find P(x) twice, standing for the polynomial function associated to P evaluated at x. Which brings me to my next two points.
- When P(x) is twice used in "Further properties" to denote the value after substitution of x for X (first as a function of x with P fixed, then as a function of P with x fixed), wouldn't it be good to define the process of substitution explicitly rather than implicitly? And mention that this requires the base ring to be commutative? Also I think it would be nice to add that the base ring R is a fairly important example of a unital associative R-algebra.
- In fact nothing in the article explicitly takes into account that the base ring could be non-commutative; shouldn't it be stated in the introduction that this article is not about polynomials over non-commutative rings (a slightly slippery subject for those used to the commutative case). I'm not saying nothing could apply to the non-commutative case, but it is clearly not what is being thought of. Many parts implicitly exclude the non-commutative case by requiring something stronger for the base ring (field, domain,…). It is true, Noetherian rings do not have to be commutative, and properly formulated the Hilbert basis theorem applies in the non-commutative setting, but is that really so important here? Note that even the part about non-commutative polynomials assumes the base ring is commutative, and that its elements commute with the indeterminates.
- In the introductory sentence, shouldn't it be somewhere said that the set of polynomials itself is (made into) a ring, if only to justify the name "polynomial ring"?
Marc van Leeuwen (talk) 15:27, 3 March 2008 (UTC)
- (Noncomm rings) I agree that it may be a good idea to begin only with commutative rings. In fact, I think it would be wise to discuss Q[x], R[x], k[x], Z[x], C[x,y], or so, before letting the coefficient ring be general. I think many of the nice categorical and module-theoretic properties of R -> R[x] fail when R is not commutative, so it might be wise to restrict to R commutative for the majority of the article.
- To clarify though, I believe in the generalization section, all requirements on R are stated explicitly and locally, but I could double check if you think there is a problem. In Lam's cited text, the chapter on division rings shows how one can reasonably extend the idea of "substitution" to certain non-commutative rings, and still have interesting mathematics. In general R[x] may be nearly unrelated to R or to equations over R when R is a non-commutative ring.
- In response specifically to "Note that even the part about non-commutative polynomials assumes the base ring is commutative, and that its elements commute with the indeterminates." This is phrased in a misleading way. The section defines non-commutative polynomials quite generally, and then remarks that they are free algebras when R is commutative. Note that even R is not an R-algebra when R is not commutative, so the assumption is pretty natural. JackSchmidt (talk) 16:19, 3 March 2008 (UTC)
- OK, thanks. See my edit that tries to clarify this (I hope you agree). By the way I think even the very notion of an R-algebra supposes that R is commutative (at least that is what its article says). Marc van Leeuwen (talk) 05:59, 4 March 2008 (UTC)
- The new edits are nice. I shortened the parenthetical remark (if something is worth saying, it is worth saying without parentheses). Wait, that's not fair.
- Yes, the ring R need not be commutative, but an R-Algebra A is not only a module over R, but also over R/[R,R], the largest commutative quotient of R, so any non-commutative aspect of R is lost immediately. For instance an algebra over a Weyl k-algebra is 0, so one tends to lose generality rather than gain it by allowing R to be non-commutative. JackSchmidt (talk) 14:48, 4 March 2008 (UTC)
- Further, in the section "skew polynomial rings", two important ring constructions are given where the indeterminates do not commute with the coefficients, but rather act as derivations or ring endomorphisms. In each case, it is important that some sort of "PBW" basis exist to give some hope that the polynomial rings are noetherian, but the method is not so trivial as "indeterminates commute with coefficients". JackSchmidt (talk) 16:19, 3 March 2008 (UTC)
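For reference, the commutation rule meant here, in the standard notation with σ a ring endomorphism of R and δ a σ-derivation, is
\[
  x \cdot r \;=\; \sigma(r)\,x + \delta(r) \qquad (r \in R),
\]
and the "PBW"-type statement is that 1, x, x^2, ... still form a basis of R[x; σ, δ] as a left R-module; it is this basis, rather than any commutation of x with the coefficients, that gives the noetherian arguments a foothold.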
- In spite of my edit mentioned a few lines up, I think it would be best if this article were dedicated to commutative polynomial rings (which I guess is what most people think of naturally), stating so clearly, with a separate article about noncommutative issues (both for coefficients and for variables) making clear what can be retained and what changes with respect to the commutative situation. This is probably clearer than having the reader search for the precise assumptions at each point. Marc van Leeuwen (talk) 05:59, 4 March 2008 (UTC)
- I disagree that non-commutative concerns should be removed. I agree that commutative rings should be emphasized earlier. Please see WP:NPOV for the general Wikipedia policy on the inclusion of multiple points of view.
- The reader need only look as far as the section heading, "non-commutative polynomial rings", to be aware of the new scope. All the assumptions are stated within the paragraph in which they are used; if we cannot expect the reader to read the whole sentence, then we cannot expect them to understand the article. The generalizations section includes both non-finitely-generated k[x]-algebras and non-commutative ring extensions of k[x]. I think the entire research community in non-commutative rings thinks of polynomial rings as being non-commutative, so I do not think non-commutative concerns are given undue weight (five paragraphs, one of which serves double duty for the commutative algebraists as well, the other four describing three distinct and physically important ring constructions). I do think the earlier sections need some expansion, but I think that is being taken care of by Arcfrk and Marc van Leeuwen. JackSchmidt (talk) 14:48, 4 March 2008 (UTC)
- On the one hand, I agree with Marc that it would be better to have a separate article dedicated to noncommutative polynomials, and perhaps another one dealing with the ring extension R ⊂ R[X] in the context of noncommutative rings, where the theory is quite different. I cannot speak on behalf of the entire research community in noncommutative rings, but it appears quite unusual to assume by default that a "polynomial ring" (the subject of this article) is noncommutative; that is not how the term is commonly used. On the other hand, there is nothing wrong with briefly mentioning noncommutative theory under "generalizations", especially if this is done in summary style. Arcfrk (talk) 18:11, 4 March 2008 (UTC)
- To be clear, I am not asserting that anything in the generalizations is the standard primary meaning of polynomial ring, but I am asserting that the non-commutative ring theory research community does not assume that R[X] is commutative, because they do not assume that R is commutative. For example, Köthe's conjecture can be phrased as several equivalent statements about how ideals composed of nilpotent elements behave under polynomial extension; the conjecture is trivially true for commutative rings and for noetherian rings, but it has been an area of active research since the 1930s. Polynomial extension preserves nice properties such as being prime, semiprime, noetherian, or Ore, but, for instance, it is an open question whether R[x] being Ore implies that R is Ore. Polynomial extension can be very complex: it does not preserve the Goldie property in general, though of course it does for commutative rings and for noetherian rings. Generally speaking, it is a standard question in ring theory: for a given property P, does R have P if and only if R[X] has P? JackSchmidt (talk) 18:54, 4 March 2008 (UTC)
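For readers following along, the schema in that last sentence, with the Hilbert basis theorem as its best-known instance, reads
\[
  R \ \text{right noetherian} \;\Longleftrightarrow\; R[x] \ \text{right noetherian},
\]
where the implication from right to left is immediate from R ≅ R[x]/(x), and the one from left to right is the Hilbert basis theorem, which indeed holds without commutativity.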
[edit] Other generalizations
First off, thanks to Arcfrk and others for lots of improvements to the article. I noticed the new link to Laurent polynomial, k[t,1/t], which is technically covered under the monoid definition (see the sketch after this comment) but not otherwise mentioned. It seems to be a pretty important topic connected to polynomial rings, but so are function fields (currently only a disambiguation page; I mean the field of fractions of a polynomial ring over a field). I was thinking of adding these to the generalizations section somewhere, but I worry about making that section too large. One reason I think polynomial rings are so important is that quite a lot of algebra is in some sense a generalization of polynomial rings!
Another wonderful edit by Arcfrk linked to Ore extension. I suggest that someone (I'll do it) trim down the skew + differential section and instead put more detail into the Ore extension article; in other words, one combined paragraph on skew and differential rings. The only really large section left then would be the monoid ring section, and I think it is mostly big because it does double or triple duty in formalizing the earlier, non-generalized cases.
It might be wise to discuss which generalizations would then use the "room left": Laurent polynomials, function fields, rings of polynomials on varieties, affine schemes, polynomial representations of algebraic groups, ... Clearly not all of these should be included directly, but it might be possible to organize the section so that most top-level generalizations are mentioned with a sentence or two and a wikilink.
One might be able to condense free algebras and Ore extensions into the same section, but I worry that this must be done carefully. "Non-commutative polynomial ring" links to free algebra, and I think this is the predominant primary meaning, but there has already been confusion about whether a polynomial ring with coefficients in a non-commutative ring is a "non-commutative polynomial ring", etc. Sticking all non-commutative generalizations under one heading might be inviting a comedy of confusion.
If the ideas sound basically plausible, here is my preliminary suggestion:
- Monoid ring (since it makes it easy to discuss others in a vaguely uniform way)
- Localizations and their completions (power series, Laurent polynomials, rational functions)
- Noncomm (free algebra, Ore extension)
- Geometry (regular local ring, affine scheme)
I'm not sure if/where algebraic groups and their polynomial representations belong. Maybe there should be a blurb on symmetric functions? Is there a nice category they could fit under that would have one or two other interesting topics? Maybe some belong more under a "uses" than a "generalizations"? JackSchmidt (talk) 02:11, 6 March 2008 (UTC)
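To record the sense, mentioned at the start of this section, in which Laurent polynomials are covered by the monoid construction (a sketch, with Z the additive group of integers regarded as a monoid):
\[
  k[t, t^{-1}] \;\cong\; k[\mathbb{Z}], \qquad t^n \longleftrightarrow n,
\]
the monoid (in fact group) ring of the infinite cyclic group; the function field meant above is then k(t_1, ..., t_n), the field of fractions of k[t_1, ..., t_n].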
- Thanks for your kind assessment; I am glad to be of service. I think that you worry too much about condensing. At the moment, all sections in "Generalizations" look about the right length. The section on noncommutative polynomials might be a tad too dry: at the least it could mention the notation K<X,Y> and perhaps display the expansion of a noncommutative polynomial on a separate line (something like the sketch after this comment). I suggest not going into too much detail on fields and regular local rings; they can be mentioned casually in the text or linked under "See also", but making separate sections may be too much.
- Of course, "Ore extension" requires a lot of work, and if anything from this article can be of use, go ahead and put it in there. But I think that the short description and the links to the Weyl algebra should stay, as this is one of the most fruitful noncommutative analogues of the polynomial ring. Arcfrk (talk) 02:34, 6 March 2008 (UTC)
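Something along the lines of the display suggested above might be (just a sketch; the sum runs over words w in X and Y, with coefficients a_w in K, almost all zero):
\[
  f \;=\; \sum_{w} a_w\, w \;=\; a_1 \cdot 1 + a_X X + a_Y Y + a_{X^2} X^2 + a_{XY} XY + a_{YX} YX + \cdots \;\in\; K\langle X, Y\rangle, \qquad XY \neq YX .
\]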