Wikipedia:Reference desk/Archives/Mathematics/2008 February 21
February 21
Quick question about log convexity
I've read the article on power series, and it mentions that for more than one variable, the domain of convergence is a log-convex set instead of an interval.
But what is a log-convex set here? I mean, how do we take the log of a set? Surely the set can contain negative values, for example...
Thanks. -- Xedi (talk) 02:13, 21 February 2008 (UTC)
- Sorry, I have no idea what they could mean by log-convexity of a set. The multi-variable stuff was added in June 2004, and the log-convex comment was made in Oct 2004. It does not appear to have been touched since then. I'll ask the author about it.
- The domain of convergence is a union of polydisks, but I think perhaps it need not be a polydisk. Some reasonably elementary notes are at Adam Coffman's website, specifically his research notes Notes on series in several variables. They include the polydisk result, the continuity and differentiability of functions defined by a series, and the nice "recentering" result familiar from one complex variable that allows analytic continuation. JackSchmidt (talk) 04:25, 21 February 2008 (UTC)
- I've added an example and a little more explanation to the article. Terry (talk) 23:07, 21 February 2008 (UTC)
- Thanks. I suppose when the "center" is (a_1, a_2, ...) then we consider (log|x_1 − a_1|, log|x_2 − a_2|, ...)? —Preceding unsigned comment added by Sam Derbyshire (talk • contribs) 13:08, 22 February 2008 (UTC)
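For what it's worth, here is the usual way this is made precise for the domain of convergence (a sketch of the standard convention for Reinhardt domains; the article may phrase it differently). One takes logarithms of the moduli of the coordinates, so negative or complex values pose no problem:

    \[
      \lambda(D) \;=\; \{\,(\log|x_1|,\dots,\log|x_n|) \;:\; (x_1,\dots,x_n)\in D,\ x_1\cdots x_n\neq 0\,\},
    \]

and D is called logarithmically convex when λ(D) is a convex subset of ℝ^n.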
Natural Logs
I'm being asked to evaluate natural logs and I'm getting very confused. One problem I'm having particular trouble with is lne(-x/3)
Could anyone please explain how to solve problems like these and how to get started?
Thanks, anon. —Preceding unsigned comment added by 66.76.125.76 (talk) 03:53, 21 February 2008 (UTC)
- What is lne(-x/3)?
- Is it Log[ Exp[-x/3] ]?
- or is it Log[-x/3], where Log is log base e?
- 202.168.50.40 (talk) 04:34, 21 February 2008 (UTC)
- ln is usually shorthand for log_e --PalaceGuard008 (Talk) 05:07, 21 February 2008 (UTC)
- y = log(-x/3)
- e^y = -x/3
- x = -3e^y
--wj32 t/c 05:16, 21 February 2008 (UTC)
- If the question is about ln e^(−x/3), what can you say in general about ln e^A? --Lambiam 12:05, 21 February 2008 (UTC)
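Spelling out Lambiam's hint (assuming the expression really is ln e^(−x/3)): ln and e^x are inverse functions, so

    \[
      \ln e^{A} = A \quad\text{for every } A,
      \qquad\text{hence}\qquad
      \ln e^{-x/3} = -\frac{x}{3}.
    \]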
is math the same as logic?
Is math the same as logic, and if not, why not?
In other words, is there math that isn't logical but just evaluates truths, for example "experimenting" with numbers to find out "facts" without using logic to show that these must be necessarily true? After all, the other sciences don't rely on their findings being NECESSARILY true, just that they happen to be true... so which is math? —Preceding unsigned comment added by 79.122.42.134 (talk) 10:12, 21 February 2008 (UTC)
- You might be interested in logicism and experimental mathematics. Algebraist 10:43, 21 February 2008 (UTC)
- You might also be interested in David Hilbert's project to formalize mathematics.--droptone (talk) 12:45, 21 February 2008 (UTC)
- Experimental mathematics uses numerical methods to suggest conjectures which are then formally proved (or perhaps disproved, if they turn out to be just a numerical coincidence). All mathematical results are proved using the techniques of logic, so, yes, they are necessarily true. But this does not mean that mathematics is the same as logic - logic does not determine what a mathematician sets out to prove, or why a particular result is considered to be beautiful, deep or interesting. Logic is to mathematics as calligraphy is to the Rubaiyat of Omar Khayyam. Gandalf61 (talk) 14:04, 21 February 2008 (UTC)
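As a toy illustration of the distinction Gandalf61 draws (a hypothetical example, not from the thread): numerics can suggest that the sum of 1/n^2 is π^2/6, but the computation only suggests the conjecture; Euler's proof of the Basel problem is what makes it a theorem.

    # Toy "experimental mathematics": partial sums of 1/n^2 appear to
    # approach pi^2/6. The numerics suggest the identity; they prove nothing.
    import math

    partial = sum(1.0 / n**2 for n in range(1, 10**6))
    print(partial)         # ~1.6449331
    print(math.pi**2 / 6)  # ~1.6449341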
Moment
Suppose you have a ladder leaning against a wall, making an angle θ with the ground, and a man, who can be treated as a particle, stands a distance l up the ladder. Now his weight will obviously act downwards, and so to work out the moment produced you would have to resolve his weight vector to find the component of it that acts perpendicular to the ladder.
My question is, if you used Pythagoras or trigonometry to determine the perpendicular distance from the ladder's point of contact with the ground to the line of action of the weight vector, and then multiplied that by his weight, would that give you the correct value of the moment? Cheers in advance 92.3.49.42 (talk) 12:45, 21 February 2008 (UTC)
- Unless I've misunderstood, yes. What I tend to do is extend the "arrow" of the force such that the perpendicular goes through the point you are trying to resolve against, then it's a simple case of force times distance from point. x42bn6 Talk Mess 13:02, 21 February 2008 (UTC)
- Yes 92.3.49.42, both those methods are valid ways of calculating the moment around the ladder's point of contact with the ground, and both will yield the same answer. --mglg(talk) 17:18, 21 February 2008 (UTC)
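To make the equivalence concrete (a sketch in the question's notation: weight W at distance l up the ladder, ladder at angle θ to the ground), take moments about the foot of the ladder:

    \[
      M \;=\; \underbrace{(W\cos\theta)}_{\text{weight component} \perp \text{ladder}} \cdot\, l
        \;=\; W \cdot \underbrace{(l\cos\theta)}_{\text{horizontal distance to line of action}}
        \;=\; W\,l\cos\theta.
    \]

The two methods are just the two factorisations of W l cos θ, so they necessarily agree.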
σ-algebra redux
A revised version of "There are no countably infinite σ-algebras", hopefully closer to being correct?
Lemma. If 𝒜 is an infinite algebra over a set X, partially ordered by inclusion, then 𝒜 contains an infinite ascending chain b_1 ⊂ b_2 ⊂ ⋯ or an infinite descending chain b_1 ⊃ b_2 ⊃ ⋯.
Proof: Let 𝓑 = 𝒜 ∖ {X}. If 𝓑 has a chain with no upper bound in 𝓑, then that gives us what we wanted. Otherwise, by Zorn's lemma, 𝓑 has a maximal element b_1. If a ∈ 𝒜 and a ⊈ b_1, then b_1 ∪ a strictly contains b_1, so by maximality b_1 ∪ a = X, and therefore a ⊇ X ∖ b_1, so that a = (a ∩ b_1) ∪ (X ∖ b_1) is determined by a ∩ b_1 ⊆ b_1. That means there are infinitely many subsets of b_1 in 𝒜.
Let 𝓑_1 be that collection of subsets, minus b_1 itself. Then, arguing as before, either 𝓑_1 contains an infinite chain (and we are done) or 𝓑_1 has a maximal element b_2, and so 𝒜 again contains infinitely many subsets of b_2. Inductively we obtain an infinite descending chain b_1 ⊃ b_2 ⊃ ⋯.
Corollary. Every infinite algebra contains an infinite collection of pairwise disjoint sets.
We just take the sequence of differences of consecutive elements of the chain: for a descending chain these are b_1 ∖ b_2, b_2 ∖ b_3, …, which are nonempty and pairwise disjoint (for an ascending chain, take b_2 ∖ b_1, b_3 ∖ b_2, …).
Corollary. There are no countably infinite σ-algebras.
An infinite σ-algebra is an infinite algebra, and by the above it has an infinite subcollection A_1, A_2, … of pairwise disjoint sets. The map S ↦ ⋃_{n∈S} A_n is then an injection from the power set of N into the σ-algebra, which is therefore uncountable.
— merge 13:58, 21 February 2008 (UTC)
- That seems to work, but as Trovatore commented above, you don't need the Axiom of Choice (you may not care about AC, of course, but I do). Since you have no infinite ascending chains, you don't need Zorn to find a maximal element (the top element of any chain is maximal). If you start with a bijection between your (putative) countably infinite sigma-algebra and N and, whenever you have to choose an element, always choose the one of least index (in terms of this bijection), the proof becomes wholly choice-free. Algebraist 16:30, 21 February 2008 (UTC)
- Many thanks for the review. You're right of course that there's no need to invoke the ghost of Max Zorn to get maximal elements there! I'm not too concerned about using AC, but I do like to improve my awareness of when and how it's being used and when it is or isn't necessary, so your comments are much appreciated. Although there are other ways to solve the original problem, the results for infinite algebras in general seem more useful than the fact that there are no countably infinite σ-algebras, and it's nice that once you have them the statement about σ-algebras becomes trivial. — merge 17:19, 21 February 2008 (UTC)
(slightly modified the above) It's worth noting also that for the results on infinite algebras we aren't assuming countability, so for these I believe AC is required (?). — merge 13:00, 22 February 2008 (UTC)
- I think the argument goes through if you just know that every infinite set has a countably infinite subset, which does need a little choice to prove. But you don't need full AC to prove it -- for example, it follows from the axiom of countable choice, which is consistent (for example) with every set of reals being Lebesgue measurable. --Trovatore (talk) 22:23, 22 February 2008 (UTC)
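A finite illustration of the counting idea behind the proof (hypothetical code, not part of the argument): the algebra generated by a few subsets of a finite X consists of all unions of the atoms of a partition, so its size is always a power of 2; this is the S ↦ ⋃_{n∈S} A_n injection from the proof in miniature.

    # Sketch: close a family of subsets of a finite X under union,
    # intersection and complement. The resulting algebra always has 2^k
    # elements, where k is the number of atoms of the induced partition.
    from itertools import combinations

    def generate_algebra(X, generators):
        X = frozenset(X)
        sets = {frozenset(g) for g in generators} | {frozenset(), X}
        changed = True
        while changed:
            changed = False
            for a, b in list(combinations(sets, 2)):
                for c in (a | b, a & b):
                    if c not in sets:
                        sets.add(c)
                        changed = True
            for a in list(sets):
                if X - a not in sets:
                    sets.add(X - a)
                    changed = True
        return sets

    alg = generate_algebra(range(6), [{0, 1}, {1, 2, 3}])
    print(len(alg))  # 16 = 2^4: the atoms are {0}, {1}, {2,3}, {4,5}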
Differential equation gives weird result!
Hi there, I've been given the differential equation dy/dx = e^(−3x) / y^3.
I separated variables and integrated, and I eventually came up with the general result y = (−(4/3)e^(−3x) + D)^(1/4), where D is 4C, my initial constant of integration.
But... the term −(4/3)e^(−3x) is always going to be negative, and I have to take the fourth root! Have I messed it up, or am I ok so long as D makes the stuff under the fourth root positive??
Thank you! Psymun (talk) 15:20, 21 February 2008 (UTC)
- Your working is correct. Could you not have a result that includes complex numbers? -mattbuck (Talk) 15:42, 21 February 2008 (UTC)
- 'Spose! I didn't see it coming in this course... although seeing as I did them in the last maths course I possibly should have! Thanks for reassurance! Psymun (talk) 15:47, 21 February 2008 (UTC)
- No need to consider complex numbers. The function just isn't defined for those x where the expression under the root is negative. This is no different from solving some other equation and getting a result that is only defined on part of the real line. -- Meni Rosenfeld (talk) 16:09, 21 February 2008 (UTC)
- Your general solution should be y^4 = −(4/3)e^(−3x) + D. If you're restricting yourself to the real numbers, this is equivalent to y = ±(−(4/3)e^(−3x) + D)^(1/4). But if you solve for y by taking the fourth root like you did, you're excluding negative values for y, so your solution isn't the general solution any more. —Bkell (talk) 05:20, 22 February 2008 (UTC)
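For reference, the full separation-of-variables computation being discussed (a sketch consistent with the answers above):

    \[
      \frac{dy}{dx} = \frac{e^{-3x}}{y^3}
      \;\Longrightarrow\;
      y^3\,dy = e^{-3x}\,dx
      \;\Longrightarrow\;
      \frac{y^4}{4} = -\frac{e^{-3x}}{3} + C,
    \]
    \[
      y^4 = D - \frac{4}{3}e^{-3x} \quad (D = 4C),
      \qquad
      y = \pm\left(D - \frac{4}{3}e^{-3x}\right)^{1/4},
    \]

which is real-valued exactly when D ≥ (4/3)e^(−3x), i.e. (for D > 0) when x ≥ −(1/3)·ln(3D/4).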
Is there anything that HAS TO be true but isn't?
Has mathematics ever proved anything, so that it has to be true, but in fact it isn't? For example, I'm thinking of something like proving that something can be done within some constraints, but in fact all the variations have been tried by computer and none of them are successful. So there HAS TO be a way, but there isn't. —Preceding unsigned comment added by 79.122.42.134 (talk) 15:32, 21 February 2008 (UTC)
- No, because if it could be proved true but all attempts to give examples fail, then you did it wrong. -mattbuck (Talk) 15:38, 21 February 2008 (UTC)
- (edit conflict) No. This state of affairs is impossible under any commonly accepted concept of a mathematical proof. If you modify your statement to "Has anything ever been accepted as proven by the mathematical community...", then I do not know. Taemyr (talk) 15:47, 21 February 2008 (UTC)
- There are of course things that seem to have been logically proven but aren't true. See paradox, and especially Zeno's paradox. DJ Clayworth (talk) 15:51, 21 February 2008 (UTC)
Do you guys mean to tell me that the universe is 100% consistent 100% of the time, and anything that follows is in fact thus, with no exceptions, ever? I find that a little hard to swallow... —Preceding unsigned comment added by 79.122.42.134 (talk) 16:15, 21 February 2008 (UTC)
- We're not talking about the universe, we're talking about mathematics. Mathematics has not been proven to be consistent (and in a certain sense can't be), but is generally believed to be so. But yes, in the universe as in mathematics, the result of a valid deduction from true premises is true (in the jargon, valid deductions are truth-preserving). Algebraist 16:23, 21 February 2008 (UTC)
- Indeed. Don't confuse reality with mathematics. And even in mathematics it is possible to have a valid proof that a statement is true and an equally valid proof that the same statement is false (simplest case is that you start with both P is true and P is false as axioms). Unfortunately, that result indicates that you are working in a system that is inconsistent, and so not very interesting or useful. Gandalf61 (talk) 16:28, 21 February 2008 (UTC)
- A number of logicians, such as Graham Priest, argue that inconsistent systems can be both interesting and useful. See paraconsistent logic for example. -- Dominus (talk) 00:29, 22 February 2008 (UTC)
- Two possibly relevant Einstein quotes [1]:
- One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts.
- As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
- --mglg(talk) 17:10, 21 February 2008 (UTC)
- I think it's possible for a mathematical theory to prove that something exists, but for it to be provably impossible (in a different mathematical theory) to find an example, without either theory being inconsistent. For example, you might be able to prove in one theory that there is an even number above two that isn't the sum of two primes, and prove that you could never find an example by using a hypercomputer and the original theory with allowance for infinite steps (in other words, your example, but with an infinite number of variations and an infinitely fast computer). — Daniel 01:38, 22 February 2008 (UTC)
- Indeed. For example, ZFC proves the existence of a well-ordering of the real numbers R, but (assuming ZFC is consistent) there is no formula that provably well-orders R. Algebraist 21:00, 22 February 2008 (UTC)
- There can't be any rules necessary for basic mathematical reasoning that don't work, i.e. that are disproven by counterexample, because then they simply wouldn't be rules! Interestingly enough, I've been reading about statements in mathematics which can be logically seen to be true but are not provable from the commonly accepted "axioms" or rules of mathematics (Roger Penrose - The Emperor's New Mind, Oxford University Press). See Gödel's incompleteness theorems! Psymun (talk) 22:56, 22 February 2008 (UTC)
Vandermonde determinant and polynomials
Hi, I'm having trouble doing this problem from a book on Galois theory. It's from the start of the book (Galois Theory - Ian Stewart) so it won't have much to do with real Galois theory, mostly just algebra related to polynomials. The problem is:
Two polynomials f, g over ℂ define the same function if and only if they have the same coefficients.
If a_1, ..., a_n are distinct complex numbers, then the Vandermonde determinant D is defined as the determinant of the n×n matrix whose (j, k) entry is a_j^(k−1); show that D is nonzero. Then there is a hint, which doesn't really help me:
Consider the a_j as independent indeterminates over ℂ. Then D is a polynomial in the a_j, of total degree 0 + 1 + ⋯ + (n−1) = n(n−1)/2. Moreover D vanishes whenever a_j = a_k for some j ≠ k, as it then has two identical rows. Therefore D is divisible by a_j − a_k, hence it is divisible by
∏_{j<k} (a_j − a_k);
now compare degrees.
That's as far as I got really; any help would be great, thanks. —Preceding unsigned comment added by 137.205.93.126 (talk) 15:50, 21 February 2008 (UTC)
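For concreteness, the determinant in question and where the hint leads (a sketch; the book's wording may differ):

    \[
      D \;=\; \det\begin{pmatrix}
        1 & a_1 & a_1^2 & \cdots & a_1^{n-1}\\
        1 & a_2 & a_2^2 & \cdots & a_2^{n-1}\\
        \vdots & \vdots & \vdots & & \vdots\\
        1 & a_n & a_n^2 & \cdots & a_n^{n-1}
      \end{pmatrix}.
    \]

Both D and ∏_{j<k}(a_k − a_j) have total degree n(n−1)/2, so the divisibility in the hint forces D = c · ∏_{j<k}(a_k − a_j) for some constant c; comparing the coefficient of the diagonal monomial a_2·a_3^2⋯a_n^(n−1) gives c = 1 (this product is the hint's, up to the sign (−1)^(n(n−1)/2)). In particular D ≠ 0 whenever the a_j are distinct.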
- Huh? I'm confused. The statement 'Two polynomials f,g define the same function if and only if they have the same coefficients (up to multiplication by a constant)' is simply false (multiplication by a constant changes the function, different polynomials can define the same function over finite fields), and I can't see any obvious true statement it could be a misprint of. Moreover it seems to have little to do with the Vandermonde matrix. The one standard problem about the Vandermonde determinant is to show that it is non-zero if the a_i are distinct, and that seems to be what the hint is leading you towards proving. Algebraist 16:18, 21 February 2008 (UTC)
- Sorry, I didn't explain it well. I seem to have added the 'up to multiplication' bit for no reason; now that I think about it, it is wrong, so I've taken it out of the question and rewritten it exactly as it appears in the book. In the book it gives a different proof using calculus, then goes on to say "For a purely algebraic proof see [this problem]". I had the following idea: consider the system Vc = 0 (where V is the Vandermonde matrix); it has a nonzero solution if and only if det(V) = D = 0, so showing the Vandermonde determinant is nonzero when the a_j are distinct would show that two polynomials agreeing at those points have the same coefficients. Is this in any way correct? Also, if you could give me a little help showing when the Vandermonde determinant is 0, that would be helpful too. Sorry if my wording is off, I'm having a lot of trouble understanding this. Thanks. —Preceding unsigned comment added by 137.205.93.126 (talk • contribs)
- Unfortunately, I was taught Vandermonde determinants by someone who went far too fast, so I don't fully understand them. I have one general tip, though: a matrix has zero determinant if the columns (or, equally, rows) aren't linearly independent - in particular, if one row/column is a scalar multiple of another (that's not necessary, but it is sufficient). --Tango (talk) 18:39, 21 February 2008 (UTC)
- Looked in my copy of Stewart's Galois Theory (2nd edition). The only problem I can find that mentions the Vandermonde determinant is Exercise 18.4 - but that is in the penultimate chapter of the book, not near the beginning, and it isn't worded like your question. Is your question from a different edition ? Gandalf61 (talk) 20:17, 21 February 2008 (UTC)
- Mine is the third edition. It appears on page 28, exercise 2.5*. Nothing about it is mentioned in the preface to the third edition though. —Preceding unsigned comment added by 137.205.93.126 (talk) 22:29, 21 February 2008 (UTC)
- The start of the question wants you to show that D is nonzero. Then [somehow] it wants you to use that to prove that "Two polynomials f, g over ℂ define the same function if and only if they have the same coefficients." —Preceding unsigned comment added by 137.205.93.126 (talk) 22:29, 21 February 2008 (UTC)
- Hmmm. The sentence "Two polynomials f, g over ℂ define the same function if and only if they have the same coefficients" seems to me to be a definition rather than the objective of the problem. And you can't show that D is unconditionally nonzero, because D is zero if the a_i are not distinct. If you are not clear about what the question is asking, then you start with a big handicap when trying to solve it. Gandalf61 (talk) 15:37, 22 February 2008 (UTC)
- I don't think that's a definition. Conceivably it might be possible to have two different polynomials (i.e., polynomials with different coefficients) that happen to give the same value when evaluated at any point. Stating that two polynomials define the same function if and only if they have the same coefficients is equivalent to stating that this never happens, which would require proof. —Bkell (talk) 17:18, 22 February 2008 (UTC)
- Well, if you have to prove this, you look at f-g, which is zero everywhere, and so must be the zero polynomial, so f and g must have identical coefficients. But this statement about identical polynomials has no obvious connection with the Vandermonde determinant (as Algebraist has already said above). I suspect it is part of the previous problem that has been copied in error. Stewart often gives exercises that are lists of statements which the student has to mark as "true" or "false"; this statement feels like part of one of these "true or false" exercises. Of course, if someone has the 3rd edition of Galois Theory to hand, they could check this, and maybe also clarify the whole problem and resolve the original poster's confusion. Gandalf61 (talk) 17:51, 22 February 2008 (UTC)
- The statement is clearly equivalent to the statement "if a polynomial is zero everywhere, it is the zero polynomial"; the intention may be to prove that (it's not always true, it's a property special to polynomials over fields of characteristic 0, so it's far from a trivial statement - it shouldn't be all that hard to prove, however). I do seem to remember a connection between the Vandermonde determinant and checking if functions are equal, but as I said before, those lectures didn't make a lot of sense. --Tango (talk) 19:50, 22 February 2008 (UTC)
- The characteristic is irrelevant, actually. The usual proof is that a polynomial of degree n has at most n roots; this works over any infinite integral domain. Algebraist 20:10, 22 February 2008 (UTC)
- Over a finite field, the number of solutions can be less than n simply because there are fewer than n elements in the field. Consider, for example, x^2 + x over the field with two elements, which vanishes at every point without being the zero polynomial. --Tango (talk) 20:58, 22 February 2008 (UTC)
- But we are not working in a finite field. We are told that f and g define the same function over C. Therefore f-g is zero everywhere in C. Therefore f-g is a polynomial with an infinite number of roots. Therefore by FTA f-g is the zero polynomial. Maybe the point of introducing the Vandermonde determinant is to prove this without using FTA. Gandalf61 (talk) 22:50, 22 February 2008 (UTC)
- I now have the book in my hand. The problem is to show that the Vandermonde determinant is nonzero if the a_i are distinct complex numbers. The hint is as given by the OP. The next question is to use the Vandermonde determinant to show that if a polynomial f(t) over C vanishes at all points of C then it is the zero polynomial. The hint 'Substitute t = 1, 2, 3 ... and solve the remaining system of linear equations of the coefficients' is given. Algebraist 20:18, 22 February 2008 (UTC)
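A quick numerical illustration of both steps (hypothetical code, purely for intuition): the Vandermonde determinant matches the product formula, and solving the Vandermonde system recovers a polynomial's coefficients from its values, which is the book hint's "substitute t = 1, 2, 3 ..." idea in action.

    # Check det(V) = prod_{j<k} (a_k - a_j) numerically, then recover a
    # polynomial's coefficients from its values by solving V c = values.
    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])        # distinct points
    V = np.vander(a, increasing=True)         # row i: 1, a_i, a_i^2, a_i^3
    prod = np.prod([a[k] - a[j]
                    for j in range(len(a))
                    for k in range(j + 1, len(a))])
    print(np.linalg.det(V), prod)             # both approximately 12.0

    c = np.array([5.0, 0.0, -2.0, 1.0])       # f(t) = 5 - 2t^2 + t^3
    values = V @ c                            # f evaluated at the four points
    print(np.linalg.solve(V, values))         # ~ [ 5.  0. -2.  1.]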
(Outdent) Gandalf61: You don't need the FToA to prove that a degree n polynomial has at most n roots. You can do that (over any integral domain) with the factor theorem and induction. The FT tells us that (counting multiplicity) the poly has exactly n roots. Algebraist 14:41, 23 February 2008 (UTC)
Fastest algorithm to find very smooth numbers close to some number
What is the fastest algorithm that can find smooth integers very near some specified integer? In other words, which algorithm has the lowest ratio of computation to (smoothness of a found number × smoothness of the distance of that smooth number from the specified integer)? 128.227.1.24 (talk) 20:30, 21 February 2008 (UTC) Mathguy
- How would you compare the following two approximations of 123456789:
- 2·3^9·5^5 = 123018750
- 2^7·3^9·7^2 = 123451776
- Which is better, and why? Do these two approximations have smoothnesses of, respectively, 5 and 7? If not, how do you measure the smoothness of a number? And what do you mean by "smoothness of distance"? —Preceding unsigned comment added by Lambiam (talk • contribs) 22:52, 22 February 2008 (UTC)
I would prefer the first one because its distance to N has a smaller largest prime factor. I was looking for a better/faster way to find integers such that (Number − Integer) × (Integer) is smooth. The fastest method I tried so far is sieving. A dynamic-programming subset-sum solution is much slower than sieving. —Preceding unsigned comment added by 24.250.129.216 (talk) 00:33, 23 February 2008 (UTC)
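For reference, a minimal sketch of the sieving approach the poster mentions (toy bounds; the function name and parameters are made up for illustration, and production sieves subtract log p at each position instead of doing exact division):

    # Sieve for smooth integers near N: for each prime power q = p^e up to
    # the interval maximum, divide residue[i] by p at every position
    # divisible by q. Entries reduced to 1 are smooth over the prime list.
    def smooth_near(N, width, primes):
        lo = N - width
        residue = list(range(lo, N + width + 1))   # residue[i] tracks lo + i
        for p in primes:
            q = p
            while q <= N + width:
                start = (-lo) % q                  # first index with q | (lo + i)
                for i in range(start, len(residue), q):
                    residue[i] //= p
                q *= p
        return [lo + i for i, r in enumerate(residue) if r == 1]

    hits = smooth_near(123456789, 100000, [2, 3, 5, 7, 11, 13])
    print(hits)  # includes 123451776 = 2^7 * 3^9 * 7^2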