Wikipedia:Reference desk archive/Mathematics/2006 July 10
From Wikipedia, the free encyclopedia
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions at one of the pages linked to above.
Mix CD in Linux
What's a good Linux program to make a mix CD from tracks that exist on other CDs I have, ideally with no loss of quality and playable in a normal CD player? I use Gentoo, so any common program is probably available. Thanks. -- Pakaran 00:58, 10 July 2006 (UTC)
Setting up OS X Server Managed Clients
Sorry in advance if I am posting in the wrong place. I saw the Computer Science bit and figured that this was the closest thing. I'm a lifelong Windows guy... so I'm sorry if I get anything blatantly wrong or otherwise testify to my stupidity in this question.
I just set up a server running Mac OS X Server 10.4. I'm trying to set up a group of ~15 computers running Mac OS X 10.4 (and ~5 with OS X 10.3) as "managed clients". They would derive their authentication, preferences, updates, etc. from the server. Individual logons could be created or disabled by the server, and privileges could be remotely managed.
Trouble is, I have absolutely no idea where I should start. I've set up user directories on the server, and am currently able to create user accounts and edit user privileges. I have no idea, however, what I should do with my individual clients. I've played around with networking, logon settings, everything I could think of. I've searched the internet and read several hundred pages of OS X server manuals.
So yeah, help me Wikipedia Reference Desk. Thanks! Alphachimp talk 03:04, 10 July 2006 (UTC)
Hardest Equation
What is the hardest calculus equation? And what is the answer? —The preceding unsigned comment was added by Yamabushi334 (talk • contribs).
- 42. Alphachimp talk 04:31, 10 July 2006 (UTC)
- As Alphachimp suggests, the question you pose is difficult to answer exactly. What counts as a hard problem is of course subjective, so there will never be a universal answer to your question. In one sense, an answer might be a difficult question on material usually taught in a first-year differential and integral calculus class. Alternatively, the natural extensions of a first-year calculus class lend themselves to difficult, and often unsolved, problems. A few candidates for the latter might be the N-body problem or the Navier-Stokes equations (in the area of Differential equations; they are sketched after this thread)--see Unsolved problems in mathematics for some fun reading. Problems which are commonly covered in the first year of calculus have different difficulties for different students. I'm not sure a simple list could even be compiled. --TeaDrinker 06:24, 10 July 2006 (UTC)
- Also, if we exit the realm of calculus, there is Matiyasevich's theorem, which shows that there is no algorithm that can decide whether an arbitrary Diophantine equation has a solution. 130.119.248.11 13:52, 10 July 2006 (UTC)
Also, considering that there are many unsolved problems in all branches of mathematics, one would assume that the "hardest" would be one of these. In that case, we cannot give you an answer, either. -- He Who Is[ Talk ] 14:33, 10 July 2006 (UTC)
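For concreteness, here are the incompressible Navier-Stokes equations that TeaDrinker mentions above, written in standard notation (a sketch only; the associated Millennium Prize problem concerns existence and smoothness of their solutions):

% Incompressible Navier-Stokes equations, standard form (editorial sketch).
\begin{align*}
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}
    && \text{(momentum balance)} \\
  \nabla\cdot\mathbf{u} &= 0
    && \text{(incompressibility)}
\end{align*}

Here u is the velocity field, p the pressure, ρ the density, ν the kinematic viscosity, and f an external body force.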
Gosh Numbers
Wikimathematicians, if you are interested, please help settle this AfD discussion about Gosh Numbers. Thanks! Bwithh 04:36, 10 July 2006 (UTC)
doubt...
hello sir,
i have a small doubt regarding matrices.
1. if ; what would the diagonal matrix diag(G,G,G) be?????
2. if i.e. it is a column matrix with entries 1 and 0, what would the diagonal matrix diag(H,H,H) be ????
thank you sir. can u please mail the answer to evani.subrahmanyam (at) gmail.com . thanks
- I can't be sure whether you want to know the answer unless you use at least six consecutive question marks. -lethe talk + 14:54, 10 July 2006 (UTC)
- I can answer with only 4 question marks, but I'm not sure what "diag" means here. Is it a command in a program or are you using it in the sense of the diagonal matrix article? If the latter, it might be a matrix consisting of the single element 1 for both G and H, if one abuses the notation a bit. As used in the article, it is the elements on the diagonal of a square matrix, so since G and H are not square matrices, it isn't quite defined. Notice that in the article, diag(a1,...,an) means an n-by-n matrix whose elements are 0 everywhere except the diagonal, which contains the elements a1, a2, ..., an, going down the diagonal. So diag(G,G,G) (or H,H,H) suggests a matrix whose diagonal elements are row or column matrices, respectively. Such a matrix is ill-defined. 128.197.81.223 22:09, 10 July 2006 (UTC)
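If "diag" was instead meant in the block-diagonal sense used by some numerical software (an assumption; the question does not say), then diag(H,H,H) is well defined even for non-square blocks: copies of H are placed along the diagonal and everything else is zero. A minimal sketch in Python with NumPy/SciPy, using the 2-by-1 column matrix H = (1, 0)^T described in the question:

# A sketch of the block-diagonal reading of diag(H, H, H): copies of H are
# placed along the diagonal of an otherwise zero matrix. This interpretation
# is an assumption; the original question may have meant something else.
import numpy as np
from scipy.linalg import block_diag

H = np.array([[1],
              [0]])          # the 2x1 column matrix with entries 1 and 0

D = block_diag(H, H, H)      # pads with zeros, giving a 6x3 matrix
print(D)
# [[1 0 0]
#  [0 0 0]
#  [0 1 0]
#  [0 0 0]
#  [0 0 1]
#  [0 0 0]]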
power law decay versus exponential decay
Are you reading the news?
7 July 2006
If you think you're reading the news, be warned that this story -- and any other on the web -- will be barely read by anyone 36 hours after it was first posted. That's the message from a team of statistical physicists who have analysed how people access information online. Albert-László Barabási of the University of Notre Dame in the US and colleagues in Hungary have calculated that the number of people who read news stories on the web decays with time in a power law, and not exponentially as commonly thought. Most news becomes old hat within a day and a half of being posted -- a finding that could help website designers or people trying to understand how information gets transferred in biological cells and social networks (Phys. Rev. E 73 066132).
Can someone tell me (mathematically) the difference between a power law decay and an exponential decay?
I know that exponential decay is x = x_0 e^{-kt} for some constant k > 0.
Ohanian 22:12, 10 July 2006 (UTC)
- Yes, exponential decay is when the time variable is an exponent, as you demonstrated above. A power law decay is when the time variable is raised to a power (i.e. y = t^.5). The difference is that exponential decay decreases at a fixed rate, regardless of the amount of the decaying substance (e.g. time) whereas power law decay decreases at a changing rate. Compare the derivatives of the two to see what I mean. 128.197.81.223 22:21, 10 July 2006 (UTC)
Are you saying that power law decay is x = t^k, where 0 < k < 1?
But that is not a decay, because the amount gets bigger and bigger over time.
Ohanian 22:26, 10 July 2006 (UTC)
- I am indeed. For more information see power law. 128.197.81.223 22:30, 10 July 2006 (UTC)
- Oops, the answers below are more correct; I made a typo and then misread your response. Sorry! 128.197.81.223 15:09, 11 July 2006 (UTC)
- I was always under the impression that exponential decay/growth took the form x = x_0 e^{kt}, where x is a quantity, x_0 is the initial quantity, and k is a constant. If k < 0, it is decay, and if k > 0, it is growth. However, I think the link to power law is correct, but for decay, k < 0. Then we would get something like x = x_0 t^{-k}, where in this case k > 0. But same thing, really. You will see that the one above will decay.
- I might be wrong... x42bn6 Talk 02:56, 11 July 2006 (UTC)
- Yes, x42bn6 has it right (unless I am myself mistaken). An exponential law would be (anything asymptotic to) x = x_0 e^{kt}, a power law would be x = At^k. In both cases, for k > 0 it's growth, k = 0 it's constant, and k < 0 is decay. Examples: The amount of radioactive isotopes in a sample decreases exponentially with time. The gravitational force between a planet and a projectile with positive total energy (assuming these are the only objects in the universe) decreases quadratically (power law with k = -2). Of course, for both growth and decay, an exponential law is much faster (after enough time) than a power law. -- Meni Rosenfeld (talk) 07:44, 11 July 2006 (UTC)
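A short numerical sketch of the distinction (an editorial illustration with arbitrary constants, not from the original discussion): for exponential decay the relative rate x'/x is the constant k, while for a power law it is k/t, and after enough time the exponential falls far below the power law:

# Compare exponential decay x = x0 * exp(k*t) with power-law decay x = A * t**k
# for k < 0. The constants are arbitrary choices for illustration only.
import math

k = -1.0
x0 = A = 1.0

print(" t   exponential   power-law")
for t in [1, 2, 4, 8, 16]:
    exp_decay = x0 * math.exp(k * t)   # relative rate x'/x = k (constant)
    pow_decay = A * t ** k             # relative rate x'/x = k/t (shrinks as t grows)
    print(f"{t:2d}   {exp_decay:11.3e}   {pow_decay:9.3e}")

# By t = 16 the exponential is about 1.1e-07 while the power law is still 6.3e-02.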
- Suppose k is a positive integer; then 1/k is a positive value less than 1, and T^{1/k} is the k-th root of T. As T increases, so do all its roots, so this would not be decay. However, if T is always positive then T^{-k} is (1/T)^k, which does decrease as T increases. Therefore T^{-k} is a power-law decay, just as e^{-T} is an exponential decay.
- The curious thing about the news report is that it suggests that a power-law decay drops off faster than an exponential decay. A proper comparison is more subtle, involving constants, initial rates, and asymptotic behavior. What can we really say? --KSmrqT 12:47, 11 July 2006 (UTC)
- Let's start at t = 1, to avoid problems with t^k at t = 0 when k < 0. Then suppose that f(t) := Ae^{αt} and g(t) := Bt^β, and that f(1) = g(1) = 1 and f(T) = g(T) = C with 0 < C < 1, T > 1. Then A = e^{−α} and B = 1, and T^β = C so β = log_T C = ln C / ln T. Finally, e^{α(T − 1)} = C so α = ln C / (T − 1). Now note that f(t) = C^{(t − 1)/(T − 1)} and g(t) = C^{ln t / ln T}; as required, these are equal for t = 1 and t = T because the exponents are equal; for 1 < t < T, however, g(t) < f(t) because ln t is concave down and t − 1 is linear (the denominators being constant in both cases), so the exponent of C < 1 is always larger for g. In this sense, powers decay more quickly: for any given amount of decay over a given time, the power law does more of the decay sooner. You can arrive at a similar conclusion by considering f'(t)/f(t) = α and g'(t)/g(t) = β/t, implying that more of the proportional change in g happens for small t, but it's not as rigorous (and is relatively obvious given the exponential nature of f and g and that x'/x = (ln x)'). --Tardis 19:22, 11 July 2006 (UTC)
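A quick numerical check of the argument above (an editorial sketch; the values of C and T are arbitrary): normalising both curves so that f(1) = g(1) = 1 and f(T) = g(T) = C, the power law g indeed stays below the exponential f for every t strictly between 1 and T:

# Verify numerically that, with f(t) = exp(alpha*(t-1)) and g(t) = t**beta
# normalised so f(1) = g(1) = 1 and f(T) = g(T) = C, the power law g lies
# below the exponential f for 1 < t < T. C and T are arbitrary choices.
import math

C, T = 0.1, 36.0                   # e.g. decay to 10% of the readership in 36 hours
alpha = math.log(C) / (T - 1)      # from exp(alpha*(T-1)) = C
beta = math.log(C) / math.log(T)   # from T**beta = C

def f(t): return math.exp(alpha * (t - 1))
def g(t): return t ** beta

assert abs(f(1) - 1) < 1e-9 and abs(g(1) - 1) < 1e-9
assert abs(f(T) - C) < 1e-9 and abs(g(T) - C) < 1e-9
assert all(g(t) < f(t) for t in (1.5, 2, 5, 10, 20, 30))
print(f"at t = 2: power law {g(2):.3f} < exponential {f(2):.3f}")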