Wikipedia:Reference desk/Archives/Mathematics/2008 April 24
From Wikipedia, the free encyclopedia
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 24
Likelihood computation question
I've been trying to work out a maximum likelihood estimate for a statistical model. I have the joint density for an observed variable X and an unobserved variable T, f(X,T | θ), as a function of parameter vector θ. Fortunately T is discrete, so to estimate θ I figured I'd sum over T and get the likelihood of θ as a function of X alone. That is

L(θ | X) = Σ_T f(X, T | θ).
(I can reasonably put a bound to make the sum finite with a known error.) However, when I do this sum I can no longer take the log, so my computer chokes on the small values. Before recoding this to calculate using higher floating-point precision (it is in R right now; I'd put it in Python), does anyone see either a flaw in my logic, or a slick computational trick? Thanks, --TeaDrinker (talk) 05:14, 24 April 2008 (UTC)
- A computer does not choke by taking the logarithm of a positive number that can be represented in the computer, so I suspect that you are trying to take the logarithm of a negative number. Check that your function f is always positive. Bo Jacoby (talk) 08:43, 24 April 2008 (UTC).
- That isn't the problem. What the poster wants to do is represent the likelihood l in log-space to avoid underflows if it is very small. However, the log of a sum is not the sum of logs, so the log trick can't be used to compute this expression term-by-term, and an underflow results. One way to avoid the underflow without going to log-space is to compute N * f(X,T) for a sufficiently large N, then divide out N at the end. 134.96.105.72 (talk) 11:19, 24 April 2008 (UTC)
- Thanks for the ideas. You're spot on 134., the likelihood is zero (due to round-off error) everywhere except in the immediate vicinity of the true parameter values. I'll try the inflation approach. --TeaDrinker (talk) 18:03, 24 April 2008 (UTC)
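The inflation approach works, but a standard alternative worth noting (an illustrative sketch, not from the thread) is the log-sum-exp trick: the per-term log-likelihoods log f(X,t | θ) are representable even when the likelihoods themselves underflow, and the log of their sum can be computed stably by factoring out the largest term before exponentiating.

```python
import math

def log_sum_exp(log_terms):
    """Stably compute log(sum(exp(a) for a in log_terms)).

    Factoring out the maximum m keeps the largest exponentiated term at
    exp(0) = 1, so the sum never underflows to zero.
    """
    m = max(log_terms)
    return m + math.log(sum(math.exp(a - m) for a in log_terms))

# Toy example: each term underflows on its own (math.exp(-800) == 0.0),
# so summing the raw likelihoods and then taking the log would fail,
# but log-sum-exp recovers the log-likelihood directly:
ll = log_sum_exp([-800.0, -801.0, -805.0])  # ≈ -799.68
```

This is the same idea as the inflation trick, but the scale factor is chosen automatically per evaluation, so it keeps working as the optimizer moves θ around.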
Is there a known probability distribution of T? Have you tried the following?

f_X(x | θ) = Σ_t f_{X,T}(x, t | θ), where the sum runs over the possible values t of T.
(You'll notice how I distinguish between lower-case t and capital T, and more importantly why I distinguish between them.) Michael Hardy (talk) 23:16, 24 April 2008 (UTC)
- Thanks! Indeed, you may have spotted that I was a bit loose in my notation. Formally what I was calculating is the likelihood

L(θ) = Σ_t f(X, t | θ),
- where f(X,T) is the "joint density" (of sorts; mixed continuous-discrete systems should perhaps not be described as densities) of X and T. I arrived at my joint distribution by knowing the pdf of X given T = t, namely f_{X|T}(x | t, θ), and the marginal pmf for T, denoted f_T(t | θ). Thus to calculate the marginal for X (I think this is what you're suggesting), I compute

f_X(x | θ) = Σ_t f_{X|T}(x | t, θ) f_T(t | θ).
- I was a bit sloppy and wrote f_{X|T}(x) f_T(t) = f(X,T), the joint distribution. Unless I am overlooking something here. Is this what you're getting at? Thanks! --TeaDrinker (talk) 06:23, 25 April 2008 (UTC)
What's the difference of graphing
I know that a quadratic equation can be solved by graphing, by using the quadratic formula, by completing the square, or by factoring, but what are the pros and cons of each, and when might you use each method appropriately? I'm a bit confused. —Preceding unsigned comment added by 71.185.104.176 (talk) 10:37, 24 April 2008 (UTC)
- Not all quadratic equations are factorable. However, you can always solve via the quadratic formula or completing the square. The answer will not always be an integer; it may be irrational or even nonreal. Hadiz (talk) 10:48, 24 April 2008 (UTC)
- Well, they are factorable, but the factors may not have integer/rational coefficients, which makes them much harder to find. I think the only real way of knowing if a quadratic will factor nicely is to try - with practice you can often tell at a glance, though. The quadratic formula and completing the square are the same method - the quadratic formula just jumps straight to the final result, so which of those you use is a matter of personal preference (if you have difficulty remembering the quadratic formula, just complete the square, for example). Graphing only gives you approximate roots, so isn't generally worth it (it's so easy to find the exact roots for a quadratic, why bother with approximate ones?). --Tango (talk) 13:06, 24 April 2008 (UTC)
- The discriminant of the quadratic, b² − 4ac, tells you a lot about its roots:
- If the discriminant is negative, there are two distinct roots, and they are complex.
- If the discriminant is zero, there is a single repeated root, which must be real.
- If the discriminant is positive, there are two distinct roots, and they are real.
- Also, if the coefficients are integers, then the discriminant tells you whether the roots are rational - if the discriminant is 0 or a positive square number then the root(s) are rational. Gandalf61 (talk) 13:55, 24 April 2008 (UTC)
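The classification above is mechanical enough to sketch in code. A minimal illustration (a hypothetical helper, not from the thread), using the real square root for a positive discriminant and building complex conjugate roots otherwise:

```python
import math

def quadratic_roots(a, b, c):
    """Classify and solve a*x^2 + b*x + c = 0 (a != 0) via the discriminant."""
    disc = b * b - 4 * a * c
    if disc > 0:
        # Positive discriminant: two distinct real roots.
        r = math.sqrt(disc)
        return "two real", ((-b + r) / (2 * a), (-b - r) / (2 * a))
    elif disc == 0:
        # Zero discriminant: a single repeated real root.
        return "repeated real", (-b / (2 * a),)
    else:
        # Negative discriminant: two distinct complex-conjugate roots.
        r = math.sqrt(-disc)
        return "two complex", (complex(-b, r) / (2 * a), complex(-b, -r) / (2 * a))

# x^2 - 5x + 6 = (x - 2)(x - 3): discriminant 1, a perfect square,
# so the roots are rational, matching the remark above.
kind, roots = quadratic_roots(1, -5, 6)  # → "two real", (3.0, 2.0)
```

Note the rational-roots test in the thread applies only to integer coefficients: check whether the discriminant is zero or a positive perfect square.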
Properties of chaotic map?
I've been reading about various chaotic maps, and I thought of one (inspired partially by the Circle map and the Dyadic transformation map). Its equation is , and I've been trying to figure out the rotation number for it (since I don't have access to a computer which can simulate it for me). The only thing I've figured out is for B = 0 and k an integer greater than or equal to 1, in which case , where in base k + 1. I suspect similar results hold for k rational. But I'm wondering what happens when B is not 0 or when k is irrational. Does anyone have any insights? --Zemylat 14:54, 24 April 2008 (UTC)
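The rotation number of any circle map can be estimated numerically by iterating its lift and averaging the total displacement. The poster's own equation is not reproduced in this archive, so the sketch below uses the standard circle map, θ_{n+1} = θ_n + Ω − (K/2π)·sin(2πθ_n), purely as a stand-in; substitute the lift of the map in question:

```python
import math

def rotation_number(omega, k, theta0=0.1, n_iter=100000):
    """Estimate the rotation number of the standard circle map by iterating
    its lift F(theta) = theta + omega - (k / (2*pi)) * sin(2*pi*theta)
    (no mod 1) and averaging the displacement per step."""
    theta = theta0
    for _ in range(n_iter):
        theta = theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - theta0) / n_iter

# Sanity check: with k = 0 the map is a rigid rotation by omega,
# so the rotation number is omega itself.
```

For nonzero coupling the same estimator exhibits mode locking (the Arnold tongues of the circle map article), which is one way to probe the B ≠ 0 and irrational-k cases numerically.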
.dvi reading
A lot of mathematicians (e.g., this one) only offer papers in .dvi format, with no .tex, .ps, or .pdf versions. I used the BaKoMa TeX editor to open these files, but its trial period expired. Any suggestions for a good free program? Thanks in advance. Mdob | Talk 20:04, 24 April 2008 (UTC)
- Try MiKTeX. --Tango (talk) 20:06, 24 April 2008 (UTC)
- That is to say, try yap, the DVI viewer included with MiKTeX. Are you using Windows? Tesseran (talk) 07:10, 25 April 2008 (UTC)
- Yes (see below).Mdob | Talk 21:47, 25 April 2008 (UTC)
Measure theory question
My question is merely whether this lemma I'm using is a standard result, found in books or elsewhere in print.
Let (Ω, F, n) be a measure space and suppose n(Ω) < ∞. Let X : Ω → (0, ∞) be a measurable function. For A ∈ F let

m(A) = ∫_A X dn.
Then X is the Radon-Nikodym derivative dm/dn. Now for x ≥ 0, let

N(x) = n({ω ∈ Ω : X(ω) > x})

and

M(x) = m({ω ∈ Ω : X(ω) > x}).
Then a question is: when does dM/dN = x?
Notice that N and M are non-increasing functions of x, but may fail to be one-to-one. But M is necessarily a strictly increasing function of N (at least if M is always finite); it cannot fail to be one-to-one. Since M is therefore a function of N, one can ask whether it is differentiable and what its derivative is.
The answer would appear to be that if there is any interval (a, b) on which N (and therefore also M) remains constant, then dM/dN does not exist at that point, since the derivative from the left could not be less than b and that from the right could not be more than a; but at all other values of N we would have dM/dN = x. The proof would be that if N is a strictly decreasing function of x, then x is a continuous function of N, and the difference quotient that approaches dM/dN would be squeezed between x and x + Δx, and continuity would imply that Δx → 0.
So dm/dn = X is easy, and dM/dN = x is a bit more work.
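A numerical sanity check of the lemma on a finite discrete measure (a hypothetical setup, assuming N(x) = n({X > x}) and M(x) = m({X > x}), consistent with the tail functions the lemma describes): N is then a step function, constant between the values taken by X (where dM/dN indeed fails to exist), and at each jump the increment ratio ΔM/ΔN recovers the threshold value x, the discrete analogue of dM/dN = x.

```python
# Finite measure space: four atoms omega_i with weights n({omega_i})
# and positive values X(omega_i).
weights = [0.5, 1.0, 2.0, 0.25]
values = [3.0, 1.0, 2.0, 5.0]

def N(x):
    """N(x) = n({X > x}): total weight of atoms whose value exceeds x."""
    return sum(w for w, v in zip(weights, values) if v > x)

def M(x):
    """M(x) = m({X > x}) = integral of X over {X > x} with respect to n."""
    return sum(w * v for w, v in zip(weights, values) if v > x)

# At a jump located at x = v (a value taken by X), N drops by the weight
# n({X = v}) and M drops by v * n({X = v}), so the ratio of increments is v.
eps = 1e-9
for v in sorted(set(values)):
    dM = M(v - eps) - M(v)
    dN = N(v - eps) - N(v)
    assert abs(dM / dN - v) < 1e-6
```

The loop passing for every jump point is exactly the dM/dN = x claim restricted to points where N is strictly decreasing.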
My question is: is this a standard result found in textbooks or other published sources (that should be mentioned if one relies on this in published work)? Michael Hardy (talk) 23:53, 24 April 2008 (UTC)