Wikipedia:Reference desk/Archives/Mathematics/2007 September 12

From Wikipedia, the free encyclopedia

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.



September 12

Calculating uncertainty in linear regressions with uncertain data

Hello, math desk folks. The other day I had a lab assignment to calculate the relationship between the voltage a pressure-measuring instrument was returning and the actual value of the pressure (measured using a separate manometer), for purposes of calibration. That called for simple linear regression, which worked very well. However, I was also tasked with finding the uncertainty in the measurement, and since I didn't have the uncertainty of the voltage, I decided to just use the standard error from the linear regression. That presented me with a problem: I knew that the pressure I was using to calibrate my digital instrument had a known uncertainty, but I didn't know how to account for that uncertainty in the standard error.

Eventually, the instructor sent everyone the uncertainty in the digital reader, so that solved my problem, but I'm still curious about it. If we have a set of data Y, and we know that each point in the data set has an uncertainty of ΔY, how do I account for ΔY in the standard error? As far as I can see, linear regression assumes that we know the data points with certainty, which kind of makes me wonder how it works with uncertain data (which is what we have in the real world...)

Also, this probably isn't possible, but can one estimate ΔX (the uncertainty in the corresponding abscissa values) from the standard error, or from some other statistical method? Titoxd(?!? - cool stuff) 20:22, 12 September 2007 (UTC)
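(Note for later readers: one standard way to fold a known per-point uncertainty ΔY into a straight-line fit is weighted least squares, with each point weighted by 1/ΔY². The sketch below is a minimal illustration in Python/NumPy with invented calibration numbers, not the data from the lab described above; the quoted parameter errors then come from the known ΔY rather than from the residual scatter alone.)

 import numpy as np

 # Invented calibration data: instrument voltage (V) and reference pressure (P),
 # with a known uncertainty dP on every pressure value.
 V  = np.array([0.10, 0.52, 1.01, 1.48, 2.03, 2.51])
 P  = np.array([101.0, 151.2, 200.8, 249.9, 301.1, 349.7])
 dP = np.full_like(P, 1.5)                      # the known ΔY for each point

 # Weighted least squares for P ≈ a*V + b, with weights w = 1/ΔY².
 w = 1.0 / dP**2
 A = np.column_stack([V, np.ones_like(V)])      # design matrix [V, 1]
 AtWA = A.T @ (w[:, None] * A)
 AtWy = A.T @ (w * P)
 a, b = np.linalg.solve(AtWA, AtWy)

 # Parameter covariance from the known uncertainties, not from residual scatter.
 cov = np.linalg.inv(AtWA)
 da, db = np.sqrt(np.diag(cov))

 print(f"slope     a = {a:.3f} +/- {da:.3f}")
 print(f"intercept b = {b:.3f} +/- {db:.3f}")

(When every ΔY is the same, the fitted line is identical to the ordinary least-squares line; the difference shows up only in the quoted uncertainties of the slope and intercept.)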

I'm not sure if this answers your question. The total error in the measurement data would be the combined effect of two independent errors: the errors in the reference source, and the errors from the measuring process itself. If you have two independent random variables X and Y, both having a probability distribution with mean 0 and variances of, respectively, σ_X^2 and σ_Y^2, their sum X+Y also has mean 0, and has variance σ_{X+Y}^2 = σ_X^2 + σ_Y^2. (Assuming there is no systematic error, that is, the errors have a mean of 0, the "standard" error is the square root of the variance.) So if you know the two errors, and the error sources are independent and have no systematic deviation, you can take the square root of the sum of the squares as the combined standard error, just like in the formula for the hypotenuse in the Pythagorean theorem.
As to the last part, given a collection of combined measurement errors, it is not possible to estimate the contributions of the individual sources from the data. Unless you know something special – which you typically don't – it is not possible to reconstruct the probability distributions (or even compute estimates for meaningful parameters of the distributions such as mean and variance) of two random variables, given the distribution of their sum.  --Lambiam 23:14, 12 September 2007 (UTC)
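(A tiny numerical illustration of the quadrature rule just described, with invented numbers:)

 import math

 sigma_reference = 0.8   # error of the reference source (invented value)
 sigma_process   = 0.6   # error of the measuring process itself (invented value)

 # Independent, zero-mean errors combine in quadrature:
 sigma_total = math.sqrt(sigma_reference**2 + sigma_process**2)
 print(sigma_total)      # 1.0 for this 0.8/0.6 "Pythagorean" pair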
I am aware of the first part; we used that fact (the famous "square root of the sum of the squares" of the uncertainties) to calculate the uncertainty in the pressure of the manometer. (In case you are interested, we used it when we applied Pascal's law, as we had to account for the uncertainty in the manometer height measurement Δh and the uncertainty in the density ρ; the latter had to be derived the same way from known uncertainties in the air pressure and temperature, via the Ideal gas law.) I was wondering if (or how) the uncertainties affected the regression process itself.
As for the second question, I was kind of afraid of that, but it didn't hurt to ask... :) Titoxd(?!? - cool stuff) 04:18, 13 September 2007 (UTC)
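(For completeness, the same quadrature rule propagated through the Pascal's-law step P = ρgh mentioned above looks like the sketch below; the partial derivatives are g·h and ρ·g, and all numbers are invented:)

 import math

 g = 9.81                    # m/s^2
 rho, drho = 1000.0, 2.0     # density and its uncertainty (invented)
 h,   dh   = 0.250, 0.001    # manometer height and its uncertainty (invented)

 P  = rho * g * h
 # Propagate the independent uncertainties through P = rho*g*h:
 dP = math.sqrt((g * h * drho)**2 + (rho * g * dh)**2)
 print(f"P = {P:.1f} +/- {dP:.1f} Pa")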

NOT A NUMBER

NaN#NaNs_in_function_definitions Why is one raised to NaN considered one? Take for instance the definition of e, which, if the limits of the base and the exponent are taken separately, becomes one raised to infinity, an indeterminate form.--Mostargue 20:28, 12 September 2007 (UTC)

NaN is not infinity. --Spoon! 21:54, 12 September 2007 (UTC)
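(A quick way to see the distinction, here in Python, though any IEEE 754 language behaves the same way:)

 import math

 nan = float('nan')
 inf = float('inf')

 print(math.isnan(nan), math.isinf(nan))   # True False
 print(math.isnan(inf), math.isinf(inf))   # False True
 print(nan == inf)                         # False: NaN compares unequal to everything
 print(nan == nan)                         # False, even to itself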

Also:


<math>e^{x} = 1 + {x \over 1!} + {x^{2} \over 2!} + {x^{3} \over 3!} + \cdots</math>

Why not start the pattern earlier and write it as


<math>e^{x} = {x^{0} \over 0!} + {x^{1} \over 1!} + {x^{2} \over 2!} + {x^{3} \over 3!} + \cdots</math>

Hmm? --Mostargue 20:40, 12 September 2007 (UTC)

Sure, many people do write it that way. See for example exponential function#Formal definition: <math>e^x = \sum_{n = 0}^{\infty} {x^n \over n!}</math>. I guess there might be some trouble with e^0, because not everyone agrees that 0^0 = 1. --Spoon! 21:54, 12 September 2007 (UTC)
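(To see that starting the sum at n = 0 gives the same function, here is a short partial-sum check in Python, which happens to evaluate 0.0**0 as 1.0 and so matches the 0^0 = 1 convention; the cutoff of 20 terms is arbitrary:)

 import math

 def exp_series(x, terms=20):
     # Partial sum of x^n/n! starting at n = 0; Python takes 0.0**0 to be 1.0.
     return sum(x**n / math.factorial(n) for n in range(terms))

 for x in (0.0, 1.0, -2.5):
     print(x, exp_series(x), math.exp(x))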
This question might be more appropriate at Wikipedia:Reference desk/Computing, since it has more to do with computing pragmatics than with mathematics. I suspect nothing in the IEEE 754 standard dictates the value of 1 raised to a power which is a NaN, but in the arithmetic operations that are defined, the result is always a NaN whenever any input is a NaN. Remember, NaN means "not a number"; it may help to think of it as an elephant or a colorless green idea.
The pow function is part of the C99 standard, and the best definition has been debated, with contention over
pow(1, ±inf) = 1
pow(+1, x) = 1 for any x, even a NaN
pow(x, ±0) = 1 for any x, even a NaN
pow(-inf, y) = +0 for y < 0 and not an odd integer
pow(-inf, y) = -inf for y an odd integer > 0
pow(-inf, y) = +inf for y > 0 and not an odd integer
But it all comes down to pragmatics; we hope to settle on the most useful definition. --KSmrqT 23:05, 12 September 2007 (UTC)
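(Python's math.pow essentially wraps the platform's C pow, and its documentation guarantees the first two special cases below, so it is an easy way to poke at this; ordinary arithmetic, by contrast, propagates the NaN:)

 import math

 nan = float('nan')
 inf = float('inf')

 print(math.pow(1.0, nan))    # 1.0 -- 1 raised to anything, even a NaN
 print(math.pow(nan, 0.0))    # 1.0 -- anything raised to 0, even a NaN
 print(math.pow(1.0, inf))    # 1.0 -- pow(1, +-inf) = 1
 print(math.pow(nan, 2.0))    # nan -- otherwise a NaN input gives a NaN result
 print(nan + 1.0, nan * 0.0)  # nan nan -- ordinary arithmetic propagates NaN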

Would you please include a link to exponentiation, where there is a lengthy debate on the value of 0^0? Bo Jacoby 18:03, 13 September 2007 (UTC).

Can someone please do this for me.

What would the sum 2^2 + 2^3 + 2^4 + ... + 2^249 + 2^250 equal when worked out? -Icewedge 23:55, 12 September 2007 (UTC)

This is not that difficult to solve... but it has a whole lot of digits, so there's no way I'm typing the whole thing out. I also hope whatever calculator you're using can hold 75 digits' worth of number...
But this problem is very much like 2^0 + 2^1 + 2^2 + ... + 2^n, which has a very simple closed form. I'm not gonna tell you, because of the homework thing at the top of the page, but it's pretty simple to work out, if you don't know it already. Gscshoyru 00:03, 13 September 2007 (UTC)
This is not homework, but it does not matter because 75 digits is way too big; the ballpark method I used before gave me an answer of under 10,000,000. I'm back to the drawing board. -Icewedge 00:13, 13 September 2007 (UTC)
Um, Icewedge, just Google the last term to get into the ballpark: 2^250 = 1.80925139 × 10^75 (if you're looking for ballpark, not precision). To use Google type 2^250. - hydnjo talk 00:31, 13 September 2007 (UTC)
Well... ok. The closed form for what I said above is just 2^(n+1) - 1... so since you are missing the first two terms (which add up to 3), it'd be 2^(n+1) - 4, or 2^251 - 4, to be specific. Just so you know. What are you working on, then? Gscshoyru 00:16, 13 September 2007 (UTC)
I am trying to calculate about how much faster genetic mutations would have had to happen to make Baraminology even remotely possible. -Icewedge 00:24, 13 September 2007 (UTC)
Why hasn't anyone mentioned geometric series yet? —Bromskloss 12:11, 13 September 2007 (UTC)

Try the J (programming language). The program is (2x^251)-4 and the result is 3618502788666131106986593281521497120414687020801267626233049500247285301244 . (The 'x' indicates extended precision). Bo Jacoby 17:50, 13 September 2007 (UTC).
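(For anyone checking the arithmetic, Python integers are also arbitrary precision, so the brute-force sum and the closed form can be compared directly:)

 # Sum of 2^2 + 2^3 + ... + 2^250, two ways.
 brute  = sum(2**k for k in range(2, 251))
 closed = 2**251 - 4          # sum_{k=0}^{250} 2^k = 2^251 - 1, minus the skipped 2^0 + 2^1 = 3

 assert brute == closed
 print(closed)                # the 76-digit number given above
 print(len(str(closed)))      # 76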