Talk:Propagation of uncertainty
i or j
Is it necessary to use both i and j as indices for the summation in the general formulae? It appears to me that i only appears in the maths, whilst j only appears in the English. Is that true? If not, the reason for the change / use of both could be explained more clearly.
Thanks Roggg 09:35, 20 June 2006 (UTC)
Geometric mean
Example application: the geometric mean? Charles Matthews 16:33, 12 May 2004 (UTC)
- From the article (since May 2004!): "the relative error ... is simply the geometric mean of the two relative errors of the measured variables" -- It's not the geometric mean. If it were, the expression under the radical would be the product of the two relative errors, not the sum of their squares. I'll fix this section. --Spiffy sperry 21:47, 5 January 2006 (UTC)
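For anyone comparing the two expressions, a quick numerical sketch (purely illustrative; the values below are made up, not from the article) shows that the root-sum-of-squares of the relative errors is not their geometric mean:

    import math

    # Made-up measured values and absolute errors, for illustration only
    A, dA = 10.0, 0.3
    B, dB = 5.0, 0.2

    rA, rB = dA / A, dB / B                  # relative errors: 0.03 and 0.04

    quadrature = math.sqrt(rA**2 + rB**2)    # sum of squares under the radical: 0.05
    geometric_mean = math.sqrt(rA * rB)      # product under the radical: about 0.035

    print(quadrature, geometric_mean)        # clearly different numbers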
Delta?
In my experience, the lower-case delta is used for error, while the upper-case delta (the one currently used in the article) is used for the change in a variable. Is there a reason the upper-case delta is used in the article? --LostLeviathan 02:01, 20 Oct 2004 (UTC)
Missing Definition of Δxj
A link exists under the word "error" before the first expression of Δxj in the article, but this link doesn't take one to a definition of this expression. The article can be improved if this expression is properly defined. —the preceding comment is by 65.93.221.131 (talk • contribs) 4 October 2005: Please sign your posts!.
Formulas
I think that the formula given in this article should be credited to Kline–McClintock. —lindejos
First, I'd like to comment that this article looks like Klingonese to the average user, and it should be translated into English.
Anyway, I was looking at the formulas, and I saw this claim: for X = A ± B, (ΔX)² = (ΔA)² + (ΔB)², which I believe is false.
As I see it, if A has error ΔA then its value could be anywhere between A − ΔA and A + ΔA. It follows that the value of A ± B could be anywhere between (A ± B) − ΔA − ΔB and (A ± B) + ΔA + ΔB; in other words, ΔX = ΔA + ΔB.
If I am wrong, please explain why. Am I referring to a different kind of error, by any chance?
aditsu 21:41, 22 February 2006 (UTC)
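To illustrate aditsu's worst-case reading with a small sketch (the numbers are made up and not from the article): if all we know is that each true value lies inside its stated interval, the error bounds add linearly, whereas the article's formula adds them in quadrature:

    # Worst-case (interval) view of X = A + B, with made-up numbers
    A, dA = 10.0, 0.3
    B, dB = 5.0, 0.2

    X_lo = (A - dA) + (B - dB)                 # smallest possible X
    X_hi = (A + dA) + (B + dB)                 # largest possible X

    dX_worst_case = (X_hi - X_lo) / 2          # 0.5, i.e. dA + dB
    dX_quadrature = (dA**2 + dB**2) ** 0.5     # about 0.36, the article's formula

    print(dX_worst_case, dX_quadrature)

Both numbers are correct for their own model: ΔA + ΔB is the guaranteed bound, while √((ΔA)² + (ΔB)²) is the statistical (standard-deviation) figure discussed in the replies below.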
- As the document I added to External links ([1]) explains it, we look at the error as a vector whose components are the individual errors, one axis per variable, so the total error ΔX is the length of that vector (the distance from the point where there is no error).
- It still seems odd to me, because this gives the distance in the "variable plane" and not in the "function plane". But the equation is correct. —Yoshigev 22:14, 23 March 2006 (UTC)
- Now I found another explanation: we assume that the variables have Gaussian distributions. The sum of two Gaussian variables is again Gaussian, with a width equal to the quadrature (root sum of squares) of the original widths. (see [2]) —Yoshigev 22:27, 23 March 2006 (UTC)
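A small Monte Carlo sketch of that "sum of Gaussians" point (the widths below are arbitrary, chosen only for illustration):

    import random, statistics

    sigma_a, sigma_b = 0.3, 0.2    # made-up widths (standard deviations)
    n = 100_000

    sums = [random.gauss(0, sigma_a) + random.gauss(0, sigma_b) for _ in range(n)]

    print(statistics.stdev(sums))              # comes out near 0.36
    print((sigma_a**2 + sigma_b**2) ** 0.5)    # sqrt(0.09 + 0.04) ≈ 0.3606

The spread of the sum matches the quadrature of the individual spreads, as Sum of normally distributed random variables describes.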
Article title
The current title "Propagation of errors resulting from algebraic manipulations" does not seem very accurate to me. First, the errors don't result from the algebraic manipulations; they "propagate" through them. Second, I think that the article describes the propagation of uncertainties. And, third, the title is too long.
So I suggest moving this article to "Propagation of uncertainty". Please make comments... —Yoshigev 23:39, 23 March 2006 (UTC)
- Seems okay. A problem with the article is that the notation x + Δx is never explained. From your remarks, it seems to mean that the true value is normally distributed with mean x and standard deviation Δx. This is one popular error model, leading to the formula (Δ(x+y))² = (Δx)² + (Δy)².
- Another interpretation is that x + Δx means that the true value of x lies in the interval [x − Δx, x + Δx]. This interpretation leads to the formula Δ(x + y) = Δx + Δy, which aditsu mentions above.
- I think the article should make clear which model is used. Could you please confirm that you have the first one in mind? -- Jitse Niesen (talk) 00:58, 24 March 2006 (UTC)
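To make the two models concrete, here is a rough sketch of first-order propagation through an arbitrary function (my own illustration, not taken from the article; the helper name and numbers are invented), computing both the interval-style bound and the quadrature figure from numerical partial derivatives:

    def propagate(f, values, errors, h=1e-6):
        """Return (worst_case, quadrature) error estimates for f at 'values'.

        Uses first-order numerical partial derivatives; 'errors' play the
        role of the Δx_i discussed above.  Purely illustrative.
        """
        f0 = f(*values)
        partials = []
        for i, v in enumerate(values):
            bumped = list(values)
            bumped[i] = v + h
            partials.append((f(*bumped) - f0) / h)

        worst_case = sum(abs(p) * dv for p, dv in zip(partials, errors))
        quadrature = sum((p * dv) ** 2 for p, dv in zip(partials, errors)) ** 0.5
        return worst_case, quadrature

    # The sum X = A + B from the comments above, with made-up numbers
    print(propagate(lambda a, b: a + b, [10.0, 5.0], [0.3, 0.2]))
    # -> roughly (0.5, 0.36): ΔA + ΔB versus the quadrature sum

The first number is the interval model's guaranteed bound; the second is the Gaussian model's standard deviation.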
- Not exactly. I have in mind that, for the measured value x, the true value might be anywhere in [x − Δx, x + Δx], like your second interpretation, but that it is more likely to be near x. So we get a normal distribution of the probable true value around the measured value x. Then 2Δx is the width of that distribution (I'm not sure, but I think the width is defined by the standard deviation), and when we add two of them we use (Δx)² + (Δy)², as explained in Sum of normally distributed random variables.
- I will try to make it clearer in the article. —Yoshigev 17:45, 26 March 2006 (UTC)
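On the "width is the standard deviation" point, a quick check (again with arbitrary made-up numbers): if Δx is taken to be the standard deviation, only about 68% of the probable true values fall inside [x − Δx, x + Δx], which is one way to see why the interval reading and the Gaussian reading lead to different sums:

    import random

    x, dx = 10.0, 0.3    # made-up measurement and standard deviation
    n = 100_000

    inside = sum(abs(random.gauss(x, dx) - x) <= dx for _ in range(n))
    print(inside / n)    # roughly 0.68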
As you can see, I rewrote the header and renamed the article. —Yoshigev 17:44, 27 March 2006 (UTC)