Talk:Probability density function
From Wikipedia, the free encyclopedia
[edit] "Intuitive"
Intuitively, if a probability distribution has density f(x), then the infinitesimal interval [x, x + dx] has probability f(x) dx.
Arrgh, now you tell me this has something to do with
∫_Ω f(x) dx ?
- Certainly. An integral is intuitively thought of as the sum of infinitely many infinitely small quantities ƒ(x) dx, each equal to the area below the graph of ƒ above the interval from x to x + dx, as x runs through the set of all numbers in the set A. That applies to integrals generally, not just those in probability theory. Michael Hardy (talk) 00:50, 12 June 2008 (UTC)
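The "sum of infinitely many infinitely small quantities ƒ(x) dx" picture in the reply above can be made concrete numerically. The sketch below is a minimal illustration (the standard normal density and the midpoint rule are my choices, not anything fixed by the discussion): summing f(x) dx over many small subintervals approximates the probability of the interval.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Standard normal density, used here only as a concrete example of f(x)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def riemann_probability(pdf, a, b, n=100_000):
    # Add up many small terms f(x) * dx -- the intuition behind the integral
    dx = (b - a) / n
    return sum(pdf(a + (i + 0.5) * dx) * dx for i in range(n))

# P(-1 < X < 1) for a standard normal: the familiar "one sigma" probability
print(riemann_probability(normal_pdf, -1.0, 1.0))  # close to 0.6827
```

As n grows, the sum converges to the integral; nothing here is specific to probability, exactly as the reply says.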
[edit] pdf
It would be nice to have a picture of a PDF, of, say, the normal distribution. -iwakura
Hi all:
Can someone help me in computing
∫ f(x)^((1/r)+1) dx, where r ≥ 1,
in terms of ∫ f(x) dx?
Here, f(x) is an arbitrary probability density function.
Partho
What is a multimodal pdf? The article should touch on this topic. - rodrigob.
[edit] Simple English
In simple English: the probability density function is any function f(x) that describes the probability density in terms of the input variable x, with two further conditions:
- f(x) is greater than or equal to zero for all values of x
- The total area under the graph is 1. Refer to equation below.
The actual probability can then be calculated by taking the integral of the function f(x) over the integration interval of the input variable x.
For example: the variable x being within the interval 4.3 < x < 7.8 would have the actual probability of
- P(4.3 < x < 7.8) = ∫_{4.3}^{7.8} f(x) dx.
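A small numerical sketch of that example follows. The exponential density and its rate are hypothetical choices on my part (the discussion leaves f unspecified); any valid density works the same way, and for this one the integral also has a closed form to check against.

```python
import math

RATE = 0.2  # hypothetical rate parameter; the thread leaves f(x) unspecified

def exp_pdf(x):
    # Exponential density, standing in for "any function f(x)" above
    return RATE * math.exp(-RATE * x) if x >= 0 else 0.0

def probability(a, b, n=200_000):
    # P(a < X < b) = integral of f(x) dx from a to b (midpoint rule)
    dx = (b - a) / n
    return sum(exp_pdf(a + (i + 0.5) * dx) * dx for i in range(n))

numeric = probability(4.3, 7.8)
exact = math.exp(-RATE * 4.3) - math.exp(-RATE * 7.8)  # closed form for this density
print(numeric, exact)
```

The numeric integral and the closed form agree to many decimal places, which is exactly the "area under the graph over the interval" reading of probability.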
[edit] what is a probability density?
Given that it is a common mistake to interpret the y-axis of the probability density function as representing probability (as is often done with the normal curve), it would be helpful to have a common-sense description of what probability *density* is. It's clearly related to actual probability, but does it have a "real-world" correlate? How should "density" be interpreted? --anon
Answer: If "probability" is equivalent to "distance travelled" then "probability density" is equivalent to "speed". So the "probability density function of input variable x " is equivalent to "speed function of input variable t" where t stands for time. -ohanian
- One more answer: If you have a random variable, then it can take many values. But, most of the time those values are not equally likely, some of them occur more often than others. So, if a value of this variable is more likely, the density of that variable is higher at that value. If a certain value does not occur at all, the density at that value is zero. This explanation is not at all rigorous, but it might drive the point home. Oleg Alexandrov 15:03, 2 Jun 2005 (UTC)
A simple example would help: Failure probability vs. failure rate. Any device fails after some time (failure probability==1, which is the integral from 0 to infinity), but the failure rate is high in the beginning (infant mortality) and late (as the device wears out), but low in the middle during its useful lifetime, forming the famous bathtub curve. Ralf-Peter 20:44, 20 March 2006 (UTC)
The only description that made any sense to me was the paragraph beginning "In the field of statistical physics". I gather that somehow while the y-axis values do not represent probabilities of corresponding x-axis values, the y-axis values do represent probabilities of the interval from the corresponding x-axis value to that value plus an infinitely small amount. While this statement is easy to understand in the reading of it, I'm still puzzled about how an infinitely small amount can make any difference if we limit ourselves to the real number system. 207.189.230.42 05:38, 12 October 2007 (UTC)
[edit] Probability Density
I don't have time to correct it now, but the page Probability Density links to Probability Amplitude, which is about quantum mechanics. I think that should be a disambiguation page. --anon
- I redirected Probability Density to Probability density, which I made into a disambig. Oleg Alexandrov 18:04, 18 August 2005 (UTC)
[edit] Self Inconsistency
The article begins by saying that only when the Distribution Function is Absolutely Continuous, the random variable will have a Probability Density Function, but then it leaps into the PDF of Discrete Distributions, using the Dirac Delta "Function"! I don't think this is consistent :-) Albmont 18:14, 9 November 2006 (UTC)
[edit] marginal density function
if we know the joint density function f(x,y), how to get the marginal density functions fX(x) & fY(y)? Jackzhp 01:37, 1 December 2006 (UTC)
- I think you integrate out y to get fX and integrate out x to get fY. But I am not sure. Ask Michael Hardy, he should know. Oleg Alexandrov (talk) 16:17, 1 December 2006 (UTC)
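A numerical sketch of marginalization, with a hypothetical joint density f(x, y) = x + y on the unit square (which integrates to 1, so it is a valid density): the marginal of X comes from integrating the other variable, y, out.

```python
def joint(x, y):
    # Hypothetical joint density on the unit square: f(x, y) = x + y
    return x + y if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def marginal_x(x, n=20_000):
    # fX(x) = integral over y of f(x, y) dy  -- integrate y out to marginalize it away
    dy = 1.0 / n
    return sum(joint(x, (j + 0.5) * dy) * dy for j in range(n))

# For this joint density the exact marginal is fX(x) = x + 1/2
print(marginal_x(0.3))  # close to 0.8
```

Swapping the roles (integrating over x at fixed y) gives fY(y) the same way.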
[edit] addition
We know fX(x) for X, and fY(y) for Y; X & Y are independent, Z=X+Y, then what is the density function for Z? And Z2=kX+Y. Thanks. Jackzhp 18:00, 1 December 2006 (UTC)
- It's the convolution of the two densities. Michael Hardy 19:44, 5 December 2006 (UTC)
I've added a short section on this. Michael Hardy 19:59, 5 December 2006 (UTC)
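The convolution answer can be checked numerically. This sketch discretizes two densities on a grid and convolves them; the choice of uniform densities on [0, 1] is mine (the classic case, whose sum has a triangular density), and the brute-force double loop is for clarity, not efficiency.

```python
def convolve_densities(f, g, lo, hi, n=400):
    # f_Z(z) = integral of f_X(x) * f_Y(z - x) dx, for Z = X + Y independent.
    # Discretize both densities on a common grid and convolve numerically.
    dx = (hi - lo) / n
    fx = [f(lo + i * dx) for i in range(n)]
    gy = [g(lo + i * dx) for i in range(n)]
    fz = [0.0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            fz[i + j] += fx[i] * gy[j] * dx  # grid point i+j corresponds to z = 2*lo + (i+j)*dx
    return fz, dx

def uniform(x):
    # Uniform density on [0, 1]
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

fz, dx = convolve_densities(uniform, uniform, 0.0, 2.0)
# The sum of two independent uniforms has the triangular density peaking at z = 1, value 1
peak = fz[round(1.0 / dx)]
print(peak)  # close to 1.0
```

For Z2 = kX + Y one first transforms the density of X (fX(x/k)/|k| for the scaled variable) and then convolves the same way.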
[edit] Standard deviation
It would be helpful to add the standard deviation formula for completeness Gp4rts 19:02, 5 December 2006 (UTC)
- I've added this at the bottom (the variance, not the SD, but close). Michael Hardy 19:44, 5 December 2006 (UTC)
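The variance formula mentioned in the reply, Var(X) = E[X²] - (E[X])², can be checked numerically against a density with known moments. The exponential density and its rate below are hypothetical choices for illustration (an exponential with rate λ has mean 1/λ and variance 1/λ²).

```python
import math

LAMBDA = 2.0  # hypothetical rate; this exponential has mean 1/2 and variance 1/4

def pdf(x):
    return LAMBDA * math.exp(-LAMBDA * x)

def moment(k, upper=20.0, n=200_000):
    # E[X^k] = integral of x^k f(x) dx (midpoint rule, tail truncated at `upper`)
    dx = upper / n
    return sum(((i + 0.5) * dx) ** k * pdf((i + 0.5) * dx) * dx for i in range(n))

mean = moment(1)
variance = moment(2) - mean ** 2  # Var(X) = E[X^2] - (E[X])^2
print(mean, variance)  # near 0.5 and 0.25
```

The standard deviation requested above is then just the square root of the variance.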
[edit] Reals?
Is it true that all continuous random variables have to take on real values? What about a random variable that represents a colour. Can it not have a probability density function over R3? Perhaps we could alter the formal definition to talk about ranges instead of intervals? MisterSheik 21:34, 27 February 2007 (UTC)
- That is covered in the section "Probability function associated to multiple variables". -Icek 05:47, 17 April 2007 (UTC)
[edit] Generality
I do not think the "formal" definition of a PDF given in the beginning of the article is the widely accepted general definition.
Given elements x in an abstract set Ω equipped with a measure μ (usually but not necessarily the Lebesgue measure), one can define a PDF f(x) over Ω so that
P(A) = ∫_A f(x) dμ(x)
for all measurable subsets A ⊆ Ω.
Ω is not necessarily a subset of ℝ or even ℝⁿ. Any measure can also be used as reference; it is for instance perfectly possible to define a PDF with respect to area but spanned by polar coordinates.
I think it's wrong and misleading to present the case of a PDF over ℝⁿ with respect to the Lebesgue measure as the "definition" of a PDF. Winterfors (talk) 23:54, 14 February 2008 (UTC)
[edit] Uses of PDF vs. distribution function
Would it be useful to explain in simple terms the use of this function, and contrast that with cumulative distribution functions? I gather that one is the integral of the other but beyond that I am having trouble. Boris B (talk) 03:29, 13 April 2008 (UTC)
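The contrast asked about here can be sketched numerically: the CDF is the integral of the PDF, so the PDF is the derivative of the CDF. The exponential(1) distribution below is a hypothetical choice for illustration; the differentiation is a plain central difference.

```python
import math

def cdf(x):
    # CDF of the exponential(1) distribution, a concrete example:
    # F(x) = P(X <= x), which accumulates probability as x grows
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def pdf_from_cdf(x, h=1e-6):
    # The density is the derivative of the CDF: f(x) = F'(x)
    return (cdf(x + h) - cdf(x - h)) / (2 * h)

# For exponential(1) the exact density is f(x) = exp(-x)
print(pdf_from_cdf(1.0), math.exp(-1.0))  # the two agree closely
```

In short: the CDF answers "what is the probability of being at most x?", while the PDF answers "how fast is that probability accumulating at x?" -- which is also why the y-axis of a PDF is a rate, not a probability.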