Talk:Quasi-Monte Carlo method
What is the exponent s in this formula?
- The discrepancy of sequences typically used for the quasi-Monte Carlo method is bounded by a constant times (log N)^s / N.
Michael Ward 19:55, 22 Dec 2004 (UTC)
- s is the dimensionality of the integral, and therefore it's also the dimensionality of the elements x of the low-discrepancy sequence. The problem statement shows a 1-dimensional integral; that should be generalized to show an integral over s dimensions. Hope this helps, Wile E. Heresiarch 07:23, 24 Dec 2004 (UTC)
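For readers following along, the generalization being suggested would look roughly like this (a sketch using the standard star-discrepancy notation; C is an unspecified constant depending on the sequence):

    \int_{[0,1]^s} f(u)\,du \;\approx\; \frac{1}{N}\sum_{i=1}^{N} f(x_i),
    \qquad
    D_N^{*}(x_1,\dots,x_N) \le C\,\frac{(\log N)^s}{N},

where the x_i are the first N points of the low-discrepancy sequence and D_N^* is their star discrepancy, so the exponent s is the dimension of the domain of integration.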
- Then how is that relevant to the convergence? N is a scalar quantity, or could just as easily be, so what value is there in complicating the convergence illustration with the s? Also, why is the convergence for the Monte Carlo method shown in this complicated form, when the form given by the Central limit theorem is so much simpler? Detail that adds value is good; detail that doesn't just confuses the reader and makes the article accessible to fewer people. I think both of these fall in the latter case. - Taxman Talk 14:07, Jun 14, 2005 (UTC)
- Well, (1) one of the relevant differences between Monte Carlo and quasi-Monte Carlo is that the discrepancy is a function of the dimension s for QMC but not for MC. Since the article is about QMC, it seems relevant to bring s into play. Maybe this point could stand to have greater emphasis. (2) The formula sqrt ((log log N)/(2 N)) is for the expected discrepancy of MC -- this is proportional to an upper bound on the error. If I'm not mistaken, the usually-cited 1/sqrt(N) is the expected error for MC. Hope this helps, Wile E. Heresiarch 03:25, 15 Jun 2005 (UTC)
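To make point (1) concrete, here is a small sketch in plain Python (arbitrary choices of s and N, constants omitted, so these are only the shapes of the bounds being discussed, not actual error values) that tabulates the three quantities side by side:

    import math

    # Shape of the QMC discrepancy bound, the expected MC discrepancy,
    # and the usual MC expected-error rate, for a few dimensions s and
    # sample sizes N. Constants are dropped throughout.
    for s in (1, 3, 6):
        for N in (10**3, 10**5, 10**7):
            qmc_bound = math.log(N) ** s / N                              # (log N)^s / N
            mc_discrepancy = math.sqrt(math.log(math.log(N)) / (2 * N))   # sqrt((log log N)/(2N))
            mc_error = 1 / math.sqrt(N)                                   # 1 / sqrt(N)
            print(f"s={s}  N={N:>10,}  QMC (log N)^s/N = {qmc_bound:.2e}  "
                  f"MC disc. = {mc_discrepancy:.2e}  MC err. = {mc_error:.2e}")

For small s the QMC rate pulls ahead quickly, while for larger s the (log N)^s factor keeps the bound large until N is very big, which is exactly why s belongs in the QMC formula even though it never appears in the MC ones.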
My impression is that quasi-MC is not a whole lot of use because (as noted in the article) it works best for smooth functions (i.e., with small low-order and higher derivatives) in small numbers of dimensions. That's exactly the kind of function where the extended Simpson's rule and the like work fine and you don't need MC. One case of interest to physicists is the Metropolis algorithm, where the first derivative is infinite. Quite a few applications also have large numbers of dimensions. But I guess the promise of quasi-MC from the theory is enough that people would like to extend it to work in more cases. If anyone out there knows these issues, a bit more discussion in the article would be nice. Thanks. 203.164.222.150 02:29, 20 September 2006 (UTC)
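In case it helps later editors, here is a rough sketch (plain Python, a hand-rolled Halton sequence, and an arbitrary smooth 2-D integrand chosen purely for illustration, not taken from the article) of the kind of smooth, low-dimensional comparison being described:

    import math
    import random

    def halton(i, base):
        """Radical-inverse (van der Corput) value of integer i in the given base."""
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        return r

    # Smooth 2-D test integrand on [0,1]^2 with a known integral:
    # integral of cos(pi*x/2)*cos(pi*y/2) over the unit square = (2/pi)^2.
    def f(x, y):
        return math.cos(math.pi * x / 2) * math.cos(math.pi * y / 2)

    exact = (2 / math.pi) ** 2
    N = 4096
    random.seed(0)

    # Plain MC with pseudorandom points vs. QMC with a 2-D Halton sequence (bases 2 and 3).
    mc = sum(f(random.random(), random.random()) for _ in range(N)) / N
    qmc = sum(f(halton(i, 2), halton(i, 3)) for i in range(1, N + 1)) / N

    print(f"MC  error: {abs(mc - exact):.2e}")
    print(f"QMC error: {abs(qmc - exact):.2e}")

Running something like this for a smooth low-dimensional integrand, and again for a higher-dimensional or non-smooth one, would give the article a concrete way to discuss where quasi-MC actually pays off.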