Talk:Quantum Monte Carlo
Copyright problems
The writing style set off a few alarm bells, so I googled and found that at least a few sections are verbatim lifts from this PhD thesis. Could it be that the thesis's author added them to the article? They went to the trouble of adding the relevant references from the thesis as they copied the text over. Sockatume 21:27, 13 November 2006 (UTC)
To do
There are several topics specific to QMC that could be useful:
better DMC coverage, including some pictures of the basic algorithm (something like those in the RMP by Foulkes et al.)
discussion of scaling with respect to the error bars (as evidenced by the comments below, it's not that well understood)
the zero-variance theorem
the difference between the mixed and the pure estimators (this goes into DMC, I guess, although it could be separated out)
--Lucaskw 00:15, 2 June 2006 (UTC)
Advantages (section title added)
I have a question for anyone in the know: what is the advantage of evaluating an integral with the Monte Carlo method, as opposed to some other, more systematic numerical method? Ed Sanville 03:13, 24 September 2005 (UTC)
- Well, it doesn't have an advantage per se; it's just a different approach to integration. It turns out to be more efficient in some cases, especially for precise calculations of electron correlation, but for large systems it's also very costly. Karol 10:00, 24 September 2005 (UTC)
I think MC does have an advantage over numerical integration. With numerical integration the calculation quickly becomes unfeasible as the system size grows. For example, consider a system of 50 atoms in a three-dimensional box: numerical integration would need 150 degrees of freedom, all discretized, and with, say, 10 grid points per degree of freedom the calculation would require 10 to the 150th power grid points, unfeasible even with modern-day computers. The problem is even more severe for quantum systems. Monte Carlo is based on the idea of importance sampling, which means that (roughly speaking) one does not need to integrate over all of configuration space, since large parts of it are irrelevant. In Monte Carlo the scaling of the required computational effort with system size is thus much more favourable, usually the second power of the number of degrees of freedom (at least for a pair potential) if the basic algorithm is used, though more favourable scaling can be achieved by various tricks. Galileo fan 11:19, 7 December 2005 (UTC)
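To illustrate the importance-sampling point, here is a minimal Python sketch (the one-dimensional integrand exp(-x^2) and all the function names are made up for illustration, not taken from any real QMC code) comparing a deterministic grid rule, plain Monte Carlo, and importance-sampled Monte Carlo:
 import math
 import random

 def f(x):
     # toy integrand; a real QMC calculation integrates over all 3N electron coordinates
     return math.exp(-x * x)

 def grid_integral(a, b, points_per_dim):
     # midpoint rule: in d dimensions the cost grows as points_per_dim**d
     h = (b - a) / points_per_dim
     return h * sum(f(a + (i + 0.5) * h) for i in range(points_per_dim))

 def mc_integral(a, b, n_samples):
     # plain Monte Carlo: the statistical error shrinks as 1/sqrt(n_samples) in any dimension
     total = sum(f(random.uniform(a, b)) for _ in range(n_samples))
     return (b - a) * total / n_samples

 def mc_importance(n_samples):
     # importance sampling: draw points from a Gaussian that concentrates samples where
     # the integrand is large, so few samples are wasted on irrelevant regions
     total = 0.0
     for _ in range(n_samples):
         x = random.gauss(0.0, 1.0)
         density = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)  # sampling density p(x)
         total += f(x) / density
     return total / n_samples

 print(grid_integral(-5.0, 5.0, 1000))  # close to sqrt(pi) ~ 1.7725
 print(mc_integral(-5.0, 5.0, 100000))  # same value, with a statistical error bar
 print(mc_importance(100000))           # smaller error bar for the same number of samples
In one dimension the grid rule wins easily; the point is only that its cost grows exponentially with the number of dimensions, while the Monte Carlo error bar depends on the number of samples and the variance of the estimator, not on the dimension.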
I just want to add to Galileo fan's comments. Scaling in MC is actually not always trivial to figure out, because it depends not only on the time required to complete an MC step (the N^2 that he/she mentioned) but also on how the variance of the quantity you want to measure scales. Interestingly enough, this can lead to scaling better than the time per step. For example, in electronic structure calculations the variance of the energy usually scales as O(N) and the per-step time is usually O(N^2), resulting in O(N^3) scaling in total. However, if you're interested in the energy per electron, or per cell, you divide your error bars (the square root of the variance) by N, so the variance actually scales as 1/N and the calculation is O(N). On the other hand, various things like branching in diffusion Monte Carlo, and the difficulty of making moves in general, can increase the scaling as well. It's not easy to know the scaling when designing an algorithm, and many people have gotten it seriously wrong when making a new algorithm, including missing an exponential scaling.
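To make the cost bookkeeping in the previous comment concrete, here is a small Python sketch under the stated assumptions (variance of the total energy growing as O(N) and a per-step cost of O(N^2); the function names and unit prefactors are illustrative only):
 def cost_total_energy(n_electrons, target_error):
     # assumed: variance of the total energy grows linearly with system size
     variance = 1.0 * n_electrons
     # samples needed for a fixed error bar, since error = sqrt(variance / n_samples)
     n_samples = variance / target_error ** 2
     # assumed: each Monte Carlo step costs O(N^2), e.g. updating a pair potential
     cost_per_step = 1.0 * n_electrons ** 2
     return n_samples * cost_per_step  # overall O(N^3)

 def cost_energy_per_electron(n_electrons, target_error_per_electron):
     # the per-electron error bar is the total-energy error divided by N, so a fixed
     # per-electron target corresponds to a total-energy target of N * epsilon,
     # and the overall cost drops to O(N)
     return cost_total_energy(n_electrons, target_error_per_electron * n_electrons)

 for n in (10, 100, 1000):
     print(n, cost_total_energy(n, 0.01), cost_energy_per_electron(n, 0.01))
Running this shows the total-energy cost growing a thousandfold for every tenfold increase in N, while the per-electron cost grows only tenfold, which is the O(N^3) versus O(N) distinction described above.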
Article needs attention
This page is desperately asking for more content and fewer references! Karol 19:52, 29 November 2005 (UTC)
- Working on it. I'm a grad student working in the field... may be good practice for those darn paper intros! --Lucaskw 01:26, 12 April 2006 (UTC)
- I'll be working on some of this too. I'm also a grad student working in the field. -- jed1978 21:12, 7 June 2006 (-0800)