Talk:Iterative method
Old remark
"An iterative method attempts to solve a problem ... by finding subsequent approximations to the solution."
Should it not read successive instead of subsequent?
S.
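Either way, the sentence describes a method that generates a sequence of approximations, each computed from the last. A minimal sketch (my own illustration, not from the article) using the Babylonian square-root iteration:

```python
# Each step produces the next approximation from the current one,
# yielding a sequence of successive approximations x0, x1, x2, ...
def babylonian_sqrt(a, tol=1e-12, max_iter=100):
    """Approximate sqrt(a) by the Babylonian (Heron's) iteration."""
    x = a if a > 1 else 1.0          # initial guess
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)   # next approximation from the current one
        if abs(x_next - x) < tol:    # stop once successive iterates agree
            return x_next
        x = x_next
    return x

root = babylonian_sqrt(2.0)          # converges toward sqrt(2)
```

Here "successive" is apt: each iterate succeeds the previous one in a single chain of refinements.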
Strange mention and request
Who is the Stiefel mentioned in the article? Was that name inserted as a joke by somebody?
Will somebody write a page for the conjugate gradient method to which this article refers?
- The conjugate gradient method is proposed in Hestenes, Magnus R.; Stiefel, Eduard, Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards 49 (1952), 409--436 (1953). I amended the article accordingly. Your request for a page on this method has been added to my to-do list (I won't promise anything). -- Jitse Niesen 16:10, 4 Jan 2005 (UTC)
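- For readers unfamiliar with the method, here is a minimal sketch of the Hestenes-Stiefel conjugate gradient iteration for solving A x = b with A symmetric positive definite (my own illustrative code, not from the cited paper):

```python
# Conjugate gradient for A x = b, A symmetric positive definite.
# Pure-Python, dense, small-scale sketch for illustration only.
def conjugate_gradient(A, b, tol=1e-10):
    n = len(b)
    x = [0.0] * n                     # initial guess x0 = 0
    r = b[:]                          # residual r0 = b - A x0 = b
    p = r[:]                          # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(10 * n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:        # residual small enough: stop
            break
        beta = rs_new / rs            # make next direction A-conjugate
        p = [r[i] + beta * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)          # exact solution is [1/11, 7/11]
```

In exact arithmetic the method terminates in at most n steps, which is why it was originally presented as a direct method as well as an iterative one.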
History section is too shallow
First, it ignores work by Gauss's predecessors, including Newton. Second, it ignores the real-world drivers (astronomy, business, and military needs) behind the effort to establish quicker tabulation methods. --70.112.146.196 17:30, 8 July 2006 (UTC)
Cost of moving, cost of evaluation?
In iterative methods, there may be cases where the cost of moving from one point to the next varies with location (e.g., it might cost more to move from (1,0,0,0) to (4,3,2,1) than to (2,0,0,0)); there also may be cases where the cost of evaluation is location-dependent (e.g., evaluating f(1000,1000) may cost less than evaluating f(42,37)). In such cases, the optimal "next guess" would differ from the unconstrained case. Is there literature on this sort of thing? —Ben FrantzDale 04:53, 27 April 2007 (UTC)
- Can you give an example of a realistic problem where evaluating a cost function at a point requires significantly more computation than evaluating the same cost function at a different point? Oleg Alexandrov (talk) 06:22, 27 April 2007 (UTC)
- One thing I'm thinking about is mesh adaptation for finite element analysis. Refining the mesh lets you explore a higher-dimensional solution space, but at a computational cost. Another example is minimizing the energy of a collection of atoms. Moving all N atoms is O(N). Suppose you are doing steepest descent. If you knew that the force was almost zero for most atoms and only significant for n atoms, you could move from one configuration to another in O(n) time, putting off the O(N) step until it becomes necessary. I have other crazy ideas in mind as well.
- A less-abstract example came up in real life. I was hiking with my brother in the hills near San Diego (photo). We were trying to get to the top of a hill. There was no trail, so we were bushwhacking. From a distance, it wasn't possible to tell whether bushes were small and could be walked over or were head-height and tangled. We wanted to reach the top as quickly as possible, so we went for the shortest route, but as we went along, the bushes slowed us down, so we followed the path that seemed locally optimal in terms of time and in terms of scratches from branches. If we had known a priori that the top of the hill was covered by a single dense, convex patch of vegetation, we could have used that knowledge to first find the edge of the dense vegetation, then traverse that edge to find the highest easily-walkable point, then go from there to the top.
- Does that make sense? Are there mathematical tools to discuss such things? —Ben FrantzDale 07:34, 27 April 2007 (UTC)
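- The atom example above can be made concrete. A toy sketch (my own illustration, not from any reference) of steepest descent on the separable function f(x) = Σ c_i x_i², where each step updates only the coordinates whose gradient component is still significant, deferring work on the nearly-converged ones:

```python
# Steepest descent that, per step, touches only the "active" coordinates,
# i.e. those whose gradient component exceeds grad_tol -- the analogue of
# moving only the n atoms that still feel a significant force.
def sparse_steepest_descent(c, x, step=0.1, grad_tol=1e-8, max_iter=1000):
    x = x[:]
    for _ in range(max_iter):
        grad = [2.0 * ci * xi for ci, xi in zip(c, x)]     # gradient of sum(c_i * x_i^2)
        active = [i for i, g in enumerate(grad) if abs(g) > grad_tol]
        if not active:                # all forces negligible: converged
            break
        for i in active:              # O(n) work when only n components are active
            x[i] -= step * grad[i]
    return x

c = [1.0, 5.0, 0.5]
x_min = sparse_steepest_descent(c, [3.0, -2.0, 1.0])
# minimizer of f is the origin, so every component ends up near zero
```

Whether skipping the inactive coordinates is safe depends on the problem: for a separable objective like this one it is exact, while for coupled objectives (interacting atoms) the deferred components must eventually be rechecked.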