Talk:Numerical ordinary differential equations

Names

According to the St Andrews' MacTutor website, specifically http://www-history.mcs.st-and.ac.uk/history/Mathematicians/Runge.html and http://www-history.mcs.st-and.ac.uk/history/Mathematicians/Kutta.html, the names are as written by 142.177.19.200 (Carle David Tolmé Runge and Martin Wilhelm Kutta), and not as I wrote them earlier. Jitse Niesen 15:20, 8 Jan 2004 (UTC)

Gear

I removed the following item from the History section:

1968 - C. William Gear invents the first stable algorithms to solve stiff differential equations.

I suppose this refers to the BDF (backward differentiation formulas), which were in fact already introduced by Curtiss and Hirschfelder in the same 1952 paper where they talk about stiffness. Please correct me if I am wrong. -- Jitse Niesen (talk) 5 July 2005 16:58 (UTC)

Consistent methods

It seems that the consistence of a method is mentioned in the pages about Runge-Kutta and Adams method, but it is never defined. Is this page the right place to put its definition? Fph 12:41, 21 June 2006 (UTC)

Yes, I think so. It would perhaps fit nicely in the discussion about order. By the way, welcome to Wikipedia! -- Jitse Niesen (talk) 13:37, 21 June 2006 (UTC)
Thanks! I have added some words about consistency (by the way, it seems consistency is more widespread than consistence). Someone should add a short comment about consistency being a weaker condition than convergence, one that ensures the method makes (at least some) sense. I'm not sure I know English well enough to write it correctly, so I'd better leave it to someone else. :-) --Fph 19:14, 28 June 2006 (UTC)
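
For the record, one common way to state the definition (in my own notation, not quoted from the article): a one-step method y_{n+1} = y_n + h Φ(t_n, y_n; h) for y' = f(t, y) is consistent if its local truncation error vanishes as the step size goes to zero,

\[
\tau(t, h) \;=\; \frac{y(t+h) - y(t)}{h} - \Phi\bigl(t, y(t); h\bigr) \;\longrightarrow\; 0
\qquad \text{as } h \to 0,
\]

which for continuous Φ amounts to Φ(t, y; 0) = f(t, y); multistep methods have an analogous condition on their coefficients.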

Slight Change

I think it would make the equations easier to understand if h is replaced by Δx.

Please comment on my suggestion. --Freiddy 18:48, 2 March 2007 (UTC)

Perhaps (I suppose you mean Δt). Your notation does indeed make it easier for the reader to remember that it stands for the step size. On the other hand, expressions like h f(t,y) and h^p become slightly more awkward: you get Δt f(t,y) (which could be misinterpreted, though adding some spacing might remedy this) and Δt^p (which might need parentheses). So, I don't know. -- Jitse Niesen (talk) 03:05, 3 March 2007 (UTC)
Δt^p is mostly understandable, since most people are quite used to notation like d^2f/dx^2. You can also just change Δt f(t,y) into f(t,y) Δt, which is just like an integral. --Freiddy 12:28, 3 March 2007 (UTC)
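
For concreteness, here is Euler's method written both ways (my own illustration, not a formula taken from the article):

\[
y_{n+1} = y_n + h\, f(t_n, y_n)
\qquad \text{versus} \qquad
y_{n+1} = y_n + f(t_n, y_n)\, \Delta t .
\]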

On the accuracy of digital integration algorithms

My 14 years of experience with analog computers, my 40 years of experience with feedback controls, and my 40 years of experience with simulation (both analog and digital) have given me a somewhat different perspective on digital integration than I find in the literature. A digital integration algorithm must be evaluated on how well its gain matches 1/(jω) and how close its phase is to -90 deg. The primary cause of problems with digital integration algorithms is the phase error, not the gain error.

Some years ago I tested several digital integration algorithms and found only one that gave both good gain error and good phase error: the Adams-Bashforth 2. All the other algorithms were very poor. Looking at amplitude error only gives a false confidence in the algorithm.

To evaluate the algorithms, we did two different tests. We first measured the gain and phase with a digital signal analyzer, which we programmed in C along with the integration algorithm. This was done on a 386-25, which dates the work. Then we programmed a second-order loop with no damping to observe how fast the solution diverged or how fast it damped to zero. Once again, the AB 2 was the best by a wide margin. It isn't perfect, and it isn't nearly as good as a good analog integrator, but it was the best we could find. We didn't test every algorithm, but we did test other AB algorithms, the RK algorithms, Euler's method, and probably a predictor-corrector and Adams-Moulton methods. The result was always the same: AB 2 wins by a wide margin.

The only time phase is not important is when the simulation is open loop. This is not the normal case. In the normal case, when solving differential equations, the simulation is closed loop, and then the phase error makes a huge difference.
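
A rough sketch of that kind of gain/phase comparison, in Python (my own illustration, not the C harness described above; the step size and frequency grid are arbitrary choices). It evaluates each rule's transfer function on the unit circle and compares it with the ideal integrator 1/(jω):

import numpy as np

h = 0.01                                    # step size (arbitrary for this sketch)
w = np.linspace(0.5, 0.4 * np.pi / h, 500)  # frequency grid, well below the Nyquist rate pi/h
z = np.exp(1j * w * h)                      # z = e^{j*omega*h} on the unit circle

ideal = 1.0 / (1j * w)                               # ideal integrator: gain 1/omega, phase -90 deg
euler = h / (z - 1.0)                                # forward Euler: y[n+1] = y[n] + h*f[n]
ab2 = h * (3.0 * z - 1.0) / (2.0 * z * (z - 1.0))    # Adams-Bashforth 2: y[n+1] = y[n] + h*(3*f[n] - f[n-1])/2

for name, H in [("Euler", euler), ("AB2", ab2)]:
    gain_err_db = 20.0 * np.log10(np.abs(H) / np.abs(ideal))
    phase_err_deg = np.degrees(np.angle(H) - np.angle(ideal))
    print(f"{name}: worst gain error {np.abs(gain_err_db).max():.2f} dB, "
          f"worst phase error {np.abs(phase_err_deg).max():.2f} deg")

On such a sweep the AB 2 phase error shrinks roughly like (ωh)^3 while forward Euler's grows like ωh/2, which matches the observation above.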

Midpoint Method

The Midpoint method is mentioned in the graph, but there is no mention of it in the article. Shouldn't some mention of it be made? - GeiwTeol 08:15, 19 March 2008 (UTC)
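
If the graph refers to the explicit midpoint method, that is usually written as (my notation, and I am assuming the explicit variant):

\[
y_{n+1} = y_n + h\, f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}\, f(t_n, y_n)\right).
\]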