Ulrich, can you please share some interesting algorithms (understandable to beginners) with reference to your post? This looks very interesting and I haven't worked on this before. This is something new for me.
Consider the general ordinary differential equation
dx_i(t)/dt = f(t,x_1(t),...,x_n(t)), i=1,...,n.
Using vector notation x := (x_1,...,x_n) we write it
dx(t)/dt = f(t,x(t))
and assume that we know x(t_0) for some initial time value t_0 (Cauchy's initial-value problem). We introduce additional variables
v:=(v_1,...,v_n) and give them (by definition!) the initial values v(t_0) := f(t_0,x(t_0)).
Let now x and v be known for some instant of time t and we ask for their values at time
t+dt. The jewel says:
tau := dt/2
t += tau
x += tau*v
v = 2*f(t,x) - v
x += tau*v
t += tau
(I hope you understand the vector operations +=, -, and * that are being used here).
After having gone through this symmetric algorithmic step the vectors x and v hold the 'best guess values' for time t+dt. Iteration of the process generates a time-discrete trajectory from initial values. In expert language the method is second order and reversible. So far I seem to be the only person who knows it and uses it over and over (see www.ulrichmutze.de).
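For readers who prefer code to pseudocode, the step above can be written in Python like this (a minimal sketch; the function name and the NumPy types are illustrative, not part of the method):

```python
import numpy as np

def alf_step(f, t, x, v, dt):
    """One asynchronous-leapfrog step for dx/dt = f(t, x).

    x is the state vector and v the auxiliary slope vector,
    initialized once as v(t0) := f(t0, x(t0)).
    """
    tau = dt / 2.0
    t = t + tau
    x = x + tau * v              # drift half a step with the old slope
    v = 2.0 * f(t, x) - v        # reflect v through the midpoint slope
    x = x + tau * v              # drift the second half with the new slope
    t = t + tau
    return t, x, v
```

Iterating this step from (t0, x(t0)) with v(t0) = f(t0, x(t0)) generates the time-discrete trajectory described above.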
Let me close with a loud shout to everybody:
If you ever came across this algorithm (it is too simple to allow variations!), please let me know!
Ulrich, it's Runge-Kutta of second order. It is not much used because there is a better method of fourth order: http://en.wikipedia.org/wiki/Runge-Kutta_methods
Runge-Kutta second order (also known as Heun's method) evaluates the right-hand side of the equation twice, whereas my method (which I call the asynchronous leapfrog method) does it only once! So the algorithms can't be the same.
The representational possibilities of graphs are countless, ranging from food chains, computer networks, and city street maps to molecules, circuit boards, and constellations. The computational problems related to graph theory are also fascinating, like Graph Isomorphism, Graph Coloring, the Hamiltonian Cycle Problem, etc.
Data structures like Lists and Trees can also be considered subsets of graphs.
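To illustrate that last point, here is a minimal Python sketch (the vertex names are made up): a graph stored as an adjacency list can represent a linked list or a tree as special cases.

```python
# A graph stored as an adjacency list: each vertex maps to its neighbors.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

# A singly linked list is a path graph: every vertex has at most one successor.
linked_list = {"n1": ["n2"], "n2": ["n3"], "n3": []}

# A rooted tree is a connected acyclic graph with edges pointing to children.
tree = {"root": ["left", "right"], "left": [], "right": []}

def out_degree(g, v):
    """Number of edges leaving vertex v."""
    return len(g[v])
```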
@Ulrich, sorry, but you didn't reinvent the wheel. You can't have second order if you evaluate v only once. I'd rather call your method the "obfuscated Euler method." It doesn't matter whether you evaluate at the beginning of the step or in the middle. You can recast it by using auxiliary variables:
t += tau
x1 = x + tau*v
v1 = f(t, x1)
v = 2*v1 - v
x += dt*v1
t += tau
The only difference from the Euler method is where v1 is evaluated. The precision may be better, but it is still first order; that is, the error increases linearly with the step length.
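The recast above can be checked numerically; here is a minimal Python sketch (the function names are mine, and the vectors are NumPy arrays). Both transcriptions produce the same update up to floating-point rounding:

```python
import numpy as np

def original_step(f, t, x, v, dt):
    # Ulrich's formulation, step by step
    tau = dt / 2.0
    t = t + tau
    x = x + tau * v
    v = 2.0 * f(t, x) - v
    x = x + tau * v
    t = t + tau
    return t, x, v

def recast_step(f, t, x, v, dt):
    # the same step, recast with auxiliary variables x1 and v1
    tau = dt / 2.0
    t = t + tau
    x1 = x + tau * v          # auxiliary midpoint state
    v1 = f(t, x1)             # the single f-evaluation of the step
    v = 2.0 * v1 - v
    x = x + dt * v1           # algebraically equal to x1 + tau * (new v)
    t = t + tau
    return t, x, v
```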
Please test the method within the framework of your own notions and you will find it to be second order. I'd take any bet on this! Your conviction that this can't work with one function evaluation per step is right 'with a grain of salt': my method has (as explained above) one additional evaluation in the first step, so it does 10001 evaluations in 10000 steps. You will agree that this does not matter from a practical point of view. But with exactly one evaluation per step it might well be that the order of an integrator can't exceed 1.
Unlike Euler, changing dt to -dt will let all steps retrace exactly back to the initial state!
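This reversibility can be checked directly. Here is a minimal Python sketch (the test system, a harmonic oscillator written as a first-order system, is just an illustrative choice):

```python
import numpy as np

def alf_step(f, t, x, v, dt):
    # the same step as before; note that dt may be negative
    tau = dt / 2.0
    x = x + tau * v
    v = 2.0 * f(t + tau, x) - v
    x = x + tau * v
    return t + dt, x, v

def g(t, x):
    # harmonic oscillator as a first-order system: x = (q, p)
    return np.array([x[1], -x[0]])

t, x = 0.0, np.array([1.0, 0.0])
v = g(t, x)                      # v(t0) := f(t0, x(t0))
for _ in range(1000):            # integrate forward ...
    t, x, v = alf_step(g, t, x, v, 0.01)
for _ in range(1000):            # ... then back with dt -> -dt
    t, x, v = alf_step(g, t, x, v, -0.01)
# x and v return to the initial state up to floating-point rounding
```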
I would rather not annoy you with citations of proofs for the various facts on my website www.ulrichmutze.de; it would be much more instructive for us both if you would freely build your own experience. I hope that the collision of two opposite opinions will make you eager to show that you are right. This is how learning in science works most efficiently.
Good luck
BTW: in your transcription of the method the second last equation has to
No, it is x += dt*v1; that's what is obfuscated. The first-order error may cancel accidentally for some functions, or even for some large classes of common functions, but it remains that the method is of first order. Taking the derivative at a middle point is not equivalent to taking the mean of the derivatives at two points. They are the same for a linear function, but behave badly in the vicinity of an extremum.
My method gives, after one step of size t starting from t = 0:
x(t) = x(0) + t f(t/2, x(0) + (t/2) f(0, x(0)) )
If you compute x''(t) at t = 0 you get, after a few lines of elementary calculus,
f_t(0, x(0)) + f_x(0, x(0)) f(0, x(0)) (where partial derivatives with respect to t and x are written as LaTeX-style indices).
This is exactly what one gets also from the differential equation. A first order method gives the correct value for x'(t) at t=0 and a second order method also the correct value for x''(t) at t=0. Notice that the corresponding computation for the forward Euler method in fact gives x'(t) | t=0 right but gives x''(t)|t=0 as 0 and thus wrong.
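If you want to check the order claim yourself, a sketch like the following compares the error ratios when the step is halved (my own illustrative test problem: dx/dt = 1 + x^2 with exact solution tan(t)):

```python
import numpy as np

def alf_solve(f, t, x, n, dt):
    v = f(t, x)                      # the single extra start-up evaluation
    tau = dt / 2.0
    for _ in range(n):               # one f-evaluation per step from here on
        x = x + tau * v
        v = 2.0 * f(t + tau, x) - v
        x = x + tau * v
        t = t + dt
    return x

def euler_solve(f, t, x, n, dt):
    for _ in range(n):
        x = x + dt * f(t, x)
        t = t + dt
    return x

# nonlinear test problem dx/dt = 1 + x^2, x(0) = 0, exact solution tan(t)
def f(t, x):
    return 1.0 + x * x

exact = np.tan(1.0)
for solver in (alf_solve, euler_solve):
    e1 = abs(solver(f, 0.0, np.array([0.0]), 1000, 1.0e-3)[0] - exact)
    e2 = abs(solver(f, 0.0, np.array([0.0]), 2000, 5.0e-4)[0] - exact)
    print(solver.__name__, e1 / e2)  # halving dt: the ratio approaches 4
                                     # for second order, 2 for first order
```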
You may prefer different notions; then apply them explicitly to the proposed method. You will find out in the end that the order is two for you as well. It may help you to take this into consideration when I remind you that the normal leapfrog method (from which mine is derived) is well known to be second order.
The leapfrog method is for second-order differential equations where f doesn't depend on x'. That is a restricted class of functions, even though a very important one in physics. This is the accidental cancellation I spoke about. Presumably, the first-order error is proportional to the coefficient of x'.
You consider only the point 0, but that, so to speak, is not where the problem is. Sure, at this point you use a generalized Runge-Kutta method. But at an arbitrary point you have to carry over the derivative, and that is where errors are introduced, since the whole point of using multiple evaluations is the nonlinear dependence of f on x.
For most books (and for me) leapfrog is the following scheme for just our present
initial value problem: Given (t0,x0) and dt, define t1 = t0 + dt, x1 = x0 + dt f(t0, x0)
as the initialization step. Continue with t2 = 2*t1 - t0, x2 = x0 + (t2 - t0) f(t1, x1),
then t3 = 2*t2 - t1, x3 = x1 + (t3 - t1) f(t2, x2), and so forth.
This method is known to be second order. It seems to be the first method that has been successfully applied to the initial value problem of the time-dependent Schroedinger equation ( A. Askar and S. Cakmak, J. Chem. Phys. 68, 2794 (1978)).
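In Python this scheme reads as follows (an illustrative sketch; the function name is mine):

```python
import numpy as np

def leapfrog_solve(f, t0, x0, n, dt):
    """Synchronous leapfrog for dx/dt = f(t, x), as defined above:
    one Euler initialization step, then x_{k+1} = x_{k-1} + 2*dt*f(t_k, x_k)."""
    t_prev, x_prev = t0, x0
    t, x = t0 + dt, x0 + dt * f(t0, x0)       # the Euler initialization step
    for _ in range(n - 1):
        t_next = 2.0 * t - t_prev             # t_{k+1} = 2*t_k - t_{k-1}
        x_next = x_prev + (t_next - t_prev) * f(t, x)
        t_prev, x_prev = t, x
        t, x = t_next, x_next
    return t, x
```

On a simple growth problem dx/dt = x it shows the expected second-order accuracy.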
No restricted class of functions and no accidental cancellation is in the game. What really matters is that the approximation f'(x) ~ (f(x+h)-f(x-h))/(2h) is one order more accurate than the poor f'(x) ~ (f(x+h)-f(x))/h. That the initialization of the leapfrog method, as defined above, uses just this poor approximation is a conceptual weakness which lets it work very well for some initial values and surprisingly badly for others. It is thus tempting to devise more sophisticated, probably more predictable, initialization methods. What my 'asynchronous leapfrog method' is all about is rewriting the leapfrog scheme in a way that it can be started from (t,x) by a formula which is independent of the time step to be used and which leaves no freedom for method variations. With this goal reached, one can change the time step from integration step to integration step and has to carry no initialization burden (this is what 'asynchronous' is meant to express).
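The accuracy gap between the two difference quotients is easy to see numerically (an illustrative sketch; the test function is my choice):

```python
import numpy as np

# forward difference: error ~ |g''(x)| * h / 2       (first order in h)
# central difference: error ~ |g'''(x)| * h**2 / 6   (second order in h)
def forward_diff(g, x, h):
    return (g(x + h) - g(x)) / h

def central_diff(g, x, h):
    return (g(x + h) - g(x - h)) / (2.0 * h)

x0 = 0.7                       # test point; g = sin, so g'(x0) = cos(x0)
for h in (1e-2, 1e-3):
    ef = abs(forward_diff(np.sin, x0, h) - np.cos(x0))
    ec = abs(central_diff(np.sin, x0, h) - np.cos(x0))
    print(h, ef, ec)           # ef shrinks ~10x, ec ~100x per decade in h
```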
You probably agree that your second paragraph describes informal reasons for your doubts but is not a disproof of my claim that the method is second order.
It would certainly be the most productive way of proceeding if you made a comparison of some integration method you trust (second-order Runge-Kutta?) and 'asynchronous leapfrog' for a problem you are interested in. Since you chose strong words (I like this) against my 'jewel', you probably feel obliged to follow this advice. Any 'test for order' will show you the typical behavior of a second-order method. I append one of my technical (not publication-quality) outputs, on which parallel lines indicate equal order. The curves (from top to bottom) refer to Runge-Kutta 2, explicit midpoint method, asynchronous leapfrog, standard leapfrog. The system is an exactly solvable anharmonic oscillator, the 'Kepler oscillator', which describes the radial part of the Kepler central-field problem.
No, in the Schrödinger equation there is no term of first order; it is a special class of equations, and so is any undamped oscillator. In the general case the method is of first order. The justification you give is false. There is no free haul.
@Fabricio: when speaking about 'derailing' you should remember that the owner of this thread asked me for an algorithm benefitting from my favored data structure 'vector'. I gave such an example and Claude could not believe that it has the properties which I claimed (and claim). I consider it legitimate (even desirable) to bring such discussions on simple 'true or false' questions to a decision.
@Claude: That the general leapfrog method is second order is stated in: Stuart, Humphries: Dynamical systems and numerical analysis, Cambridge University Press 1996 pp. 224, 244, Examples (ii).
Are you aware that your trivialized class of 'undamped oscillators' comprises all integrable Hamiltonian systems?
Please define a system for which you would expect first order terms. I'll create the plots 'error versus stepsize' that will show whether your expectations are right.
BTW: do you work with Mathematica? This would help us to share results and come to a conclusion. It would certainly spare us the larger complexities of using C++ for such an exchange.
I enjoy your statement: There is no free haul. You are the first person who shares my feeling that the proposed algorithm has a flavor of 'free haul'. All experts I contacted so far only came up with suggestions for how to make the method a bit more accurate (or stable) at the price of making it much more complicated. Doubts concerning the second-order property were never brought up.
Prof. Ulrich, I am trying to understand your 'jewel', and it is very interesting. I want to thank you for posting it here for all of us and for replying to my post. I am sure it will extend my knowledge to a new horizon.
@Claude. This was a good idea. I hope the appended file will convince you of the beauty and clarity of my jewel. You will certainly have to rearrange some of your thoughts concerning differential equations. Please let me know your conclusions.
@Shruti, enjoy the 'jewel'. As Claude's tremendous doubts show, it is not a trivial stone.
@Claude. In the heat of our past discussion concerning the order of the proposed integrator, I forgot to thank you for following my cry for help and giving me your opinions on the relation of my method to other methods known in the field. Please keep hearing this cry and let me know if you become aware of new aspects. I'm glad to be able now to add a comprehensive link on my method. Due to a procedural mistake it rested unnoticed for months in the mailbox of the publisher. Had it been available during our discussion, it could have been helpful.
Ulrich, basically, taking (f(x+h)-f(x-h))/(2h) or (f(x)-f(x-2h))/(2h) or (f(x+h/2)-f(x-h/2))/h or (f(x+h)-f(x-2h))/(3h) ... is exactly the same. The error is proportional to 2h or 2h or h or 3h, that is, first order. It doesn't matter whether you are convinced of it or not.
@Ulrich: Very interesting method indeed. The first step seems to coincide exactly with the second-order midpoint technique. After that, this method requires only one evaluation of the function f for each subsequent step. This method has a nice visual interpretation. It is based on an extrapolation using the trapezoidal rule, which might explain the apparent second order. However, because of the extrapolation, I do not know how it will perform in the case of stiff problems. Maybe this deserves a specific thread.