In order to solve a time-dependent PDE, which method is better to use: the Forward or the Backward Euler method, especially when we are talking about small time steps? Which of these methods is more stable?
Your question is not well posed. On the one hand, it is well known that the implicit Euler method is preferred for its stability properties compared to the explicit Euler method. But for describing a transient phenomenon one uses small time steps, and then the advantage of the implicit over the explicit scheme is minor. On the other hand, owing to its poor first-order accuracy, I would not recommend the Euler method at all: use at least the second-order Heun method, as in the sketch below.
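For reference, here is a minimal sketch of Heun's method (an explicit second-order predictor-corrector) for a generic ODE system du/dt = f(t, u), such as the one obtained after spatially discretizing a PDE; the test problem, step size, and names are only placeholders, not part of the discussion above.

```python
import numpy as np

def heun_step(f, t, u, dt):
    """One step of Heun's (explicit, second-order) method:
    forward-Euler predictor followed by a trapezoidal corrector."""
    k1 = f(t, u)                     # slope at the current point
    k2 = f(t + dt, u + dt * k1)      # slope at the Euler-predicted point
    return u + 0.5 * dt * (k1 + k2)  # average of the two slopes

# Toy test problem: du/dt = -u with u(0) = 1, exact solution exp(-t).
f = lambda t, u: -u
dt, n_steps = 0.1, 10
u, t = 1.0, 0.0
for _ in range(n_steps):
    u = heun_step(f, t, u, dt)
    t += dt
print(u, np.exp(-t))   # agreement to O(dt^2), versus O(dt) for plain Euler
```

Each step costs two evaluations of f, but the global error drops from O(dt) to O(dt^2).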
The advantage of using the implicit scheme is that much larger time steps are possible. There's no point in using a "small" timestep, since it doesn't represent anything physical. The goal is to use the largest possible timestep consistent with stability (local error) and global error.
The "physical time" is the limit of the product n·Δt, where n is the step label and Δt the stepsize, as n → ∞ and Δt → 0 with their product fixed. The equations that are solved by any such numerical method don't care what Δt is, and their solution doesn't feature it.
So whatever the method used, the results shouldn't depend on the choice of Δt at all; if they do, then the algorithm isn't, in fact, solving the original equations.
On the other hand, a "small" stepsize implies many steps to reach any given "physical" time t = n·Δt, and more steps mean a larger accumulated error. That's why it's necessary to use the largest stepsize consistent with stability, and why implicit Euler's higher overhead per step (an extra solve is unavoidable in general, though not for symplectic integrators, where the implicit scheme can be rewritten as an explicit one) can be compensated by the much larger stepsize it allows, relative to explicit Euler, which imposes a much smaller one. A quick illustration on the linear test equation is sketched below.
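To make the stepsize argument concrete, here is a small illustration on the linear test equation du/dt = -λu (the values of λ and Δt are placeholders): forward Euler is stable only for λΔt < 2, while backward Euler remains stable for any Δt.

```python
lam = 100.0          # decay rate (placeholder value)
dt = 0.05            # deliberately larger than the forward-Euler limit 2/lam
n = 40

u_fe = u_be = 1.0
for _ in range(n):
    u_fe = u_fe * (1.0 - lam * dt)   # forward Euler amplification factor
    u_be = u_be / (1.0 + lam * dt)   # backward Euler amplification factor

print(abs(u_fe))   # grows without bound: |1 - lam*dt| = 4 > 1
print(abs(u_be))   # decays, as the exact solution exp(-lam*t) does
```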
I am not sure I understand your point. I was thinking of a classical PDE, discretized by the explicit/implicit Euler method (and some spatial discretization).
You never solve the original PDE; your numerical solution can be considered as the exact solution of a modified equation. The local truncation error is where the dt appears.
If you want to describe a physical transient, it is relevant that the modified equation does not show spurious terms that introduce "artificial" diffusion and dispersion. That can be obtained by taking special care with high-order methods.
On the other hand, the stability constraint is a complex function of the discrete parameters, and often the time step is small because it is dictated by the physics. As an example, just think of simulations of turbulence with DNS/LES formulations: the time step is small also when using implicit methods.
Only for steady problems can one search for a solution that does not depend on the dt.
The appearance of the time step is just a tool for obtaining the solution; no property of the solution of the equation can depend on its value.
If you're not solving the original differential equation, the whole discussion is pointless. The reason any numerical method is developed in the first place is to solve equations in such a way that the solution obtained reflects the properties of the original equation, not of the algorithm used in finding it! If such a distinction can't be made, the proposed solution is useless.
Once more: there's absolutely no justification for highlighting properties of the numerical method that are not relevant for understanding the properties of the original equation. The algorithm itself only matters insofar as it allows one to obtain the solution in a way that's consistent with the properties of the equation itself. The value of the solution at any given time is but one aspect of this.
There's no physics, or mathematics, in the timestep of the numerical method, because the equation that's the starting point doesn't have any such timestep: time is continuous, and the derivatives that appear in the differential equation only make sense in that case.
That a time step needs an upper bound is an artifact of the algorithm, not a property of the equation. That's why a scaling analysis is necessary: to show which features of the numerical approximation are properties of the equation and which are artifacts of the numerical method, i.e. depend, within the accuracy of the solution, on the fact that the time step isn't zero.
And that's why choosing a smaller timestep than the algorithm allows is, simply, a mistake.
I continue not to follow your points. We have physical problems where finite characteristic time and length scales exist and are relevant, such as in turbulence (but not only there). So we have physical constraints on the time and space steps that must be fulfilled, for the physics, by a proper choice of the numerical parameters. On the other hand, the modified equation (which is a continuous PDE) is a well-known theoretical consequence of an FD-based discretization of a PDE, and it contains the local truncation error (dt and h appear in it). It is exactly the PDE we actually satisfy with our numerical solution when dt and h are fixed, and it is always an approximation of the original PDE. This is a fact, a theoretical basis of numerical analysis. The issue is to make the modified equation an acceptable approximation of the original PDE; in this sense, low numerical diffusion/dispersion is required. But controlling the magnitude of the local truncation error does not automatically imply that the discretization error is also low in some norm. So your comments appear somewhat misleading to me, or I am not focusing on what you really want to highlight.
The time and length scales that appear in the differential equations don't have anything to do with the time steps and lattice spacings! The physical scales are independent of the steps and spacings!
t = lim_{n→∞, Δt→0} n·Δt and x = lim_{p→∞, Δx→0} p·Δx. Nothing can depend on Δt or on Δx; everything depends only on t and x.
Acceptable means precisely that: independence of the result from Δt and Δx. Whatever methods are used that involve Δx and Δt should all give equivalent results, within accuracy; otherwise they're not relevant, since they depend on Δt and Δx, which can't affect the equation, which knows nothing about them.
For any given choice of the spacings Δt and Δx, the physical scales t and x are t = n·Δt and x = p·Δx. But that doesn't mean that t and x really depend on Δt and Δx, since by changing their values and adjusting the values of n and p, one can obtain the same values for t and x, and it's these values that are relevant.
If the equation does have a particular time and/or length scale, then it can't matter whether one is sampling it at 10 steps or 100. From a practical point of view it's better to sample it using 10 steps, with a method that does allow a sufficiently large step, obtain the solution to that accuracy, and then interpolate to obtain the intermediate values at the same accuracy, as in the sketch below.
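A minimal sketch of that workflow, assuming a simple decay problem du/dt = -u (the problem, step count, and linear interpolation choice are only illustrative): solve with a few backward-Euler steps and recover intermediate values by interpolation instead of by shrinking Δt.

```python
import numpy as np

# Backward Euler on du/dt = -u with only 10 steps over [0, 1]
dt, n = 0.1, 10
t_coarse = np.linspace(0.0, n * dt, n + 1)
u = np.empty(n + 1)
u[0] = 1.0
for k in range(n):
    u[k + 1] = u[k] / (1.0 + dt)     # implicit update, solvable in closed form here

# Intermediate values recovered by interpolation rather than by a smaller dt
t_fine = np.linspace(0.0, 1.0, 101)
u_fine = np.interp(t_fine, t_coarse, u)
print(np.max(np.abs(u_fine - np.exp(-t_fine))))
# the error is dominated by the O(dt) time-stepping, not by the interpolation
```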
So that's why it's better to use implicit rather than explicit Euler: Δt can be taken much larger, and implicit Euler doesn't suffer from the same global issues that explicit Euler does.
In my opinion, the answer to this question is straightforward: the Forward (explicit) Euler scheme is the choice. This is so because you are already constrained to small time steps, hence there is no need to solve the linear or nonlinear systems of equations associated with implicit methods. Good luck.
Of course it doesn't; the point of the exercise isn't to solve the system of recursion relations, but to obtain a solution of the wave equation. That's why it's necessary to monitor the conservation of energy, and so on: all those properties that don't depend either on the values of dt and dx, or on the method used for approximating the derivatives by finite differences.
So if you choose some values and I choose some others, it doesn't matter, since the value of the amplitude at the same value for x and t, for instance, won't depend on dt and dx, within the precision of the methods used.
There's no constraint to "small" time steps from the equation, only from the numerical scheme. So choose a better one, i.e. one that isn't sensitive to the values of the time step and/or lattice spacing, since the final results aren't anyway, and it doesn't make sense to present artifacts.
How an equation is solved is much less relevant than checking the consistency of the solution.
There is no energy equation to look at in my example. That is the simplest model of a wave solution you can solve, the exact solution being simply f(x - u*t, 0). Now take this exact solution, insert it into the modified PDE, and check whether it is satisfied.
You can also use the Burgers equation as a model, and the kinetic energy equation deduced from it.
Of course it's satisfied! The deviations are negligible, within the precision defined. There's no other reason to be interested in the recursion relations beyond the fact that they provide approximations to the solution of the original equation. And "approximation" means knowing that the terms that are neglected are negligible. And the way to test that is, for instance, to check the conservation laws; the wave equation has them, it's an integrable system, as is the Burgers equation.
It is not satisfied at all! The error is O(dt, h): the initial solution will be rapidly smoothed by the presence of the numerical diffusion, which is a continuous term (a second spatial derivative) with a magnitude that depends on h and dt. A small demonstration is sketched below.
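For concreteness, here is a small demonstration of that smoothing for the linear advection equation u_t + c u_x = 0, using forward Euler in time and first-order upwind in space (the grid, speed, CFL number, and initial profile are placeholders): the scheme is stable at this CFL number, yet the sharp initial pulse is visibly diffused.

```python
import numpy as np

c, L, N = 1.0, 1.0, 100               # advection speed, domain length, grid points
h = L / N
dt = 0.4 * h / c                      # CFL = 0.4: stable, but diffusive
x = np.arange(N) * h
u0 = np.where((x > 0.2) & (x < 0.3), 1.0, 0.0)   # sharp square pulse
u = u0.copy()

n_steps = 125                         # advance to t = 0.5
for _ in range(n_steps):
    # forward Euler in time + first-order upwind in space, periodic boundaries
    u = u - c * dt / h * (u - np.roll(u, 1))

exact = np.roll(u0, int(round(c * n_steps * dt / h)))   # exact solution: a pure shift
print(np.max(exact), np.max(u))       # 1.0 versus a visibly smeared peak well below 1
```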
As far as the Burgers equation is concerned, a first-order explicit FD method will produce no conservation of kinetic energy!
For transient problems you cannot focus only on the higher-order terms you disregard because you judged they can be neglected; you need to analyse the physical relevance of the numerical solution. That is dictated by the modified PDE.
Again, please demonstrate your statements theoretically.
The discussion no longer has anything to do with the question asked.
What matters is that, to O(dt, h), it IS satisfied, which means it is wrong to make any statement that relies on higher-order effects if one is working to that order, since there are many other contributions that have been neglected. The approximation breaks down when the quantities that characterize the true solution no longer show consistent behavior. That's why it's necessary to monitor their behavior; otherwise the numbers are meaningless, they describe the iteration, not the equation from which it came. While it might be interesting to study the iterations themselves, it's necessary to be consistent.
There's no such notion as "numerical diffusion", since it isn't a property of the equation but of the approximation (assuming it is of O(dt, h) too; if it relies on contributions of higher order, the statement is inconsistent, that's all). If anything like that does occur, the scheme just isn't stable. So it shouldn't be surprising that nothing conclusive can be deduced from it. Dealing with such issues is a waste of time that would be better spent eliminating the artifacts from the solution to the original equation.
The error analysis isn't the reason equations are solved numerically; it's usually the other way around. Any numerical scheme is a means to an end, not an end in itself. It doesn't make sense to make statements about the behavior of the numerical approximation rather than about the solution itself!
So pick whatever scheme, it doesn't matter; only the end result does. You'll get there faster with implicit than with explicit Euler, that's all. But getting there is what matters.
The differential equation to be satisfied is, already, known. There are many ways to ensure that the numerical scheme is, indeed, describing the properties of the original equation. Studying the finite-difference scheme in the way discussed in that book is one of them; there are others.
Once more: Doing the error analysis of any numerical scheme is a means to an end, towards obtaining the properties of the solution of the differential equation; not for its own sake.
So focusing on any particular scheme is a question of taste; presenting the properties of the solution obtained with it only matters for judging whether it's useful or not. The values of the steps and spacings aren't relevant as such, since anyone wishing to reproduce the solution is interested in the final result, which doesn't depend on them.
Of course, if the statement is that the code needed so many gigabytes of memory and so many hours of running time, and the same result can be obtained in a few minutes with better memory management, then things get interesting; the question becomes whether to book time on a supercomputer, when a laptop might suffice if the coding is smart enough.
The particular PDE is not stated so the question can only be answered in general terms.
The Forward and Backward Euler schemes have the same limitations on accuracy. However, the Backward scheme is 'implicit', and is therefore a very stable method for most problems. It usually has to be solved iteratively, so it will be computationally more demanding and possibly use more memory. If this is a problem, then the Forward Euler method is preferred; otherwise use the Backward Euler method. A sketch of the two updates for the heat equation is given below.
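A minimal sketch of the two updates for the 1D heat equation u_t = α u_xx with homogeneous Dirichlet boundaries (the value of α, the grid, the time step, and the initial data are placeholders): the forward update is a matrix-vector product per step, the backward update a linear solve per step, and the chosen Δt deliberately exceeds the explicit stability limit.

```python
import numpy as np

alpha, N, L = 1.0, 50, 1.0
h = L / (N + 1)
dt = 2.0 * h**2 / alpha                 # above the forward-Euler limit h^2/(2*alpha)
r = alpha * dt / h**2

# Second-difference matrix with homogeneous Dirichlet boundaries
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))

x = np.linspace(h, L - h, N)
u_fe = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # sharp initial pulse
u_be = u_fe.copy()

for _ in range(50):
    u_fe = u_fe + r * A @ u_fe                       # explicit: matrix-vector product
    u_be = np.linalg.solve(np.eye(N) - r * A, u_be)  # implicit: linear solve each step

print(np.max(np.abs(u_fe)))   # blows up: dt exceeds the explicit stability limit
print(np.max(np.abs(u_be)))   # decays smoothly toward zero
```

In practice the backward-Euler matrix here is tridiagonal, so each solve is cheap, which is part of why the implicit scheme pays off despite the extra work per step.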
Depending on your PDE, the explicit FDM scheme can also be unstable unless the time step is short enough (cf. eq. (3.13) in http://am.ippt.pan.pl/am/article/viewFile/v69p389/pdf).
As Graham W Griffiths mentioned, the implicit FDM scheme is more computationally expensive, but more stable.
Roughly, numerical stability has to do with how well the numerical solution matches the exact solution, i.e. whether the error grows or not. So the Backward Euler method is a stable method when solving a linear equation such as Fourier's equation. However, if the equation being solved is nonlinear, then iterations are required when applying Backward Euler, and these iterations may diverge if the timestep is too large; a small sketch of this is given below.
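As an illustration of that divergence, assuming the backward-Euler equation for du/dt = -u^3 is solved by simple fixed-point (Picard) iteration rather than by Newton's method (the equation and step sizes are placeholders):

```python
def backward_euler_picard(u_old, dt, n_iter):
    """Try to solve the backward-Euler update u = u_old - dt*u**3
    by simple fixed-point (Picard) iteration."""
    u = u_old
    for _ in range(n_iter):
        u = u_old - dt * u**3
    return u

print(backward_euler_picard(1.0, dt=0.1, n_iter=50))  # converges: |3*dt*u**2| < 1
print(backward_euler_picard(1.0, dt=2.0, n_iter=8))   # iterates grow without bound
```

A Newton iteration, or simply a smaller timestep, restores convergence of the inner solve.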
As correctly stated by D. Andrew S. Rees, the numerical stability property has to do with convergence towards a solution as well as with the physical meaning of the numerical solution when it is compared to the solution of the original PDE. Again, the modified PDE clarifies this aspect. For example, considering my previous example of the linear wave equation: if you use forward Euler time integration and a central space formula for the first derivative, the resulting modified PDE will tell you why the scheme is unconditionally unstable: the PDE you are really solving contains an anti-diffusive term that indefinitely steepens the gradients in the initial condition. A quick numerical check is sketched below.
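A quick numerical check of that statement, with an assumed grid, speed, and initial pulse: forward Euler in time with a centred space difference (FTCS) for u_t + c u_x = 0 amplifies the solution even at a modest CFL number, and reducing Δt only delays the blow-up.

```python
import numpy as np

c, N = 1.0, 200
h = 1.0 / N
dt = 0.5 * h / c                       # a modest CFL number of 0.5: still unstable
x = np.arange(N) * h
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # sharp initial pulse, max value 1

for _ in range(200):
    # forward Euler + centred difference (FTCS), periodic boundaries
    u = u - c * dt / (2.0 * h) * (np.roll(u, -1) - np.roll(u, 1))

print(np.max(np.abs(u)))   # many orders of magnitude above the initial amplitude of 1
```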
By "stable" solution we generally mean a solution that is bounded - in the "sense of Lyapunov" for nonlinear systems. However, for solutions of linear systems we are able to give more precise mathematical definitions - a system is stable if "all its eigenvalues lie in the open left-half of the complex plane".
For a scheme to be "convergent" it is required that, as the step size (dx, dt, or both) is reduced, the solution converges to the true solution. It can only do this if it is a stable scheme.
I don't know what is meant by sensitivity! This is not a suitable technical term.
For a general linear equation, both the forward and backward Euler methods are first-order accurate, so each will have the same error-reduction properties as the steplength decreases. FE has a smaller operation count per timestep than BE. For a general nonlinear equation the same is true, except that iterations will need to be performed and therefore BE uses even more operations per timestep. So why should one use BE and other implicit schemes? If a system is stiff and the fast timescales play no role in the required solution, then FE is extremely inefficient because the timestep will need to be extremely small just to get a stable solution. The sketch below illustrates this on a small stiff system.
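A sketch of the stiff case on an assumed 2x2 diagonal system with one fast and one slow decay rate (all values are placeholders): forward Euler needs on the order of a thousand steps just to remain stable, while backward Euler resolves the slow mode with ten.

```python
import numpy as np

# Stiff linear system du/dt = A u with a fast (-1000) and a slow (-1) eigenvalue.
A = np.diag([-1000.0, -1.0])
u0 = np.array([1.0, 1.0])
T = 1.0

# Forward Euler: stability requires dt < 2/1000, so at least ~500 steps for T = 1.
dt_fe = 1.0e-3
u = u0.copy()
for _ in range(int(T / dt_fe)):
    u = u + dt_fe * A @ u
print(u)                               # stable, close to [exp(-1000), exp(-1)]

# Backward Euler: dt chosen by the slow timescale only, 10 steps suffice.
dt_be = 0.1
u = u0.copy()
I = np.eye(2)
for _ in range(int(T / dt_be)):
    u = np.linalg.solve(I - dt_be * A, u)
print(u, np.exp(np.diag(A) * T))       # slow mode captured to first-order accuracy
```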