While for linear time-invariant systems one can use transfer functions and pole locations to decide the stability of the system, this tool is no longer available when one deals with a nonstationary or nonlinear system.
The traditional way was to simply solve the differential equation of your system and check whether the solutions converge or diverge. However, because in such systems different initial conditions or different input commands may lead to totally opposite results, one would not only have to know the solution of the nonlinear differential equation, but would also have to solve it for all possible initial conditions and input commands.
The Lyapunov stability approach eliminates this need. Instead, it suggests associating an energy-like function (a “Lyapunov function”) with your system, that is, a function which increases as the norm of the state vector increases and decreases as the norm decreases. To decide whether the function decreases, one computes the time derivative of the Lyapunov function “along the trajectories” of the system to be analyzed.
As a starting example, consider the equation xdot = dx/dt = -x^3, for which it is easy to see that the system is stable. Here we can choose the simple positive definite Lyapunov function V(x) = x^2. The Lyapunov derivative is Vdot = dV/dt = (∂V/∂x)(dx/dt) = (∂V/∂x)·xdot.
The meaning of “along the trajectories” is that this is not a general derivative of the function x^2; rather, because we know the differential equation, we can substitute xdot without having to solve the equation. The result is Vdot = dV/dt = 2x·(-x^3) = -2x^4.
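This substitution can also be checked mechanically with a computer algebra system. Below is a minimal sketch assuming SymPy is available (my choice of tool, not something from the discussion above):

```python
# Minimal sketch: compute Vdot "along the trajectories" by substituting the
# right-hand side of the differential equation for xdot, instead of solving it.
import sympy as sp

x = sp.symbols('x', real=True)

f = -x**3        # right-hand side of xdot = -x^3
V = x**2         # positive definite Lyapunov candidate

# Chain rule: Vdot = (dV/dx) * xdot, with xdot replaced by f(x)
Vdot = sp.diff(V, x) * f

print(sp.simplify(Vdot))   # -2*x**4, negative definite
```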
Here the Lyapunov function is positive definite (i.e., it is positive for any nonzero x and zero only at x = 0), and the Lyapunov derivative is negative definite (i.e., it is negative for any nonzero x and zero only at x = 0).
In other words, the Lyapunov derivative tells us that, as long as x is not zero, the Lyapunov function can only decrease. However, because the Lyapunov function is bounded from below (it cannot be less than zero), it must eventually stop decreasing, so its derivative must tend to zero, and in this example that can only happen as x itself tends to zero.
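The same behaviour can be seen numerically. Here is a small simulation sketch (plain forward Euler, with a step size and initial conditions picked only for illustration) showing that V(x) = x^2 shrinks monotonically toward zero along the trajectories:

```python
# Forward-Euler simulation of xdot = -x^3: the Lyapunov function V(x) = x^2
# never increases along the trajectory and the state creeps toward x = 0.
def simulate(x0, dt=0.01, steps=2000):
    x = x0
    history = []
    for _ in range(steps):
        history.append((x, x**2))    # (state, Lyapunov function)
        x = x + dt * (-x**3)         # one Euler step of xdot = -x^3
    return history

for x0 in (2.0, -1.0, 0.5):          # a few different initial conditions
    traj = simulate(x0)
    values = [v for _, v in traj]
    assert all(b <= a for a, b in zip(values, values[1:]))   # V is non-increasing
    print(f"x0 = {x0:+.1f}: V goes from {values[0]:.3f} to {values[-1]:.5f}")
```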
Although finding Lyapunov functions for more complex systems is an art, some standard practices help (one is sketched below).
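One such common practice, for linear (or locally linearized) dynamics xdot = A·x, is the quadratic candidate V(x) = x^T·P·x, where P solves the Lyapunov equation A^T·P + P·A = -Q for a chosen positive definite Q. A minimal sketch, assuming SciPy is available; the matrices A and Q below are illustrative choices only:

```python
# Sketch: quadratic Lyapunov candidate V(x) = x^T P x for a linear system
# xdot = A x.  A and Q are illustrative choices, not from the discussion above.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a stable example matrix (eigenvalues -1 and -2)
Q = np.eye(2)                  # any symmetric positive definite choice

# solve_continuous_lyapunov(a, q) solves a*X + X*a^T = q (real case),
# so passing A^T and -Q gives  A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print(P)
print(np.linalg.eigvalsh(P))   # all eigenvalues positive, so P > 0 and
                               # V(x) = x^T P x with Vdot = -x^T Q x works
```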
There are many good books on the topic. One which I find most readable is
J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, New Jersey, 1991.
A very important problem concerning the stability of solutions of a differential equation is the stability of solutions near a point of equilibrium. If every solution that starts out near the equilibrium point p remains near p for all time (formally: for every ε > 0 there is a δ > 0 such that ||x(0) − p|| < δ implies ||x(t) − p|| < ε for all t ≥ 0), then p is said to be Lyapunov stable. This already tells us an important property of the solutions.
When a Lyapunov function exists, this is a generalization of the minimum-energy principle for studying stability near equilibrium, even for nonlinear ODEs.
Typically it helps in understanding the qualitative behaviour without requiring all the quantitative information about the solutions.
It seems to me that Lyapunov functions are helpful in theory, but they are difficult to obtain (when they exist) in practical cases.
Lyapunov stability of an equilibrium point is the most fundamental concept concerning the stability of solutions of differential equations. Based on this idea, other variants of stability can be derived, for instance the 'swarm stability' that I am concerned with.