Suppose that A is a matrix which is marginally stable and K stabilizes A in the sense that A+K is asymptotically stable. Is it then true that A+εK must also be asymptotically stable for any arbitrarily small ε > 0? And how can this be proved?
I have an extra comment after thinking about it: the above response of mine needs some extension. It proves stability for the parameter ε in the interval (1 - δ1, 1], but it does not ensure it for ε in (0, 1 - δ1].
On the other hand, by writing
A + ε*K = A + K - δ*K, with δ = 1 - ε,
and since A+K is nonsingular (all its eigenvalues are stable), we can also write
A + ε*K = (A+K)*(I - δ*(A+K)^(-1)*K),
so that A + ε*K is nonsingular, and stable, if
δ = 1 - ε < 1/||(A+K)^(-1)*K|| (from the Banach perturbation lemma),
or, equivalently, if ε in (max(0, 1 - 1/||(A+K)^(-1)*K||), 1].
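If it helps, here is a quick numerical check of this bound in Python/NumPy; the particular A and K below are illustrative assumptions, not matrices from the discussion.

```python
import numpy as np

# Illustrative example (assumed): A marginally stable, K stabilizing.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # eigenvalues +/- i (marginally stable)
K = -np.eye(2)                # A + K has eigenvalues -1 +/- i (Hurwitz)

M = np.linalg.inv(A + K) @ K
bound = 1.0 / np.linalg.norm(M, 2)   # Banach condition: delta = 1 - eps < 1/||(A+K)^(-1)*K||
eps_lower = max(0.0, 1.0 - bound)    # guaranteed range: eps in (eps_lower, 1]
print("guaranteed stable for eps in (%.4f, 1]" % eps_lower)

# Compare against the actual spectral abscissa of A + eps*K
for eps in np.linspace(0.05, 1.0, 20):
    alpha = np.linalg.eigvals(A + eps * K).real.max()
    print("eps = %.2f  max Re(lambda) = %+.4f" % (eps, alpha))
```

In this particular example the bound already gives eps_lower = 0, so the whole interval (0, 1] is covered; in general the guaranteed interval can be strictly smaller than what the eigenvalue sweep shows, which is exactly the conservatism discussed further down.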
Dear Manuel De la Sen - Nice calculations! I am reminded of the calculations in the book - Introduction to Matrix Computations by G.W. Stewart.
My textbook - Numerical Linear Algebra (reprint to be released at the end of August) - also deals with such problems in the chapter on "Vector and Matrix Norms".
As you have given detailed calculations, I don't need to add anything!
Dear Ning: Prof. De la Sen's proof is correct! The eigenvalues of a matrix are the roots of the characteristic polynomial, which is given by a determinant. We all know that the determinant function is continuous, so his arguments are valid. "Linearity" is not in the picture at all!
A = diag(0, 0, ..., 0) [n X n zero matrix] - This is critically stable.
Take
K = diag(-1, -1, ..., -1) = - I.
Then A + K = -I, which is stable (or Hurwitz).
Now, what is A + eps K? A + eps K = -eps I and it is very clear that this matrix is stable for all values of eps > 0, and in particular for 0 < eps < 1.
Likewise, visualize this for any diagonal critically stable matrix. Work out a proof yourself. Then try general cases. Please use MATLAB - calculate eigenvalues for the perturbed matrix. Do numerical simulations, get some intuition and then work out a mathematical proof.
Professor has given a very nice proof for different cases. Study both.
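Following the MATLAB suggestion above, here is an equivalent eigenvalue sweep in Python/NumPy; the non-diagonal pair A2, K2 is a hypothetical example added purely for illustration.

```python
import numpy as np

def spectral_abscissa(M):
    """Largest real part among the eigenvalues of M."""
    return np.linalg.eigvals(M).real.max()

n = 3
A = np.zeros((n, n))   # the critically stable example from the post
K = -np.eye(n)         # A + K = -I is Hurwitz

for eps in [1e-3, 1e-2, 0.1, 0.5, 1.0]:
    print("diagonal case: eps = %5.3f -> max Re(lambda) = %+.4f"
          % (eps, spectral_abscissa(A + eps * K)))

# A non-diagonal (hypothetical) case: an undamped oscillator plus damping feedback.
A2 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])   # eigenvalues +/- i (critically stable)
K2 = np.array([[0.0, 0.0],
               [0.0, -1.0]])   # A2 + eps*K2 has characteristic polynomial s^2 + eps*s + 1
for eps in [1e-3, 1e-2, 0.1, 0.5, 1.0]:
    print("general case:  eps = %5.3f -> max Re(lambda) = %+.4f"
          % (eps, spectral_abscissa(A2 + eps * K2)))
```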
I tend to agree with the second proof by Prof. De la Sen. But the result is conservative in that it excludes very small ε. Indeed, the actual problem I am seeking to solve is to prove that A can be stabilized by arbitrarily small ε (to avoid any possible actuator saturation). This problem is not as simple as it looks.
I myself have a proof.
Suppose that there is a common positive definite matrix P which yields a Lyapunov function for both dynamic systems x' = Ax and x' = (A+K)x.
Then
A^T*P + P*A = -Q1
(A+K)^T*P + P*(A+K) = -Q2
where Q1 is positive semi-definite and Q2 is positive definite.
So K^T*P + P*K = Q1 - Q2, which is negative definite.
So (A+εK)^T*P + P*(A+εK) = -Q1 + ε*(Q1 - Q2) = -(1-ε)*Q1 - ε*Q2, which must be negative definite for 0 < ε <= 1!
The only limitation of my proof is that a common P is required... Actually, I think this limitation can be removed.
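A quick numerical sanity check of this argument (Python/NumPy) on the diagonal example mentioned earlier in the thread; the choice P = I is an assumption that happens to provide a common Lyapunov matrix for that example.

```python
import numpy as np

# Diagonal illustrative example: A marginally stable, K = -I, common P = I (assumed).
A = np.diag([0.0, -1.0, -2.0])   # one eigenvalue at 0, the rest stable
K = -np.eye(3)                   # A + K is Hurwitz
P = np.eye(3)                    # works as a Lyapunov matrix for both A and A + K here

Q1 = -(A.T @ P + P @ A)               # positive semidefinite
Q2 = -((A + K).T @ P + P @ (A + K))   # positive definite
print("eig(Q1) =", np.linalg.eigvalsh(Q1))
print("eig(Q2) =", np.linalg.eigvalsh(Q2))

for eps in [1e-3, 1e-2, 0.1, 0.5, 1.0]:
    S = (A + eps * K).T @ P + P @ (A + eps * K)   # equals -(1-eps)*Q1 - eps*Q2
    print("eps = %5.3f  max eig of S = %+.4e  (negative => A + eps*K stable)"
          % (eps, np.linalg.eigvalsh(S).max()))
```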
Write C(ε) = (A+εK)^T*(A+εK) = A^T*A + ε*(K^T*A + A^T*K) + (ε^2)*K^T*K. Then:
C(1) > 0 (symmetric, positive definite): otherwise A+K would be singular, and then it could not be a stability matrix.
C(ε) > 0 for ε > 0 since A^T*A >= 0, (ε^2)*K^T*K > 0 and ε*(K^T*A + A^T*K) >= 0 (the sum of a positive definite matrix with two positive semidefinite ones).
The eigenvalues of C(ε) are real and nonnegative (since C(ε) is symmetric and at least positive semidefinite) and are strictly increasing as ε increases, since
dC(ε)/dε = 2*ε*K^T*K + K^T*A + A^T*K > 0 for ε nonnegative,
so its eigenvalues are real, nonnegative and grow with ε.
It can also be considered to relax (K^T*A + A^T*K) positive definite to positive semidefinite if, in addition, K is nonsingular, since then K^T*K is positive definite and C(ε) > 0 for ε > 0.
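For completeness, a small Python/NumPy check of these claims; the matrices below are illustrative assumptions chosen so that the hypothesis K^T*A + A^T*K >= 0 holds.

```python
import numpy as np

# Illustrative (assumed) matrices satisfying K^T A + A^T K >= 0.
A = np.diag([0.0, -1.0, -2.0])
K = -np.eye(3)

def C(eps):
    """C(eps) = (A + eps*K)^T (A + eps*K), as defined above."""
    M = A + eps * K
    return M.T @ M

print("eig(K^T A + A^T K) =", np.linalg.eigvalsh(K.T @ A + A.T @ K))   # nonnegative

for eps in [0.0, 0.25, 0.5, 0.75, 1.0]:
    w = np.linalg.eigvalsh(C(eps))   # real, nonnegative; smallest is positive for eps > 0
    print("eps = %.2f  eig(C(eps)) =" % eps, np.round(w, 4))
```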