The two concepts are closely related but not completely identical. We may take a deep look at them from two perspectives.
1. The first is from the perspective of pure physics. Passivity can be considered universal in mechanical systems, e.g., robot manipulators and satellites. Specifically, a satellite in orbit, without active control, is stable in terms of both its position and velocity; moreover, the velocity of the satellite would eventually decay, since the system's mechanical energy (kinetic plus potential) gradually decreases due to atmospheric drag forces (whose role is similar to that of damping control). This phenomenon is due to the fact that a satellite in free motion is (output strictly) passive with respect to its output (i.e., velocity). Another common example is a bicycle, which would eventually come to rest on a horizontal road if we do not inject energy with our legs. Passivity may also be thought of as the tendency of the free system to stabilize itself.
2. The second perspective is based on the passivity formula. Passivity is typically defined in terms of the input (u) and output (y) of a system, namely
∫_0^t yᵀ(τ) u(τ) dτ ≥ E(t) − E(0)
where E(t) is the system storage. The interpretation is that the energy injected by the external input is not less than the variation of the system storage. "Not less than" here implies that the injected energy is partitioned into two parts: 1) the variation of the system storage and 2) a nonnegative remainder. This intuitively means that some energy is dissipated (by the system) and only part of the injected energy is converted into system storage.
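For intuition, the inequality can be checked numerically on a toy example (my own choice, not from the discussion above): the first-order system xdot = -x + u with y = x and storage E(x) = x^2/2, which is output strictly passive. A minimal pure-Python sketch:

```python
# Numerical check of the passivity inequality for a hypothetical
# first-order example: xdot = -x + u, y = x, storage E(x) = x^2/2.
# Along any input, the supplied energy int_0^t y*u dtau should
# dominate the storage increment E(t) - E(0).
import math

dt, T = 1e-4, 5.0
x = 0.5                      # nonzero initial condition
E0 = 0.5 * x * x
supplied = 0.0
t = 0.0
while t < T:
    u = math.sin(3.0 * t)    # arbitrary test input
    y = x
    supplied += y * u * dt   # accumulate int_0^t y u dtau
    x += (-x + u) * dt       # explicit Euler step of xdot = -x + u
    t += dt

stored = 0.5 * x * x - E0    # E(t) - E(0)
print(supplied >= stored)    # supplied energy >= storage increment
```

Here the gap between the two sides is exactly the dissipated energy, the integral of x^2, which is nonnegative.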
The passivity formula, however, does not by itself imply stability of the system under the external input (u), but it does imply that the free system (i.e., without the external input) is stable. The stability of the free system, however, has only a weak connection with the typical equilibrium-based Lyapunov stability, since the system storage is not necessarily defined with respect to a particular equilibrium of the system.
If the external input is allowed to be designed, the connection between passivity and equilibrium-based Lyapunov stability can be explicitly established (in part). For instance, with a negative output feedback as the external input, output regulation (to the zero equilibrium) can typically be achieved.
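As a minimal sketch of this point (again a hypothetical toy example): the integrator xdot = u, y = x is passive (lossless, with storage x^2/2) but only marginally stable on its own, and the negative output feedback u = -k*y regulates the output to its zero equilibrium:

```python
# Negative output feedback applied to a passive (lossless) system:
# the integrator xdot = u, y = x, with the assumed feedback u = -k*y.
dt, T, k = 1e-3, 10.0, 2.0
x = 1.0                     # initial output away from equilibrium
t = 0.0
while t < T:
    u = -k * x              # negative output feedback
    x += u * dt             # Euler step of xdot = u
    t += dt
print(abs(x) < 1e-6)        # output regulated close to zero
```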
In summary, we might say that passivity often implies (equilibrium-based Lyapunov) stability, but it also tells many other significant things concerning the physics/dynamics of the system.
I hope the above points would complement the existing answers and be of some help.
Passivity implies stability but the converse is not true. See for instance our work: R. F. Ngwompo, R. Galindo, Passivity analysis of linear physical systems with internal energy sources modelled by bond graphs, Proc. of the Inst. of Mech. Eng. Part I: J. of Systems and Control Engineering, DOI: 10.1177/0959651816682144
You have received definitions of passive systems, and they are correct.
To make the relation to stability more rigorous and formal, assume that the system is Linear Time-Invariant (LTI):
xdot=Ax,
It can be shown that it is stable iff for any Positive Definite Symmetric (PDS) matrix Q there exists a PDS matrix P which satisfies the Lyapunov equation
PA+A'P=-Q
Now, it was shown that a passive LTI system
xdot=Ax+Bu
y=Cx
satisfies more than a stability relation. In other words, it satisfies the pair of equations:
PA+A'P=-Q
PB=C'
It can be shown that such a system is not only stable, but also minimum-phase, and moreover that it remains stable for any output feedback gain, even an arbitrarily large one.
For a SISO system, it implies that it is of relative degree 1, which in turn implies that it has n poles and n-1 zeros.
That's why passivity implies stability, yet not vice versa.
(These results can be extended to Linear Time-Varying and to Nonlinear systems.)
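For a quick sanity check of the pair of equations above, here is a scalar toy example (my own choice of numbers): for xdot = -x + u, y = x, the matrices reduce to A = -1, B = 1, C = 1, and the choice P = 1, Q = 2 satisfies both conditions at once:

```python
# Scalar illustration of the Lyapunov + passivity equations for an
# assumed toy system xdot = -x + u, y = x (A = -1, B = 1, C = 1).
A, B, C = -1.0, 1.0, 1.0
P, Q = 1.0, 2.0                   # candidate solution

lyap_ok = (P * A + A * P == -Q)   # PA + A'P = -Q
pass_ok = (P * B == C)            # PB = C'
print(lyap_ok and pass_ok, P > 0) # both hold with P positive definite
```

In the scalar case P > 0 plays the role of positive definiteness; for matrices one would check the eigenvalues of P instead.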
If we talk about the transfer function of an LTI system,
G(s) = B(s)/A(s) is the ratio of two polynomials in s, A(s) and B(s), and it has poles and zeros. The poles are the roots of the denominator polynomial A(s); the zeros are the roots of the numerator B(s).
It can be shown that, if all poles are located in the left half-plane (LHP), such as p=-1 or p=-2+3j or p=-2-3j, then the system is stable. Otherwise, it is unstable.
If the zeros are in the LHP, then the system is called minimum-phase. Otherwise, it is non-minimum-phase.
(If you plot the amplitude and phase of the transfer function, it passes smoothly through the values of the zeros. Otherwise it has "jumps" which make it nonminimum-phase.)
(Different professions may use different names. My main background is Control Systems. What is yours? Because the phase also jumps at the pole locations, physicists call minimum-phase only systems with all poles and zeros in the LHP. For Control, poles define stability and zeros define minimum-phasedness.)
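To make the pole/zero test concrete, here is a small sketch with an assumed example transfer function G(s) = (s - 2)/(s^2 + 3s + 2): the poles at -1 and -2 are in the LHP (stable), but the zero at +2 is in the RHP (nonminimum-phase):

```python
# Pole/zero stability and minimum-phase check for a hypothetical
# transfer function G(s) = (s - 2)/(s^2 + 3s + 2).
import cmath

def quad_roots(a, b, c):
    """Roots of a*s^2 + b*s + c via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

poles = quad_roots(1, 3, 2)        # denominator roots: -1 and -2
zero = 2.0                         # numerator s - 2 has root +2

stable = all(p.real < 0 for p in poles)   # all poles in the LHP?
minimum_phase = zero < 0                  # the single zero in the LHP?
print(stable, minimum_phase)       # stable, but nonminimum-phase
```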
In the state-space representation
xdot=Ax+Bu
y=Cx
(where A and B here are just other standard notations, not to be confused with A(s) and B(s) of the transfer function) poles are the eigenvalues of the "system matrix" A.
Zeros are more difficult to compute, yet can be computed after some algebra with A,B,C. The name zero comes from the fact that one can show that sinusoidal inputs at the frequency of "zeros" are not transmitted to the output.
A stable system can be minimum-phase (MP) or nonminimum-phase (NMP). If you do a step response of an MP system, the output not only does not diverge, but immediately starts going up towards the final value. If you do the same with an NMP system, you may see that the output first goes down before going up, yet it does not diverge.
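The undershoot can be seen numerically. Below is a hedged sketch with an assumed NMP example, G(s) = (1 - s)/(s^2 + 3s + 2) (stable poles at -1 and -2, RHP zero at +1), simulated in controllable canonical state-space form with a unit step input:

```python
# Step response of an assumed NMP system G(s) = (1-s)/(s^2+3s+2)
# in controllable canonical form: A = [[0,1],[-2,-3]], B = [0,1]',
# C = [1,-1]; the output dips below zero before settling at G(0)=1/2.
dt, T = 1e-4, 8.0
x1 = x2 = 0.0                 # state vector components
t, u = 0.0, 1.0               # unit step input
y_min = 0.0
while t < T:
    y = x1 - x2               # y = C x with C = [1, -1]
    y_min = min(y_min, y)
    dx1 = x2
    dx2 = -2.0 * x1 - 3.0 * x2 + u
    x1 += dx1 * dt            # Euler step of xdot = A x + B u
    x2 += dx2 * dt
    t += dt

y_final = x1 - x2
print(y_min < 0.0)                # undershoot: output first goes down
print(abs(y_final - 0.5) < 1e-2)  # then settles near G(0) = 1/2
```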
Dear Dr. Itzhak Barkana, I very much appreciate your help, many thanks.
My background is power electronics engineering. Due to the nonminimum-phase character of the boost converter, direct regulation of the output voltage based on the passivity-based approach cannot be obtained. To solve this problem there is an indirect method: regulate the inductor current first, which indirectly regulates the output voltage. For this reason I am asking about nonminimum-phase behavior. So the new question is: why is it difficult to stabilize a nonminimum-phase system directly?
(First, although I may play Prof or Dr., I am an Engineer and, except for strictly official occasions, I am just Itzhak for all, no need for Prof., Dr., Sir or Mr. :-)
I have been mostly dealing with dynamical systems, such as robots, planes, etc., and not much with power systems. Every new problem may have its specifics, and I cannot just guess where the NMP issue plays a role in your particular case.
When we want to control a system, we close some loop and expect the closed-loop system to be stable. An NMP system has negative coefficients in the numerator, and this may lead to feedback in the wrong direction.
But now, assume that you just have a given system and only know that its main gain may vary with the load between, say, 1 and 10. You test the system and make sure that it remains stable for all fixed K values between 1 and 10.
Let me add that this is one of the best questions you can ask yourself to check whether you understand passivity from first principles. As has been mentioned, one of the LMI conditions for passivity implies stability. However, if we think about the fundamental definition of passivity, the lower-boundedness of the integral of the product of input and output, I would recommend thinking about an unstable first-order system.
My example would be as follows. Let us consider the system G = 1/(-s+1). If we just check frequency conditions, it looks like a passive system, as the phase is between -90 and 90 degrees. However, take the input u(t) = 1 between 0 and T. If I am not being silly, the output of the system is y(t) = 1 - exp(t). So the integral of u(t)y(t) between 0 and T is T + 1 - exp(T). Evidently, it is not possible to produce a lower bound on this integral as T grows, therefore the system is not passive.
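The same computation can be verified numerically (a simple Euler run of the unstable system ydot = y - u implied by G(s) = 1/(1-s), with u = 1):

```python
# Numerical check that int_0^T y*u dt = T + 1 - exp(T) for the
# unstable system ydot = y - u (G(s) = 1/(1-s)) with u = 1, y(0) = 0.
import math

def supply(T, dt=1e-4):
    """Approximate int_0^T y(t) u(t) dt for the system above."""
    y, s, t = 0.0, 0.0, 0.0
    while t < T:
        s += y * 1.0 * dt        # accumulate y*u with u = 1
        y += (y - 1.0) * dt      # Euler step of ydot = y - u
        t += dt
    return s

for T in (1.0, 5.0, 10.0):
    exact = T + 1.0 - math.exp(T)
    assert abs(supply(T) - exact) / max(1.0, abs(exact)) < 1e-2

print(supply(10.0) < -1e4)       # unbounded below: not passive
```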
Yes, your example is a good illustration for LTI systems through the transfer function. That's why LTI passive systems are also called Positive Real, which refers to the property of their transfer function that you just showed.
Stability is a less demanding property and only requires that the poles of the transfer function be located in the left half-plane (LHP).
If we recall that the transfer function is the Laplace transform of the impulse response, then this can be seen from the simple example 1/(s+1), which is the transform of exp(-t), while 1/(s-1) is the transform of exp(t). The first exponential converges (i.e., is stable) while the second diverges.
A second-order TF 1/(s^2+as+b) would result in a convergent sinusoid if the roots have negative real parts (are located in the LHP) or a divergent sinusoid if the roots have positive real parts (are located in the RHP).
Because any TF can be separated into terms with first- and second-order polynomials in the denominator, roots in the LHP imply stability.
If the stable system also has zeros in RHP, it is called nonminimum-phase, because of the effect these zeros have on the phase, yet it remains stable.
Passivity is a strong property. It can be shown that a passive system is stable and minimum-phase, yet even these two properties together are not enough for passivity.
In the state space representation
xdot=Ax+Bu
y=Cx,
the system is stable if for any positive definite matrix Q there exists a positive definite matrix P that satisfies the Lyapunov stability equation
PA+A'P=-Q
In a passive system, there must be such P and Q that simultaneously satisfy the two equations
PA+A'P=-Q
PB=C'
These kinds of relations can also be extended to time-varying and nonlinear systems.
Regarding the definition of passivity (a passive system cannot store more energy than is supplied to it from the outside, with the difference being the dissipated energy): for a simple RLC circuit connected across a voltage source as attached, in which case can this system be considered non-passive, i.e., the stored energy becoming more than the supplied energy?
The example of Mustafa is always passive and stable. All the elements are passive and the input/output pair is a power pair (see the work of Beaman or Li and Ngwompo). Also, the relative degree is one, and so all the above conditions of passivity are satisfied. Hence, the only possibility for this system to be non-passive is for it to have relative degree two with respect to a non-physical input such as the derivative of the voltage with respect to time.
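For Mustafa's question, a numerical energy balance may help. The sketch below simulates a series RLC circuit driven by a voltage source (assumed values R = L = C = 1): the supplied energy always covers the stored energy, with the gap being the nonnegative resistive dissipation, so the circuit never becomes non-passive:

```python
# Energy balance of a series RLC circuit driven by a voltage source:
# L*di/dt = vs - R*i - vc, C*dvc/dt = i (assumed R = L = C = 1).
# Supplied energy = int vs*i dt; stored = 0.5*L*i^2 + 0.5*C*vc^2.
import math

R, L, C = 1.0, 1.0, 1.0
dt, T = 1e-4, 10.0
i, vc = 0.0, 0.0                 # inductor current, capacitor voltage
supplied = dissipated = 0.0
t = 0.0
while t < T:
    vs = math.cos(2.0 * t)       # arbitrary source waveform
    supplied += vs * i * dt      # energy delivered by the source
    dissipated += R * i * i * dt # energy burned in the resistor
    di = (vs - R * i - vc) / L
    dvc = i / C
    i += di * dt                 # Euler step of the circuit equations
    vc += dvc * dt
    t += dt

stored = 0.5 * L * i * i + 0.5 * C * vc * vc
print(supplied >= stored)        # supplied always covers the storage
print(dissipated >= 0.0)         # the gap is nonnegative dissipation
```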
The interconnection of passive systems is more interesting. Some interconnections are covered in the book of B. Brogliato, R. Lozano, B. Maschke and O. Egeland, Dissipative Systems Analysis and Control, Springer, 2007. It is proved that such interconnections are passive if the subsystems are passive. However, this is not true for all interconnections. In the cascade interconnection of two R-L systems with no loading effect, the problem is that the continuity of power is lost in this connection, i.e., the first subsystem is not delivering power to the second subsystem. Due to the assumption of no loading effect, the first subsystem provides a signal (carrying no power) that modulates a source (or actuator) which scales the power and delivers it to the second subsystem. So, what is wrong in the classical analysis, since, as you stated, the relative degree does not change?
I just made a few simple computations, only for some better intuition, and I don't think that anything was wrong. Still, there was no intention to provide any sort of General Solution for the General Problem, and I don't know what sort of general connections they may mean. One must look there.
Passivity has bothered me for a very long time, yet my interest is in the context of Control systems, where nothing can be more active than the system itself. Still, under some conditions, Control systems can show passivity properties, and these are very important in the context of system stability, in particular with nonstationary control parameters (such as adaptive control, etc.). Because Control systems are not "naturally" passive, the next steps required using the stabilizability property in order to force the system to satisfy passivity (or, as I called them, "almost passivity") properties, which can then guarantee stability with adaptive controllers.
However, I think this is beyond the scope of the question here.
Again, for illustration, I took two simple networks and added them in series or parallel and saw that the relative degree remains 1.
I guess that the book may mean more complex situations.
As I said, passivity is an important property in active systems, such as Control systems, yet it is not naturally satisfied and so, it gave me (and many others) enough stuff to do for quite a few good years.
Relative degree 0 or 1 is a necessary condition, but it is not sufficient. An unstable system of relative degree 1 is not passive. Passivity tests in the frequency and state-space approaches are given in the work of P. A. Ioannou, Necessary and sufficient conditions for strictly positive real matrices.
Of course. However, we should not mix topics: the discussion here was on RLC networks, where each one was supposedly passive and only their combination was in question.
Actually, as I wrote, unstable active systems have been my main issue ever since 1980. RG tells me that you might have shown some interest in my paper "Adaptive Control? But it is so simple!" This could be a good introductory review.
In my works, you can see how we ended up managing to force systems that could be both unstable and non-minimum-phase to satisfy passivity conditions and thus to guarantee stability with nonlinear (in particular adaptive) controllers.
As you seem to be interested in this topic, can I ask what you think the passivity theorem provides for linear systems? Since the passivity of S implies that the H-inf norm of (S-1)(S+1)^{-1} is at most one, at the risk of being controversial: what do we lose if we bin the passivity theorem? Coming from RLC systems, it seems natural to define passivity, but do we keep it just for tradition, or because it provides some advantage over the Small Gain Theorem (SGT)? Do you know of any example which can be proved with the passivity theorem but where the SGT fails to ensure stability via loop transformation? @Mustafa, sorry for hijacking your question!
When you ask "can I ask you?" I can only guess that you mean me. :-)
I am not sure that I get "what do we lose if we bin the passivity theorem?" Maybe there is a typo in "bin" here.
In any case, you are right that passivity comes from the realizability of RLC systems, yet when we talk about linear time-invariant (LTI) systems, I don't think we can add much.
It all started when people wanted to, or just had to, deal with nonstationary parameters. A linear system can be stable for any fixed gain K within some admissible range, yet this alone does not guarantee stability when the gain varies within that range.
(I meant what if we remove the Passivity theorem from our core knowledge?)
Agreed. One may think that for LTI systems the SGT is enough. However, there is quite a rich literature on Positive Real systems, Strictly Positive Real systems, Strong Strictly Positive Real systems, Weak Strictly Positive Real systems, and so on... Why do we have all these definitions for LTI systems when you can just do a loop transformation and the job is done?
As a nonlinear guy, I would use the IQC theorem to study your problem. In IQC, my condition on the linear system is quite clear, and I don't use any of the above definitions. Evidently, I can't cope with poles on the imaginary axis, whereas the passivity theorem can. However, the price to pay for developing absolute stability theory with the passivity theorem (or Lyapunov) is that something as simple as the Popov criterion gets seriously affected. Most books on nonlinear control state that the Popov multiplier (1+\lambda s) requires $\lambda>0$, whereas the IQC theorem leads to a Popov criterion with any lambda. This fact was well known in the Russian literature in the 60's, but our modern books still state $\lambda>0$ :-(
With all due respect (this time to myself :-) I am not in charge with everything people may say and/or write :-)
Nevertheless, if you spend some time understanding Passivity, you will get better understanding of systems properties and stability, in particular nonstationary and nonlinear.
This is based on my own experience. At some point in time, I even wanted to "throw away all that nonsense," because I "almost" had a proof that the linear systems remain stable with variable gains within the "admissible domain."
I wrote "almost," because it ended being one of my best and happiest mistakes ever. Yes, sometimes you must be lucky to err. It led me to learn everything I know about passivity.
Yes, Popov was the first to show when a system does maintain stability with variable gains, yet when you really understand Popov’s Criterion, it is just a special way to show that the plant is… Yes, Yes, Positive Real, i.e., Passive.
Now, this is not to say that I am looking around for Passive systems, which, except for classroom examples, might be found in a better world, not in ours.
I mean that I was forced to learn how to find ways to force the real-world plants, even unstable and non-minimum phase, to satisfy passivity conditions.
>As a nonlinear guy, I would use the IQC theorem to study your problem.
Sorry, but if you really read at least the latest publication on Adaptive Control, it is way ahead of IQC.
BTW, I would not want my starting joke to sound as a criticism for other people works. When you need a solution and find a solution, you take the solution and use it. If you later find another solution, maybe better or simpler, then use it now, if it is not too late. Still, people were happy to solve their problem when they needed the solution and then, other people, or maybe same people, went on, so now you can see them all and can choose between various concepts, definitions or solutions.
As about saturation, you'd better know how much you can do and try to avoid saturation.
;-) I don't think that it is a criticism at all. There are several solutions in control. The main issue with non-Lyapunov techniques, such as IQC, is that although they provide quite good analysis, it is difficult to use them for the synthesis of controllers. Some progress in design is being made, but there is still a lot of work to be done.
My question was more about the stability analysis of LTI systems. If it is good to develop new bits of theory, I think it is also good to analyse what you gain from these developments. Sometimes it is a theoretical advantage; on other occasions, it is just better understanding. As a graduate in Physics, I keep an inherited interest in unifications ;-)
Integral Quadratic Constraints (IQC) is a stability criterion in the frequency domain for nonlinear systems. It is able to translate a nonlinear stability problem into a convex search, i.e., an LMI. I would suggest Jonsson's lecture notes as the best possible starting point:
The main advantage over Lyapunov methods is that they provide robustness analysis for free. Another important advantage is the ability to combine several different nonlinearities. However, it is difficult to generate synthesis methods when multipliers are fully exploited.
The concept of IQC is very related to Passivity, the Small Gain Theorem, and input-output dissipativity. It is normally stated that IQC is a unification of all these theorems, with their different versions using multipliers and loop transformations. In general, this is true, but there are some subtle details to be considered, as usual!! ;-)