There are negative feedback arrangements (control systems) where the loop gain phase crosses the -180deg line twice (in different directions, of course) while the loop gain magnitude is still greater than 0 dB. At the point where the magnitude is "1", the phase is again below the critical value of -180deg.
According to the Nyquist theorem, closing the loop will lead to a stable system.
Question: Is there any intuitive explanation as to why the signal at those frequency points, where the gain is higher than 0 dB and the phase is -180 degrees, does not keep adding to itself and lead to instability?
The Nyquist plot infers closed-loop properties of a unity-gain feedback system from the open-loop response, plotting the real versus the imaginary parts of GH. The number of clockwise encirclements = (number of closed-loop poles in the right half plane) - (number of open-loop poles in the RHP).
The open-loop plot circling the (-1,0) point once or twice does not necessarily mean that the closed loop will be unstable. The Nyquist criterion captures the intuition that the compensator dynamically neutralizes all destabilizing sign flips and gain increases in the critical region.
The explanation for any particular plant depends upon its relative degree and the number and positions of its open-loop poles and zeros. You can have all sorts of combinations of gains and phase shifts. Stable systems can also have sign flips if there are right-half-plane zeros. You can also have Nyquist plots of unstable systems with right-half-plane zeros that do not have sign flips and never get near the (-1,0) point. See the attached plot.
The simplistic but conservative intuitions - gain not greater than 0 dB and phase shift not greater than 180 degrees in going around the loop of interconnected systems - have been carried into nonlinear systems analysis in the form of the small gain theorem and the passivity theorem. I believe these should be of great interest to circuit designers.
Its main utility is in being able to graphically assess the robustness of various control strategies from the open-loop frequency response of the plant, which is what is typically available in industrial environments, where fundamental physics modeling is often too expensive. It has also been the motivation for the development of robust control techniques (Gunter Stein and John Doyle at Honeywell Labs) which extend some aspects of the idea to MIMO systems.
We cannot perform frequency response experiments for open loop unstable systems, as there is by definition no steady state. Experiments with unstable systems are performed only with stabilizing feedback generally (tuning of flight control or engine control is an example). Read Gunter Stein's article, 'Respect the Unstable' in the IEEE Control Systems Magazine for instance. So in general, Nyquist analysis has been useful for designing control for systems with poles at or near the imaginary axis--oscillatory poles or neutrally stable poles (s=0). It is also useful when the plant is stable and you don't want the controller driving it unstable because of a lack of robustness (due to uncertainty of experimentally determined parameters, inevitable on any manufacturing line).
I do not believe classical frequency response based control design methods applicable only to SISO systems are useful when we have state space methods that capture all the different kinds of constraints of performance, stability and robustness that a controller must satisfy for MIMO systems and digital controllers. Analog control design methods are applicable in those situations where there is no time to digitize a signal, calculate control actions and then bring them back to the system, MEMS devices, high frequency circuits, and antennas being the most obvious examples. I believe we can develop a far better circuit theory for arbitrary signal size, and greatly automate the recondite process of analog circuit design with all we have learned in control system design so far.
While you may not face non minimum phase systems or systems with unstable zeros in circuits, you could consider circuits as control systems, where the relative placement of sensors (measured/desired outputs) and actuators (current or voltage sources or prime movers/batteries/energy harvesting) determines these poles and zeros (only for small signal analysis around equilibria in general for circuits). Detailed modeling of the switching or rapid transitions inherent in circuits means only nonlinear or hybrid modeling can capture the characteristics of circuits thoroughly.
Hello Dr. Kartik B Ariyur,
thank you very much for your long contribution. However, it seems I have described my problem not clearly enough. What I am looking for is an explanation (I wrote: "intuitive") in the time domain.
Let me explain again what I mean using a simple example of a 3rd-order system with 3 poles only. Let's assume a loop gain of LG=1.1 at a frequency fo where the total phase shift is -360deg. We know that such a system is unstable - and we can intuitively explain this situation to beginners:
Because the signal portion that is fed back is LARGER than necessary for producing the corresponding output signal, there will be a build-up process leading to instability (saturation or oscillation). I think this is a rather simple but logical explanation without the necessity of using the tools of system theory (pole location, Nyquist theorem, phase margin, etc.).
Now - the kernel of my question is: Why does this explanation (in the time domain) not apply to the system as described in my first post?
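(By the way, the simple 3rd-order case is easy to check numerically. A small sketch in MATLAB - Control System Toolbox assumed, and the triple pole is only an illustration, not a specific circuit:
s = tf('s');
L0 = 1/((s+1)^3);            % three identical poles
[Gm, ~, Wcg] = margin(L0);   % Wcg: frequency where the loop phase reaches -180deg
L = 1.1*Gm*L0;               % scaled so |L(j*Wcg)| = 1.1; with the inverting
                             % summing junction the total shift there is -360deg
pole(feedback(L, 1))         % a pole pair in the RHP: unstable, as argued)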
As I mentioned above, the explanations with loop gain and total phase shift are simplistic and conservative, as they are based effectively on one dimensional dynamics, which can be easily understood by the beginner. Your assertion about a third order system is correct if the system only has poles, and the open loop system is stable. They do not apply to systems where you have phase shifts between sensors and actuators, which you see sometimes in waveguides.
The time domain explanation also is in the answer above--that the stabilizing compensator dynamically neutralizes all destabilizing sign flips and gain increases if the closed loop system has at least local exponential stability. An F-117 (stealth fighter) won't fly if its electronics are fried. It will drop clean out of the sky. However, it is also possible for systems to be stable without exponential convergence to the equilibrium--this stability can be weak, as in various kinds of asymptotic stability, or it can be far stronger as in systems with nonlinear damping (dx/dt=-x^3). Similarly, you can have instabilities that are far more explosive than just relatively gradual exponential growth or oscillation. Stars get thrown out of galaxies for example.
Again - thank you for your reply.
Nevertheless - and with all respect - I could not detect an answer to my final question:
"Why does this explanation (in the time domain) not apply to the system as described in my first post ?"
Dear Lutz,
Thank you for your question that inspires intuition. I will think about it.
Regards,
Bilal
In my humble opinion,
Kartik has offered a "mathematical description"
of a widely recognized event.
....... however,
His post did not point to an "explanation" of the event.
I am intrigued by Lutz' question which seems to be
about "Why" the event is an event.
... Understanding "Why" becomes
something to "hang my hat on" for a long study.
Lutz' approach resembles the "thought experiments"
proposed by the great German physicist A. Einstein,
who gave the world an example of "Intuition in Action".
I hope I am right,
so then this could be a really good Question on RG.
I am afraid we might be seeking a simple answer for a not so simple question. :-)
Sorry if this looks like some long lecture, yet your question is very fundamental, and I can only hope my words below help understanding and do not add more confusion instead.
We may try to give a simple intuitive explanation for a simple open-loop case, such as k/(1+s) or even k/(s(s+1)(s+2)). However, can you try a simple explanation for
k(s-1)(s+2)/((s+3)(s-4)(s-5))???
I can't.
So, we move to finding the potentially unstable poles of the closed-loop system. Here, the genius of Nyquist comes to use some "abstract theory," the theory of complex functions, and to finally allow us to use the Open-Loop Transfer Function in order to find any potentially unstable Closed-Loop poles.
The complex function T1(s)=s-a has just one zero in a known region, and if we let s vary clockwise along a closed path around s=a, the vector T1(s) makes a full clockwise encirclement of the origin s=0.
The complex function T2(s)=1/(s-a) has just one pole in a known region, and if we let s vary clockwise along a closed path around s=a, the vector T2(s) makes a full counterclockwise encirclement of the origin s=0.
If we move along some closed trajectory that contains no zero and no pole of the function T(s), the argument of the complex vector T(s) may move up and down and may even encircle some region, yet there will be no encirclement of the origin.
If we move along some closed trajectory that contains Z zeros and P poles of the function T(s), the complex vector T(s) will make N=Z-P clockwise encirclements of the origin.
Now, if the Open-Loop is G(s) and the closed loop is T(s)=G(s)/(1+G(s)), it is easy to see that the Closed-Loop poles are the zeros of 1+G(s). However, 1+G(s) also has poles, which happen to be the Open-Loop poles of G(s). So, assuming that we know G(s) and know whether it has poles in the Right-Half-Plane (RHP), we would have to plot 1+G(s) along a contour that encircles "all possible zeros" of it in the RHP. Because we don't know their location, we encircle the entire RHP: in other words, the complex variable s moves from -oo to +oo along the jw axis and then completes the encirclement along the half-circle of infinite radius. The number of clockwise encirclements N would give the number of RHP zeros of 1+G(s), i.e., the number of unstable Closed-Loop poles, N=Pc.
Now, if we know that the open loop G(s) has Po unstable poles, they would lead to Po encirclements of the origin in the opposite direction. Therefore, the total number of clockwise encirclements of the origin is N=Pc-Po, or the number of unstable closed-loop poles is Pc=N+Po.
OK, so far we talked about the plot of 1+G(s) and about encirclements of the origin.
For pure (and very important) convenience, Nyquist found (and now we also find) it easier to shift everything and to plot only G(s), yet to recall that everything was shifted and, therefore, we now count the number of encirclements of -1, or more exactly, of the point {-1,0}.
Bottom line, contrary to what we may be misled to think, Nyquist plot allows us to simply plot the Open-Loop G(s) and yet, to think of the Closed-Loop T(s)=G(s)/(1+G(s)).
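To see this counting at work, here is a minimal numerical check (a sketch in MATLAB with the Control System Toolbox; the transfer function is only an illustration):
s = tf('s');
G = 2/(s-1);          % Po = 1 open-loop pole in the RHP
nyquist(G)            % one counterclockwise encirclement of {-1,0}, i.e., N = -1
pole(feedback(G, 1))  % Pc = N + Po = 0: the single closed-loop pole sits at s = -1, stable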
As I wrote, I can only hope this adds something to the understanding and not to the confusion.
Best regards to all,
Itzhak
Itzhak and Kartik have provided two very good answers, so it is going to be difficult to add anything new. However, let me try to add two thoughts:
1/ I think that the usual intuitive interpretation of the Nyquist plot is a bad starting point for an intuitive version of the Nyquist criterion. We think about the Nyquist (or Bode) plot as the gain and phase shift of the steady-state output when the input signal is sinusoidal. As has been mentioned, if the system is unstable we need a stabilising controller to generate this interpretation, but we still have it. However, stability is not based on the sinusoidal part of the input. We normally ignore boundary conditions of the differential equation, as we ignore the ROC of the transfer function. In my opinion, they are at the heart of the stability issue. So it is very difficult to find intuitive interpretations. To me, stability is about the following problem: in our mental experiment, we set zero input and output on (-\infty,0), and at t=0 we change the input. If after some time the input is set to zero again, and assuming that the system is causal, can we ensure that the output will go back to zero? I assume it must be really difficult to find an intuitive explanation of the Nyquist criterion when the intuitive interpretation of the Nyquist plot is so different. As has been said, it is not about one point of the Nyquist plot; it is about the encirclements of the critical point, and I don't see how to translate encirclements + number of unstable poles of the open loop into time-domain conditions. But I am not really smart, so...
2/ On the other hand, I enjoy Safonov's approach to stability, where the interconnection of two systems is stable if their graphs are "separated". However, it requires thinking about the system in a slightly different way than we normally do. It is like we put in a bag all the combinations of input and output of one system, and then we put in another bag all the combinations of input and output of the other system. If we can't find any common element other than (0,0) in both bags, then the f/b system will be stable. It is simple to interpret the IQC theorem as this kind of separation. I remember a discussion of the Nyquist criterion in Vinnicombe's book which is related to this graph separation for linear systems, but I am afraid that it is not quite intuitive. Again, it mentions the duality between causality and stability. Willems' paper in SIAM'69 is also nice, but the full Nyquist criterion was not developed there:
http://homes.esat.kuleuven.be/~jwillems/Articles/JournalArticles/1969.2.pdf
Best regards,
Joaquin
I would like to thank the authors of the last two contributions. In this context, let me say that (a) I know about the contents and the background of the Nyquist theorem, and (b) perhaps I am too optimistic (naive) in my hope that a simple intuitive explanation - without using the Nyquist theorem - would be possible for the described scenario (stability in spite of two successive 180deg crossings; downward/upward).
Again: Every student (before he has heard something about Nyquist) will accept that a system with feedback will go into saturation (voltage build-up process) if (at a certain frequency) the loop gain is real and >1. Why does such a simple concept not apply in case of two consecutive 180deg crossings? What about 3 crossings?
Of course, I can accept that it is not allowed to use such a simplified idea, but I would like to have a justification for it.
Thank you.
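(A concrete specimen of such a case, for anyone who wants to experiment - a MATLAB sketch with purely illustrative numbers, not my original system - where the loop gain is well above 0 dB at a -180deg phase crossing and the closed loop is nevertheless stable:
s = tf('s');
L = 3*(s+1)^2/(s^3*(s/10+1)^2);   % illustrative conditionally stable loop gain
margin(L)              % the phase starts at -270deg, rises through -180deg near
                       % 1.3 rad/s with |L| at about +11 dB, and falls back later
pole(feedback(L, 1))   % yet all closed-loop poles lie in the LHP)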
Lutz,
If it is true that a '180deg crossing' is 'feed-back'
then we are into the area of
iterative re-calculations, such as allowed in Spice.
Our brains cannot calculate and re-calculate iteratively
with the brute power provided by Spice simulation
in the Time Domain.
That said,
there should be a ' basic rationale ' or ' first method ' ,
utilized by any ' first crossing of 180deg ' ,
prior to any iterative recalculation for feed-back
as required during the Time Domain .
My thought is a question :
what does Nyquist say about feed-back loops
before any re-calculation required by feed-back loops ?
The Nyquist plot seems so intuitive
and we forget that it is an iterative plot ,
which expresses Nyquist Criteria during a Time Domain.
Our intuition may not be able to see thru iterative math.
Similar to what Joaquin wrote,
that a different ( split plot ) thinking might be used
to see different aspects of Nyquist.
Similar to what you wrote
' Why does such a simple concept [Nyquist]
not apply in case of two consecutive 180deg crossings... '
...
at which point ,
I suggest we do not think past this exact point .
When the signal is ready to cross-over
Then ask "What is happening ? now ? "
Our intuition may not be able to travel into iterative math.
-----------------------------------------------------------------
... an example :
From my several projects this occurred:
I follow Spice every day, and think in those terms.
In my "projects"
I have so many mixing wave-forms
and multiple-feedback loops
and phase shift reactions
that it will never be possible for me to pursue
the iterating interactions the way Spice does.
However,
BEFORE this Spice process starts,
while I am in the DESIGN thought pattern,
... I have an intuitive idea
that certain conceptual events will mix
and produce certain general effects.
Years back, I would spend days calculating
the varieties of interactions, one stage at a time.
Then I obtained the SPICE, and was able to sequence
and allow iterations to occur,
and began working with building block ideas.
...
My "first Method " ideas were valid enough
for the investigation to begin.
...
It all starts with the "first Method" ideas,
and the remainder is just the required 'home-work'
to analyze the measurements and revise.
I know that is lengthy,
Analog is Time-Continuous
and so is my mind.
Glen - thank you for the "time continuous" reply. I understand what you have written - and I agree with everything.
Nevertheless, I am not yet satisfied. The precondition of my question was the assumption that somebody (a student) - who has never heard the name "Nyquist" - could come to me with that question. So - in the answer I am looking for, the name "Nyquist" must not appear. Seems to be a problem.
Thank you
If I am allowed to quote Einstein (at least, the way I remember this moment), he said: "Presentation of things should be as simple as possible, yet not any simpler."
I am afraid that there could be some confusions here and the intuitive explanation may be dangerously misleading.
Nyquist theorem and plot are one thing and frequency response is another thing, even though they are related because they all talk about the transfer function.
To make sure we talk about the same thing: frequency response means the time response of a system to sinusoidal input commands (of stable systems, of course).
However, instability does not need any sine or any other input signal. If a system has poles in RHP, then starting from any initial condition (except for the perfect, ideal, mathematical and abstract 0) it will blow up without needing any sine or any other external signal.
The transfer function 1/(s-1) has a pole in RHP and we know that it is unstable, because its time-domain equivalent is exp(t).
The transfer function 1/(s+1) has a pole in LHP and we know that it is stable, because its time-domain equivalent is exp(-t).
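(Anyone who prefers to let the machine say it can check both examples in MATLAB, toolbox assumed:
s = tf('s');
impulse(1/(s+1), 5)   % decays as exp(-t): stable
figure
impulse(1/(s-1), 5)   % grows as exp(+t): unstable)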
The Nyquist Theorem and the Nyquist Plot are tools that allow us to use the plot of the Open-Loop transfer function G(s) and yet, to think and draw conclusion about the closed-loop T(s)=G(s)/(1+G(s)).
if a student who has never heard about Nyquist wants to know if the closed-loop system of G(s)=10(s-1)(s+10)/((s-3)(s+5)(s-20)) is stable, my most qualified answer would be: DON'T KNOW! HAVE NO IDEA! Let's learn some Nyquist, please.
Hi Lutz,
This is a great question.
I'm sure my attempted explanation needs some work, but maybe some food for thought:
When a feedback loop's phase crosses -360 deg (mod 360 deg) in a negative direction (increasingly negative phase with higher frequency) -- the local group delay is positive (negative slope of the phase) -- and growing oscillations can occur for gain>1. The positive group delay means the output envelope of the growing oscillations is the same as the envelope present at the input some time ago, all consistent with a growing oscillation / instability.
However, when the feedback loop's phase crosses -360 deg in the positive direction (phase becoming less negative with higher frequency) -- the local group delay is negative, and growing oscillations do not occur for gain>1. Interestingly, a growing exponential envelope is not consistent with gain>1 and a negative group delay.
Thus, to me it seems that one intuitive explanation may use the knowledge of the slope of the phase (equivalent to direction of encirclement) as well as the phase and gain descriptions...
hope this is helpful
cheers
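p.s. for anyone who wants to look at this quantity numerically: the local phase slope is easy to estimate (a MATLAB sketch, Control System Toolbox assumed; the loop gain below is just a stand-in for whatever loop is under study):
s = tf('s');
L = 10*(s^2 + 0.1*s + 1)/((s + 0.1)*(s + 1)^3);   % illustrative loop gain only
w = logspace(-2, 2, 5000);
[~, ph] = bode(L, w);
ph = unwrap(squeeze(ph)*pi/180);      % phase in radians, unwrapped
tau_g = -diff(ph)./diff(w');          % group delay estimate: -dphase/dw
semilogx(w(2:end), tau_g), grid on    % the sign of tau_g at the -180 deg
                                      % (mod 360 deg) crossings is the quantity discussed above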
I believe the answer to your question (the non-applicability of the one dimensional notion of gain scaling and phase addition around a loop) is in the Nyquist diagrams appended to my first answer which are direct counterexamples.
The only intuition we have is that of consistency of existence. This is the basis of the scientific method of Newton. So perhaps the correct phrasing of your question is whether there is a simpler explanation than the one available. This simpler explanation will also be mathematical but involve fewer steps of reasoning from familiar facts.
This is because anything communicable is a subset of mathematics, as contradictions imply all things. Physics is the tiny subset of mathematics which describes our knowledge of phenomena, as all measurement is counting. Therefore we cannot differentiate mathematical understanding from physical or intuitive understanding.
Scott, Thank you.
I can only imagine that
[ ... a student ... who never has heard the name "Nyquist" ]
might want to start here . It is good 'food for thought'.
I visualized the idea of positive / negative applied feedback phase mixing with the original signal, right away.
Lutz will be back soon, and he is in charge.
I never told my students that a simple approach
was not sufficient to get started.
@Glen Ellis,
If we want a dialog here, we'd better try to read the answers, not just pick some words.
My main research has been around a crazy idea that the terrible Adaptive Control can be simple (and safe and efficient and beautiful) and, more recently, that even the complex nonlinear system analysis can be simple.
What can be simpler than explaining that 1/(s-1) means instability because it is just the transform of exp(t)?
Nevertheless, can this lead the student (or any of us) to understand whether the closed loop of G(s)=10(s-1)(s+10)/((s-3)(s+5)(s-20)) is stable or unstable? Can we simply guess why the phase (and amplitude, actually) of G(s) may run up and down, why it may close circles many times without meaning anything related to stability? This is, of course, unless we do understand the meaning of the graph of the complex function 1+G(s) rotating around the origin {0,0} and then the Nyquist plot simplification, which allows us to just plot the open-loop G(s) and relate it to {-1,0} in order to draw conclusions about the closed-loop?
Can we otherwise explain why, because the open loop transfer function G(s) already has two poles in RHP, the plot of G(s) would have to show two encirclements in order to imply eventual stability of the closed-loop? Why, if it does not, we might have to increase or decrease the loop-gain in order to get stability?
Explaining that real understanding of the Nyquist Theorem and the Nyquist Plot are meant to tell us how many closed-loop poles could be in RHP and so, it contains the "secret" of graphical analysis of stability (which in simpler cases can be translated into Bode plots, while in more complex cases must be translated into Nichols plots) is supposed to scare students? Not my experience.
I meant: "even the complex nonlinear system stability analysis can be simple"
Gentlemen, thank you again for all of the interesting and valuable contributions.
Today, I must admit that - primarily - I am impressed by Scott's idea that the loop gain's phase slope (resp. the group delay) may play an important role in the context of my question. Some days ago, I already had the same idea, but I was not yet able to transfer it into a kind of intuitive "stability statement".
In this context, I would like to give a reference to an article
https://www.researchgate.net/publication/255171263_On_a_rigorous_oscillation_condition?ev=prf_pub
which deals with a more rigorous oscillation criterion (Barkhausen's condition is a necessary one only). In this paper I have tried to demonstrate that it is the loop gain phase slope (at the 360deg crossing) which determines whether instability will result in oscillation or saturation.
For this reason, I have the feeling that it looks really promising to take the slope of the loop gain phase response into consideration. Perhaps we can find an answer without the necessity to apply the Nyquist plot.
I studied the paper posted, but it appears the area does not operate at the rigor used in the modern control systems and signal processing literature that began with Shannon and Kalman. It is more in the spirit of the classical results of Black, Nyquist, and Bode.
General proofs about oscillation, or proving a limit cycle are not possible beyond systems with two states, which is covered by the famous theorem of Poincare and Bendixson. The Center manifold theorem covers only local properties around an equilibrium, and behavior in center manifolds is far more complex than just oscillation.
All electrical circuits are stable. You never see them blow up.
A stable system is
EITHER
a circuit with a DC bias point after the switch-ON transient of
the DC power source
OR
a steady state oscillating response because a DC bias point cannot be found.
All the best to all of you
Dear Lutz
Thank you very much for the paper mentioned above.
"On a rigorous oscillation condition"
I have a comment:
Ideal operational amplifiers cannot be used for oscillator
design because of the virtual short circuit of the input terminals.
You miss the information about the sign of the input signal. A
perfect amplifier with constant gain should be used. When the
perfect amplifier is replaced with a real operational amplifier
you have the necessary nonlinearity needed for steady state
oscillation.
ERIK
Hi Erik - thank you for your comments. But I must admit that I feel a bit "lost" reading them. Is there anything wrong, unclear or inconsistent in my article?
Hi Lutz,
No, just a remark which I think you should add.
If the amplifier is assumed to be perfectly linear with Vout=A*Vin, then your equations become just a little more complicated.
I must admit that reading the original question and then your various answers, I come out pretty much confused, and I hope no student looking for "simple" explanations of stability and instability reads these arguments.
First, at least as far as I thought I understood the question, it was about repetitive changes in the Transfer Function phase, asking for a simple explanation that does not need Nyquist. Here, I thought that the "simplest" explanation would be to just understand Nyquist. BTW, Nyquist himself was an engineer at Bell Labs and, like everybody then, was aware of the dangers of high gains, so he reduced the gain until he got... instability. This led him to try a better explanation than direct intuition.
So, like many of us, he found out that, while ideas, experience and intuition are important and even vital for good engineering, in other than standard cases one may need something more.
As I think I mentioned, I happen to deal with Control systems, where anything that you used to know until yesterday is not enough any more today, because you need things to keep being faster and faster and also more and more precise, so... you end up needing many of those "just theories."
But then, one reads that people here seem to deal with electrical circuits which are all stable. In this case, you need... nothing.
But then, again, people talk about hard limits (saturation) and oscillations, which are nonlinear phenomena and have nothing to do with Nyquist. Except for the second-order case of Poincare-Bendixson, I don't know any simple way to guarantee stability of an oscillator, unless one takes the pain of dealing with Lyapunov functions (assuming that the selection really fits the system under investigation) and derivatives and manages to show that the derivative ends up being zero along some steady limit cycle (success possible yet not guaranteed).
So, quite a few different topics for a "simple" question regarding Nyquist and phase changes.
Nevertheless, good discussion and I only hope no one here will ever have bigger troubles. :-)
Scott,
I find your approach to be the simplest and most basic,
although I would NOT pay a consultant to tell me that.
--------------------------------------------------------------
I NEVER told any of my students
that they could NOT start at the beginning.
--------------------------------------------------------------
Erik, it is good to read your posts again.
You have me reading so many papers !
Hi Glen,
Thank you for your comments.
If we assume that our systems are LINEAR, then Nyquist and Barkhausen tell us that the systems are stable if the poles are in the LHP and unstable if the poles are in the RHP (left and right half of the complex frequency plane).
We use LINEAR models for our systems, so we can make simple analytical investigations.
All real world systems are NONLINEAR, so if we want to make use of our LINEAR theories, we must treat them as TIME-VARYING LINEAR SYSTEMS.
We should tell our students about our ASSUMPTIONS when we set-up MODELS for our systems.
I am afraid that I have to agree with Itzhak's contribution (thank you).
And - as it seems - I was perhaps too optimistic (or naive?) in hoping that a simple intuitive answer could be found. Here comes my "intermediate summary".
What I have learned from all your answers is the following:
Just noting a loop gain magnitude >1 at the phase cross-over frequency is not enough to say what the circuit will do after closing the loop. It may oscillate or it may saturate or it may be stable - depending on the slope of the phase function, the number of phase crossings, etc.
(In this context, it is to be mentioned that the definition of an amplitude margin requires one single phase cross-over only. This is equivalent to the phase margin definition, which requires one single gain cross-over only.)
This assertion can be supported by the following "intuitive" explanation: If we expect a build-up process (self-excitation) at the phase cross-over frequency (with loop gain >1) after closing the loop, we are assuming that a signal at this frequency already exists within the loop. Resulting from a switch-on transient, this might be the case for a simple 3rd-order system with a "smooth" phase response (and a single phase-crossing frequency). But we are not allowed to automatically assume this also for a more complicated feedback system (as in our case with 2 or more phase crossings).
For this reason, we need to solve the diff. equation of the system (finding the eigenvalues) to get a picture of the circuit's behaviour in the TIME domain (where the stability properties are defined). In particular, we need to know if the solution contains expressions exp(k*t) with k>0.
However, as we know, it is much easier to find the k values in the FREQUENCY domain (real part of the closed-loop poles).
As a consequence, we are not required to switch to the frequency domain (Nyquist plots) for answering the stability question. However, it is strongly recommended to do these analyses in the frequency domain (quick and reliable procedure).
So my answer to the student's question (closed-loop stability for loop gain >1 and two phase cross-over frequencies?) would be: For a reliable answer we need the eigenvalues of the diff. equation, which can be found easily in the frequency domain (Nyquist criterion).
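(In MATLAB terms - toolbox assumed, and the loop gain below is purely illustrative - this "reliable answer" is a two-liner:
s = tf('s');
Lg = 8.8/((s+1)^3);           % any loop gain of interest
eig(ssdata(feedback(Lg, 1)))  % eigenvalues of the closed-loop system matrix =
                              % closed-loop poles; any Re > 0 means instability)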
Erik,
Yes.
I had forgotten for a moment
that Real World Systems are Non-Linear.
Lutz has a clear compilation of previous posts,
which I need to read through more slowly.
Oh, thank YOU, Lutz.
Assuming that one takes the pain and finds the closed-loop transfer function and its poles, or the solution (or just the eigenvalues) of the differential equation (never a bad exercise), is this all? Stable or unstable? How is one going to know that a small change of just one gain may destroy stability, or vice-versa, that some change may bring your system to stability? And what about performance? What if we need something more than just some gain?
To make sure, I hope it is clear that all of the above comes before we even dare to mention non-linearity. It takes some work and pain to see that nonlinear (when parameters depend on the system state) is not necessarily equivalent to time-varying (when they only change in time).
AFTER encouraging students (and customers) to use their common sense, I also reach the stage where we all must also learn. In particular, I would like to keep them from thinking, even 50 years later, that "the Nyquist plot is about the open loop, while the closed loop could behave differently." Instead, I would insist and try to help them understand that the "small" shift that relates the open loop to {-1,0} is exactly what is needed to know everything about the closed-loop.
Thank you again. I already thought that by merely trying to explain the intuition behind Nyquist I became the Enemy of the People! :-)
To Markus J. Kögel:
thank you for your contribution. However, some questions remain:
From the equation
u(t)=-y(t)+r(t)
we can derive that the feedback signal is subtracted at the summing junction (phase inversion). However, from the equation
y(t)=-ku(t) (gain is k and phase -180°)
it is clear that k is an inverting amplifier.
Thus, we can conclude that you have described a feedback system with positive feedback (two signal inversions within a closed loop). And your conclusion is "stable" ??
As another indication for an unstable system, your equation
y(t)= k/(k-1) sin(t)
gives a positive gain k/(k-1) - in contrast to the assumption of an inverting forward amplifier.
@M.J. Kögel:
First, the function T(s) in your last post differs significantly from your first example (which I commented on), where the gain of the active element was simply "-k" (a fixed factor with 180deg phase shift).
Secondly, it is known that the closed loop with the given T(s) is stable. But the question was whether we can decide this without detailed stability analyses (frequency domain).
Just one comment on your initial question: the only bad question is the question that is NOT asked.
No question is too naive and if it helps us to understand that we were wrong, even better.
I always encouraged students (or anyone else, including here, on other RG topics) to ask questions, any questions and to keep asking as many times as needed.
Besides, taking into account so many and so different answers to one question, one can hardly call it naive. :-)
Itzhak - thank you for giving me some "psychological" help. I have some other questions in store. Perhaps within the next days...
Welcome, although I think it was more about my stand than about you.
When does one become an experienced Engineer? When she/he does not feel any more shame to say "I don't understand."
When does one become an expert? When she/he does not feel any more shame to simply say "I don't KNOW."
Of course, assuming that one still wants and then also continues doing anything that is needed to know.
@Lutz: Given a transfer function, H(s)=1/(s+1), do you know its impulse response? Frequency domain is great, but causality must be assumed. The Nyquist plot is not enough, boundary conditions are needed. If H(s)=1/(s-1), causal, and input u(t)=sin(t) for all t\in(-\infty,\infty), what is the output? There is no exponential in the solution that I compute by hand!
I think that the Nyquist theorem is a wonderful piece of art, but I don't think we can try to find an intuitive view of the crossings and this stuff, because what the Nyquist theorem is providing is the noncausal behaviour of the inverse operator of the closed-loop system.
Sorry for such a pessimistic answer! Anyway, if you find an easy interpretation, let me know!!!
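P.S. For completeness, the hand computation I mean, under the stated assumptions: for the causal H(s)=1/(s-1), i.e., y'(t) = y(t) + u(t), with u(t)=sin(t) on all of (-\infty,\infty), try y_p(t) = a sin(t) + b cos(t). Then y_p' - y_p = -(a+b) sin(t) + (a-b) cos(t) = sin(t) gives a = b = -1/2, so y_p(t) = -(sin(t)+cos(t))/2 solves the equation with no exponential term. Which solution the system actually produces is fixed by the boundary conditions, which is exactly my point.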
"Frequency domain is great, but causality must be assumed. The Nyquist plot is not enough, boundary conditions are needed."
Yes - full agreement. Simple example: Unintentional interchange of the inv. and non-inv. inputs of an opamp during circuit simulation (AC analysis). That means: Pos. dc feedback instead of negative feedback. The magnitude response looks rather "normal" - so no indication for instability. We need a simulation in the time domain to reveal the problem.
What did you call your question, Lutz? Naive? :-) Anything but naive.
When I dared to answer your question, I did not assume that anyone here does not know Nyquist. However, interpretations related to Nyquist's contribution do seem to confuse our minds.
Joaquin, before even mentioning any Nyquist, as long as there are things that we don't know and that we can and want to learn, I can see no reason for pessimism.
First, let us deal with your simple examples.
If by H(s) you mean the transfer function of your Plant and all you want to know is the behavior of this plant, then the transfer function H(s) is by definition the Laplace Transform of the impulse response h(t) of the plant. What problem with causality do we have here?
H(s) is also another, sometimes convenient, representation of the first-order system
x'=-x+delta(t), where delta stays for the unit impulse function. This is in case the plant was at zero x(0)=0. What is the role of delta(t)? As its integral is 1, it would move the plant from 0 to 1 at time 0+ and do nothing else in continuation, as the plant continues its time response from x(0+)=1. So, the time response is the same as x'=-x (with no input) x(0)=1.
OTOH, H(s)=1/(s-1) is the transform of exp(+t), it is unstable and the corresponding system would just blow-up for any initial condition, except for the ideal perfect point x=0. (Maybe this is another issue, but that's why the forefathers of stability analysis taught us that stability of an equilibrium point is not checked AT the equilibrium point, but rather in its close neighborhood. ) Because the system is unstable, there is no frequency response here and adding any input command would not change the divergence of the system.
But now, Lutz talks about sign change. Now, I must understand that your H(s) could mean only the open-loop circuit and you want to close the loop to end with closed-loop control.
Here, the Nyquist plot is meant to tell you if your closed-loop system T(s)=H(s)/(1+H(s)) is going to be stable or unstable and, if not, how much you might have to increase or decrease the gain to get stability. As the open loop H(s)=1/(s+1) has no unstable poles (poles in RHP), you must get no encirclement of {-1,0}. If there is none, you can then see how much the gain could increase before it does encircle (gain margin, etc.). In this specific case, you see that you could let the gain K of H(s)=K/(s+1) be as large as you want, without making the closed loop unstable.
If there is a gain change that you don't know, too bad. If you do know, however, then Nyquist gives you a different plot (if you only think of amplitude, it remains the same, yet at other phase values) and will tell you if the closed loop is stable or unstable.
Now, for the open-loop unstable H(s)=1/(s-1), the Nyquist plot is again used to draw conclusions about the closed loop. If the plot does not encircle {-1,0}, is it alright? NO. Because the plant has ONE open-loop pole in the RHP, you would have to have ONE counterclockwise encirclement of {-1,0} in order to get stability.
I wonder: Does this explain anything or just adds confusion?
Itzhak - yes, I know. There is a simplified criterion (stable open loop) - but in some specific cases we have to apply the complete (general) Nyquist criterion. However, don't be afraid - I think your contribution does not increase or cause any confusion.
@ Dr. Ilhan Polat,
Thank you so much for your valuable contribution.
And - yes - you meet exactly the point of my concern (the intuitive way). And you have presented a very illustrative explanation of why we - resp. our brains - must fail in some cases. I am afraid this will apply also to my brain. Thank you again.
Lutz vW
Dear Lutz,
concerning:
"For this reason, we need to solve the diff. equation of the system (finding the Eigenvalues) to get a picture of the circuits behaviour in the TIME domain (where the stability properties are defined)."
"The diff. equation of the system" is NONLINEAR so I hope you agree with me that this picture is an INSTANT picture which only tells us whether the signals are increasing or decreasing.
Best regards
ERIK
"There is a simplified criterion (stable open loop) - but in some specific cases we have to apply the complete (general) Nyquist criterion."
I think it is my turn to be confused. Actually, it reminds me an old joke which ends "No, No. I do understand the question; I just don't understand the problem."
If I have a very complex Open-Loop transfer function and I want to know if the Closed Loop is stable or unstable, I go to MATLAB and, to be able to write actual transfer functions, I write
s=tf('s')
and then I can write any TF that I want, such as Markus'
K = 5
Gs = K*(s+5)*(s+6)/((s+0.1)*(s+0.2)*(s+0.3))
I used a general K, so I can change it if I don't like what I get with first value K=5. Now, I write
figure
nyquist(Gs)
and get the Nyquist plot. As the open loop has no poles in the RHP, I should see no encirclement of the critical point {-1,0}. Because the amplitude reaches high values of the order of 10000, while we want to know what happens around the small magnitude of 1, we must show the plot at a few scales. After careful examination, we see that there is no encirclement and the closed loop system is indeed stable.
This leads us to the next step, Nichols plot, which in principle is the same, yet plots the amplitude at logarithmic scale (i.e., in dBs) versus phase
figure
nichols(Gs)
This solves the problem of linear scaling and, because the line passes below the critical point {0 dB, -180 degrees}, shows the stability of the closed loop without any doubt.
We can change the gain and see that the closed loop remains stable for any large K, yet for small gains, such as K=0.01, for example, we get instability.
It is my naivety for sure this time, yet what is the problem?
Lutz von Wangenheim
Hochschule Bremen
Itzhak - please, be patient. I will answer in 3 or 4 days from now because I am away on travel.
Lutz von Wangenheim
Hochschule Bremen
"The diff. equation of the system" is NONLINEAR so I hope you agree with me that this picture is an INSTANT picture"
Hi Erik - misunderstanding?
I spoke about a system which can be described with LINEAR diff. equations.
Erik Lindberg
Technical University of Denmark (emeritus)
Hi Lutz,
-----------------------
cite:
" Hi Erik - misunderstanding?
I spoke about a system which can be described with LINEAR diff. equations."
-----------------------
OK. You are of course allowed to use this assumption.
To me "LINEAR diff.equations. are only useful as models for
circuits with a time invariant DC bias point and small signals.
Itzhak Barkana
BARKANA Consulting
Have a nice trip, Lutz.
For whenever and whoever has time to read:
The simple gain amplifier y(t)=ku(t) and its closed-loop may actually show us why and when the gain and phase may affect stability.
If the loop is closed "in proper phase" with negative feedback and the closed loop is y(t)=k/(1+k) u(t), then the system is stable, no matter what the gain is. If k is very large, then y is approximately u; if k is very small, then y is approximately ku; and if k=1, the output is half the input - a 6 dB attenuation of the signal.
If there is a change in phase, y=-ku, you get the closed loop y(t)=-k/(1-k) u(t), and here we can see how the various gains are affected by the phase inversion, i.e., by the fact that the 0 degree phase has become the -180 degree phase. We have a negative output for a positive input, yet, if k is large, we have approximately y=-u, at low k we have approximately y=-ku, and only when k is about 1 do we get real "problems," which result in very high outputs, in particular y=infinite when k=1.
This may give some indication why, when one plots the frequency response, high and small amplitudes are not adversely affected by the various phase values and why the "problematic" region is around the amplitude 1 (or 0 dB).
Also the amplitude remains the same for a sign inversion, yet, if the amplitude was 1 at 0 degree phase and 100 at 180, now it is 1 at 180 and 100 at zero, so the Nyquist (or Nichols) plot looks pretty different.
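A three-line numerical illustration of the same point (MATLAB, values purely illustrative):
k = [0.01 0.5 1 2 100];
y_neg = k./(1 + k)    % negative feedback: tame for every gain
y_pos = -k./(1 - k)   % sign-flipped loop: trouble only near k = 1 (-Inf at k = 1)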
Itzhak Barkana
BARKANA Consulting
A "small" error. For y=-ku, and the closed loop y=-k/(1-k) u, the result at high gains remains approximately y=u, so at these values we do not see any effect of the gain change.
Lutz von Wangenheim
Hochschule Bremen
"To me "LINEAR diff.equations. are only useful as models for
circuits with a time invariant DC bias point and small signals."
Hello Erik - to me, linear (or linearized) systems are valid for "small" signals at a FIXED DC bias point only (which are linearized around THIS bias point).
Erik Lindberg
Technical University of Denmark (emeritus)
Hello Lutz,
Concerning:
------------------
Erik: "To me "LINEAR diff.equations. are only useful as models for
circuits with a time invariant DC bias point and small signals."
Hello, Erik - to me, linear (or linearized) systems are valid for
"small" signals at a FIXED DC bias point only (which are linearized
around THIS bias point).
------------------------------------
I agree 100 pct with you as usual. Please let me try to explain my thoughts concerning your very great question:
"What is the intuitive explanation for the stability of a specific
feedback system ?"
Question: "What is stability ?"
https://en.wikipedia.org/wiki/Stability
http://www.merriam-webster.com/dictionary/stability
http://www.dictionary.com/browse/stability
A: "What is a specific feedback system?"
I start with the basic assumption of an electrical lumped element model of the system.
This model is equivalent to a set of implicit nonlinear differential equations and algebraic equations in the time domain.
It is very difficult to make analytical solutions for this model.
If we observe no fixed DC bias point we can say that an intuitive answer to your question is that the circuit is searching for a bias point. We observe steady state oscillations. We ask the question: "Why do oscillators oscillate ?"
B:
If we observe a time-invariant DC bias point (which is the same as a FIXED DC bias point) when we apply the constant DC power source, then we may assume small signals, so we can linearize the equation system.
Question: "Where is the border between small and large signals ?"
Now we can make use of the classic linear circuit theory with the Laplace-transform of the linear differential equations into algebraic equations in the complex frequency domain.
The intuitive answer to your question is that the system is stable if the poles are in LHP and unstable if the poles are in RHP.
C:
The kernel in our SPICE programs for the numerical solution of the equations for the system is the solution of a linear set of equations. At each integration step in the time domain, we have an instant small signal model for the system corresponding to an instant DC bias point.
We may interpret the nonlinear system as a time-varying linear system. If we calculate the eigenvalues of the linearized Jacobian of the differential equations we can find the poles for the instant small signal model. If the poles are in RHP the signals will increase (instant unstable). If the poles are in LHP the signals will decrease (instant stable).
With this approach, I hope we can obtain insight concerning the behavior of the circuit and answer the question: "What is the mechanism behind the oscillations we observe ?".
For first and second order oscillators the mechanism is "multivibrator" based on real poles. For third and higher order oscillators limit cycles may bifurcate into chaos and we may observe "multifrequency" behavior.
All the best
ERIK
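P.S. A sketch of this "instant poles" bookkeeping on a textbook example (MATLAB; the van der Pol oscillator stands in here for any nonlinear circuit):
mu = 1;                          % van der Pol: x'' - mu*(1 - x^2)*x' + x = 0
f = @(t, x) [x(2); mu*(1 - x(1)^2)*x(2) - x(1)];
J = @(x) [0, 1; -2*mu*x(1)*x(2) - 1, mu*(1 - x(1)^2)];   % instant Jacobian
[t, x] = ode45(f, [0 20], [0.1; 0]);
re = arrayfun(@(k) max(real(eig(J(x(k,:))))), (1:numel(t))');
plot(t, re), grid on                    % the instant poles move in and out of
xlabel('t'), ylabel('max Re(eig(J))')   % the RHP as the orbit breathes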
Itzhak Barkana
BARKANA Consulting
Erik,
"We may interpret the nonlinear system as a time-varying linear system. If we calculate the eigenvalues of the linearized Jacobian of the differential equations we can find the poles for the instant small signal model. If the poles are in RHP the signals will increase (instant unstable). If the poles are in LHP the signals will decrease (instant stable)."
When we cannot have exact analysis, any approximation is valid.
However, as a general rule, nonlinear systems, where system parameters change as a function of the system itself (state-variables), such as an amplifier where its gain is a function of its current, is not the same as time-varying systems, where parameters only change as a function of time.
Second, even for the time-varying systems, you may check that the "time-varying poles" remain within the LHP. However, although it may remain stable, nothing and nobody actually guarantees that the system remains stable.
I wrote "time-varying poles" within quotation marks because there is not such a thing. You may use a state-space representation of the time-varying system, x'=A(t)x+B(t)u, where the system matrix A(t) is time-varying and check that its time-varying eigenvalues remain within the LHP, yet, maybe contrary to our immediate intuition, this thing does not guarantee stability. Some famous names thought they do, yet counterexamples show the contrary. I mean, you may still happen to get stability for some particular variation, yet a small change my totally destroy it.
While the concept of time-varying eigenvalues is valid, poles and zeros characterize transfer functions and there is no TF in non-LTI systems.
All the best,
Itzhak
Itzhak Barkana
BARKANA Consulting
Erik,
I wouldn't like to be misunderstood. What you write about time-varying eigenvalues is our common and natural intuition.
As I wrote on another RG discussion, at some point in time I even had a proof that time-varying eigenvalues in the LHP would guarantee stability. People were telling me about counterexamples, yet I had a "proof." This lasted until... I managed to build my own counterexample, and then I was also able to find out that my "proof" was only "almost" a proof.
In retrospect, I call this my best and happiest error ever, as it then forced me to learn so much about stability of non-linear systems, on passivity, etc., that it was worth it.
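For whoever wants to see such a counterexample with his own eyes, here is a sketch of a classical one (it appears, e.g., in Khalil's "Nonlinear Systems"; MATLAB):
A = @(t) [-1 + 1.5*cos(t)^2, 1 - 1.5*sin(t)*cos(t); -1 - 1.5*sin(t)*cos(t), -1 + 1.5*sin(t)^2];
eig(A(0))                     % -0.25 +/- j*0.25*sqrt(7): the frozen eigenvalues
                              % sit in the LHP for every t
[t, x] = ode45(@(t, x) A(t)*x, [0 10], [1; 0]);
plot(t, sqrt(sum(x.^2, 2)))   % yet the solution norm grows like exp(0.5*t)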
Erik Lindberg
Technical University of Denmark (emeritus)
Dear Itzhak,
Thank you for your comments.
Concerning:
------------------------------------------
However, as a general rule, nonlinear systems, where system
parameters change as a function of the system itself
(state-variables), such as an amplifier, where its gain is a
function of its current, is not the same as time-varying
systems, where parameters only change as a function of time.
Second, even for the time-varying systems, you may check that
the "time-varying poles" remain within the LHP. However,
although it may remain stable, nothing and nobody actually
guarantees that the system remains stable.
--------------------------------------------
Please do not confuse my approach with:
---------
"Time-Varying Systems and Computations"
Authors: Dewilde, Patrick.; Veen, Alle-Jan van der.
Year: 1998
Pages: 459 s.
Type: Book
Publisher: Kluwer Academic Publishers,
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=A177956A52B675882726E64F57AA0CE9?doi=10.1.1.418.2182&rep=rep1&type=pdf
-----------
"The LDU-Decomposition for the Fundamental Matrix of
Time-Varying Systems" 5 authors, including P. van der Kloet
and F.L. Neerhoff, Delft University of Technology
available on ResearchGate:
https://www.researchgate.net/publication/228989617_The_LDU-Decomposition_for_the_Fundamental_Matrix_of_Time-Varying_Systems
-----------
My approach is a simple engineering approach. The solution of
the nonlinear differential equations is the step response to
the DC power signal at time infinite, of course. The kernel of
the solution is the solution of an instant linear system, i.e.,
we have an instant small-signal model. The poles of this
instant model can only tell us whether the signals are
increasing or decreasing. It is, of course, a low-pass
filtering because of the finite minimum integration step.
Best regards
ERIK
Itzhak Barkana
BARKANA Consulting
Erik,
As I said, when you don't have a better treatment, any approximation and any idea is welcome. Many times, the Math tells us that "If...then..." while usually engineers have to deal with "...and what if not?"
Using a linearization approximation for your system is alright; yet from your previous message I understood that you analyse the time-varying system and its varying poles at various times and, if the poles remain within the LHP, you draw the conclusion that the system is stable - and this is not alright, as I myself happened to get burned when I thought it was.
Itzhak Barkana
BARKANA Consulting
As it often happens, the discussion started moving in different directions. This is not necessarily bad, yet could lead to even more confusion.
As I understand it, the original question that Lutz asked was about stability of linear time invariant (LTI) feedback systems. In other words, you know your open-loop systems and want to know if the closed-loop system is stable.
Here, people seem to be looking for a "simple" intuitive idea versus the assumedly "difficult" Nyquist analysis. Here, although intuition and simple explanations are good for an initial introduction to the analysis of simple systems, my experience taught me that understanding some Math (and maybe eliminating the apparent mystique related to it) is worth the effort, as it finally only adds intuition for other than simple cases (as Nyquist himself, a "simple" engineer, also realized).
So, first I also added the simple gain example and the "intuition" related to it. It shows that the phase, be it 0 or -180 degrees, does not much affect high gains and small gains, yet has a tremendous effect at the gain of 1 (0 dB). However, it is actually dangerous to try simple explanations on more than simple examples, in particular when the phase moves around 180 degrees yet is not exactly 180, and the gain moves around 1 yet is not exactly 1.
Instead, a simple Nyquist plot tells us all about the resulting closed-loop system. Still, because the gain may reach very large values, while we are interested in what happens around {-1,0}, it could be difficult to see all the information in one Nyquist plot. In Markus's example, for one, increasing the scale, one sees an encirclement around the critical point and may reach the conclusion that the closed loop system is unstable. However, at a larger scale, one sees another encirclement in the opposite direction, which brings the total number of encirclements to zero.
That's why, although the Nyquist approach must be understood, as it contains all the explanation, it is less than convenient as a tool, and so people moved to the logarithmic scale. In not too complex cases, the tool is the Bode plot, where both the amplitude and phase look close to straight lines. However, again, one can reach almost no conclusion when the phase and gain start moving up and down, not to even mention the complication of unstable and/or non-minimum phase open loops. For these, the most convenient tool is the Nichols plot, where the amplitude in dB and the phase are shown on the same plot and, no matter how wild the variations we may see, everything is alright if the center part around {0 dB, -180 degrees} remains free.
Then, even if the normal negative-feedback design is stable, yet by error the feedback actually is positive, the curve changes, and the Nichols plot still tells you all you need.
But then, the discussion started moving to nonlinear and time-varying systems, and this is a totally different issue. All I wanted to say here is that, if we can trust an LTI approximation around some equilibrium point, the analysis above is still alright in some neighborhood of the equilibrium point. However, as soon as the nonlinearity and/or time-variance is not negligible, we need other tools and cannot talk about poles in the context of stability.
Best regards,
Itzhak
Lutz: "There is a simplified criterion (stable open loop) - but in some specific cases we have to apply the complete (general) Nyquist criterion."
Itzhak: I think it is my turn to be confused. Actually, it reminds me an old joke which ends "No, No. I do understand the question; I just don't understand the problem."
Hi Itzhak - I am back. My answer can be very short. I do not understand your confusion. All I wanted to say is that there is a "simplified" criterion (applicable for stable loop gain functions) and a "general" criterion, which must be applied in case of RHP open-loop poles (unstable loop gain). That's all.
Furthermore, I completely support the contents of your last contribution above.
(Let me add that - in particular - I love the last part of your sentence: Many times, the Math tells us that "If...then..." while usually engineers have to deal with "...and what if not?")
I am an Engineer, after all.
As I wrote, I learned that common sense and ideas are not enough.
On the other hand, after having to deal with some "heavy" Math, I was shocked to see that forgetting your common sense and automatically applying "well-established" rules can be much more dangerous and can lead to total nonsense, including mathematical nonsense.
So, good engineering must stay somewhere in the middle and cope with both worlds.
"So, good engineering must stay somewhere in the middle and cope with both worlds."
Yes - I am always trying to follow this guideline also. Therefore, I always try to get a "feeling" for some effects which - very often - result in an intuitive explanation for the effects observed.
I somewhat hesitate to mention again the "critical" question "how Ic is controlled in a BJT". Nevertheless - and without any math or charge carrier physics - I simply dispute that a small current (very few electrons) could directly control a large current (a large amount of electrons). That's beyond my wildest imaginings. This is intended only as an example of not forgetting "our common sense".
Itzhak - you wrote: "...usually engineers have to deal with "...and what if not?"
However, sometimes another question arises: "...surprisingly, it works - why?". In this context, I have a specific example in my mind - I am curious when I will decide to write it down.
Itzhak,
concerning
------------------------------------------------
Using a linearization approximation for your system is alright,
yet from your previous message I understood that you analyze
the time-varying system and its varying poles at various times
and, if the poles remain within the LHP, you can draw the
conclusion that the system is stable, and this is not alright,
as I myself happened to be burned when I thought it was.
-----------------------------------------------
I do not draw this conclusion. I just observe how the
eigenvalues move around in the complex frequency plane. I hope
this observation could give me some information concerning the
mechanism behind the behavior of the system. Instead of
observing voltages and currents we should observe charge and
flux.
Lutz,
I am sorry that I misunderstood your question. I should have
seen that your basic assumption was a LINEAR system. I want to
point out that the open loop system and the closed loop system
are two different systems. It is the stability of the closed
loop system which is of interest. To me, linear systems with
poles in the RHP are not stable.
Question: Is it possible to create a linear system with poles
in the LHP which is not stable?
Second, even for time-varying systems, you may check that
the "time-varying poles" remain within the LHP. However,
although the system may indeed remain stable, nothing and
nobody actually guarantees that it does.
ERIK - may I ask you: Have you any simple and intuitive example for a system with time-varying poles?
It is my turn to give you a break, as I will be travelling for the next 10 days.
My control work forces me to go pretty deeply into stability issues.
However, I had no intention to meddle in networks, which is your expertise.
Lutz,
concerning
----------------------------------------
Second, even for time-varying systems, you may check that
the "time-varying poles" remain within the LHP. However,
although the system may indeed remain stable, nothing and
nobody actually guarantees that it does.
------------------------------------------
I agree with you 100 pct.
concerning
------------------------------
ERIK - may I ask you: Have you any simple and intuitive example
for a system with time-varying poles?
--------------------------------
The pendulum clock is assumed to be linear for small angles,
where sin(x) is "equal" to x.
The complex pole pair is very close to the imaginary axis, in
the LHP.
The necessary nonlinear component is the escapement mechanism,
which introduces an impulse of energy in each period. The size
of this impulse determines the amplitude of the oscillations so
that the losses are compensated.
If you pull the cord of the weights (increase gravity), you
will see how the clock answers back in a desperate way. The
complex pole pair moves out into the LHP, maybe down to the
negative real axis, where it splits up into two real roots.
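A rough numeric sketch of Erik's description (all parameter values are my own assumptions): a lightly damped linear pendulum, plus an escapement modeled as a fixed velocity kick at each upward zero crossing.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, kick = 0.01, 0.02        # damping and per-swing impulse (assumed values)

def pendulum(t, x):
    return [x[1], -2 * beta * x[1] - x[0]]   # linearized: sin(x) ~ x

def crossing(t, x):            # event: pendulum swings through zero, moving up
    return x[0]
crossing.terminal = True
crossing.direction = 1.0

t, x = 0.0, np.array([0.5, 0.0])
for _ in range(200):           # integrate swing by swing, applying the kick
    sol = solve_ivp(pendulum, (t, t + 20.0), x, events=crossing, max_step=0.05)
    t, x = sol.t[-1], sol.y[:, -1].copy()
    if sol.status == 1:        # the escapement fires
        x[1] += kick
        x[0] = 1e-9            # nudge off the event surface before restarting
print("amplitude after 200 swings ~", np.hypot(x[0], x[1]))
```

The per-cycle loss grows with amplitude while the kick is fixed, so the amplitude settles where the two balance, near kick/(2*pi*beta), about 0.32 here, whatever amplitude you start from.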
Erik - thank you for the example, which is a non-linear one, right?
I was travelling for a while and you got some rest. :-)
Yes, systems happen to be stable, even nonstationary and nonlinear ones, so varying system eigenvalues do not necessarily lead to instability. On the other hand, they do not guarantee stability either.
I think you may find some interest in the example attached.
Deafening silence! :-)
No interest in seeing how nonstationary gains, just because they are not fixed, may lead to total instability, even though they only vary within the so-called "admissible" range?
After being sure it ought to "work", and after I even had an "almost" proof, this was a terrible shock for me! Now I call it my happiest mistake ever, as it forced me to learn so much about nonlinear systems.
The danger is exactly that those systems may "work", yet only until they stop working. That is what makes stability of nonstationary systems a not-so-simple issue, one that requires proving stability for each particular way of variation.
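Itzhak's attached example is not available here, but a minimal sketch in the same spirit (entirely my own construction: a damped second-order loop whose gain is pumped sinusoidally, with assumed parameter values) reproduces the phenomenon:

```python
# x'' + 2*zeta*x' + k(t)*x = 0 with k(t) = 1 + a*sin(2t).
# Every frozen gain k in (0, 2) gives an asymptotically stable LTI system,
# yet above a threshold amplitude the varying gain pumps energy into the
# loop (parametric resonance) and the response diverges.
import numpy as np
from scipy.integrate import solve_ivp

zeta = 0.05                           # frozen poles: -zeta +/- j*sqrt(k - zeta^2)

def rhs(t, x, a):
    k = 1.0 + a * np.sin(2.0 * t)     # pumped near twice the natural frequency
    return [x[1], -2.0 * zeta * x[1] - k * x[0]]

for a in (0.1, 0.5):                  # both keep k(t) inside the "admissible" (0, 2)
    sol = solve_ivp(rhs, (0, 200), [1.0, 0.0], args=(a,), max_step=0.01)
    print(f"a = {a}: |x(200)| = {abs(sol.y[0, -1]):.2e}")
# a = 0.1 decays toward zero; a = 0.5 grows by many orders of magnitude.
```

For this pump frequency the threshold is roughly a = 4*zeta; detuning the pump (say, sin(2.2t) instead of sin(2t)) raises the threshold, but a somewhat larger, still "admissible", amplitude diverges again, which matches the observations later in this thread.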
Hi Itzhak, thank you for reminding me of your example of a nonstationary system.
Two questions from my side:
* Why do you speak about a non-linear system? Where is the non-linearity?
* Intuitively, it seems to be clear why the amplitudes are growing (last example):
The gain variation has (nearly) the same frequency as the "eigenfrequency" of the system. Did you also perform tests with other frequencies?
Hi Lutz,
This is what I ask people: to come back with any question/comment/objection. :-)
This is only a first, introductory example of nonstationary gains, which managed to "disturb" my previous image of having "safe" control as long as you only kept the gains within the "safe" range.
In such a simple example, indeed, the danger is that it looks "intuitive", but why? If you just supplied an external sinusoidal signal at the same frequency, the system would only oscillate at that frequency. Why should a given internal sinusoidal gain variation maintain stability at a given amplitude, yet just a bit more lead to total destruction, although you took great care to keep it in the "safe" region? How would you decide on the "safe" admissible variation amplitude?
The problem for practitioners is exactly that: finding a good example is difficult, yet in practice, if gains vary, they ultimately find a way to lead to divergence. A great and very intuitive rule in Adaptive Control, the so-called MIT rule, computed the system gain as a function of the tracking error. The intuitive and pretty ingenious idea was that, if the error tries to increase, the gain should also increase, and then decrease when large values are no longer needed. This worked very well and was implemented on a plane, which flew nicely with it, yet only until... it crashed.
The idea was not wrong, and I even called it ingenious, yet you must make sure you can also prove that the specific system with the specific gain policy remains stable. This leads you to Lyapunov, LaSalle, passivity (or positive realness in LTI), etc.
"In such a simple example, indeed, the danger is that it looks 'intuitive', but why?"
In my opinion, one can intuitively expect problems because each point on the gain function can be seen as a new bias point, allowing (repeatedly) new transients. This leads to divergence when both frequencies are equal.
"If you just supplied an external sinusoidal signal at the same frequency, the system would only oscillate at that frequency."
Would it "oscillate" or simply act as an amplifier?
Would it "oscillate" or simply act as an amplifier?
This turn of the discussion only amplifies the problem of "simple" intuitive examples.
Still, if you have a tracking system with unity output feedback, then the output would more or less follow the sinusoidal input, maybe with some amplitude error and maybe also with some phase delay. If it is designed to amplify, it will amplify.
BTW, as it affects what you have known, I think you must digest it well, including all the plots. If you expand the figures, there is not that much similarity between the gain variation and the output, except that both look periodic. The frequency is quite different, not to mention the form.
Again, this is what led me to all the other systems, where people have some nonlinear policy, no specific frequency, just a variation of gains within the carefully maintained "safe" domain, and things just blow up.
Itzhak, I think the case you have described is a typical example of a system that is excited (activated) at its own natural frequency.
(Remember the old example of a bridge which was destroyed after a group of soldiers marched over it with a step rate identical to the natural frequency of the bridge.)
Lutz, I am afraid that we talk about totally different things.
The bridge is NOT the case here. First, if the bridge could be well damped, it would not oscillate, yet it cannot be, because of the constraint of length versus width. Also, if its material strength could withstand large motions, it could oscillate forever and would not break. However, its strength gives way and its parts break apart with the oscillations, in particular beyond some amplitude of motion. This is NOT divergence!
In our case, we have a simple system with no limits and, if it oscillates at an amplitude of 1 for an input of 1 (or whatever its exact input-output gain is), it could just as well oscillate at an amplitude of 10^6 for an input of 10^6. You have its model and can try it. (Actually, this is the way I start trusting things that seem to go against my simple initial common sense.)
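Since the two effects are easy to confuse, here is a small numeric check of the distinction (again my own construction, reusing the pumped system from the earlier sketch): a damped linear system forced externally at resonance settles at a finite amplitude set by the damping, while parametric gain pumping above threshold grows without bound.

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta = 0.05

def forced(t, x):   # x'' + 2*zeta*x' + x = sin(t): external resonance
    return [x[1], -2 * zeta * x[1] - x[0] + np.sin(t)]

def pumped(t, x):   # x'' + 2*zeta*x' + (1 + 0.5*sin(2t))*x = 0: parametric
    return [x[1], -2 * zeta * x[1] - (1 + 0.5 * np.sin(2 * t)) * x[0]]

for name, f in (("forced", forced), ("pumped", pumped)):
    sol = solve_ivp(f, (0, 300), [1.0, 0.0], max_step=0.01)
    print(name, "max |x| near the end:", np.max(np.abs(sol.y[0][sol.t > 250])))
# forced: settles near 1/(2*zeta) = 10, however long you wait;
# pumped: keeps growing without any external input at all.
```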
The fact that the mere gain change leads to divergence, although any constant gain between the minimum and the very maximum value would maintain stability, is not that simple. Although ideas and common sense are vital in Engineering (in Control, for sure), people were shocked to realize that ideas and common sense alone are not sufficient in nonstationary systems and that some pretty heavy Math is also needed to guarantee stability.
Sorry, my friends. I tried to bring to your attention a bizarre phenomenon from the world of the nonstationary and the nonlinear.
Instead, I am reminded that an oscillating bridge might break (not if it were of steel, or even wood, when it would just oscillate) or that an oscillatory input might be amplified by a circuit. None of the above results in ever-increasing oscillations like in my bizarre example.
I called it "bizarre" because our common sense is first of all linear and time-invariant. After 1, 2, 3 we expect to see 4, and so 6 is a surprise, not to mention -2.
So, we know that pole position fully defines stability, and we are surprised that letting the poles vary may affect stability, even though, once "poles" vary, they cannot even be called "poles" any more.
Most certainly I failed.
"Lutz, I am afraid that we talk about totally different things. The bridge is NOT the case here."
Yes, Itzhak - of course, it is not the same effect. In my example, we have an external periodic excitation, whereas in your (nice and interesting) example the gain within the loop varies with a periodic shape.
However, in both cases the disturbing effect has a frequency identical to the natural frequency of the closed-loop system. That is the commonality of the two cases. Therefore, the bridge example suddenly came to my mind.
I think the most important question is whether such a case is realistic. In my opinion, such an unwanted gain variation can happen due to unwanted feedback effects (affecting the gain value).
"However, in both cases the disturbing effect has a frequency identical to the natural frequency of the closed-loop system."
How do you see this? I see double the frequency of the gain compared with the system.
Besides, a pretty large gain oscillation has no destabilizing effect, while just a bit more makes the amplitude increase without limit.
"I think the most important question is whether such a case is realistic."
This example is only meant to give a simple illustration of the fact that, contrary to what some great names thought, letting the gains change within some "admissible" domain is not safe. You see the results after they have been obtained, while I, to give you a simple example, guessed that a given oscillation might affect stability, and I did not succeed.
Then, suddenly, a bit more took everything to the skies.
While this example is artificial, nonlinear and adaptive control methods face this problem every single day. While they do not involve any fixed frequency, the adaptive gains manage to find the "proper" path to blow things up, if it is there. See the plane crash: it stopped Boeing from even touching adaptive control for some 40 years or so.
"How do you see this? I see double frequency of the gain as compared with the system."
Comparing in your figures the transient response for k=0.1 and the sine of the gain variation, I can see that the two frequencies are identical. Are they not?
"This example is only meant to give simple illustrtion to the fact that, contrary to what some great names thought, letting the gains change within some "admissible" domain is safe. It is not."
Itzhak, I suppose you are referring to the various stability criteria which exist. However, I think these criteria are, of course, correct, because they are applicable to steady-state conditions and time-invariant parameters only.
I already said sorry, yet maybe I should just apologize for disturbing the world of LTI.
OK, I thought I was giving you a glimpse into the complex world of nonstationary/nonlinear systems and instead, we started analyzing my (now, after I got it, pretty trivial) example.
No, no, no! Once you know that, due to external or internal conditions, the gain may vary between 1 and 10, for "safety" and "robustness" you should check stability for any fixed gain value between the limits. The opinion then was that you could let it run and stability was guaranteed although the gain may change at will. The answer is no, or at least not necessarily (not all nonstationary systems diverge).
Itzhak, I have the feeling you misunderstood something.
You did not "disturb" the wonderful world of "LTI". In contrary!
Your example has clearly demonstrated that our common LTI approach does NOT cover all situations which may exist. However, it seems to be important to know under which external conditions and constraints certain rules or criteria are applicable or not.
But such an approach is not (and must not be) new for us. In principle, this applies to all formulas and rules we are using, because all these equations and curves contain simplifications (neglect of parasitic and other disturbing effects) which are not always allowed.
OK, I feel much better. :-)
Actually, I come from the same world, and I was sure that (paraphrasing somebody) I could say "I am in Control here!"
Great names such as (the Russian) Aizerman and Kalman thought so.
As I said, in spite of some counterexamples, yours truly thought he had a proof. It was pretty well constructed, and I even found out that some counterexamples were wrong, so everything looked "almost" alright. But then I found my own counterexamples, and other examples in the literature which were not wrong.
The problem is that, with all due respect to this example, drawing too many conclusions from one example is again dangerous. Of course the varying gain may do something bad to the system, find some special frequency, etc. However, the main point is that even at the "bad" frequency one sine of a given amplitude maintains stability, while just a bit more makes everything blow up.
"However, it seems to be important to know under which external conditions and constraints certain rules or criteria are applicable or not."
I wish I had a simple answer. The only answer is that linear is the first-order polynomial, while nonlinear is all the rest. So, once you cannot assume LTI, you have no choice but to address the specific plant and the specific conditions or policy.
Disclaimer: this does not imply that people do not use nonstationary gains and policies and do not show that they work. They do, and systems (may) work, and with "it works" one cannot argue, even though the guarantee is not there.
"But such an approach is not (and must not be) new for us. In principle, this applies to all formulas and rules we are using, because all these equations and curves contain simplifications (neglect of parasitic and other disturbing effects) which are not always allowed."
In LTI systems, the approximation holds as long as the frequency response of the model is not far from the frequency response of the real system. In my example, the gain variation killed the system even though it would have stayed asymptotically stable for any fixed gain.
Also, adding some stable modes to an LTI system does not, in general, affect its stability. However, as you move to nonstationary gains (e.g., nonlinear or adaptive control), any unmodeled dynamics may just kill everything, even if you proved that the policy maintained stability with the nominal model.
Hi Lutz (and anyone with interest),
I just saw our correspondence again and I had more thoughts.
My previous example was meant to show people what I myself suddenly "discovered" and what managed to "destroy" my previous confidence that "I am in Control here." The usual practice is to test your plant at various possible fixed loop gains and then let it move freely through those gains as the operational conditions require, only making sure that they do not violate the "admissible" limits.
The first idea (a "conjecture", as it was not proved) of some rather famous names was that this practice was guaranteed to be safe.
However, while some systems seemed to work with nonstationary gains, catastrophes happened. Then people managed to come up with counterexamples, which showed that even if the system was asymptotically stable for any fixed value within the admissible domain, things may not be stable once the gains were allowed to vary within the same "admissible" range. Because the first counterexamples used switching gains, other people thought that continuous variations might still be safe (a claim I have received even these days).
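One of the standard switching counterexamples of the kind mentioned above can be checked numerically; this is my reconstruction of the textbook example (see, e.g., Liberzon, "Switching in Systems and Control"), so the exact matrices are an assumption:

```python
# Two Hurwitz matrices: each alone gives a decaying spiral, but a
# state-dependent switching rule between them drives the state outward.
import numpy as np
from scipy.integrate import solve_ivp

A1 = np.array([[-0.1, 1.0], [-10.0, -0.1]])   # stable; orbits elongated in x2
A2 = np.array([[-0.1, 10.0], [-1.0, -0.1]])   # stable; orbits elongated in x1

def rhs(t, x):
    A = A1 if x[0] * x[1] <= 0 else A2        # ride the "outward" quarter-turn
    return A @ x

print("eig A1:", np.linalg.eigvals(A1))       # -0.1 +/- 3.16j, in the LHP
print("eig A2:", np.linalg.eigvals(A2))       # same eigenvalues, in the LHP
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], max_step=0.001)
print("|x(10)| =", np.linalg.norm(sol.y[:, -1]))   # grows by orders of magnitude
```

Each matrix on its own would spiral the state into the origin; the switching rule merely selects, in each quadrant, the one whose elongated orbit carries the state farther out.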
Gain scheduling is used and actually is a very successful technique, and I take my hat off to what it has managed to do in flight control, etc.
Still, this is not a guarantee, and I think people usually try to make sure that the gains do not change too quickly from one value to another, so that the linear approximation more or less holds.
How the system responds when you change the gains is an open question. Even your observation that the time response seems to be related to the frequency of the "stabilizing" gains is not clear-cut.
You thought that I was changing the gains at the natural system frequency, yet even at this frequency the system worked pretty "nicely" until, at some amplitude, it stopped being nice.
Still, to eliminate any doubt, I changed the frequency from sin(t) to sin(1.1t) and sin(1.2t), and at first I thought that the phenomenon indeed vanished. However, this was only until I increased the amplitude of the gain variation a bit, still within the same "admissible" limits, and again got total divergence.
In adaptive control, where the gains are changed by some algorithm as a function of the tracking error, there are situations where both the gains and the tracking errors end up varying very little around some average value, instead of settling at constant values, as linear thinking would guess.
The total destruction in my example is only one possible outcome; some limit cycle, etc., is also possible.
All I want to say is that there is no guarantee that gains varying within the "admissible" limits maintain the expected asymptotic stability that any of the fixed values would provide.
Hi Lutz (and anyone interested),
Some time ago we were discussing stability when parameters vary within a supposedly "admissible" range. My example showed that the "admissible" range does not guarantee stability in nonstationary systems. Still, my example could be interpreted as if the parameters had to change at some particular frequency.
Not so: I found a paper where the eigenvalues of an LTV system remain fixed and located in the left half-plane. This should lead us to the conclusion that the system must be stable. Well, it is not.
Because the paper had no room for the explicit computation and only presents the results, I worked out all the details in the attached file.
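I cannot include the attachment here, but the classic example of this phenomenon (the one given, for instance, in Khalil's "Nonlinear Systems"; whether it is the same system as in the paper is my assumption) can be verified in a few lines:

```python
# x' = A(t) x with eigenvalues of A(t) fixed at -0.25 +/- j*sqrt(7)/4 for
# every t, i.e. well inside the LHP, yet x(t) = e^{t/2} [cos t, -sin t]^T
# is an exact, exponentially growing solution.
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    s, c = np.sin(t), np.cos(t)
    return np.array([[-1 + 1.5 * c * c,  1 - 1.5 * s * c],
                     [-1 - 1.5 * s * c, -1 + 1.5 * s * s]])

for t in (0.0, 1.0, 2.5):                  # the frozen eigenvalues never move
    print(f"eig A({t}) =", np.linalg.eigvals(A(t)))

sol = solve_ivp(lambda t, x: A(t) @ x, (0, 20), [1.0, 0.0], max_step=0.01)
print("|x(20)| =", np.linalg.norm(sol.y[:, -1]))   # about e^10, no decay at all
```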
Have Fun!