I have designed a control law based on sliding mode control (SMC) for a grid-tied inverter. The controlled variable tracks positive reference values well, but tracking fails for negative reference values.
In principle, a sliding mode controller should be indifferent to the sign of the reference value. I suspect the problem is a specific interaction between your plant model and your (implemented) control law. Note that a proper implementation and simulation of a sliding-mode-controlled plant is non-trivial.
Debugging hints:
Can you prove that your sliding surface is asymptotically stable for all (interesting) reference values?
What are the reduced dynamics, i.e., the system dynamics when the system is constrained to the sliding surface?
What happens if you choose initial values near the desired equilibrium?
Can you exclude that the undesired behavior is an artifact of the simulation? (A quick step-size check like the sketch below often settles this.)
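For that last hint, here is a minimal sketch of such a check (the first-order plant, gains, and reference are illustrative assumptions, not your inverter model): run the same switching law with two very different fixed step sizes and compare the results.

```python
# Illustrative plant x' = a*x + u with switching law u = -K*sign(x - x_ref).
# If coarse and fine step sizes disagree strongly, suspect the simulation.
import numpy as np

def simulate(x_ref, dt, t_end=2.0, a=1.0, K=5.0, x0=0.0):
    """Fixed-step Euler integration of the switched closed loop."""
    x = x0
    for _ in range(int(t_end / dt)):
        s = x - x_ref              # sliding variable
        u = -K * np.sign(s)        # discontinuous switching term
        x += dt * (a * x + u)      # Euler step
    return x

for x_ref in (+0.5, -0.5):         # try both signs of the reference
    coarse = simulate(x_ref, dt=1e-2)
    fine = simulate(x_ref, dt=1e-5)
    print(f"x_ref={x_ref:+.1f}: coarse dt -> {coarse:+.4f}, fine dt -> {fine:+.4f}")
```

If the two runs disagree strongly, suspect the integration scheme (step size, solver, discontinuity handling) before blaming the control law.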
If you have the control law in feedback form, explicit in the state variable x as u = u(x), you may plot u(x) versus x to see whether it lies in the second and fourth quadrants of the Cartesian coordinate system. For example, u(x) = -x and u(x) = -x^3 show the typical trend and behavior of a stable feedback control law for stabilization purposes: they remain in the second and fourth quadrants. Moreover, if the feedback law and state variable are numerically available as time signals, you may plot u(t) versus x(t), similar to a phase-plane portrait, and again check whether the curve remains in the second and fourth quadrants of the coordinate system. I suspect your feedback control law does not remain in the second quadrant for negative initial conditions or negative reference signals. Please check this again and verify the condition.
Update: A correctly designed control law should be anti-symmetric about the vertical (y) axis, i.e., u(-x) = -u(x).
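The check can be scripted in a few lines; here is a small Python sketch using u(x) = -x^3 as the example law:

```python
# Plot u(x) against x and flag samples falling in quadrants I/III (x*u(x) > 0),
# i.e., samples where the feedback is positive.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2.0, 2.0, 401)
u = -x**3                          # example law that stays in quadrants II/IV

bad = np.count_nonzero(x * u > 0)  # samples giving positive feedback
print("samples violating the quadrant criterion:", bad)

plt.plot(x, u)
plt.axhline(0.0, color="k", lw=0.5)
plt.axvline(0.0, color="k", lw=0.5)
plt.xlabel("x")
plt.ylabel("u(x)")
plt.title("u(x) = -x^3 stays in quadrants II and IV")
plt.show()
```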
Thanks to all for your comments, Saeb AmirAhmadi Chomachar, Carsten Knoll, and Zeashan H. Khan; they are really valuable. It turns out that the problem is not in the control law of the SMC. I modeled the system in Simulink and was using a saturation block at the control output. It worked fine with the conventional PI controller, but it turns out that I have to increase the saturation level when using the SMC; that is why the SMC was not successful in tracking the reference value. Thanks, all.
The 2nd/4th-quadrant sector criterion is only true for the continuous-time system x' = u(x), x ∈ ℝⁿ, if n = 1. Many control theorists have come up with counterexamples showing that the criterion is insufficient to conclude absolute stability for n ≥ 2. It is a very interesting problem in Aizerman's and Kalman's conjectures.
Meanwhile, Ahmed Mohammedosman should avoid using the saturation block from the Simulink/Discontinuities library as a replacement for the signum function. The saturation block works like a "clipper" on the y-axis. If you unknowingly increase the clipper limits, say from ±0.5 to ±1.5, beyond the amplitude of the input signal, say sin(x), then you might as well not use the saturation block at all.
Anyhow, you can construct the piecewise-linear saturation function mathematically:
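For instance, a common piecewise-linear form is sat(s) = s/δ for |s| ≤ δ and sign(s) otherwise (the boundary-layer width δ = 0.5 below is an illustrative choice); a short Python sketch:

```python
import numpy as np

def sat(s, delta=0.5):
    """Piecewise-linear saturation: s/delta for |s| <= delta, clipped to +/-1 outside."""
    return np.clip(s / delta, -1.0, 1.0)

print(sat(np.array([-2.0, -0.25, 0.0, 0.25, 2.0])))  # -> [-1. -0.5 0. 0.5 1.]
```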
Thank you for your comment. I only skimmed the part of your comment where you explained the piecewise-linear saturation function. However, regarding the initial part of your comment about absolute stability, I have some reservations, and I would like to express my idea.
In my opinion, optimal control is undoubtedly the paradigm of absolute stability. Any other conjecture or notion of absolute stability is a misconception, resting on a loose basis, not on a sturdy logical foundation.
To support my claim, it is helpful to look at the SIAM paper by Freeman and Kokotovic, a screenshot of which I have appended to my comment. In the image, you see two different control laws, (2) and (5). Control law (2), illustrated as a solid line, is based on feedback linearization; the other (depicted by a dashed line) is an optimal feedback law. Moreover, as you see in the figure, the optimal feedback law remains entirely in the second and fourth quadrants, whereas the feedback-linearization-based control law crosses into the first and third quadrants. Any control researcher obviously knows that a control law based on the feedback-linearization technique will get destabilized for some modes and regions/ranges of operation (e.g., for different initial conditions). I think such a destabilizing effect in a feedback-linearization-based controller is due to its crossing into the first and third quadrants, while the optimal control law, which remains entirely in the second and fourth quadrants, exemplifies and features the best paradigm of absolute stability.
Summary: In my opinion, as I already underlined in my previous comment, the possibility that the control law designed by Mr. Ahmed Mohammedosman does not stay in the second and fourth quadrants might be a reason for the sliding-mode controller failing to work properly and losing stability over some ranges of operation, such as negative reference values.
I have attached a screenshot from the paper cited here as:
Inverse Optimality in Robust Stabilization,
R. A. Freeman and P. V. Kokotovic, SIAM Journal on Control and Optimization, 1996.
I think Mr. Ahmed Mohammedosman has found the solution. Thanks for the article. The article provides an example using the 1st-order system x' = – x³ + u + w(t)·x, where u = x³ – 2·x is the control law via feedback linearization, and w(t) is the disturbance satisfying |w(t)| ≤ 1.
However, you have mentioned that "Any control researcher, obviously knows that, a control law which is based on feedback-linearization technique, will get destabilized for some modes and regions/ranges of operation (e.g. for different initial conditions)".
Maybe I'm not one of them, or I inadvertently misconstrued the technical part of the statement. Perhaps we may consult with Prof. Zeashan H. Khan and Prof. Mohamed-Mourad Lafifi.
V = ½·x² > 0 for all x with x ≠ 0
V' = x·x'
V' = x·(– x³ + u + w(t)·x)
V' = x·(– x³ + x³ – 2·x + w(t)·x)
V' = x·(– 2·x + w(t)·x)
V' = – x²·(2 – w(t))
Say K ≡ 2 – w(t). Since |w(t)| ≤ 1, we have K ≥ 1 > 0.
Therefore,
V' = – K·x² < 0 for all x with x ≠ 0.
The result shows that asymptotic stability holds for any x ≠ 0 (i.e., for different initial conditions), despite the fact that u = x³ – 2·x crosses the 1st and 3rd quadrants. I think we cannot conclude stability from the control law alone; stability has to be evaluated from the closed-loop system.
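As a cross-check, here is a minimal numerical sketch of this argument (the disturbance w(t) = cos(t) is just one admissible choice with |w| ≤ 1, and the initial conditions ±2 are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def closed_loop(t, x):
    u = x**3 - 2.0 * x          # feedback-linearizing law from the paper
    w = np.cos(t)               # admissible disturbance, |w(t)| <= 1
    return -x**3 + u + w * x    # simplifies to -(2 - w(t)) * x

for x0 in (2.0, -2.0):
    sol = solve_ivp(closed_loop, (0.0, 5.0), [x0], rtol=1e-8)
    print(f"x0 = {x0:+.1f} -> x(5) = {sol.y[0, -1]:+.2e}")
```

Both trajectories decay to zero, consistent with V' = –K·x² < 0.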
Thank you for your comment. You have mentioned that "I think we cannot conclude the stability from the control law alone." But in my opinion, the first step in analyzing the stability of the closed loop is to investigate the control law alone. For example, input-to-state stability (ISS) was developed largely to study state stability based on features of the control law. Sometimes the control law is singular for some values of x, although the overall system might be asymptotically stable. I remember that in my academic studies in college, a professor taught feedback linearization for a robotic hand manipulator. I do not have a paper citation for that problem, but the control law based on feedback linearization was singular for some values of θ, for example θ = kπ/4, where θ was the angular deflection of the robot manipulator, something like an inverted-pendulum angle. This singularity of the control law for specific values of the state variable θ was mentioned by the professor as one of the general deficiencies of feedback linearization. I think that in theoretical control theory, particularly in nonlinear control, directly investigating the general features of the control law is salient before employing a Lyapunov stability criterion.
Thanks for sharing. Your professor was right, and so are you. In the robot-manipulator example, I think your professor was referring to kinematic singularities. That is a special case when dealing with rotational motion, and it doesn't mean that all control laws via feedback linearization will destabilize all systems at certain x; at least it won't happen in the example provided by Freeman (1996).
For example, the system
x' = x³/cos(x) + u
has singularities at ±π/2 for –π ≤ x ≤ π.
Naturally, the control law
u = – x³/cos(x) – k·x
also has singularities at ±π/2, even though the closed-loop system evaluates to
x' = x³/cos(x) – x³/cos(x) – k·x
x' = – k·x
is "stable".
Anyhow, if we are unsure of something (not in accordance with the scientific method), then we usually do not generalize using determiners such as "any" or "all".
Edit: Your quadrant criterion for the control law is indeed an interesting problem to investigate. Here is the 1st-order system adapted after Freeman & Kokotovic (1996):
x' = x³ + u + cos(x)*x
How would you stabilize the system using your Snap Control and HECT Control, as well as satisfying the quadrant criterion for the control law (u)?
Thank you for your comments and derivations, which are illuminating. I did not claim that control laws which do not remain entirely in the second and fourth quadrants are always destabilizing. I only hinted that control laws which cross into the first and third quadrants are prone and vulnerable to instability. With regard to the system you exemplified, x' = x³ + u + cos(x)*x, it is helpful to know that my naïve HECT theory is associated with a time-varying feedback law, u = u(t, x). Therefore it is not convenient to plot the control law to see whether it remains entirely in the second and fourth quadrants, although after a computer simulation (probably in Matlab) we can numerically plot u versus x and then check the criterion for specific problems. Moreover, the control laws provided by my Snap Control idea are purely state feedback laws, but I have not verified whether they abide by the criterion (whether they remain in the 2nd and 4th quadrants), since I have not had enough time to try it for the system you exemplified.
Control laws which cross into the first and third quadrants actually provide positive feedback over some ranges of the state variable, and as you know, positive feedback is prone to instability.
Please see my idea for your question, as I have presented below:
For the nonlinear benchmark system you exemplified as:
x' = x³ + u + cos(x)*x,
The associated control-Lyapunov function (CLF) is assumed to be:
V = x^2/2.
Then, using HECT, a closed-form time-domain dissipation of the CLF V gives:
V_dot = -V*exp(exp(-x^2)),
x*x_dot = -(x^2/2)*exp(exp(-x^2)),
x_dot = -(x/2)*exp(exp(-x^2)).
Referring again to the system evolution ODE,
x^3 + u + x*cos(x) = x_dot = -(x/2)*exp(exp(-x^2)),
the control variable is found as:
u = -x^3 - x*cos(x) - (x/2)*exp(exp(-x^2)).
I have attached the plot of u(x) versus x for your perusal. I have no Matlab software on my PC for simulation in the time domain, but I guess it is stable in the time domain as well. As you see in the plot, the control law remains in the second and fourth quadrants only.
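Since I could not run a time-domain simulation myself, here is a minimal Python sketch anyone can run to check it (the initial conditions ±1.5 are arbitrary test points); by construction, the closed loop should reduce to x_dot = -(x/2)*exp(exp(-x^2)):

```python
import numpy as np
from scipy.integrate import solve_ivp

def u_hect(x):
    return -x**3 - x * np.cos(x) - (x / 2.0) * np.exp(np.exp(-x**2))

def plant(t, x):
    return x**3 + u_hect(x) + np.cos(x) * x   # reduces to -(x/2)*exp(exp(-x^2))

for x0 in (1.5, -1.5):
    sol = solve_ivp(plant, (0.0, 10.0), [x0], rtol=1e-9)
    print(f"x0 = {x0:+.1f} -> x(10) = {sol.y[0, -1]:+.2e}")
```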
Warning: If you directly insert the Snap feedback law, u = -x*exp(exp(-x^2)), into the system evolution ODE, you get:
x' = x³ - x*exp(exp(-x^2)) + cos(x)*x,
For this case I have illustrated the phase portrait (x_dot versus x); please note that here it is the state rate x_dot, rather than u, that crosses into the first and third quadrants, possibly causing instability over some ranges of operation. However, as illustrated on my ResearchGate project page for Snap Feedback Control, in this case the feedback control law u = -x*exp(exp(-x^2)) remains entirely in the 2nd and 4th quadrants:
Thank you for the plots. I proposed u = –k·(x³ + x), where k > 1. The proposed control law u (green curve) satisfies your quadrant criterion x·u(x) < 0 for x ≠ 0. The gain k can be a constant value, but I prefer to add a little static nonlinearity, k(x) = exp(–0.5·(x/0.4)²) + 1, so that k is high when |x| < 1 and low when |x| ≥ 1.
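For completeness, a minimal simulation sketch of this proposal on x' = x³ + u + cos(x)·x (the initial conditions ±2 are arbitrary test points):

```python
import numpy as np
from scipy.integrate import solve_ivp

def k(x):
    return np.exp(-0.5 * (x / 0.4)**2) + 1.0   # ~2 near the origin, -> 1 for |x| >= 1

def plant(t, x):
    u = -k(x) * (x**3 + x)                     # proposed law, x*u(x) < 0 for x != 0
    return x**3 + u + np.cos(x) * x

for x0 in (2.0, -2.0):
    sol = solve_ivp(plant, (0.0, 10.0), [x0], rtol=1e-9)
    print(f"x0 = {x0:+.1f} -> x(10) = {sol.y[0, -1]:+.2e}")
```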
Thank you for your comment and detailed exposition, but I do not fully understand what you want to imply by your recent comment. Are you corroborating my idea and conjecture, or providing a counter-example? It seems you have proposed a control law based on my criterion, meaning the control law remains in the 2nd and 4th quadrants:
x·u(x) < 0 for x ≠ 0,
and you then illustrated that the benchmark system is globally stable under such a control law, u = u(x).
At first, I didn't intentionally design the control law based on your criterion. In part of your Snap control law, you cancelled out the unwanted nonlinearities via the feedback-linearization-esque terms –x³ – x·cos(x).
So I postulated that there may be a simpler control law that can stabilize the system and dampen the nonlinearities at the same time. It just happened that u(x) = –k·(x³ + x) satisfies your quadrant criterion, x·u(x) < 0 for x ≠ 0.
I'm not competing with you. Both control laws produce basically the same effort for the most part; yours uses slightly less energy. By the way, when did you discover the quadrant criterion?
Edit: Perhaps you can write down the theorem or a conjecture about the quadrant criterion of the control law u.
I should distinguish between my Snap control methodology and my HECT methodology. Through the Snap control methodology, we directly insert a control law into the control argument of the system evolution ODE. The control law should obey the proposed criterion, meaning it should remain in the second and fourth quadrants, or in other words: x·u(x) < 0 for x ≠ 0.
It's too early to give up on your Snap Control. If you slightly modify u to become
u = – k·[x³ + cos(x)·x]·exp(exp(–x²)), with gain k > 1, then you will see. You could write down a theorem or a conjecture about the quadrant criterion of your Snap control law.
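A quick numerical check of this modified law (the value k = 1.5 is an assumed choice; any k > 1 should behave similarly). Note that the closed loop becomes x' = (1 – k·exp(exp(–x²)))·(x³ + cos(x)·x), and since x² + cos(x) ≥ 1 for all x, the right-hand side always opposes the sign of x:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.5   # assumed gain, any k > 1

def plant(t, x):
    u = -k * (x**3 + np.cos(x) * x) * np.exp(np.exp(-x**2))
    return x**3 + u + np.cos(x) * x

for x0 in (3.0, -3.0):
    sol = solve_ivp(plant, (0.0, 10.0), [x0], rtol=1e-9)
    print(f"x0 = {x0:+.1f} -> x(10) = {sol.y[0, -1]:+.2e}")
```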
I am glad to hear you call my Snap Control idea a theorem or conjecture at this elementary level of its development. Your recent comment crystallizes such encouragement, as it exemplifies the correctness of the criterion when employed to control a nonlinear benchmark system. But as I have already underlined in other posts, I am preoccupied with many of my ResearchGate projects; as I find time, I will try to make a formal statement of the theorem and a mathematical proof for it. Anyway, I am thankful to you for propelling me to shape my ideas as mathematical theorems. I will stay in contact and will seek more advice from you as I proceed with my ideas. Regards.
As a matter of fact, and for your own insight, it would be better to modify the proposed control law and bend it toward the horizontal (x) axis for large x (as x goes to infinity). In simple words, when x gets too large, the control input should not diverge; it should instead be restricted and constrained. Upon bending the control law toward the x-axis, it becomes analogous to optimal feedback (semi-optimal); I call optimal control hyper-stable, and hyperstability is sought in some specific control applications. This is related to what I had already called input-to-state stability (ISS) a few threads above this discussion: the ISS property guarantees that the control input does not get very large when the state gets very large, which is also related to robust control and robust stability, or simply robustness, in control terminology. For example, your proposed control law is stabilizing in the sense of pure stabilization, but when x gets too large, the control input u(x) also gets too large, and this is what could give rise to instability. From another, less relevant viewpoint, the actuator deflection is restricted and constrained and is not capable of producing an arbitrarily large control input. You should damp your proposed control law for large x.
I have attached a screenshot to my comment here, for your perusal.
The paper is cited here as:
Inverse Optimality in Robust Stabilization,
R. A. Freeman and P. V. Kokotovic, SIAM Journal on Control and Optimization, 1996.
I had already cited this paper in my previous comments under this discussion.
I look forward to your comments and viewpoints.
Thanks for your concern. You have to provide a proper example to show exactly what you mean by "when x gets too large, then also the control-input u(x) gets too large, and this is what could give rise to instability". Without that, it is difficult to compare x' = x³ + u + cos(x)*x with Freeman's "very special model" x' = – x³ + u + w*x, |w| ≤ 1. In fact, have you noticed that Freeman's control law also has the nonlinear feedback-cancellation term x³, even though –x³ provides naturally stabilizing nonlinear damping as |x| >> 1?
Meanwhile, you could show us how to apply Freeman's control law to x' = x³ + u + cos(x)*x. This would allow us to investigate whether Freeman's idea expends little control effort on large signals in all cases or only in this very special case. Awaiting two good things from you:
1. An example to show what gives rise to instability as x gets very large (excluding singularity cases).
2. Freeman's inspired inverse optimal control law for x' = x³ + u + cos(x)*x.
The proposed control law (see figure) can also handle Freeman's "special model" x' = – x³ + u + w*x, |w| ≤ 1, without introducing positive feedback or cancelling a beneficial nonlinearity. If you want to check the stability, you can assume that w = cos(x).
For the case of Freeman benchmark system, the Hamilton-Jacobi-Isaacs (HJI) formula
for the evolution ODE:
x_dot=-x^3+u+wx
yields the optimal feed-back law as [1]:
u(HJI)=x^3-x-x*sqrt(x^4-2*x^2+2).
But if you remove the disturbance from the system evolution, w(t) = 0, then the optimal feedback control law based on the Galerkin method is [2]:
u(Galerkin) = -∂V/∂x = x^3 - x*sqrt(x^4+1).
Warning_1: There is a sign difference between the system evolution ODE considered by Freeman and that considered by Georges; the cubic term has the opposite sign (one negative, the other positive).
Moreover, if you simply plot the control-law functions u(HJI) and u(Galerkin) versus x, you will confirm that they are similar in their mathematical behavior and trend. I have plotted several optimal control-law functions for different linear and nonlinear scalar systems, and they appear completely similar in their general features: they are hump-like near the origin of the coordinate system, and for large x (moving along the x-axis of the function plot in either direction), the value of the control function u = u(x) bends toward the x-axis as x approaches infinity. Non-optimal control laws, by contrast, diverge as x approaches infinity. For concreteness, u = -x^3 is of such a trend: as you move toward either +∞ or -∞, the value of the function u = -x^3 clearly diverges. Meanwhile, optimal feedback control laws do not exhibit such an unrestricted trend; as I have already mentioned, they hump near the origin and bend down for large values of the state variable x. This is typically called robust stabilization or robust optimality.
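For anyone who wants to reproduce the comparison, a short Python plotting sketch of the two cited laws:

```python
# Both optimal laws "hump" near the origin and flatten toward the x-axis
# for large |x| (the square roots are well defined for all real x).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4.0, 4.0, 801)
u_hji = x**3 - x - x * np.sqrt(x**4 - 2 * x**2 + 2)   # Freeman & Kokotovic [1]
u_gal = x**3 - x * np.sqrt(x**4 + 1)                  # Georges, w = 0 [2]

plt.plot(x, u_hji, label="u(HJI)")
plt.plot(x, u_gal, "--", label="u(Galerkin)")
plt.axhline(0.0, color="k", lw=0.5)
plt.axvline(0.0, color="k", lw=0.5)
plt.xlabel("x")
plt.ylabel("u(x)")
plt.legend()
plt.show()
```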
Warning_2: My conclusion is not general, and I have seen optimal feedback control laws which do not converge for large x (they actually diverge), but I think for many optimal control laws, particularly those for scalar systems which are already asymptotically stable, such as x_dot = -x^3 + u, the trend is visible. I have attached a plot illustration from my original optimal control methodology, appended to my ResearchGate project at the URL:
In the plot, you can see the general trend of a hump near the origin and convergence for large x for the optimal control-law surface, which is associated with a nonlinear underactuated MIMO control system.
References:
[1] Inverse Optimality in Robust Stabilization, R. A. Freeman and P. V. Kokotovic, SIAM Journal on Control and Optimization, 1996. Page 1367.
[2] D. Georges, Solutions of Nonlinear Optimal Regulator and H∞ Control Problems via Galerkin Methods, European Journal of Control, Volume 2, Issue 3, 1996, Page 225.
Thanks for the articles and your efforts to explain. Are you implying that Freeman's and Galerkin's "robust stabilization" control methods only exhibit the "heartbeat-like hump" for already-stable systems like x' = – x³ + u?
Actually, I'm a little confused. Are you thinking that your Snap Control needs improvements so that it performs like Freeman's and Galerkin's controllers?
I'm sure you have noticed that x' = – x³ + u and x' = x³ + u are two different systems. The former is autonomously stable due to the term –x³, while the latter is unstable due to x³. For the latter, I think your control problem statement is: "How can the destabilizing term x³ be compensated using an optimal Snap control law that exhibits the 'hump' pattern, without feedback cancellation and without risking instability when |x| >> 1 (gets large)?" Focusing on this research direction may lead to some discoveries.
– – – – – – – – – – – – – – – – – – –
Edit: Do you remember that you mentioned the control input signal may become unstable when x gets very large, due to the cubic term –x³ in the proposed Snap control law? Then you advocated Freeman's optimality idea to attenuate the control effort as |x| >> 1.
In other words, for the already-stable x' = – x³ + u, when |x| >> 1 the control effort tends to zero, u(x) → 0, thus allowing the system x' = – x³ to "self-stabilize". However, if for some unstated reason the quality of the stabilizing cubic term f(x) = – x³ deteriorates and becomes "unstable" as x gets larger, then x' = f(x) will become unstable as well, because Freeman's optimal control law has little or no jurisdiction over the region |x| >> 1. Therefore, the argument of reducing the risk of instability as x gets very large is self-defeating!
As a case in point, I have constructed a hypothetical cubic term for the system x' = f(x) + u, where f(x) = – x³/(2·exp(–(x/(5/0.940743))⁶) – 1), and assumed that the control designer models f(x) = – x³ on the operational range of interest –3 < x < 3, because he or she cannot capture the true behavior of f(x) over the entire –∞ < x < ∞. If you plot the true f(x), you will notice that the quality of the –x³ term rapidly deteriorates beyond |x| > 4, and the sign flips at ±5. See figure. Try applying Freeman's optimal controller:
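To make the thought experiment reproducible, here is a sketch of it (the escape threshold |x| = 50 and the test points x0 = 4.5 and 5.5 are arbitrary choices). Starting just inside the flip point, the controller succeeds; starting just outside, the state runs away, because u(HJI) is nearly zero there:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    # Hypothetical drift: matches -x^3 for |x| < 3, sign flips at |x| = 5.
    return -(x**3) / (2.0 * np.exp(-(x / (5.0 / 0.940743))**6) - 1.0)

def u_freeman(x):
    # Freeman & Kokotovic optimal law; decays toward zero for large |x|.
    return x**3 - x - x * np.sqrt(x**4 - 2.0 * x**2 + 2.0)

def plant(t, x):
    return f(x) + u_freeman(x)

def escape(t, x):               # stop integrating once the state runs away
    return abs(x[0]) - 50.0
escape.terminal = True

for x0 in (4.5, 5.5):           # just inside vs. just outside the flip at 5
    sol = solve_ivp(plant, (0.0, 2.0), [x0], events=escape, rtol=1e-8)
    print(f"x0 = {x0}: stopped at t = {sol.t[-1]:.3f}, x = {sol.y[0, -1]:+.2f}")
```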
I accept that I made a mistake. Since I observed, even for a MIMO nonlinear system (the Brockett system), that the three-dimensional control-function surface has a hump-like trend, as illustrated and cited in my ResearchGate project in my previous comment, I generalized the observation to any optimal control law. But it seems that was a mistake.
Moreover, I have attached an image to this comment which shows the control-law functions for the Freeman benchmark system when the evolution ODE has a positive cubic term in one case and a negative cubic term in the other. The two control laws differ by a minus sign, and if the positive cubic term is compensated, the control function shows no hump-like feature, as illustrated in the plots for:
u1=x^3-x-x*sqrt(x^4-2*x^2+2),
u2=-x^3-x-x*sqrt(x^4-2*x^2+2).
Thank you for your investigation of the problem. It has been helpful to read your comments, as they illustrate the mistake.
Don't mention it. It was a healthy exchange of results, information, and opinions on control theory. I also hope that Mr. Ahmed Mohammedosman can benefit from the shared knowledge and educated opinions on this forum. Till next time.
Dear Yew-Chung Chak and Saeb AmirAhmadi Chomachar,
Indeed, it was a very rich exchange of ideas and a fruitful discussion, which for me personally is beneficial, and certainly also for our fellow researchers. We cannot thank you enough for your high-level debate; once again, a big thank you.