Why is the state weighting matrix taken to be positive semidefinite, and not positive definite, in optimal control problems like LQR (Linear Quadratic Regulation) and LQT (Linear Quadratic Tracking)?
The penalty matrices Q (on the state x) and R (on the control u) must satisfy definiteness conditions for the Algebraic Riccati Equation (ARE) to have a solution: Q must be positive semidefinite and R positive definite. We must note that the Riccati equation can have more than one solution, but there is only one solution that is positive semidefinite, and that is the stabilizing one.
For more details and information about this subject, I suggest you see the links and the attached file in the topic.
-Development of a State Dependent Riccati Equation based ... - MAiA
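As a minimal numerical sketch of that unique stabilizing PSD solution (the double-integrator A, B and the weights below are made-up illustrations, not taken from the topic's attached file, and numpy/scipy are assumed available):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double integrator (illustrative values only)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.0])  # PSD state weighting (singular, but allowed)
R = np.array([[1.0]])    # control weighting, positive definite

# solve_continuous_are returns the unique stabilizing PSD solution of
# A'P + PA - P B inv(R) B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
print("eigenvalues of P:", np.linalg.eigvalsh(P))          # all >= 0, i.e. PSD

K = np.linalg.solve(R, B.T @ P)                            # gain K = inv(R) B' P
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))  # all in the open left half-plane
```

The eigenvalue checks confirm that P is positive semidefinite and that the resulting gain stabilizes the closed loop.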
The matrices Q and R put weightings on the state variables and on the control signals, respectively. A simple choice is a diagonal Q, giving the cost term x'Qx = Sum(qii*xi^2) over all i, which puts a weighting on every state variable. Sometimes, you (or your adviser :-) are only interested in a few state variables, say x1 and x2 out of n = 20 variables.
Using weightings on only a few states in Q makes it singular. As the computations do not involve any inversion of Q, this is allowed, yet Q must still be positive SEMIdefinite in order to get a solution.
As R is inverted, it can only be positive DEFINITE.
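A small numerical sketch of that situation (with a made-up 4-state system rather than n = 20, and assuming scipy): Q weights only x1 and x2, so it is singular, yet the Riccati solver still succeeds because only R is inverted.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 4-state chain in companion form, single input
A = np.array([[ 0.0,  1.0,  0.0,  0.0],
              [ 0.0,  0.0,  1.0,  0.0],
              [ 0.0,  0.0,  0.0,  1.0],
              [-1.0, -2.0, -3.0, -4.0]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])

# Weight only x1 and x2: Q is diagonal, PSD, and singular
Q = np.diag([1.0, 1.0, 0.0, 0.0])
R = np.array([[1.0]])  # R is inverted, so it must be positive definite

P = solve_continuous_are(A, B, Q, R)  # succeeds despite singular Q
K = np.linalg.solve(R, B.T @ P)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```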
Itzhak Barkana, what will happen if R is PSD? In that case, I think we cannot get the stabilizing feedback controller, since we need to compute the inverse of R. However, do you believe we can still guarantee the optimal solution? (I mean, if we convert the finite-horizon LQR to a QP, is there a proof of the convexity of that QP? Sometimes we do not care about finding the asymptotically stable response, but only about getting the best optimal solution.)
Elnaz Firouzmand, even before we talk about the actual computations: Optimal Control tries to minimize the state vector, and in the beginning it only put a weighting, our Q, on the state x. Although this might have worked for simple problems, people then observed that it may result in enormous control signals, which may even totally diverge. This is what led to also "penalizing" the control, in other words, to putting a weighting on the control u with R. If R is only semidefinite, then some control signals are not penalized and they may still reach very high values.
Itzhak Barkana, yes, I believe so. In my problem formulation, I have constraints on u and I do not care if the input signals hit their upper limit values. Actually, I have no reason to assign even a very small weight to minimizing some elements of my input. That is why my R matrix is PSD. I wanted to know whether we can guarantee that the resulting QP is convex.
Elnaz Firouzmand, Optimal Control is part of our basic knowledge, yet I am afraid I don't have an answer to your specific question. The Riccati equation contains inv(R) and the feedback gain is inv(R)B'P, so I guess that if R is not strictly PD, you may not get solutions.
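A tiny illustration of that point (made-up values, assuming numpy): forming inv(R) is exactly what breaks when R is only PSD.

```python
import numpy as np

R_pd  = np.diag([1.0, 0.5])  # positive definite: invertible
R_psd = np.diag([1.0, 0.0])  # only semidefinite: singular

print(np.linalg.inv(R_pd))   # fine; the gain K = inv(R) B' P can be formed
try:
    np.linalg.inv(R_psd)     # this inverse is needed for the gain and the ARE
except np.linalg.LinAlgError as err:
    print("cannot invert a PSD-only R:", err)  # "Singular matrix"
```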
Itzhak Barkana That's right. I think that by assigning some elements of R relatively small values in comparison to Q and the other elements of R, the PSD issue can be avoided. Then I have a PD R and also a unique solution of the Riccati equation. Under the stabilizability condition for the system, writing the LQR in batch form over a given finite horizon then yields a convex QP.
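A sketch of that batch (condensed) formulation (the discrete-time A, B, horizon N, and weights below are made-up illustrations): stacking the dynamics gives x = Phi*x0 + Gamma*u, the QP Hessian is H = 2*(Gamma'*Qbar*Gamma + Rbar), and a strictly PD Rbar, however small, makes H positive definite, i.e. the QP strictly convex.

```python
import numpy as np

# Discrete-time batch (condensed) finite-horizon LQR as a QP
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
N = 10
n, m = A.shape[0], B.shape[1]

# Stack the dynamics: x = Phi @ x0 + Gamma @ u, with u = [u_0; ...; u_{N-1}]
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), np.diag([1.0, 0.0]))  # PSD (singular) state weight
Rbar = np.kron(np.eye(N), 1e-3 * np.eye(m))     # small but strictly PD

# Cost J = 0.5 u' H u + (linear term in x0) + const, with Hessian H:
H = 2 * (Gamma.T @ Qbar @ Gamma + Rbar)
print("min eigenvalue of H:", np.linalg.eigvalsh(H).min())  # > 0 => strictly convex QP
```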