A system is stabilizable when it is not fully controllable but the uncontrollable subsystem is stable. Hence, stabilizability is weaker than controllability. If the system is stabilizable, then it is possible to design a controller for the controllable subsystem. The uncontrollable subsystem stays in open loop, but this is not a problem because it is stable.
Notice that the property of "stabilizability" and the "action of stabilization", i.e. finding a stabilizing controller for an unstable system, are not the same thing. The attached paper deals with the stabilization of a controllable system by using feedback, but this is not necessarily a problem of stabilizability. On the other hand, the controllability condition required in the paper could perhaps be relaxed to stabilizability.
That is a good point to ponder, but the answer is still unsatisfactory. If stability and controllability are interconnected, why not stabilizability and controllability? I know the difference between the 'action of stabilization' and stabilizability, as you mentioned. But, to rephrase, I am looking for an answer to the question 'when is a stable system controllable?' through a conceptual window, not from a rigorous mathematical standpoint.
We can understand it with an example. Say a person is sitting inside a car and trying to push it; he cannot, which indicates a lack of controllability. But the same car can be pushed from outside when we apply an external force. So, with feedback, the system is controllable.
I think that by feedback you mean a force or control input fed from the back (of the car), whereas feedback has another meaning in control. Also, your example really illustrates how Newtonian dynamics interpret the conservation of momentum. Anyway, it was a good example, but it is still unsatisfactory. The answer I am looking for should be a bit more intuitive, like the image at this URL: http://slideplayer.com/slide/8166732/
I would say that a system is controllable when the input command can control all state variables of the system. If we restrict the discussion to LTI systems, in control design this property allows you to use state feedback and move all system poles anywhere you decide.
A system is not considered controllable when you cannot control all its states (or place all its poles).
It is still stabilizable when you can control all unstable poles. The others are stable, so one can afford to ignore them.
Controllability is much more powerful, though. You may have a "stable" yet oscillatory mode and must live with the oscillations, because its lack of controllability does not allow you to damp it. Or the opposite: you have a very "stable" yet slow pole and no way to speed up its slooooow response to some occasional initial value.
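To make the pole-placement remark concrete, here is a minimal numerical sketch (the matrices are hypothetical, chosen only for illustration). It checks the rank of the controllability matrix and then uses SciPy's `place_poles` to compute a state-feedback gain that moves both poles, including the unstable one, to locations we pick:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2-state LTI system x' = A x + B u (illustrative values only)
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # open-loop poles at +1 (unstable) and -2
B = np.array([[0.0],
              [1.0]])

# Controllability matrix [B, AB]; full rank <=> the pair (A, B) is controllable
ctrb = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(ctrb) == A.shape[0]

# Because (A, B) is controllable, state feedback u = -K x can place the
# closed-loop poles anywhere we decide, e.g. at -3 and -4.
K = place_poles(A, B, [-3.0, -4.0]).gain_matrix
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```

If the pair were only stabilizable, `place_poles` could still move the controllable (unstable) poles, but the uncontrollable stable ones would stay where they are.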
"If stability and controllability are interconnected, why not stabilizability and controllability?" Because stabilizability is one of the interconnections between stability and controllability.
A system is controllable if you can move all state variables to the equilibrium by using the available inputs. If the system is controllable,
its stability does not matter: you can always stabilize it (e.g. the inverted pendulum). You would never design a controller to destabilize the plant.
If the system is completely uncontrollable but stable, then leave it alone and nothing will happen (e.g. the planetary system). If the system is completely uncontrollable and unstable, then nothing can be done.
If the system is uncontrollable and unstable but the uncontrollable part is stable (this is precisely the definition of stabilizability), the unstable part can be stabilized by using feedback over the controllable states.
If the system is uncontrollable and unstable, and the uncontrollable part is also unstable, nothing can be done.
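The case analysis above can be checked numerically with the PBH test: a pair (A, B) is stabilizable iff rank([A − λI, B]) = n for every eigenvalue λ of A with nonnegative real part. A minimal sketch, with hypothetical matrices chosen only to illustrate the two cases:

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """PBH test: (A, B) is stabilizable iff rank([A - lam*I, B]) = n
    for every eigenvalue lam of A with Re(lam) >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:  # unstable (or marginal) mode
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M) < n:
                return False  # an unstable mode is uncontrollable
    return True

# Unstable mode (pole at +1) driven by the input, stable mode (-2) not:
A = np.diag([1.0, -2.0])
B = np.array([[1.0], [0.0]])
stab = is_stabilizable(A, B)        # stabilizable, though not controllable

# Swap the input channel: now the unstable mode is uncontrollable:
B_bad = np.array([[0.0], [1.0]])
not_stab = is_stabilizable(A, B_bad)
```

The first pair matches the "uncontrollable part is stable" case, the second the "uncontrollable part is also unstable" case where nothing can be done.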
A standard car with 4 wheels is stable and controllable. A standard car with 3 wheels is unstable but controllable, i.e. it is stabilizable because all unstable modes are controllable (you only need to shift the balance). A standard car with only 2 wheels is not stabilizable because the unstable modes are not controllable.
Here, in the image above, I have shown the delicate interconnection of stabilizability and controllability. If the stabilizing input trajectory brings the state from a point x to the origin, with the input itself settling down to zero, then by applying the same input in the reverse direction we could take the state from the origin back to the point x (x could be the whole set of states in the case of global stabilizability and controllability), and so simply control the already stabilized system. In this sense, 'asymptotic stabilizability implies asymptotic controllability', and moreover 'asymptotic controllability implies feedback stabilization', as a one-to-one mapping from the state set {x} to the control set {u}. That's it.
Certainly, you are not the only reader of this post. Anyway, thanks for taking the time to participate in this discussion and communicate your ideas and insight to all those who are interested in learning more.
Also, I should add that by card I meant cart, not car. You might know that the 'cart and pendulum' is a benchmark for control systems analysis.
I thought that by card you had misspelled cart, not car.
You are quite right to spot that I am 'reinventing' well-established concepts of system theory. In my view, I am trying to reanimate system-theory concepts and make abstract ideas more tangible through a simpler theoretical framework.