Dear Chitta Behera, maybe my answer will help you.
"Classic theory of control processes the control problems via step-by-step approach. This theory is convenient and allows a simple and a better handling procedures on single loop models. Where optimal theory of control adapts a global approach.The approach applies optimal algorithms. Now a days, optimal control theory is a part of modern control theory."
Classical control is the conventional control methodology using P, PI, PD, and PID controllers. The good thing is that it gives robust controllers. The problem, however, becomes quite difficult in the multivariable case, which is why modern control theory was developed (using tools such as singular-value (sigma) plots). Optimal control is based on the rationale that the control loop is optimized so that maximum output is obtained with minimum input: a cost function is defined and the parameters are optimized so as to give the required results with minimum input. LQR (and LQG) are examples of such controllers. This is a newer technique compared to classical control, but it does not ensure robustness. On the other hand, working with MIMO systems is not difficult here, since the method accommodates multiple variables.
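To make the LQR idea concrete, here is a minimal Python sketch; the double-integrator plant and the weights Q and R are illustrative choices of mine, not anything from the original question:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: a double integrator x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost J = integral( x'Qx + u'Ru ) dt -- the "cost function"
# whose minimization defines the optimal controller.
Q = np.diag([1.0, 1.0])   # state penalty (a design choice)
R = np.array([[1.0]])     # input penalty: larger R => less control effort

# Solve the continuous-time algebraic Riccati equation and form u = -Kx
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)

# Closed-loop poles of A - B K
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

Increasing R trades control effort against speed of response, which is exactly the "minimum input, maximum output" trade-off described above.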
Classical control deals with methods that were developed a long time ago but are still being used today. Classical control contains the basic theories and methods and is the foundation of all other types of control. Therefore, you will have to understand classical control first before you look into other types, e.g. optimal control, digital control, robust control, etc.
Optimal control, on the other hand, is an extension of classical control in which you answer the question: "How do I design my system (or controller) to ensure that I optimize a certain set of variables?" For example, if you are considering the control of a high-power synchronous motor, the optimal control question could be "How do I design my controller such that the power given to the motor is minimized?" In that case, you are optimizing your system by driving the power to a minimum.
@Chitta Behera, since your major is not in control science, you can find information about what you call classical control and optimal control even on Wikipedia. Follow the links http://en.wikipedia.org/wiki/Optimal_control and http://en.wikipedia.org/wiki/Control_theory#Classical_control_theory
I think they will be helpful to you. Please do not hesitate to ask whatever you need in the control area.
In control science you attempt to drive the system output (y) to a set point (r).
In classical control you do this step by step (time by time): you tune your classical controller so that it gives you the proper control signal (u), then you apply this control signal (u) to your system and the system output (y) converges to the set point (r).
Here there are two points:
1) Often you know the future set points, so at each time it is also possible to design the control signal for future times.
2) As you know, in most physical processes there are limitations on the control signal (u) you can apply. Applying a desired control signal may be impossible for reasons such as the high cost of producing it, its high amplitude, or the impossibility of generating it at all.
In these situations, to handle these two matters, you can use OPTIMAL CONTROL.
In brief, optimal control constructs the control signal (u) for a period of future times, and to do this it defines a cost function (C.F.).
It minimizes this C.F., and through this minimization it not only produces the proper control signal for a period of future times, but also produces a control signal that is actually applicable to the system and producible.
I hope these explanations give you a view of optimal control.
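Here is a minimal numerical sketch of that idea: a cost function over a period of future times is minimized (here by the standard backward Riccati recursion) to produce the whole control sequence. The discrete-time plant, weights, and horizon are my own illustrative assumptions:

```python
import numpy as np

# Illustrative discrete-time plant x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state penalty in the cost function (C.F.)
R = np.array([[0.5]])    # input penalty: discourages large (costly) u
N = 20                   # period of future times (horizon)

# Backward Riccati recursion: minimizes
#   J = x[N]'Q x[N] + sum_{k<N} ( x[k]'Q x[k] + u[k]'R u[k] )
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()          # gains[k] now applies at time k

# Roll the optimal control signal out over the horizon
x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x    # optimal u for this future time
    x = A @ x + B @ u
print("state after the horizon:", x.ravel())
```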
In summary, there are three things that generally motivate the use of optimal control, i.e. the answer to the question "why optimal control (as opposed to classical control)?":
1. Multi Input Multi Output (MIMO) problems
2. The existence of constraints on states and/or control inputs
3. The inherent optimization procedure
The methods to handle the above issues in classical control are either unavailable or, at best, available only on an ad-hoc basis.
Using this framework, you could actually have asked a better question: which one will take less time to reach the set point with minimum control expenditure?
As for the cost of computing, which optimal control are you referring to? The most widely used optimal control, LQR (where you have a linear system model and a quadratic cost function), should not be considered expensive by today's standards.
Classical feedback control refers to control by P, PI, and PID controllers, still the most widely used in industry. The traditional focus has been on decentralized control, where the plant's manipulated variables and controlled variables are paired with each other in SISO fashion and each loop acts independently. When the interaction among the variables is strong, a single multivariable controller is of course preferred, but that becomes somewhat difficult for operators to tune. In order to decouple the outputs from each other, decouplers can be used, but as a rule of thumb, the more complex the controller becomes, the more vulnerable it becomes to failures and uncertainties.
The limitation of classical feedback control is that it cannot handle constraints, it may struggle to follow a constant set point under varying disturbances, and in changing conditions it does not necessarily operate at the optimal point. To avoid instability, the process is also typically run far away from the optimum point under classical feedback control.
To address the issues mentioned above with classical feedback control, optimization-based control was proposed in the 1970s, called Model Predictive Control (MPC). In MPC one can incorporate constraints on the states and inputs and can run the process very close to the optimal point because of MPC's constraint handling. Of course, the big advantage is also its multivariable nature: you don't need to pair the inputs and outputs and treat them as SISO loops. The optimal control inputs are calculated by the optimizer and implemented.
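To make the MPC idea concrete, here is a toy receding-horizon sketch. The scalar plant, horizon, weights, and input bound are all illustrative assumptions, and a real MPC implementation would use a dedicated QP solver rather than a general-purpose optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order plant x[k+1] = a x[k] + b u[k] (illustrative numbers)
a, b = 0.9, 0.5
N = 10                     # prediction horizon
r = 1.0                    # set point
u_max = 0.3                # input constraint MPC can handle directly

def cost(u_seq, x0):
    """Sum of tracking error + input penalty over the horizon."""
    x, J = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        J += (x - r) ** 2 + 0.01 * u ** 2
    return J

x = 0.0
for k in range(30):        # closed loop: re-optimize at every step
    res = minimize(cost, np.zeros(N), args=(x,),
                   bounds=[(-u_max, u_max)] * N)
    u0 = res.x[0]          # apply only the first optimal input
    x = a * x + b * u0     # plant moves; repeat with new measurement
print("output after 30 steps:", x)
```

Note that only the first input of each optimized sequence is applied; re-solving at every step is what makes the horizon "recede" and lets MPC react to new measurements.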
How quickly the set point is reached depends on two factors: the penalty weight you select for the control input, and the measurement. If you do not penalize the control inputs, they will change by leaps and bounds, but in reality you must respect the constraints, like valve opening speed or the pumping power of your device. In classical feedback control, you will certainly take the measurement at the point you are controlling. If the dynamics are fast, you don't need to worry; if the dynamics are slow, as I guess from your question, you can use a cascade controller by taking the faster measurements in an inner loop. You can do the same with MPC by selecting the measurement with faster dynamics and predicting your control inputs from it, if they are highly correlated.
As far as speed is concerned, I think both can perform well depending on your tuning parameters, but the difference comes from economy: asking which one is more economical will certainly point in the direction of MPC.
For example, consider the root loci in classical control. You can adjust the gains and move your system's poles only along the branches of the root loci. In optimal control, you can move your poles anywhere in the s-plane. For example, in standard optimal control of an LTI system, the gains can be scheduled to make the system temporarily unstable and then to stabilize it again, so that a particular objective (say, a trajectory) can be tracked at some specified minimum cost. The cost could be on input size, response characteristics, etc. Classical control, per se, does not account for many real-life problems in control system implementation, e.g. actuator saturation, actuator slew rate, etc. The objective is not just to track a specified objective with hypothetical inputs, but to find an actual solution that can be implemented. Modern control theory offers avenues to solve such real-life problems.
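As a small illustration of this difference (the plant matrices and desired pole locations below are just assumptions), state feedback lets you place the closed-loop poles directly, rather than sliding them along root-locus branches:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative controllable plant in state space
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# With a single classical gain, poles move only along the root-locus
# branches; full state feedback can place them at chosen s-plane points.
desired = np.array([-5.0 + 2.0j, -5.0 - 2.0j])
K = place_poles(A, B, desired).gain_matrix
print("state-feedback gain:", K)
print("achieved poles:", np.linalg.eigvals(A - B @ K))
```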
It depends on how you view the question, so a number of the answers above may well apply.
Classical control relates more to techniques implemented before the era of state-space control, with much use of transfer functions and frequency response, and hence it was very successful for SISO systems.
Optimal control is more related to state space (i.e. modern control methods) and is closely tied to dynamic programming as well. LQR is a nice, simple example of an optimal control technique, whereby a performance index is minimized. However, note that many modern control techniques work on finding sub-optimal solutions rather than the true optimum.
Although optimal control is generally more effective with state space models, it is not exclusive to them. The essential difference is that classical control methods were developed before digital computing and are hence mostly graphical methods (root-locus, Bode, Nichols, Nyquist) because that was the easiest way to perform computations in the days of paper and pen. Optimal methods overwhelmingly rely on extensive numerical computations and were thus developed after digital computers became available.
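As a small illustration of the frequency-response view (the plant G(s) below is just an assumed example), modern tools can compute the same Bode data that classical designers once read off paper charts:

```python
import numpy as np
from scipy import signal

# Classical-control view: analyze an assumed plant G(s) = 1/(s^2 + 2s + 1)
# through its frequency response, as one would from a Bode chart.
G = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])
w, mag_db, phase_deg = signal.bode(G)

# Print a few points of the Bode data instead of plotting
for wi, m, p in list(zip(w, mag_db, phase_deg))[::20]:
    print(f"w = {wi:8.3f} rad/s   |G| = {m:7.2f} dB   phase = {p:7.1f} deg")
```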
Routh and Lyapunov are both stability tests rather than controller design methods. PID controllers are generally used for systems that are simple enough to be tuned on-line by hand or by simple experiment, and hence they generally do not need computations.
Classical control tries to solve control problems in the frequency domain with a graphical approach. Optimal control solves control problems with constraints, typically in the time domain (state space), although things may also be defined in the s-domain. Optimal control is, broadly speaking, more computationally expensive, since you are optimizing subject to constraints and the methods used are recursive; just Google Bellman's "curse of dimensionality", "dynamic programming", etc.
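As a toy illustration of the dynamic-programming idea (the grid, actions, and cost below are all assumptions for demonstration), value iteration solves the Bellman equation on a discretized state space; the size of that grid is exactly what blows up with state dimension, i.e. the curse of dimensionality:

```python
import numpy as np

# Toy Bellman/dynamic-programming example: drive a point on a 1-D grid
# to the origin; stage cost = state^2 + u^2.
states = np.arange(-5, 6)            # discretized state space
actions = np.array([-1, 0, 1])       # discretized inputs
V = np.zeros(len(states))            # value function

for _ in range(100):                 # value-iteration sweeps (enough here)
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        costs = []
        for u in actions:
            x_next = int(np.clip(x + u, -5, 5))
            costs.append(x**2 + u**2 + V[x_next + 5])
        V_new[i] = min(costs)        # Bellman backup
    V = V_new
print("value function:", V)
```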
In optimal control problems, the specifications are first cast into a specific index or cost function, and the control is sought that minimizes this cost function; this is generally done within the state-space framework.
In classical control problems, controllers are analyzed and designed using frequency-response and root-locus methods. The mathematical basis of classical control theory is the Laplace transform.
I was (pleasantly) surprised to be notified by ResearchGate that my name had been mentioned by my dear friend Ljubomir Jacic in relation to a question that I was not participating in.
So, I went and read the question and the answers.
I hope this does not aggravate anyone, yet it seems to me that the answers here represent the points of view of people who practice optimal control, yet not classical control.
My experience has forced me to use both, and since I also don't believe in a general solution to the general problem, I think the best thing is to learn as much as one can and then apply it as the circumstances require. We should not forget that classical control was the first and main tool that allowed moving machines, robots, planes, etc.
Yes, the simple PI and PID controllers are largely used in industry, yet only in simple (and maybe most common) cases, where the plant is "almost" alright and only some appropriate proportional and damping gains are missing to give you stable behavior, and then some integral effect may also be needed to avoid steady-state errors.
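As a small demonstration of that last point (the first-order plant and the gains are just assumptions of mine), a short simulation shows proportional action alone leaving a steady-state error, while adding integral action removes it:

```python
# Illustrative first-order plant x' = -x + u, simulated with Euler steps.
dt, T, r = 0.01, 10.0, 1.0

def simulate(kp, ki):
    x, integ = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - x
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        x += dt * (-x + u)               # plant step
    return x

print("P  only (kp=4):   y ->", round(simulate(4.0, 0.0), 3))  # settles near 0.8
print("PI (kp=4, ki=2):  y ->", round(simulate(4.0, 2.0), 3))  # settles near 1.0
```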
However, making a plane fly is not done by a PID controller, and so practical control has been forced to use lots of complex classical control design methods. In general, frequency analysis of more complex plants indeed usually ends with very complex classical controllers. This was based on what practitioners felt could be measured and known about the plant, be it a motor, plane, robot, etc. As the real plant can be of order 10, 50, or even 200, people used the best approximation that could provide a good idea of its properties and allowed getting a workable model of the plant, namely its frequency response. Even if computational issues force you to use a reduced-order model for your control design, if the frequency response of the real plant (which you can measure) remains close to that of your reduced-order theoretical and nominal model, the ultimate behavior of the plant remains pretty much satisfactory.
State-space representation is only another, sometimes more convenient, representation of systems. One learns to move from one representation to another and then to use the one most fit for any particular need. Optimal control, where one defines a specific criterion (cost) that should be minimized, went very smoothly in state space. However, not only does it assume that one knows the exact order of the system under control, it also assumes availability of all state variables of the system. This is a very strong assumption. Because in reality one can only measure a few signals, the next idea was supplied by observers: observers could reconstruct the system state variables, and the reconstructed values could then be used as if they were the state variables to build the desired controller. This was very well received, yet only until other developers observed that control based on observers was not very robust. This led to an entire field of research called robust control, etc.
OK, I have almost turned a message into a dissertation, so I had better stop.