I think the problem with PID controllers is not this or that algorithm used for their tuning. When such a controller has to be used to stabilize temperature over a wide range, say 4-300 K, you usually have to divide this range into several smaller partitions (4-10, 10-20, 20-80, 80-300, or similar). The optimal PID parameters often turn out to be quite different for each subrange; sometimes even their orders of magnitude vary wildly. After all, you have to tune your equipment once and then use it indefinitely. So what difference does it make which algorithm is used? One minute or one hour of running time - who cares?
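A minimal sketch of what such a subrange partition might look like in code, assuming each temperature band has already been tuned separately; the band edges and gain values below are placeholders, not tuned numbers:

```python
# Gain-scheduled PID: pick the gain set for the current temperature band.
# Band edges and gains are placeholders only.
SCHEDULE = [
    # (T_low [K], T_high [K], Kp, Ki, Kd)
    (4.0,   10.0,  2.0, 0.05, 0.50),
    (10.0,  20.0,  1.2, 0.10, 0.30),
    (20.0,  80.0,  0.6, 0.20, 0.10),
    (80.0, 300.0,  0.3, 0.30, 0.05),
]

def gains_for(temperature_k):
    """Return (Kp, Ki, Kd) for the subrange containing the current temperature."""
    for t_low, t_high, kp, ki, kd in SCHEDULE:
        if t_low <= temperature_k < t_high:
            return kp, ki, kd
    # Fall back to the last band if the reading is outside the table.
    return SCHEDULE[-1][2:]
```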
Assuming that the hypotheses are the same in both cases, due to the random nature of both procedures, the best result depends statistically on the number of attempts.
Thank you, dear Cesáreo. When a minimized cost function is required, the problem is the different minimal values we obtain from run to run; however, I will try to do some statistical analysis over a large number of attempts.
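One simple way to do that statistical analysis, as a sketch: run the stochastic tuner many times and look at the spread of the per-run minima. Here run_tuner_once is just a hypothetical stand-in for one complete GA or foraging optimization returning its best cost:

```python
import statistics

def summarize_runs(run_tuner_once, n_attempts=50):
    """Run a stochastic tuner n_attempts times and summarize the spread
    of the minimum cost it reports on each run."""
    minima = [run_tuner_once() for _ in range(n_attempts)]
    return {
        "best":  min(minima),
        "mean":  statistics.mean(minima),
        "stdev": statistics.stdev(minima),
    }
```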
I think that no matter how good the tuning strategy is, or in this case which algorithm you use to optimize the tuning parameters, the main problem with the PID is that it will be unable to cope with process nonlinearities. If you need to control the process over a wide range, I would think that some sort of multiple-model adaptive control approach would be appropriate for such cases. This is, in my opinion, by far the most practical strategy. Although there are more fanciful algorithms out there for such adaptation, their practicality has yet to be seen.
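As a rough illustration of the multiple-model idea (not any specific published method), one could keep a bank of local models, each paired with PID gains tuned for it, and switch to the pair whose model currently predicts the plant best; everything in the sketch below is hypothetical:

```python
class LocalModel:
    def __init__(self, predict, gains):
        self.predict = predict   # predict(u, y_prev) -> predicted output
        self.gains = gains       # (Kp, Ki, Kd) tuned for this local model
        self.error = 0.0         # filtered squared prediction error

def select_gains(models, u, y_prev, y_measured, forget=0.95):
    """Update each model's filtered prediction error and return the gains
    of the currently best-matching model."""
    for m in models:
        e = y_measured - m.predict(u, y_prev)
        m.error = forget * m.error + (1.0 - forget) * e * e
    best = min(models, key=lambda m: m.error)
    return best.gains
```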
In my opinion, what would be the best method for PID tuning depends strongly on the actual observed/controlled system. It is not possible to generalise the answer except, possibly, for systems already studied or with similar characteristics. Moreover, details of implementation may be quite significant to determine performance. Therefore, I think that the answer probably requires the actual testing of the strategies to be compared, under typical conditions, unless useful clues can be drawn from the literature.
A possible comparative evaluation strategy would be to study the relative performance of both methods against a comparable no-control run, taken as reference.
It might also be of some interest, particularly if it is not feasible to implement both of the above-mentioned control strategies, to implement the classic Ziegler-Nichols step-response and/or frequency-response methods, which 'ignore' the (minimized) cost function previously referred to. Nevertheless, it may be possible to calculate its value from the observed variables and to follow its variation with time. The same may apply to a reference no-control run.
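For reference, the closed-loop (ultimate gain) Ziegler-Nichols rules are simple enough to code directly; the ultimate gain Ku and ultimate period Tu would still have to be measured on the actual system:

```python
def zn_pid_from_ultimate(ku, tu):
    """Classic Ziegler-Nichols PID rules from the ultimate gain Ku and
    the ultimate period Tu measured at sustained oscillation."""
    kp = 0.6 * ku
    ti = 0.5 * tu        # integral time
    td = 0.125 * tu      # derivative time
    ki = kp / ti         # parallel-form integral gain
    kd = kp * td         # parallel-form derivative gain
    return kp, ki, kd
```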
Thus, seeking a possibly quantitative evaluation measure of performance based on the 'cost function' previously referred to, one could divide the integrated area of the cost function (considered positive and plotted against time) by the comparable area obtained for a no-control reference run. This dimensionless ratio would decrease (presumably toward zero) as the performance of the control method improves, and increase toward one as performance becomes worse. Values above one would point to a detrimental control effect, compared with what would be the system's 'spontaneous' response.
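Assuming the cost function has been sampled at the same time instants for the controlled run and the no-control reference run, the ratio could be computed along these lines (the names are illustrative):

```python
def trapezoid_area(t, f):
    """Trapezoidal integration of samples f taken at times t."""
    area = 0.0
    for k in range(1, len(t)):
        area += 0.5 * (f[k] + f[k - 1]) * (t[k] - t[k - 1])
    return area

def performance_ratio(t, cost_controlled, cost_reference):
    """Area under the (positive) cost function with control divided by the
    area for the no-control reference run: near 0 is good, near 1 is poor,
    above 1 is worse than doing nothing."""
    return trapezoid_area(t, cost_controlled) / trapezoid_area(t, cost_reference)
```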
Thank you, dear Carlos, those are interesting notes. In fact, I have found that a conventional PID tuned by GA over an enormous number of trials gives a statistically almost constant minimum value of the index used, while with other techniques this is not the case.
In practical terms neither of them is good, because both involve a high number of tests on the process as part of the iterative algorithm. Because these tests must normally be done on a process during operation, the tests themselves disturb the process. As a result, from a practical point of view it is better to use methods that involve just one test, or at least a small number of tests.
So I think the more harmful tuning is the one using genetic algorithms. In many cases, when the stated equation or transfer function is of higher order, it is difficult to achieve a good result using a GA. In practical terms it is sometimes impractical to use genetic algorithms, because minimizing their objective function can take a long time and would be more expensive to implement.
When it is difficult or impossible to determine an exact solution, numerical methods are the alternative. GA, with its statistical properties, provides an excellent technique. In the case of off-line design, the execution time is not a critical issue, and higher-order or even nonlinear systems do not introduce any particular difficulties in applying GA. What I mean by my question is whether anyone finds specific advantages in using the foraging strategy; otherwise, why would researchers keep searching for new methods if we already have the best, or at least an acceptable, method of tuning the PID?
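To make the off-line design case concrete, here is a minimal sketch of GA-style PID tuning against a simulated plant; the first-order model, the ISE index, and all GA settings are illustrative assumptions rather than a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ise_cost(gains, dt=0.01, t_end=10.0, tau=2.0, k_plant=1.0):
    """Integral of squared error for a unit step on a first-order plant
    dy/dt = (-y + k_plant*u)/tau under a parallel PID with the given gains."""
    kp, ki, kd = gains
    y, integral, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + k_plant * u) / tau   # forward-Euler plant update
        cost += err * err * dt
        prev_err = err
    return cost

def ga_tune(pop_size=30, generations=40, bounds=(0.0, 10.0)):
    """Very small GA: elitism, tournament selection, blend crossover,
    Gaussian mutation, gains clipped to the given bounds."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([ise_cost(ind) for ind in pop])
        new_pop = [pop[np.argmin(fitness)]]          # keep the best individual
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)    # tournament for parent a
            a = pop[i] if fitness[i] < fitness[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)    # tournament for parent b
            b = pop[i] if fitness[i] < fitness[j] else pop[j]
            child = 0.5 * (a + b) + rng.normal(0.0, 0.2, size=3)
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fitness = np.array([ise_cost(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()

best_gains, best_cost = ga_tune()
print("best (Kp, Ki, Kd):", best_gains, "ISE:", best_cost)
```

Run-to-run variation of the returned minimum, as discussed above, can then be studied simply by changing the random seed.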
I doubt either method can be used in real time. They are good for toy problems. Are there any solid real applications using these "bio-inspired", "natural computational intelligence" methods? Please point me to any convincing case study (non-toy problems) if you find one.
I have tried using a PID tuned by GA as a speed governor for a real single isolated power generator with parameter uncertainties, as well as for a real HVAC system. The results were good enough to implement the designed controller. However, as for the foraging strategy, I have not applied it to real problems so far.
Thanks. If not tuned online, then offline you need a model to run the GA. You get a dream PID that works nicely. Good. But this holds only for the model used, not for the real plant itself plus its variability. There is no guarantee this PID will still work as expected. That said, I like MPC. I personally think we do not have a strong reason to run GA or similar searches for an optimal PID offline. Furthermore, if a problem is convex, is GA really even needed? Again, if there is any online PID tuning using GA in real applications, I'd like to read about it and learn.
Dear colleagues, what I notice is that the word "tuning" is sometimes interpreted as offline design using models, which in my opinion is not a correct interpretation. Nobody would confuse "tuning a guitar" with "designing a guitar", which are totally different activities, but this confusion often happens with respect to a controller. I think we first need to come to some common ground and say, for example, that by tuning we mean only the online change of parameters of a PID controller that is actually implemented and controls a plant or process, while offline variation of controller parameters using models should be called design.

Having said this, I would again mention that I am pretty sceptical about the use of both the GA and foraging approaches for tuning, because in every iteration we need to assess performance, which should involve tests on the plant/process. These tests usually require some time and disturb the process itself. No doubt the discussed approaches can be successfully used in design. Probably, to some degree, if they do not require too many iterations, they could be used for tuning too, but I am not aware of practical applications.