1. There is no restriction on using EAs; they can be applied to any problem.
2. EAs are computationally expensive, i.e. the number of function evaluations is typically much higher than that of gradient-based methods when a convex problem is solved by an EA.
I really want to thank you for all these responses; I think I now understand the problem better.
So, is the rate of convergence of EAs better than that of gradient-based methods? And which method is best in terms of performance?
I just need more information about the difference between EAs and gradient-based methods. Are there any works discussing this, or any comparison, even a small one?
There are algorithms with provably optimal convergence based on gradients/subgradients (e.g. in that book, other algorithms by Nesterov, OSGA by M. Ahookhosh and A. Neumaier, proximal point methods, etc.). Those can solve many convex problems with millions of variables.
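To make the "provably optimal convergence" claim concrete, here is a minimal sketch of Nesterov's accelerated gradient method, which attains the optimal O(1/k^2) rate among first-order methods for smooth convex problems. The quadratic test objective and the 1/L step size are my own illustrative assumptions, not part of the discussion above.

```python
import numpy as np

def nesterov_agm(grad, x0, L, iters=500):
    """Nesterov's accelerated gradient method for an L-smooth convex f.
    Attains the optimal O(1/k^2) rate among first-order methods."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                       # gradient step from the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative convex quadratic: f(x) = 0.5 * x^T A x - b^T x
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 20))
A, b = M.T @ M + np.eye(20), rng.standard_normal(20)
L = np.linalg.eigvalsh(A).max()   # smoothness constant = largest eigenvalue of A
x_star = nesterov_agm(lambda x: A @ x - b, np.zeros(20), L)
```

Each iteration costs one gradient evaluation, which is the kind of per-iteration budget against which EAs (with their many function evaluations per generation) are usually compared.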
If you do not have gradients or subgradients available, it is still better to make use of convexity than to ignore that valuable information, since in optimization "structure is everything". So if you insist on using EAs, you should incorporate convexity into the algorithm.
For multiobjective optimization there are several methods. See, e.g., the book by Stephen Boyd.
Most studies of metaheuristics for multi-objective optimization focus on population-based metaheuristics, such as evolutionary algorithms. When adapting a single-objective algorithm to the multi-objective case, one of the most important issues is how to adapt the selection operator. Comparing solutions in single-objective problems is simple, but in multi-objective problems the concept of dominance is usually used: one solution dominates another if it is at least as good on every objective and strictly better on at least one.
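As a tiny illustration (my own sketch, assuming all objectives are to be minimized), the dominance test behind such selection operators is just a pair of vector comparisons:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (all objectives minimized):
    f_a is at least as good everywhere and strictly better somewhere."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Example: (1, 3) dominates (2, 3); (1, 3) and (2, 1) are mutually non-dominated.
assert dominates([1, 3], [2, 3])
assert not dominates([1, 3], [2, 1]) and not dominates([2, 1], [1, 3])
```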
Since in multi-objective optimization the main goal is to find multiple, diverse solutions on the Pareto front, the use of explicit diversification techniques such as niching methods (e.g. sharing, crowding and clustering) is also more common in multi-objective than in single-objective optimization.
Please note that for _convex_ multiobjective problems (roughly) every Pareto optimal point can be found by scalarization (see S. Boyd's book Section 4.7), and every scalarized problem is a convex scalar problem.
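For instance (a minimal sketch of weighted-sum scalarization; the two quadratic objectives, the norm constraint and the use of the CVXPY package are my own illustrative choices, not from the thread), sweeping the weight traces out Pareto-optimal points, each obtained from an ordinary convex scalar solve:

```python
import cvxpy as cp
import numpy as np

# Two illustrative convex objectives in x: squared distance to two different targets.
n = 5
x = cp.Variable(n)
f1 = cp.sum_squares(x - np.ones(n))
f2 = cp.sum_squares(x + np.ones(n))

pareto_points = []
for w in np.linspace(0.05, 0.95, 10):     # sweep the scalarization weight
    prob = cp.Problem(cp.Minimize(w * f1 + (1 - w) * f2),
                      [cp.norm(x, 2) <= 2])
    prob.solve()                          # each solve is a plain convex problem
    pareto_points.append((f1.value, f2.value))
```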
Yes, we can. But evolutionary algorithms usually take more time to find a solution, and it is hard to determine their accuracy, convergence speed, etc. Gradient methods give a solution to a prescribed accuracy; EAs give a solution of only approximate, unverified accuracy. Gradient methods are very good for the problems they are designed for; evolutionary algorithms are general-purpose.
So, the answer is: yes, we can use evolutionary algorithms, but if it is appropriate to use gradient methods, one should use them.
Another issue is convergence in polynomial time. EAs are approximate and their convergence time is not guaranteed, which is why they are not widely practiced here. But they can be promising when combined with deterministic methods. To my knowledge, simulated annealing has shown good results and can be a good candidate for hybrid approaches.
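To sketch the hybrid idea mentioned above (entirely illustrative: the Rosenbrock test function, Gaussian proposals, geometric cooling and the BFGS polish via SciPy are my own assumptions), one can run simulated annealing globally and then hand its best point to a deterministic local method:

```python
import numpy as np
from scipy.optimize import minimize

def simulated_annealing(f, x0, iters=5000, T0=1.0, cool=0.999, step=0.5, seed=0):
    """Basic simulated annealing with Gaussian proposals and geometric cooling."""
    rng = np.random.default_rng(seed)
    x, fx, T = x0.copy(), f(x0), T0
    best_x, best_f = x.copy(), fx
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):  # Metropolis acceptance
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cool
    return best_x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_sa = simulated_annealing(rosen, np.array([-1.5, 2.0]))
x_hybrid = minimize(rosen, x_sa, method="BFGS").x   # deterministic local polish
```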
You can use an evolutionary algorithm to find a good solution, but in general (exceptions exist), such algorithms are not guaranteed to provide the optimal solution. Algorithms based on gradient search and the Lagrangean are guaranteed to yield the optimal solution if your problem is convex. Therefore, if your problem is convex, you should consider whether you are willing to (possibly) sacrifice optimality.
There are quite good algorithms for convex optimization, and in the convex case you have guaranteed convergence to a minimum. Tested open-source programs (e.g. Ipopt) exist, so why would you not want to use these?
This is for the case that your problem functions are differentiable and gradients are available.
If your problem is not differentiable (but still convex) and you can provide subgradients, you could use e.g. a bundle method like Christoph Helmberg's code.
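A bundle method is too involved to sketch here, but the plain subgradient iteration it refines fits in a few lines (my own illustrative sketch; the L1-regularized quadratic objective and the 1/k diminishing step size are assumptions):

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=2000):
    """Plain subgradient method for a non-smooth convex f.
    Uses a diminishing step size 1/k and keeps the best iterate,
    since individual subgradient steps need not decrease f."""
    x = x0.copy()
    best_x, best_f = x.copy(), f(x)
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * subgrad(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x

# Illustrative non-smooth convex objective: f(x) = 0.5*||x - c||^2 + ||x||_1
c = np.array([2.0, -1.5, 0.3])
f = lambda x: 0.5 * np.sum((x - c) ** 2) + np.sum(np.abs(x))
sg = lambda x: (x - c) + np.sign(x)   # a valid subgradient of f at x
x_approx = subgradient_method(f, sg, np.zeros(3))
```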
If you combine these algorithms with a suitable scalarization of the multiobjective problem (e.g. weighted sum, displaced ideal, reference point), you can produce one efficient point with each run, using one set of parameters.
In my opinion, the aim of computing ALL Pareto-optimal points is out of reach for nonlinear problems (even for linear MO problems this would be quite hard).
But since a multiobjective model shows that a compromise is sought between different objectives (based on other criteria which are not expressed within the model), a decision maker has to choose ONE of the Pareto-optimal points. That is why I regard it as an unnecessarily high aim to produce all of them. A practical approach would be to prepare an interactive procedure for the decision maker, presenting one solution at a time, including its consequences on all objectives. The decision maker should be given the possibility to 'play' with the parameters of the scalarization and to observe the consequences of the changes on the solution (in order to get an idea of what is possible within the model and what sacrifices have to be accepted in other objectives when trying to improve one or several of them).
The use of stochastic search algorithms (evolutionary, genetic, ant colony, ...) is not advised, since you have no guarantee of convergence, and in most models these need many more function evaluations than gradient-based methods (as was expressed in the previous answers).
The only excuse, in my opinion, to use such search procedures is when your objectives are just the result of a black-box simulation and no gradient information is available. In that case, though, you would normally not know whether the model is convex.
Yes, at first I did not have a clear view of the problem, or even of a solution.
Now, after your responses and those of the other researchers, I am thinking about a master-dual decomposition; my problem is convex, but my functions are not differentiable.
At the moment I am trying to read about Lagrangian decomposition, but I think I should look at other methods too, like Dantzig-Wolfe...
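To illustrate the master-dual idea on a toy example (entirely my own sketch: the two-block separable problem, the single coupling constraint and the 1/k dual step size are assumptions): relax the coupling constraint with a multiplier, solve the now-separable subproblems independently, and update the multiplier by subgradient ascent on the non-differentiable dual.

```python
# Toy separable problem:  min 0.5*(x-a)^2 + 0.5*(y-b)^2   s.t.  x + y = d
a, b, d = 1.0, 2.0, 5.0
lam = 0.0                            # multiplier on the coupling constraint
for k in range(1, 200):
    # Given lam, each subproblem decouples and is solved independently
    # (here in closed form):
    x = a + lam                      # argmin_x 0.5*(x-a)^2 - lam*x
    y = b + lam                      # argmin_y 0.5*(y-b)^2 - lam*y
    lam += (1.0 / k) * (d - x - y)   # subgradient ascent step on the dual
# As k grows, x + y -> d and lam -> the optimal multiplier (here lam = 1).
```

The dual function is concave but generally non-smooth, which is exactly why subgradient and bundle methods come up in this context.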
Be it a gradient or a Lagrangean method, the classical approach always helps one to understand the problem, and then to formulate and apply an algorithm to solve it. These methods are computationally efficient for small-scale problems.
For larger models we take the help of evolutionary algorithms, which also solve the problem, but time complexity is equally important. So one may proceed as the need dictates.
If the functions are non-smooth, then of course a gradient-based classical method for convex optimization might not work as the theory says. Though I made the observation in an applied project that, even when using sign and abs functions in the model, methods for smooth convex optimization (like Ipopt, MINOS and CONOPT, used from GAMS) did not perform badly. I also observed that it can be useful to restart one method from the best point of another (in our case the combinations first SNOPT or CONOPT, followed by Ipopt, allowed us to find solutions that none of the solvers could find from a default starting point). It never came to my mind to try searching with a GA or other such methods, since in essence we were facing a feasibility problem and pure searching seemed not promising at all. But this might be due to my understanding of optimization. I know that at the end of the 70's and the beginning of the 80's GAs were kind of 'fancy', as expert systems were at another time and 'big data' is now. This is like fashion: trends come and go for sure.
This is not to say that stochastic search is of no use at all; if you just have a black-box model, what else could you do? But if you have further information and can at least deliver a subgradient, why not use that information? Would you just erratically try lots of different points when searching for a certain place, if a GPS were available that told you whether a move brought you nearer or farther away?
Depending on the size of your problem, you might try a bundle method for nonsmooth optimization, e.g. Christoph Helmberg's code.
We are using it for Lagrangean decomposition within our code for stochastic two-stage scenario decomposition of linear MIPs. Though I have to say that in most cases pure spatial branching is faster on our test problems... But in scenario decomposition of two-stage stochastic problems we have the advantage that with spatial branching, without the use of the Lagrangean dual, only a fraction of the scenario problems has to be re-solved, which helps speed up the solution process.
This might be different for the problems you have in mind. For the non-differentiable part of your problem, something like this algorithm might be helpful, though there may be algorithms better suited to your particular setting.
At least I am happy that the discussions in this forum might help you think differently about your problem, despite the fact that we only have a vague idea of it.