Usually GAs and other evolutionary algorithms, such as particle swarm optimization (PSO), have a large stochastic component. This means you need to establish a statistically convergent solution over many simulations, not just one.
If you have no time to waste, I suggest using a deterministic approach.
Conference Paper On the use of synchronous and asynchronous single-objective ...
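To illustrate the point about needing many simulations: here is a minimal sketch in Python (a toy (1+1) random search on the sphere function, not your actual PSO setup — the function names and parameters are illustrative). You repeat the run with many seeds and report the mean and standard deviation, rather than trusting a single run.

```python
import random
import statistics

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def stochastic_search(seed, dim=5, iters=2000, step=0.1):
    """A minimal (1+1) random search: mutate, keep the candidate if better."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sphere(x)
    for _ in range(iters):
        cand = [v + rng.gauss(0, step) for v in x]
        f = sphere(cand)
        if f < best:
            x, best = cand, f
    return best

# Repeat the run many times and summarize, instead of trusting one run.
results = [stochastic_search(seed) for seed in range(30)]
print(f"mean best fitness: {statistics.mean(results):.4g}")
print(f"std  best fitness: {statistics.stdev(results):.4g}")
```

The spread (standard deviation) across runs tells you how much the stochastic component actually matters for your problem.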
Thank you very much, but what about the dimensionality of the search space if the problem is high-dimensional? I think I cannot use PSO for a large-dimensional search space (a large number of genes on the chromosomes). Is that true?
We have used genetic algorithms in conjunction with a cascading fuzzy logic system on extremely large search spaces with considerable success. I'm attaching two links below with more details. To the best of our knowledge, given the complexity and scale of the problem studied, finding an "optimal" solution may not be feasible.
http://ceas.uc.edu/news-1415/ernest-sae14.html
Conference Paper Learning of Intelligent Controllers for Autonomous Unmanned ...
Since GA belongs to a non-deterministic class of algorithms, the solution you get from a GA may vary each time you run the algorithm on the very same input data. So it is rather sub-optimal. How close the subsequent solutions are to each other depends on the convergence of your GA (which in turn depends on problem-specific operators and so on).
Genetic algorithms are non-deterministic methods. Thus, the solutions they provide may vary each time you run the algorithm on the same instance.
The quality of the results depends highly on:
-The initial population.
-The genetic operators (crossover, selection, mutation) and whether they are well-suited for the problem you're solving.
-The probabilities of crossover and mutation.
Some of the latest implementations of genetic algorithms are quite efficient, even for large problem sizes. As I said, it depends on how you tune your method's different parameters.
A genetic algorithm can indeed find an optimal solution; the only issue is that you cannot prove its optimality unless you have a good lower bound that matches the solution you obtained.
P.S.: Of course, this doesn't apply if you do have the optimal solutions for the instances in your testbed. Then you can use them for comparison and performance-evaluation purposes.
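To make the points above concrete (non-determinism across runs, and sensitivity to the crossover/mutation probabilities), here is a minimal generational GA on the classic OneMax problem. All names and parameter values are illustrative, not a recommended setup:

```python
import random

def onemax(bits):
    """Fitness to maximize: number of ones in the bit string."""
    return sum(bits)

def run_ga(seed, n_bits=30, pop_size=20, generations=60,
           p_cross=0.9, p_mut=0.02):
    """Minimal generational GA: tournament selection (size 2),
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection of two parents
            p1 = max(rng.sample(pop, 2), key=onemax)
            p2 = max(rng.sample(pop, 2), key=onemax)
            c1, c2 = p1[:], p2[:]
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                  # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
                new_pop.append(c)
        pop = new_pop[:pop_size]
    return max(onemax(ind) for ind in pop)

# Different seeds (i.e. different runs) may return different best values.
results = [run_ga(seed) for seed in range(5)]
print(results)
```

Re-running with different values of `p_cross` and `p_mut` shows how strongly the result quality depends on those probabilities.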
The biggest limitation of a GA is that it cannot guarantee optimality. The solution quality also tends to deteriorate as the problem size grows. However, it can generate good-quality solutions for a wide range of problem and function types.
The Pareto front, a.k.a. the non-dominated solution set, is the fully optimal solution in the multi-objective (Pareto) sense. When a multi-objective GA/EA (evolutionary algorithm) is used, the result you get is only an approximation (!) of this set, and each run of the algorithm may return a slightly different approximation. However, I still think GA/EA is a robust choice for multi-objective purposes. I use it myself, and obtaining slightly different Pareto approximations for the same input data is not a big problem, since the approximations are close to each other.
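For readers unfamiliar with the terminology: a point is non-dominated if no other point is at least as good in every objective and strictly better in at least one. A small sketch of extracting the non-dominated set (minimizing both objectives; the point set is made up for illustration):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in
    all objectives and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Return the non-dominated subset (a Pareto-front approximation)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = non_dominated(pts)
print(front)  # → [(1, 5), (2, 3), (4, 1)]
```

Multi-objective EAs such as NSGA-II maintain exactly this kind of non-dominated set over the evolving population.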
For complex problems, it may converge to local minima. The exploitation/exploration ratio is important in this case: diversity must be maintained through exploration.
To add to the comments above: while solving complex optimization problems (such as NP-hard problems), GAs often converge prematurely to local optima if the diversity mechanism does not work properly.
Choosing appropriate parameter settings, such as the crossover and mutation probabilities, is an important issue, in addition to the problem of choosing an appropriate mutation step size, which should be adapted according to the fitness values of the current generation.
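One classic scheme for adapting the mutation step size is the 1/5th-success rule from evolution strategies: grow the step size when more than one fifth of recent mutations improve the fitness, shrink it otherwise. A minimal sketch on a toy objective (constants 1.22/0.82 and the window of 20 trials are illustrative choices):

```python
import random

def sphere(x):
    """Toy objective to minimize: sum of squares."""
    return sum(v * v for v in x)

def one_fifth_rule_es(seed=0, dim=5, iters=3000, sigma=1.0):
    """(1+1)-ES with the 1/5th-success rule for step-size adaptation."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    successes, window = 0, 20
    for t in range(1, iters + 1):
        cand = [v + rng.gauss(0, sigma) for v in x]
        fc = sphere(cand)
        if fc < fx:              # keep the candidate only if it improves
            x, fx = cand, fc
            successes += 1
        if t % window == 0:      # adapt sigma every 20 trials
            rate = successes / window
            sigma *= 1.22 if rate > 0.2 else 0.82
            successes = 0
    return fx

best = one_fifth_rule_es()
print(f"best fitness: {best:.3g}")
```

With a fixed step size the search stalls once it gets close to the optimum; the adaptive rule shrinks sigma automatically as improvements become rare.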
Basically, in a GA the new population is generated from the existing parents, and randomness enters the population mainly at solution initialization. Therefore it can fail to explore the solution search space adequately.
A GA is suitable for fitness functions with a low computational cost, but for computationally expensive fitness functions or high-dimensional problems it is a very expensive method.