The word "complexity" is a little vague here. Complexity is determined largely by your optimization problem rather than by the algorithm. An optimization problem can be linear, quadratic, or nonlinear; it can be a global or a local optimization problem; and it can be convex or non-convex. These are the properties that determine the complexity of an optimization problem and, as a result, of the algorithm employed.
However, if by complexity you mean which algorithm is simpler to implement for the same problem, I believe that again depends on your own skills and, to a lesser extent, on the problem type. For what it is worth, I have found GAs simpler to implement in most of the problems I have been involved in.
Some time ago I derived the average-case (expected) complexity of a simple GA in terms of objective-function evaluations. It is
O(N^(3/2) log N),
where N is the number of bits in a single chromosome, NOT the number of unknowns.
Of course, a GA is a stochastic algorithm, so one may hit the final solution after a single "shot". Nevertheless, continuing the calculations beyond the above estimate is usually a waste of time (exceptions may happen). When every gene has the same length, the formula remains valid with N equal to the number of unknowns. Note that in this sense a GA is less complex than solving a system of N linear equations, whose complexity is O(N^3)!
See chapter 6 of:
Marek W. Gutowski, "Biology, Physics, Small Worlds and Genetic Algorithms", in the monograph "Leading Edge Computer Science", ed. Susan Shannon, Nova Science Publishers, Hauppauge, 2005, ISBN 1-59454-526-X.
For unknown reasons, neither my PDF copy nor the PostScript version of this paper is accepted by RG (syntax error: JSON.parse). Ask me if you want to see it.
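To put the estimate above into numbers, here is a minimal Python sketch that turns it into an evaluation budget. The unit constant factor is my own assumption for illustration only; the cited chapter gives the actual derivation.

```python
import math

def ga_evaluation_budget(n_bits: int) -> int:
    """Expected number of objective-function evaluations after which
    continuing a simple GA is (usually) a waste of time, following the
    O(N^(3/2) log N) estimate above with an assumed constant factor of 1."""
    return math.ceil(n_bits ** 1.5 * math.log(n_bits))

# e.g. a 100-bit chromosome vs. solving 100 linear equations (~N^3 operations)
print(ga_evaluation_budget(100))  # ~4606 evaluations
print(100 ** 3)                   # 1,000,000 operations
```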
Christian is right when he states that complexity in the sense of "number of elementary steps" does not apply to metaheuristics. Metaheuristics are usually compared empirically through CPU time or objective-function evaluations. On scientific benchmarks, the number of function evaluations is the metric used to measure the "efficiency" of a heuristic algorithm. Compared to CPU time, using function evaluations as the measure avoids factors such as the features of the computer where the experiment is run and implementation details, which directly influence the running time; it focuses only on the ability of the algorithm to search the solution space.
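In practice this just means wrapping the objective so that every call is counted. A minimal Python sketch (the sphere function and the commented-out metaheuristic call are placeholders, not from any particular library):

```python
import numpy as np

class CountingObjective:
    """Wrap an objective function so every evaluation is counted,
    giving an implementation-independent cost measure."""
    def __init__(self, func):
        self.func = func
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return self.func(x)

def sphere(x):
    # simple test objective used here only as an example
    return float(np.sum(np.asarray(x) ** 2))

objective = CountingObjective(sphere)
# best = my_metaheuristic(objective, dim=10, max_evaluations=10_000)  # hypothetical solver
# print(objective.evaluations)  # evaluations actually spent by the run
```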
As for which algorithm to use (GA, PSO, ES, ACO, ...), it strongly depends on the problem at hand. As stated by the "No Free Lunch" theorem, there is no single algorithm that outperforms all the others on every problem.
The best way to compare algorithms is to know their time and space requirements through computational complexity theory, but this is very difficult with stochastic algorithms.
In the case of evolutionary algorithms there are some benchmark indices, and these indices can also be applied to other heuristics. If you apply your algorithm to benchmark problems using those indices, you will have a suitable way of comparing metaheuristics. Please see a work in this direction, with some useful references, at: http://link.springer.com/chapter/10.1007/978-3-642-34654-5_28
If by the word "complexity" you mean performance, I think the Marginal Improvement per CPU (MIC) or the Average Marginal Improvement per CPU (AMIC), introduced by Osman (2004), might be a good criterion. See "Osman, I. B. (2004). Metaheuristics: Models, Design and Analysis. Fifth Asia Pacific Industrial Engineering and Management Systems Conference, Gold Coast, Australia, Asia Pacific Industrial Engineering and Management Society." I hope this helps.
The following paper discusses some aspects of the complexity of algorithms:
Jorge A. Ruiz-Vanoye and Ocotlán Díaz-Parra, "An Overview of the Theory of Instances Computational Complexity", International Journal of Combinatorial Optimization Problems and Informatics, Vol. 2, No. 2, pp. 21-27, May-Aug 2011, ISSN 2007-1558.
To evaluate the performance of metaheuristics (only empirically), you can use benchmark functions (for more details, please see http://www.lri.fr/~hansen/Tech-Report-May-30javascript:-05.pdf), and you must set the same parameters for all algorithms: maximum number of function evaluations, dimension D, number of independent runs, etc.
Then, to compare the performance of your algorithm with other state-of-the-art algorithms, you can perform a Friedman test (for more details, please see http://juangvillegas.files.wordpress.com/2011/08/friedman-test-24062011.pdf).
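For instance, with SciPy one can run the Friedman test directly on a table of results. The numbers below are made up purely for illustration; in practice each cell would be the mean best objective value of one algorithm on one benchmark function, under the same evaluation budget and number of runs.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical results (rows: benchmark functions, columns: three algorithms,
# e.g. GA, PSO, DE). Replace with your own experimental data.
results = np.array([
    [1.2e-3, 4.5e-4, 9.8e-5],
    [2.3e+0, 1.9e+0, 2.1e+0],
    [5.6e-2, 7.1e-2, 3.3e-2],
    [8.4e-1, 6.2e-1, 6.9e-1],
    [1.1e-4, 2.4e-4, 9.0e-5],
])

# The Friedman test ranks the algorithms on each benchmark and checks
# whether their mean ranks differ significantly.
stat, p_value = friedmanchisquare(results[:, 0], results[:, 1], results[:, 2])
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A small p-value (e.g. < 0.05) suggests at least one algorithm behaves
# differently; a post-hoc test (e.g. Nemenyi) then locates the differences.
```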
Dear Lhassane, thanks, I've got the second link, but the first one is not working (http://www.lri.fr/~hansen/Tech-Report-May-30javascript:-05.pdf). Would you be kind enough to email it to me, please? Thanks in advance.
I would say that to compare several metaheuristics in terms of complexity, counting and comparing objective-function evaluations might be feasible. In terms of performance, however, I would suggest using the benchmark problems Lhassane Idoumghar suggested earlier, or, in my personal opinion, the Travelling Salesman Problem (TSP) and knapsack problem benchmarks. These have been two of the popular ways in the AI community, for decades, of checking whether the performance of a metaheuristic is "reasonable".
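To make this concrete, here is a minimal sketch of a TSP tour-length objective that any of the metaheuristics mentioned above could be benchmarked on. The random instance below is a toy example; in practice one would take standard instances, for example from TSPLIB.

```python
import numpy as np

def tour_length(cities: np.ndarray, tour: np.ndarray) -> float:
    """Total length of the closed tour visiting the cities in the given order;
    this is the objective a metaheuristic would minimize."""
    ordered = cities[tour]
    edges = ordered - np.roll(ordered, -1, axis=0)  # edge vectors, incl. return leg
    return float(np.sum(np.linalg.norm(edges, axis=1)))

rng = np.random.default_rng(0)
cities = rng.random((20, 2))             # 20 random cities in the unit square
random_tour = rng.permutation(20)
print(tour_length(cities, random_tour))  # baseline a metaheuristic should beat
```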