First, the performance of a metaheuristic depends on fine-tuning its parameters, so the same metaheuristic can give either good or bad results depending on how you tune them.
Second, every metaheuristic involves a probabilistic component, so there is no guarantee whatsoever on the quality of the solution you get (even if it may be very good).
Some may say that the class of hybrid metaheuristics is the best because they combine different methods so as to get the best of each, but then again, parameter fine-tuning plays a crucial role.
Lastly, one resorts to metaheuristics when exact methods are not practically applicable. For the problem you mentioned (TSP), there are extremely efficient exact methods that can solve large-scale instances in a "reasonable" amount of time, and the advantage here is the guarantee of optimality. For further details, see "In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation" by William J. Cook, Princeton University Press.
Throwing out names of metaheuristics does not help anyone - none of these methods guarantee to find even a near-optimal solution. There are free solvers on the internet, based on solid, tested mathematical optimization theory - use them!
It depends on the nature of the problem: according to the "No Free Lunch Theorem", no algorithm performs equally well across all finite-dimensional problems. Secondly, the performance of any metaheuristic optimization algorithm depends mainly on parameter tuning, and none of them comes with a guarantee on the solution. All of these algorithms provide near-optimal solutions only.
There is no such thing as a "best metaheuristic". The performance of a metaheuristic depends on many factors, especially parameter setting and the nature of the problem. In a nutshell, I think you should first review the literature to be able to choose the best way to solve your problem.
I think it is rather evident that using a methodology based on - say - branch-and-bound-and-cut is far better than using a metaheuristic - almost always. Why? Because you are guaranteed to find an optimum every time (check!), and you also get - for free! - a lower bound that can be used to prune parts of the search tree, and - if you wish - to terminate when the gap between the best feasible solution and the best underestimate is small enough.
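The bound-and-prune idea above can be sketched on a toy problem. This is a minimal illustration, not anyone's production code: I assume a 0/1 knapsack for brevity, and the function names (`knapsack_bnb`, `bound`, `dfs`) are my own. The LP-style fractional relaxation supplies the upper bound used to cut off subtrees that cannot beat the incumbent.

```python
def knapsack_bnb(values, weights, capacity):
    # Branch-and-bound for 0/1 knapsack (maximization).
    # Items are sorted by value density so the greedy fractional fill
    # is a valid relaxation bound at every node.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0  # incumbent: best feasible value found so far

    def bound(i, cap):
        # Upper bound from the LP relaxation: fill greedily,
        # allowing a fraction of the first item that does not fit.
        b = 0.0
        for v, w in items[i:]:
            if w <= cap:
                cap -= w
                b += v
            else:
                b += v * cap / w
                break
        return b

    def dfs(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == len(items) or cap == 0:
            return
        if val + bound(i, cap) <= best:
            return  # prune: this subtree cannot improve the incumbent
        v, w = items[i]
        if w <= cap:
            dfs(i + 1, cap - w, val + v)  # branch: take item i
        dfs(i + 1, cap, val)              # branch: skip item i

    dfs(0, capacity, 0)
    return best
```

On the classic instance with values (60, 100, 120), weights (10, 20, 30) and capacity 50, the search returns the optimum 220 - and the bound lets it do so without enumerating all 2^3 subsets.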
One must remember very carefully that the No Free Lunch theorem is a worst-case statement - and it is not that likely that you will encounter an extremely nasty problem instance EVERY time. For almost every problem the mathematical programming tools we have developed are not only faster - they also provide a certificate that the returned solution is optimal - which NO metaheuristic can offer.
One should not wave the No Free Lunch flag too much - your own problem instance is very, very unlikely to be one of the hardest ones. Before embracing the NFL theorem you should always investigate mathematical optimization tools, like branch-and-bound and branch-and-cut. And your statement is wrong - whatever metaheuristic you apply you will not reach an optimum - barring a two-node network or similar toy instance.
All the previous answers are correct, but you should also consider the fact that some metaheuristics are suitable for discrete-variable problems, such as the TSP (SA, GA, ACO, LAHC, ...), and some for continuous-variable problems (PSO, DE, ABC, EA, ...), and some "not so difficult" problems do not need metaheuristics at all (as the previous answer says: branch and *, local searches, the simplex method, dedicated heuristics, ...).
Anita Sajwan, while metaheuristics can be implemented, we will not know how good - or bad - the output is.
We also know that the No Free Lunch theorem does not imply that every time we use branch-and-bound we need to pass through all feasible solutions: we have quite effective ways to eliminate provably non-optimal parts of the search tree, based on the relaxations that we use. The industries that we collaborate with always use mathematical optimization tools, and as far as I know, they do not use heuristics for any genuinely important task, as it could be catastrophic.
Michael Patriksson, I think you did not read my answer properly. I have already said that metaheuristics do not provide a guarantee for the solution (i.e. optimality or feasibility).
The No Free Lunch Theorem was invoked only for the performance comparison of metaheuristics, which is what the question asked about - not about the quality of the solution.
Saman: note that NP-hard problems should be attacked by mathematical optimisation methods, and definitely not through meta-heuristics - they are not to be trusted at all. Buy/borrow a recent text on combinatorial optimisation instead.
It's foolish to think that a metaheuristic will provide you with an optimal solution - it was never intended to! Samira is wrong - of course. None of the above-mentioned tools will find it, while math opt will - in due time.
I also do not understand how the repeated mumbo-jumbo from metaheuristics enthusiasts can be ignored - we need to uphold the principle - and practice - that math opt is the way to go.
Why do the meta-people community think that their favourite method is a winner, when none of them guarantee even a ballpark distance to the optimum (contrary to math optimisation tools THAT DO find the optimum!)?
And to make it worse the No Free Lunch theorem is taken out of context - it can only be invoked if the method does find an optimum. And we all know that metaheuristics don't.
There is no 'best' algorithm. It totally depends on the problem: whether it is continuous or discrete, the number of variables you have, and ... . For TSP as a specific case, if the size of the problem is not too large (something around 20 cities or fewer), ant colony optimization with a basic local search on the found solutions would give you the global optimal solution. Again, it totally depends on the problem, and there is no 'best' technique that guarantees you will get to the global optimal solution.
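The "basic local search" mentioned above is typically 2-opt: repeatedly reverse a tour segment whenever doing so shortens the tour. Here is a minimal sketch (my own function names, a toy 4-city instance for the demo), of the kind of post-processing often layered on ACO or GA solutions:

```python
import math

def tour_length(tour, dist):
    # Total length of a closed tour over a distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    # First-improvement 2-opt: reverse tour[i+1..j] whenever replacing
    # edges (a,b) and (c,d) by (a,c) and (b,d) shortens the tour.
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # When i == 0, skip j == n-1: those edges share the city tour[0].
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d] + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Tiny demo: four corners of a unit square; the initial tour crosses itself,
# and 2-opt uncrosses it to the optimal tour of length 4.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = two_opt([0, 1, 2, 3], dist)
```

2-opt only guarantees a local optimum, of course - which is exactly why, as the thread keeps repeating, no global-optimality claim follows.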
Sometimes it is like we were stuck in the 80's! In those days we could not solve the TSP for more than, say, 50 nodes, but today's B&B codes handle really large-scale TSPs far better. It is downright criminal to propose a metaheuristic for the TSP - the codes we have at our disposal are so much better. You're living in the past century, it seems - or are you so lost that you cannot even use a B&B software??? The rest of us run B&B for rather complex problems, and we're doing just fine! I am sorry that you cannot catch up.
TSP is no longer a particularly hard problem for practical purposes. There are numerous formulations of it, and some of them yield very efficient branch-and-bound implementations. There is no need to use metaheuristics for the TSP. As for the general case of NP-hard problems, which method works better depends on the structure of the problem. You can't say in general that one method works better than another; you just have to try different ones and determine for yourself. Even then, performance will vary with the parameters of the problem, so you have to consider the average performance over all instances.
Actually, it depends on the characteristics of the problem that should be solved, such as the dimension, the level of complexity, and so on. If the size of your problem is small and evaluating the objective function is computationally cheap, it would be great to start with a derivative-free method for finding the minimum of an unconstrained multivariable function.
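One of the simplest derivative-free methods of that kind is compass (pattern) search: probe each coordinate direction and shrink the step when nothing improves. A minimal sketch, with my own function name and a made-up quadratic test problem:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    # Derivative-free compass search: try +/- step along each axis;
    # accept any improving probe, halve the step when none improves.
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break  # converged: step size below tolerance
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx = y, fy
                    improved = True
        if not improved:
            step *= 0.5
    return x, fx

# Demo: minimize (x-1)^2 + (y+2)^2, whose minimum is 0 at (1, -2).
x, fx = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

Like Nelder-Mead and friends, this only finds a local stationary point - but for small, cheap, smooth problems it is often all you need before reaching for anything heavier.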
For the TSP, the best meta-heuristic by far is the "iterated Lin-Kernighan" heuristic of Keld Helsgaun. It gets optimal solutions for almost all TSPLIB instances. Details are here:
http://akira.ruc.dk/~keld/research/
The heuristic is so good that it is now incorporated into the leading exact algorithm CONCORDE. The heuristic is run right at the start of CONCORDE, in order to get a strong initial upper bound.
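For readers curious what "iterated" means here: the generic iterated-local-search pattern (which iterated Lin-Kernighan instantiates) perturbs the incumbent tour with a double-bridge kick, re-optimizes, and keeps the result only if it improves. This sketch is emphatically not Helsgaun's algorithm - just the skeleton of the pattern, with hypothetical helper names and a plug-in `local_opt`:

```python
import random

def double_bridge(tour):
    # Classic 4-opt "double bridge" kick: cut the tour into four pieces
    # A|B|C|D and reconnect as A|C|B|D - a move that sequential edge
    # exchanges like 2-opt find hard to undo.
    n = len(tour)
    i, j, k = sorted(random.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

def iterated_local_search(tour, length, local_opt, iters=200):
    # Iterated local search: perturb, re-optimize, accept on improvement.
    best = local_opt(tour)
    for _ in range(iters):
        cand = local_opt(double_bridge(best))
        if length(cand) < length(best):
            best = cand
    return best

# Demo on a small instance, using the identity as a stand-in local optimizer;
# in iterated LK the local_opt slot is the Lin-Kernighan move itself.
import math
random.seed(0)
pts = [(0, 0), (2, 0), (2, 1), (1, 2), (0, 1), (1, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
length = lambda t: sum(dist[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))
result = iterated_local_search(list(range(6)), length, lambda t: t)
```

Since candidates are accepted only on improvement, the returned tour is never longer than the starting one - though, as the whole thread stresses, nothing certifies it is optimal.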