OK, it really depends on the problem. Discrete or continuous? If it is discrete and taken straight from the real world, forget standard methods and implement your own. You can use a combination of methods, but don't forget about problem-specific knowledge. If it is a benchmark problem (TSP, MKP, etc.), take a look at the literature.

If it is a continuous space, it depends on the shape of the objective, the number of constraints, and the size. If runtime is important and everything is linear, use linear programming; if not, go for PSO methods (just pick a PSO that is locally convergent, such as the guaranteed convergence PSO or the mutation linear PSO; see "A hybrid particle swarm with velocity mutation for constraint optimization problems"). Note that PSO in its standard form is not locally convergent, which means it cannot guarantee finding even a local optimum in the general case. If runtime is important, you can also consider CMA-ES or the ES family; however, having constraints makes CMA-ES a bit inappropriate (see my paper "Locating potentially disjoint feasible regions of a search space with a particle swarm optimizer"). You can also build a hybrid method.

As you can see, there is no single good method. "Goodness" is very subjective (runtime, solution quality, local optima vs. global optima vs. no optimum but good and stable performance, and so on). Do not use methods without a good understanding of your problem and the methods.
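To make the PSO option concrete, here is a minimal sketch of a standard global-best PSO for box-constrained minimization. This is my own illustrative Python, with made-up parameter values, and it is the plain form discussed above, not the guaranteed convergence or velocity-mutation variants, which add extra machinery on top of this loop:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for box-constrained minimization (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]

    # Initialize positions uniformly in the box and velocities to zero.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)

    pbest = x.copy()                                  # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)    # personal best values
    g = pbest[np.argmin(pbest_f)].copy()              # global best position
    g_f = pbest_f.min()

    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity/position update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # naive bound handling for the sketch

        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g, g_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()
    return g, g_f

# Example: minimize the sphere function in 5 dimensions.
bounds = (np.full(5, -5.0), np.full(5, 5.0))
best_x, best_f = pso(lambda z: float(np.sum(z**2)), bounds)
```

For a real constrained problem you would replace the simple clipping with a proper constraint-handling scheme, which is exactly where the variants cited above come in.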
Go for PSO. It gives very fast solutions. You can also go for other algorithms such as the gravitational search algorithm, cat swarm optimization, the grey wolf optimizer, or the glowworm swarm optimizer.
From my knowledge, I have found that PSO gives very fast and accurate solutions.
It depends on your problem: for discrete problems I recommend GA and PSO with string encodings, while for continuous problems I would use GSA, PSO, and ACO. If you also seek a global solution, then use simulated annealing.
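Since simulated annealing is easy to misremember, here is a minimal sketch of it for a continuous problem (illustrative Python; the step size, initial temperature, and cooling rate below are arbitrary assumptions, not tuned values):

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Minimal simulated annealing for continuous minimization (illustrative sketch)."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    t = t0
    for _ in range(n_iter):
        # Propose a random neighbour by perturbing each coordinate with Gaussian noise.
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        f_cand = objective(cand)
        # Always accept improvements; accept worse moves with probability exp(-delta/t).
        delta = f_cand - fx
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, f_cand
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling   # geometric cooling schedule
    return best_x, best_f

# Example: minimize a simple quadratic.
best, val = simulated_annealing(lambda z: sum(zi**2 for zi in z), [3.0, -2.0])
```

The acceptance rule exp(-delta/t) is what lets the search escape local optima early on, while the cooling schedule gradually turns it into a local search.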
Basically, this is the theorem behind the answer "it depends on the problem" that a number of people have given.
That said, I would suggest starting with one of the well-studied conventional metaheuristics, e.g. GA, PSO, ACO, simulated annealing, Tabu search. A lot of new metaheuristics have appeared recently, but these are poorly understood and often poorly verified. In short, using them might put you at a disadvantage if you hope to publish your results in a respected journal or conference. I'm not saying they are bad, but they are mistrusted by a lot of people. Have a look at this paper to find out why:
One thing I can say is that the algorithms you have mentioned are all meta-heuristic, population-based algorithms, and all of them contain algorithmic parameters that need to be properly tuned before being applied to a specific problem. For example, PSO has the inertia weight and the social and cognitive parameters, GA has the crossover rate, and ACS has its limit parameter. So, instead of going for such algorithms, I would suggest you use the TLBO algorithm, which is free from algorithm-specific parameters, and I am sure it will produce better results. A sketch of it follows below.
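For reference, here is a minimal sketch of the TLBO teacher and learner phases in Python. This is my own illustration based on the usual description of the algorithm, not reference code; note that the population size and iteration count still have to be chosen, even though there are no other algorithm-specific parameters:

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, n_iter=100, seed=0):
    """Minimal Teaching-Learning-Based Optimization sketch (box-constrained minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, size=(pop_size, dim))
    f = np.apply_along_axis(objective, 1, x)

    for _ in range(n_iter):
        # Teacher phase: pull the class toward the best learner relative to the mean.
        teacher = x[np.argmin(f)]
        mean = x.mean(axis=0)
        tf = rng.integers(1, 3)                       # teaching factor, 1 or 2
        x_new = np.clip(x + rng.random((pop_size, dim)) * (teacher - tf * mean), lo, hi)
        f_new = np.apply_along_axis(objective, 1, x_new)
        better = f_new < f
        x[better], f[better] = x_new[better], f_new[better]

        # Learner phase: each learner interacts with a random partner,
        # moving toward better partners and away from worse ones.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = (x[i] - x[j]) if f[i] < f[j] else (x[j] - x[i])
            cand = np.clip(x[i] + rng.random(dim) * direction, lo, hi)
            f_cand = objective(cand)
            if f_cand < f[i]:
                x[i], f[i] = cand, f_cand

    best = np.argmin(f)
    return x[best], f[best]

# Example: minimize the sphere function in 5 dimensions.
bounds = (np.full(5, -5.0), np.full(5, 5.0))
best_x, best_f = tlbo(lambda z: float(np.sum(z**2)), bounds)
```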