I would be surprised if you find any "hard" rule on this, since the hyperparameters of optimization routines are typically quite application-specific. For meta-heuristic algorithms, some textbooks give the rule of thumb of using 10 search entities (e.g., 10 particles for PSO) per optimization variable, but this is definitely not a hard rule.
In terms of the required number of iterations, you might simply monitor the cost curve during optimization: if it stalls for a certain number of function evaluations, you can stop. Alternatively or additionally, you might impose a simple time constraint, letting the algorithm optimize for as long as your time budget allows.
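The two stopping criteria above (stall detection and a time budget) can be sketched in a toy random-search loop; all names and defaults here are illustrative, not taken from any particular library:

```python
import random
import time


def optimize(cost, initial, max_stall_evals=500, time_limit_s=60.0):
    """Minimize `cost` by naive random search, stopping on stall or timeout.

    This is only a sketch of the stopping logic; in practice the inner
    loop would be a PSO/GA update step instead of a random perturbation.
    """
    best_x, best_f = list(initial), cost(initial)
    evals_since_improvement = 0
    start = time.time()
    while True:
        # Perturb the current best solution (stand-in for a real update rule).
        candidate = [xi + random.gauss(0.0, 0.1) for xi in best_x]
        f = cost(candidate)
        if f < best_f:
            best_x, best_f = candidate, f
            evals_since_improvement = 0
        else:
            evals_since_improvement += 1
        # Stop when the cost curve stalls for too many evaluations...
        if evals_since_improvement >= max_stall_evals:
            return best_x, best_f, "stalled"
        # ...or when the wall-clock budget is spent.
        if time.time() - start > time_limit_s:
            return best_x, best_f, "timeout"
```

Because the best cost is only ever replaced by a strictly better one, the loop can never return a worse solution than the starting point.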
Khairul Eahsun Fahim Hyperparameters are parameters whose values govern the learning process and determine the model parameters that a learning algorithm ends up with. As the prefix 'hyper-' indicates, they are 'top-level' parameters that regulate the learning process and the model parameters that come out of it.
A hyperparameter is a machine learning parameter whose value is set before training the learning algorithm.
Typical hyperparameters in machine learning include the learning rate, the batch size, and the number of training epochs.
Usually, it depends on the number of variables in your objective function.
As an example, for SPSO (Standard PSO), the authors give a formula for setting the number of particles according to the dimension of the problem: 10 + 2*sqrt(dim).
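That swarm-size heuristic is a one-liner; the function name below is mine, but the formula is the one quoted from the SPSO papers (with the square-root term floored, as in the standard):

```python
import math


def spso_swarm_size(dim):
    """Heuristic swarm size from Standard PSO: 10 + floor(2 * sqrt(dim))."""
    return 10 + int(2 * math.sqrt(dim))
```

So a 4-dimensional problem would get 14 particles, and a 25-dimensional one would get 20 — the swarm grows only slowly with dimension, unlike the 10-per-variable rule of thumb mentioned above.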
For the number of iterations, when using benchmarks it also depends on the number of particles (e.g., a budget of 10^4 * dim function evaluations in the CEC competitions). But if you are not aiming at a publication, you have several strategies, depending on your context:
- experiment extensively and determine what value is sufficient for a good convergence rate;
- if you know a goal value, define a minimum difference/ratio between it and your objective value as a stopping criterion;
- if you have a time constraint, define a timeout value.
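The goal-value strategy in the list above can be sketched as a small predicate; the function and parameter names are illustrative, not from any standard API:

```python
def goal_reached(f_best, f_goal, abs_tol=1e-6, rel_tol=1e-3):
    """Return True when the best cost is close enough to a known goal value.

    Uses either an absolute difference (abs_tol) or a relative ratio
    (rel_tol) as the stopping criterion, matching the two variants
    ("difference/ratio") mentioned above.
    """
    if abs(f_best - f_goal) <= abs_tol:
        return True
    if f_goal != 0 and abs(f_best - f_goal) / abs(f_goal) <= rel_tol:
        return True
    return False
```

You would call this once per iteration inside the optimization loop and break out as soon as it returns True, instead of running a fixed number of iterations.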