When an algorithm such as a genetic algorithm has converged to a solution with a certain minimum value of the merit function, how do we know whether the solution obtained is the global minimum?
I think this is impossible by definition. If an optimization algorithm were able to determine that a solution is only a local minimum, it would continue its search until it found the global minimum.
Evolutionary optimization algorithms (especially GAs) are stochastic optimization techniques that do not guarantee optimality. For GAs in particular, there is no mathematical proof that they will converge to a globally optimal solution.
That is in contrast to techniques like simulated annealing, which is mathematically proven to converge to the global optimum (asymptotically, under a suitable cooling schedule).
That said, again for GAs, multiple runs or random restarts with different mutation probabilities can improve your solution and reach a global or near-global optimum; a rough sketch of this restart idea follows below.
I would recommend checking Russell and Norvig's book Artificial Intelligence: A Modern Approach (1995), the chapters on problem solving.
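For what it's worth, here is a minimal Python sketch of that restart idea (not from the book or any of the posts above): a toy real-coded GA with tournament selection, uniform crossover, elitism, and a tunable mutation probability, restarted with several mutation probabilities and seeds while keeping the best result. The merit function, the helper names (merit, run_ga), and all parameter values are purely illustrative assumptions.

import numpy as np

def merit(x):                      # hypothetical merit function to minimize
    return np.sum(x**2)            # its global minimum is 0 at the origin

def run_ga(pm, pop_size=50, gens=200, dim=5, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = np.array([merit(ind) for ind in pop])
        if fit.min() < best_f:                       # elitism: remember the best ever seen
            best_f, best_x = fit.min(), pop[fit.argmin()].copy()
        # binary tournament selection
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # uniform crossover between paired parents
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, parents[::-1])
        # Gaussian mutation applied to each gene with probability pm
        mut = rng.random(children.shape) < pm
        children = children + mut * rng.normal(0.0, 0.5, children.shape)
        pop = children
    return best_x, best_f

# restarts with different mutation probabilities and different seeds
results = [run_ga(pm, seed=s) for pm in (0.01, 0.05, 0.2) for s in range(3)]
best_x, best_f = min(results, key=lambda r: r[1])
print(best_f)   # the lowest merit found: still only a putative global minimum

The point is not this particular GA, but the outer loop: the lowest value over many restarts is the best evidence a stochastic search can give, not a proof of global optimality.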
An alternative is to use Boolean functions of properties such as influence, bias, energy and other properties, and then incorporate these properties into the algorithm. For an example of such a case, see the link. In that study we used the behaviour of functions in minority games.
A globally optimal solution is a feasible solution whose objective value is as good as or better than that of all other feasible solutions. A locally optimal solution is one for which no better feasible solution can be found in its immediate neighbourhood. In terms of your question: if the algorithm loses diversity in the early iterations, i.e. the population becomes very uniform too early, it may get trapped in a local optimum. It should also be noted that benchmark functions are defined on a closed domain with a predetermined global optimum. For example, the Ackley function has many local minima, but its global minimum is zero (at the origin).
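As a small illustration of that last point, here is a sketch of the standard 2-D Ackley benchmark in Python; the constants a, b, c are the usual recommended values, and the two evaluation points are just examples.

import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    # standard Ackley function; global minimum f(0) = 0, many local minima elsewhere
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

print(ackley([0.0, 0.0]))   # ~0.0, the known global minimum at the origin
print(ackley([1.0, 1.0]))   # ~3.6, in a local basin away from the origin

Because the global optimum of such benchmarks is known in advance, you can check how close a GA run gets to it, which is exactly what is not possible for an arbitrary real-world merit function.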
If the merit function is convex, then you can compute the gradient of the function at the given solution, and if it is equal to zero, your solution is a global optimum.
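A minimal sketch of that check, assuming a hypothetical convex quadratic merit function and a simple central-difference gradient (the function and the candidate point are made up for illustration):

import numpy as np

def merit(x):
    # convex quadratic: f(x) = (x0 - 1)^2 + 2 (x1 + 3)^2, minimum at (1, -3)
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 3.0)**2

def numerical_gradient(f, x, h=1e-6):
    # central-difference approximation of the gradient
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

candidate = np.array([1.0, -3.0])            # solution returned by the optimizer
grad = numerical_gradient(merit, candidate)
print(np.allclose(grad, 0.0, atol=1e-4))     # True -> global optimum, by convexity

Of course this certificate only works because of convexity; for a multimodal merit function a zero gradient only indicates a stationary point, which may be a local minimum.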
Mohamed S. Eid: yes, EAs are guaranteed to converge to the global optimum under some weak conditions (elitism and ergodicity, which is typically the case). Check:
Sunita Parinam: there is no general way to know that. However, in some cases the fitness of the global optimum, or some bound on its value, may be known, so you can check the optimality of the solutions found just by inspecting their fitness values.
Conference Paper: Convergence of evolutionary algorithms in general search spaces
It's a very interesting and important question in nature-inspired optimization techniques. A global optimum is also a local optimum, but not vice versa. Over a number of runs of the algorithm (a run comprises a certain number of iterations, as set by the stopping criterion), if the best fitness value changes from run to run, we may say it has not reached the global optimum; otherwise it may have reached it. However, there is no exact answer so far.
The following articles discuss the convergence properties of evolutionary algorithms:
Fogel, D. B. (1994). Asymptotic convergence properties of genetic algorithms and evolutionary programming: analysis and experiments. Cybernetics and Systems: An International Journal, 25(3), 389-407.
DeLaurentis, J., Ferguson, L., & Hart, W. E. (2002, July). On The Convergence Properties Of A Simple Self-adaptive Evolutionary Algorithm. In GECCO (pp. 229-237).
Lixin, D., & Lishan, K. (2000). Convergence properties of evolutionary algorithms under Elitist strategy. Neural, Parallel & Scientific Computations, 8(2), 105-114.
Ter-Sarkisov, A., & Marsland, S. (2011). Convergence properties of two (μ+λ) evolutionary algorithms on OneMax and Royal Roads test functions. arXiv preprint arXiv:1108.4080.
Sorry, I have not used the GA, but I have often used another random-search-oriented algorithm, CE (Cross-Entropy). One of the simplest ways to check whether the obtained solution is the global minimum is to repeat the search with different initial values (seeds) for the random generator.
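A rough sketch of that seed-restart check, using a minimal cross-entropy-style search on an illustrative multimodal objective (the objective, dimensions, and CE parameters are all assumptions, not taken from the post above):

import numpy as np

def objective(x):
    # multimodal toy objective; its unique global minimum is 0 at the origin
    return np.sum(x**2) + 3.0 * np.sum(np.sin(3.0 * x)**2)

def cross_entropy(seed, dim=4, iters=100, samples=200, elite=20):
    rng = np.random.default_rng(seed)
    mu, sigma = np.full(dim, 2.0), np.full(dim, 2.0)   # initial sampling distribution
    for _ in range(iters):
        pop = rng.normal(mu, sigma, (samples, dim))    # sample candidate solutions
        scores = np.apply_along_axis(objective, 1, pop)
        best = pop[np.argsort(scores)[:elite]]         # keep the elite fraction
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-12
    return objective(mu)

values = [cross_entropy(seed) for seed in range(5)]
print(values)        # if runs from different seeds all agree on the same best value,
print(min(values))   # that value is a plausible (though still unproven) global minimum

If the different seeds land on noticeably different values, that is a strong hint the search is getting trapped in local minima and the best value found should not be trusted as global.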
If nothing is known about the fitness function, the only way to know the value and location of the global optimum is to exhaustively enumerate the search space. That's for a discrete space. In a continuous space, you also need to deal with finite precision in some way.
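For the discrete case, exhaustive enumeration really is a one-liner when the space is small enough; this sketch brute-forces all bitstrings of length n for an illustrative fitness function (only feasible for small n, since there are 2**n candidates):

from itertools import product

def fitness(bits):                        # hypothetical fitness to maximize
    return sum(bits) - 3 * (bits[0] & bits[-1])

n = 12
best = max(product((0, 1), repeat=n), key=fitness)   # enumerate all 2**n bitstrings
print(best, fitness(best))                # a certified global optimum of this small space

The obvious catch is that the cost grows exponentially with n, which is exactly why stochastic methods like GAs are used in the first place, at the price of losing this certificate.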
One way to separate local and global minima is to avoid converging too quickly to an "optimal" value. For this we should perform tuning, i.e. select the control parameter values of the GA. We should test the task in a low-dimensional setting and perform a large number of numerical experiments.
No one can say that a found minimum is global, because another run might find a new minimum that is lower. There is no way to know this for certain, so we take the lowest minimum found as the putative global minimum.
Generally, two families of algorithms are used for an optimisation problem: classical and metaheuristic approaches.
The classical approaches are mainly limited because they are based on local search: they rely mostly on derivatives of the evaluated fitness function. The metaheuristic approaches, on the other hand, were introduced because of their ability to find a global (or near-global) solution. So a better way to find an exact solution is to use a hybrid approach, which exploits the advantages of both families: first use a heuristic technique to find a near-optimal solution, then use the obtained result as the initial point for a classical method to determine the exact solution. A minimal sketch of this two-stage idea follows below.
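Here is one way such a hybrid could look in Python, with plain random search standing in for the heuristic stage and SciPy's L-BFGS-B as the classical refinement; the Rastrigin-like test function and all bounds are illustrative assumptions, not the approach of any specific post above:

import numpy as np
from scipy.optimize import minimize

def fitness(x):
    # Rastrigin-style multimodal function; global minimum 0 at the origin
    return np.sum(x**2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))

# Stage 1: heuristic global exploration (random search standing in for a GA/metaheuristic)
rng = np.random.default_rng(0)
candidates = rng.uniform(-5.12, 5.12, (5000, 3))
x0 = min(candidates, key=fitness)                  # best "near solution" found

# Stage 2: classical derivative-based refinement started from that point
result = minimize(fitness, x0, method="L-BFGS-B")
print(x0, fitness(x0))                             # heuristic estimate
print(result.x, result.fun)                        # refined optimum (exact only locally)

Note that the refined result is only guaranteed to be a local optimum of the basin the heuristic stage landed in; the hybrid improves precision, not the global-optimality guarantee.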
There is no way to prove that a found solution is the global optimum; sometimes the solution is not the global optimum. Run the problem several times and examine the solutions.
Since such validation is unreliable in most cases, mathematicians have constructed various benchmarks (case studies) and have given proofs of their global optima. This is why benchmarks are developed in optimization and control: to validate a methodology with high reliability and confidence. However, for some benchmarks in optimal control the global solution is only the best known optimum, not necessarily the absolute one.
There is a question similar to your question here:
Run the algorithm (evolutionary or metaheuristic) with different population sizes, different numbers of iterations, and different values of the algorithmic parameters, and perform as many trials as you can. Record all the values and compare them. This will not give a 100% guaranteed answer, but you may obtain a good approximate solution; a rough sketch of such a parameter sweep follows below.
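A small sketch of recording and comparing results across a parameter grid, using a simple stochastic hill climber as a stand-in optimizer (the objective, the parameter grid, and the helper name hill_climb are illustrative assumptions):

import itertools
import numpy as np

def fitness(x):
    # illustrative multimodal function to minimize; unique global minimum 0 at the origin
    return np.sum(np.abs(x)) + np.sum(np.sin(5.0 * x)**2)

def hill_climb(step, iters, seed, dim=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4, 4, dim)
    fx = fitness(x)
    for _ in range(iters):
        y = x + rng.normal(0.0, step, dim)     # random perturbation of the current point
        fy = fitness(y)
        if fy < fx:                            # accept only improvements
            x, fx = y, fy
    return fx

records = {}
for step, iters, seed in itertools.product((0.1, 0.5, 1.0), (500, 2000), range(3)):
    records[(step, iters, seed)] = hill_climb(step, iters, seed)

best_setting = min(records, key=records.get)
print(best_setting, records[best_setting])     # best observed value; agreement across
                                               # settings adds confidence, not proof

If very different parameter settings keep converging to the same best value, that consistency is usually the practical evidence people rely on, even though it is still not a mathematical guarantee.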
In the event that no information is provided regarding the nature of the search landscape of a given objective function, you really have only two ways to ascertain whether your solution represents the global optimum: (1) compare your result with the best answer reported in the literature, if the optimization task has been addressed previously by other researchers; or (2) exhaustively search the landscape by executing multiple runs with a large upper iteration limit, to permit the algorithm to reach global values, if it ever does.
I think most optimization problems are assumed to be NP-hard, at least from the algorithmic perspective. The purpose, therefore, is to find a subset of weakly optimal solutions that are at least within some epsilon of the global optimum, rather than converging to "the" global optimum. The objective here is to find an algorithm that can estimate this subset of weakly optimal points in polynomial time.
However, this is also determined by the nature of the optimization problem being solved. For instance, imagine an optimization problem whose objective function is literally a Dirac or Kronecker delta function: the landscape gives essentially no information away from the optimum, so the problem can be considered NP-complete and convergence in polynomial time may not be guaranteed.