Could you give some information about the problem you want to solve? Which type of Local Search (Tabu Search, Simulated Annealing, etc.) are you talking about?
To begin an answer: in your algorithm, you start from a first solution (e.g. obtained by a simple greedy heuristic), which is usually a poor solution because it is obtained quickly. You define a neighborhood and an operator in order to move from one solution to another (for the TSP, an operator could exchange two cities in the tour), and at each iteration you apply your operator, and the current solution is replaced if the new one is better.
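To make this concrete, here is a minimal sketch in Python (the helper names are my own, not from any library): a descent for the TSP that keeps swapping two cities in the tour as long as that shortens it.

```python
def tour_length(tour, dist):
    """Total length of the closed tour, given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_descent(tour, dist):
    """Local search with a city-swap operator: accept any swap that
    improves the tour, and stop when no swap improves it anymore."""
    best = tour[:]
    best_len = tour_length(best, dist)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                cand = best[:]
                cand[i], cand[j] = cand[j], cand[i]
                cand_len = tour_length(cand, dist)
                if cand_len < best_len:
                    best, best_len = cand, cand_len
                    improved = True
    return best, best_len
```

The solution returned is a local optimum with respect to the swap neighborhood: no single swap can improve it, but a better tour may still exist elsewhere.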
Suppose you have a local search method; it can be (stochastic) gradient descent, a quasi-Newton method, Nelder-Mead, or any meta-heuristic. As a black box, a local search method is an algorithm that takes an initial solution and gives you a local minimum (or at least, with stochastic sampling, something close to a local minimum). All single-solution meta-heuristics fit into this category. We can easily build a global search method (one that gives you the global minimum) by repeatedly restarting from a random location. In expectation, with a large enough number of restarts, any local search method will eventually give you the actual global minimum. (Disclaimer: I'm freely using my own definitions of local and global search methods here, which differ from the formal definitions used in the optimization community.)
This is called `repeated local search` when you randomly choose a new starting solution. But instead of choosing randomly, we can take the starting solution to be a modified version of the current local minimum: we kick the local minimum far enough to get it out of its current attraction basin, but not so far as to lose all information about the "goodness" of the current region. Then we use it as a new starting solution and apply the local search method again. This is called `iterated local search`, because each local search learns something from the previous one.
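A sketch of the `repeated local search` variant, treating the local search as a black box (the function names and the toy problem in the usage note are illustrative assumptions, not anyone's actual code):

```python
def repeated_local_search(local_search, random_solution, f, n_restarts=100):
    """Restart the black-box local search from independent random starting
    points and keep the best local optimum found (minimization)."""
    best = local_search(random_solution())
    for _ in range(n_restarts - 1):
        cand = local_search(random_solution())
        if f(cand) < f(best):
            best = cand
    return best
```

With enough restarts, at least one random start lands in the attraction basin of the global minimum; the iterated variant replaces `random_solution()` with a kick applied to the best solution so far.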
Of course, the hard part is kicking with the right strength, and as with almost everything in meta-heuristics, it's an open question how to effectively tune this parameter. Most meta-heuristics include some behavior that resembles iterated local search, that is, taking the current solution and modifying it to improve it a little. However, iterated local search is when you explicitly have a local search algorithm (the low-level search) that you use as a black box inside the global algorithm (the high-level search). In many cases the local search is simply stochastic sampling with greedy replacement.
Iterated local search is very easy to implement, and can effectively be used as a baseline against which to compare more complex search techniques.
The explanation given by Alejandro is very good. Just to summarize, you need 2 basic blocks:
- a local search, which improves the current solution until it reaches a local optimum (w.r.t. the neighborhoods used in the local search)
- a shaking mechanism, which randomly modifies the current solution to get a new one not too different from it
Starting from an initial solution S0, the local search is called to get a local optimum S. Then, at each iteration of the ILS, a new solution S' is created by shaking the current solution S, and this solution S' is improved by the local search to get S''. At the end of the local search, the current solution S is replaced with S'' if it is better, that is, f(S'') < f(S) for a minimization problem where f() is the objective function. Otherwise nothing is done and S is kept as the current solution.
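The loop above can be sketched directly, with the same S / S' / S'' names (a minimal illustrative skeleton; `local_search`, `shake` and `f` are whatever components you plug in):

```python
def ils(s0, local_search, shake, f, n_iters=100):
    """Iterated local search, minimization: improve the start, then
    repeatedly shake, re-improve, and keep the new local optimum
    only if it is strictly better than the current one."""
    s = local_search(s0)                 # local optimum from S0
    for _ in range(n_iters):
        s_prime = shake(s)               # S'  : perturbed copy of S
        s_second = local_search(s_prime) # S'' : new local optimum
        if f(s_second) < f(s):           # accept only improvements
            s = s_second
    return s
```

Other acceptance rules exist (e.g. always accepting S'', or a simulated-annealing-style test), but the strict-improvement rule matches the description above.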
Usually, a simple Hill Climbing is used as the Local Search. Given a neighborhood structure, it consists in looking for an improving solution S' in the neighborhood of the current solution S (this can be the best improving or the first improving solution, for instance). If such a solution S' is found, the current solution is replaced by S' and a new step is performed. Otherwise the search stops. More sophisticated Local Searches exist as well, for instance VND (Variable Neighborhood Descent) if you have several neighborhood structures. And, of course, you can use limited versions of metaheuristics (tabu search, simulated annealing...).
As for the shaking, it consists in randomly modifying some parts of the solution. The amount of modification "should" be limited, but I'm not sure there is a general rule on how to define "limited" (20%?).
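For a 0/1 vector, for example, a shaking step could flip a fixed fraction of randomly chosen bits (a sketch; the 20% default is only the guess mentioned above, not a recommendation, and in practice this strength is a parameter you tune per problem):

```python
import random

def shake(solution, strength=0.2, rng=random):
    """Return a copy of a 0/1 vector with a random ~strength
    fraction of its bits flipped; the original is left untouched."""
    shaken = solution[:]
    k = max(1, int(strength * len(shaken)))      # how many bits to flip
    for i in rng.sample(range(len(shaken)), k):  # k distinct positions
        shaken[i] = 1 - shaken[i]
    return shaken
```

A too-small `strength` tends to fall back into the same local optimum; a too-large one degenerates into a random restart.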
Thanks everybody!! # Christophe Duhamel: could you briefly explain hill climbing to me? What did you mean by a "neighbor of the current solution"? Is it the same individual with a small change?
You can define the neighbourhood of a current solution S as the set of all the solutions S' that can be obtained from S by modifying some elements. Usually the modifications have to be of the same kind. For instance, if S is a 0/1 vector, a neighbourhood can be all the solutions obtained by changing a 0 to 1 or a 1 to 0. Classic neighbourhoods for the TSP are based on 2-Opt and 3-Opt moves. In graph colouring, this can be changing the colour of one node or exchanging the colours of two nodes, for instance.
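For the 0/1-vector example, the bit-flip neighbourhood can be enumerated like this (an illustrative sketch):

```python
def flip_neighborhood(s):
    """Yield every solution reachable from the 0/1 vector s by
    flipping exactly one bit; s itself is never modified."""
    for i in range(len(s)):
        neighbor = s[:]
        neighbor[i] = 1 - neighbor[i]
        yield neighbor
```

A hill climber would scan these neighbors and move to the first (or best) one that improves the objective.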
# Christophe Duhamel, thank you so much, your answer was clear and relevant!! ;-)
But I notice something with iterated local search: most of the time it reaches a best solution it can't go beyond, and every time I repeat the execution it gets the same solution!! Is that normal?