I have some optimal solutions in a discrete space and I want to apply a heuristic search using those solutions as attractors. I started using distances as cost functions, but I don't know if it's a good approach.
Hi, if you use distances as the cost function you will find exactly those solutions that you already know (distance zero). That does not seem targeted. What is it that you want to achieve, if you already know the solutions?
Mohamed-Mourad Lafifi Thanks for your answer. I already followed the last article you recommended, but it is presented from the dynamical systems point of view 😞.
Jörg Bremer thanks for your answer. But 🤔 the solution space can be so big and the starting solutions so random that I think it is possible to find local optima with characteristics similar to the attractor solution. The distances we will use take into account the solutions' structures, not the objective function values of those solutions.
I think you can use any metaheuristic to do this. For single-solution-based methods, just start with the already good solution you have, instead of a random one. The algorithm will then focus on the search space around that solution. In case you have multiple solutions to start from, you may run more than one algorithm in parallel, or you may just use multi-start techniques. For the discrete space, you may prefer algorithms like Iterated Local Search (ILS), Variable Neighborhood Search (VNS), Simulated Annealing, etc., depending on the problem you are trying to solve.
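For instance, an ILS seeded with an already good solution could look like the sketch below. This is only a minimal illustration on bit vectors; the `evaluate` function and the toy cost (counting ones) are placeholders for your real objective:

```python
import random

def iterated_local_search(start, evaluate, iterations=100, seed=0):
    """Minimal ILS: local search + perturbation, seeded with a known good solution."""
    rng = random.Random(seed)

    def local_search(sol):
        # Best-improvement search over the 1-bit-flip neighborhood.
        improved = True
        while improved:
            improved = False
            for i in range(len(sol)):
                cand = sol[:]
                cand[i] ^= 1
                if evaluate(cand) < evaluate(sol):
                    sol, improved = cand, True
        return sol

    def perturb(sol, strength=2):
        # Flip a few random bits to escape the current basin of attraction.
        cand = sol[:]
        for i in rng.sample(range(len(cand)), strength):
            cand[i] ^= 1
        return cand

    best = local_search(start[:])
    for _ in range(iterations):
        cand = local_search(perturb(best))
        if evaluate(cand) < evaluate(best):  # accept only improving restarts
            best = cand
    return best

if __name__ == "__main__":
    # Toy usage: minimize the number of ones, starting from an already good solution.
    start = [0, 1, 0, 0, 1, 0, 0, 1]
    best = iterated_local_search(start, evaluate=sum)
    print(best, sum(best))
```

The only change from a textbook ILS is the starting point: the known solution replaces the usual random initialization, so the search intensifies around it from the first iteration.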
For population-based algorithms like the Genetic Algorithm (GA), you can use an elitism strategy. In other words, you can construct a random population and inject the already good solution(s) you have. These added solutions will then eventually attract other random individuals through crossover operations. If you carry these elite individuals directly into the next iterations, this attraction is preserved (elitism).
However, these elitist strategies should be used carefully and balanced with some exploration techniques. Otherwise the algorithm may get stuck in a local optimum quickly. Luckily, all metaheuristics come with capabilities to deal with this problem, e.g., perturbation in ILS and VNS, mutation in GA, etc.
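The injection-plus-elitism idea can be sketched as follows. This is a hedged toy example, not a production GA: the bit-vector encoding, the parameter values, and the `evaluate=sum` cost are illustrative assumptions:

```python
import random

def ga_with_elitism(seeds, evaluate, n_bits=8, pop_size=20,
                    generations=50, elite=2, mut_rate=0.1, seed=0):
    """Toy GA: a random population with injected known-good solutions (elitism)."""
    rng = random.Random(seed)
    pop = [list(s) for s in seeds]                       # inject known-good solutions
    pop += [[rng.randint(0, 1) for _ in range(n_bits)]
            for _ in range(pop_size - len(pop))]

    for _ in range(generations):
        pop.sort(key=evaluate)
        nxt = [ind[:] for ind in pop[:elite]]            # elites survive unchanged
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                      # mutation keeps exploration alive
                if rng.random() < mut_rate:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return min(pop, key=evaluate)

if __name__ == "__main__":
    good = [0, 1, 0, 0, 1, 0, 0, 1]      # an already known good solution
    best = ga_with_elitism([good], evaluate=sum)
    print(best, sum(best))
```

Because the elites are copied unchanged into each new generation, the best cost never worsens, while mutation keeps the rest of the population exploring, which is exactly the balance mentioned above.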
Osman Gökalp Thanks. But I wish to start from random solutions and include the attractor as a goal, so that the initial solutions tend toward a solution similar to the attractor from a structural point of view. With your suggestion you can finish very quickly at the same attractor, as Jörg Bremer pointed out.
Jörg Bremer Sorry, I didn't see that your final sentence was a question: What is it that you want to achieve, if you already know the solutions?
I want to achieve a solution with a structure similar to the attractor's.
Suppose you have S1 = (0,1,0,0,1,0,0,1) and S2 = (1,1,1,1,0,0,0,0) with a similar spectrum E = (4,5,8,5,6,8,5,3) and similar objective function values F(S1) = F(S2) = 12; hence there is a direct relation between the spectrum and the objective function.
Then you know an 'attractor' S* = (0,0,0,0,1,0,1,0) with E(S*) = (9,5,6,8,2,6,8,5) and optimal F(S*) = 0.
I want to start from a random solution X and set E(S*) as the goal. Maybe I can find X' with E(X') = (9,5,6,_,2,6,8,_) and F(X') = 3. That would be good for me. Even better if I can find X'' with E(X'') = (9,5,6,8,2,6,8,5) and optimal F(X'') = 0.
So, I don't want to re-find S*, which is a good optimal solution that I already know. I want to find another optimal solution with the same structure, i.e., the same spectrum E, which implies the best objective function value.
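One way to encode that goal is to make the cost act on the spectrum rather than on the solution itself, and to add a repulsion term so the search does not simply re-find S*. The sketch below is only an illustration of this idea: the `spectrum` argument is a stand-in for your E(·), and the half-length repulsion radius is an arbitrary assumption:

```python
def spectrum_cost(x, target_spectrum, attractor, spectrum, repel_weight=1.0):
    """Cost = distance of E(x) to the target spectrum E(S*),
    plus a repulsion term so the search does not converge back to S* itself."""
    e = spectrum(x)
    spectral_dist = sum(abs(a - b) for a, b in zip(e, target_spectrum))
    hamming_to_attractor = sum(a != b for a, b in zip(x, attractor))
    # Repulsion decays with distance: zero penalty once x differs from S*
    # in at least half of its positions (an arbitrary illustrative radius).
    repulsion = repel_weight * max(0, len(x) // 2 - hamming_to_attractor)
    return spectral_dist + repulsion

if __name__ == "__main__":
    # Toy stand-ins: a prefix-sum "spectrum" and the S* from the example above.
    spectrum = lambda x: [sum(x[:i + 1]) for i in range(len(x))]
    s_star = [0, 0, 0, 0, 1, 0, 1, 0]
    target = spectrum(s_star)
    # S* itself is penalized even though its spectrum matches perfectly:
    print(spectrum_cost(s_star, target, s_star, spectrum))
```

A minimizer of this cost is a solution whose spectrum matches E(S*) while being structurally far from S*, which is exactly the X'' described above.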
This is an interesting question, and I had the same initial response as others (if you already know the optimal solution, why are you still searching?). But if you are using a heuristic and do not know the optimal solution, you can save (store away) high-quality solutions during a single instance (a "run" of the heuristic) and return to them at some point, intensifying the search in their "neighborhood".
Pete Bettinger Thanks for your answer. But having one optimal solution does not imply I have all the optimal solutions, so I want to search for the others. I can show you a particular case where many optimal solutions exist: the S-box of the AES cipher (a public S-box) has an optimal nonlinearity of 112, but there are still similar S-boxes in the 8x8-bit S-box space, which is huge; some of them can be generated by the inverse-field algebraic method. What if I want to change the cipher's S-box?
The use of an attractor is perhaps related to other questions: a) How close are two optimal solutions? b) How can I get a new optimal solution using/starting from an initial one?
If you are interested in escaping attractors in local search you should check the 'classic' work of Fred Glover on Tabu Search; there he presents mechanisms for escaping local optima while at the same time doing smart restarts from the best regions represented by those best solutions. He also presents a very interesting and sophisticated 'unified' framework (I don't remember in which part; I think Part I).
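The short-term memory at the core of Tabu Search can be sketched in a few lines. This is a minimal illustration on bit vectors only; Glover's framework adds aspiration criteria, intermediate- and long-term memory, and restart strategies on top of this skeleton:

```python
from collections import deque

def tabu_search(start, evaluate, iterations=200, tenure=5):
    """Minimal tabu search on bit vectors: take the best non-tabu 1-bit flip
    each step, even if it worsens the cost, to escape local optima."""
    current = start[:]
    best = current[:]
    tabu = deque(maxlen=tenure)            # recently flipped positions are forbidden
    for _ in range(iterations):
        moves = [i for i in range(len(current)) if i not in tabu]
        if not moves:
            break
        # Aspiration criteria are omitted for brevity.
        i = min(moves, key=lambda j: evaluate(
            current[:j] + [current[j] ^ 1] + current[j + 1:]))
        current[i] ^= 1
        tabu.append(i)
        if evaluate(current) < evaluate(best):
            best = current[:]
    return best

if __name__ == "__main__":
    start = [1, 0, 1, 1, 0, 1, 0, 1]
    print(tabu_search(start, evaluate=sum))   # toy cost: number of ones
```

The tabu list forces the search to keep moving even at a local optimum, while the separately stored `best` preserves the good solutions found along the way, which is the restart material mentioned above.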