If the algorithm is very greedy (e.g. local search), then repeated restarts of that algorithm can resemble a memetic algorithm. Now consider a memetic algorithm with a very expensive local optimization technique: we would like to run the local optimizer less often, so we might want to filter the starting points. This can be difficult, since a starting point with a good fitness evaluation might not have much room left for improvement. So what we need are ways to estimate the room for improvement of a given (e.g. completely or partially random) solution.
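For concreteness, here is a rough sketch of the kind of filtering I have in mind (Python, minimization assumed; the probing scheme, parameters, and function names are just placeholders, not a proposed method):

```python
import random

def improvement_proxy(f, x, step=0.1, n_probes=8):
    """Cheap proxy for the 'room for improvement' at x (minimization):
    the best decrease in f found among a few random perturbations.
    The probing scheme and its parameters are illustrative assumptions."""
    fx = f(x)
    best_drop = 0.0
    for _ in range(n_probes):
        y = [xi + random.uniform(-step, step) for xi in x]
        best_drop = max(best_drop, fx - f(y))
    return best_drop

def filter_starts(f, candidates, keep=3):
    """Rank candidate start points by the proxy and keep only the most
    promising few, so the expensive local optimizer runs less often."""
    ranked = sorted(candidates, key=lambda x: improvement_proxy(f, x),
                    reverse=True)
    return ranked[:keep]
```

The obvious weakness is exactly the one described above: the proxy measures short-range descent, which need not correlate with the quality of the basin's optimum.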

I'm primarily interested in continuous search spaces. Given two solutions x1 and x2 with f(x1) worse than f(x2), is there a way to determine or estimate the probability that the local optimum near x1 is better than the local optimum near x2?
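One crude Monte Carlo approach I can imagine (a sketch under stated assumptions, not a known answer): run a cheap, truncated stand-in for the local optimizer from each point several times and count how often the x1 runs end up better. The descent scheme, parameters, and names here are all assumptions for illustration:

```python
import random

def cheap_descent(f, x, iters=30, step=0.1):
    """A few steps of random-perturbation hill climbing (minimization):
    a crude, truncated stand-in for the expensive local optimizer."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + random.gauss(0.0, step) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def prob_better_basin(f, x1, x2, n_trials=20, **kw):
    """Monte Carlo estimate of P(truncated search from x1 ends better
    than truncated search from x2)."""
    wins = 0
    for _ in range(n_trials):
        if cheap_descent(f, x1, **kw) < cheap_descent(f, x2, **kw):
            wins += 1
    return wins / n_trials
```

Of course this only estimates where truncated searches end up, not where the true local optima lie, so I'd be interested in more principled estimates.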
