I run a nested (multi-level) optimization problem, because the nonlinearity is simply too much for the evolutionary algorithm on its own.

I use a first level of optimization to find an initial solution, which is then searched and interrogated further at the second level.

This essentially partitions the solution space, which helps the algorithm.

Yet, in a sense, at the second level the evolutionary algorithm still descends too quickly, which causes it to miss better solutions.

I have seen something similar with ordinary nonlinear programming/optimization based on second-order partial derivatives, Newton steps, etc.
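For reference, the classical remedy in the Newton setting is to damp the step with a backtracking line search, shrinking the step until the objective actually decreases instead of taking the full Newton step. A minimal sketch (the toy quartic objective and tolerances are my own illustrative choices, not from the question):

```python
def damped_newton(f, df, d2f, x0, alpha=1e-4, max_iter=50):
    """Newton's method with backtracking (Armijo) damping: shrink the
    step until f shows sufficient decrease, rather than taking the
    full Newton step every time."""
    x = x0
    for _ in range(max_iter):
        g, h = df(x), d2f(x)
        if abs(g) < 1e-10:
            break
        step = -g / h              # full Newton step
        t = 1.0                    # damping factor
        # halve the step while it fails the sufficient-decrease test
        while f(x + t * step) > f(x) + alpha * t * g * step and t > 1e-8:
            t *= 0.5
        x += t * step
    return x

# toy quartic with two local minima; we start in the right-hand basin
f   = lambda x: x**4 - 3*x**2 + x
df  = lambda x: 4*x**3 - 6*x + 1
d2f = lambda x: 12*x**2 - 6

x_star = damped_newton(f, df, d2f, x0=2.0)
```

The damping factor t plays the role of "descending slower and recalculating at that point": whenever the full step overshoots, it is cut back before the next derivative evaluation.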

Is there a way to prevent the evolutionary algorithm from descending too quickly?

Ideally it would descend more slowly and re-evaluate at that point.
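By way of illustration, the usual levers for slowing an evolutionary algorithm's descent are weaker selection pressure (small tournaments) and a mutation width that is annealed slowly with a floor, so the population does not collapse onto one basin prematurely. A minimal sketch on a 1-D Rastrigin function (all parameter values here are illustrative assumptions, not a definitive implementation):

```python
import math
import random

def rastrigin(x):
    """1-D Rastrigin: global minimum 0 at x = 0, local minima near every integer."""
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

def evolve(fitness, pop_size=60, gens=200, tournament=2,
           sigma0=1.0, sigma_min=0.05, seed=0):
    """Minimal real-valued EA. Two knobs control how fast it 'descends':
    the tournament size (selection pressure) and the mutation width sigma,
    which is annealed slowly and floored so some exploration survives."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    sigma = sigma0
    for _ in range(gens):
        # tournament of 2 = weak pressure: the population concentrates
        # slowly, so several basins stay represented for longer
        pop = [min(rng.sample(pop, tournament), key=fitness)
               + rng.gauss(0.0, sigma)
               for _ in range(pop_size)]
        sigma = max(sigma * 0.99, sigma_min)  # slow anneal with a floor
    return min(pop, key=fitness)

best = evolve(rastrigin)
```

Raising the tournament size or annealing sigma faster makes the same algorithm "descend" much more quickly, at the cost of locking in the first good basin it finds.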

What are the implications when this phenomenon occurs with both nonlinear optimization and evolutionary algorithms? Is it still a sign of too much/too fine-grained nonlinearity?
