I am working on a two-objective optimization problem using neuroevolution. I use NEAT to evolve solutions that need to satisfy objective A and objective B. I have tried different configurations, changed mutation rates, etc., but I always run into the same problem.

The algorithm reaches 100% fitness for objective A quite effortlessly, but the fitness for objective B mostly gets stuck at the same value (ca. 85%). Through heavy mutation I sometimes manage to push objective B's fitness above 90%, but then the fitness for objective A decreases significantly. I would not mind worsening A in favor of B here; however, I only reach a higher fitness on objective B in very rare cases. Most or all individuals converge to a fitness of (100%, 85%).

I extended my NEAT implementation to support Pareto fronts and crowding-distance sorting (NSGA-II). After some iterations this still leads to an average population fitness of (100%, 85%), meaning every candidate approaches the same spot.
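For reference, this is the crowding-distance assignment I mean (a minimal sketch of the standard NSGA-II procedure, not my actual implementation; the function name and tuple representation are illustrative). One thing worth checking in a setup like mine: when every individual in a front sits at the same point, all pairwise gaps are zero and crowding distance provides no selection pressure at all.

```python
def crowding_distance(front):
    """Return one crowding distance per individual in `front`.

    `front` is a list of fitness tuples, e.g. [(1.0, 0.85), ...].
    Boundary individuals on each objective get infinite distance so
    they are always preferred; interior individuals accumulate the
    normalized gap between their neighbors on each objective.
    """
    n = len(front)
    if n == 0:
        return []
    distances = [0.0] * n
    num_objectives = len(front[0])
    for m in range(num_objectives):
        # Sort individual indices by the m-th objective value.
        order = sorted(range(n), key=lambda i: front[i][m])
        f_min = front[order[0]][m]
        f_max = front[order[-1]][m]
        # Extremes of the front are always kept.
        distances[order[0]] = distances[order[-1]] = float("inf")
        if f_max == f_min:
            # All individuals identical on this objective:
            # no spread to measure, no diversity pressure.
            continue
        for k in range(1, n - 1):
            distances[order[k]] += (
                front[order[k + 1]][m] - front[order[k - 1]][m]
            ) / (f_max - f_min)
    return distances
```

With a spread-out front like [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)] the two boundary points get infinite distance and the middle one a finite value; with a collapsed front like [(1.0, 0.85)] repeated, every distance is infinite or zero and selection degenerates, which may be part of what I am seeing.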

I would like the fitness landscape to be much more diverse; in particular, I would like the algorithm to evolve solutions with fitness tuples like (90%, 90%), (80%, 95%), etc.

My main problem seems to be that every individual sooner or later arrives at the same fitness tuple, and I can only prevent that with heavy mutation (randomness). Even then, only a few candidates break the 85% barrier on objective B.

I am wondering if anyone has encountered a similar scenario and/or can suggest an extension of the evolutionary procedure to prevent stagnation at this particular point.

Thank you in advance, I am looking forward to any suggestions.
