Given that many algorithms are optimised using a learning mechanism, there should be a method for analysing the patterns they contain and for indicating how far each algorithm differs from the state of the art.
Thank you for sending this link. It is encouraging to see that the CGP graphs we are using can help identify patterns. What if many algorithms are produced and we need to compare them against an existing algorithm (the state of the art)? Such a comparison could help identify patterns associated with good performance, and it can be done offline.
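To make the offline comparison more concrete, here is a minimal sketch of how it might look, assuming each produced algorithm and the reference state-of-the-art algorithm have already been converted into directed graphs (the graph names, edge lists, and the use of networkx's graph edit distance are my own illustrative assumptions, not part of our current setup):

# Minimal sketch: rank evolved candidates by structural distance to a reference graph.
# All graphs below are hypothetical placeholders, not real algorithms.
import networkx as nx

def graph_distance(candidate: nx.DiGraph, reference: nx.DiGraph) -> float:
    """Graph edit distance between a candidate graph and the reference graph.

    Smaller values mean the candidate is structurally closer to the reference.
    Exact graph edit distance is expensive, so this is only practical offline
    and for the fairly small graphs that CGP tends to produce.
    """
    return nx.graph_edit_distance(candidate, reference)

if __name__ == "__main__":
    # Hypothetical reference ("state of the art") graph.
    reference = nx.DiGraph([("in", "op1"), ("op1", "op2"), ("op2", "out")])

    # Hypothetical evolved candidates to compare against the reference.
    candidates = {
        "evolved_a": nx.DiGraph([("in", "op1"), ("op1", "out")]),
        "evolved_b": nx.DiGraph([("in", "op1"), ("op1", "op2"),
                                 ("op2", "op3"), ("op3", "out")]),
    }

    # Offline comparison: compute distances once, then rank candidates.
    distances = {name: graph_distance(graph, reference)
                 for name, graph in candidates.items()}
    for name, dist in sorted(distances.items(), key=lambda item: item[1]):
        print(f"{name}: graph edit distance to reference = {dist}")

The structural distance is only one possible signal; candidates that score well and sit close to the reference could then be inspected for the shared patterns mentioned above.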
Some suggestions that might help narrow down such a project:
a) Consider an alternative to "state of the art" as the reference measure: the term is ambiguous, since it could mean the best method within a particular application field, such as speech recognition or biological networks, or the best within a particular class of methods, such as stochastic algorithms.
b) Following the first point, the sheer number and variability of candidate algorithms could impose a real burden on the assessment. I would suggest breaking the space down into a hierarchy: supervised vs. unsupervised vs. semi-supervised, then stochastic vs. non-stochastic, and so on.
c) Using control flow graphs to characterise how the code executes is an interesting proposal. However, other elements matter as well, for example whether the algorithm performs distance minimisation or error minimisation; these parameters will also have to be defined (a sketch combining them with the hierarchy from point b) follows this list).
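As a rough sketch of points b) and c), each algorithm could be described by a small record that combines the hierarchy tags with objective-related and control-flow-graph attributes; all field names and example values below are my own assumptions, intended only to illustrate which parameters would have to be defined:

# Minimal sketch: a per-algorithm record covering the hierarchy (point b) and
# the extra elements such as the objective type (point c). Values are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Learning(Enum):
    SUPERVISED = "supervised"
    UNSUPERVISED = "unsupervised"
    SEMI_SUPERVISED = "semi-supervised"

class Objective(Enum):
    DISTANCE_MINIMISATION = "distance"
    ERROR_MINIMISATION = "error"

@dataclass
class AlgorithmProfile:
    name: str
    learning: Learning      # level 1 of the hierarchy in point b)
    stochastic: bool         # level 2 of the hierarchy in point b)
    objective: Objective     # the distance-vs-error element from point c)
    cfg_nodes: int           # simple control-flow-graph statistics
    cfg_branches: int

# Two hypothetical profiles: a reference algorithm and an evolved candidate.
reference = AlgorithmProfile("reference_alg", Learning.UNSUPERVISED,
                             stochastic=True,
                             objective=Objective.DISTANCE_MINIMISATION,
                             cfg_nodes=12, cfg_branches=3)
candidate = AlgorithmProfile("evolved_1", Learning.UNSUPERVISED,
                             stochastic=True,
                             objective=Objective.ERROR_MINIMISATION,
                             cfg_nodes=15, cfg_branches=5)

# Comparisons would only be made within the same branch of the hierarchy,
# which keeps the assessment burden from point b) manageable.
comparable = (reference.learning == candidate.learning
              and reference.stochastic == candidate.stochastic)
print("comparable:", comparable)

The exact set of attributes is of course open; the point is only that the graph-based comparison and these categorical parameters would sit side by side in whatever record we use for the offline analysis.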