As we know, there are already many EAs, such as GA, PSO, DE, ACO, and EP. However, new algorithms are still emerging, like biogeography-based optimization (BBO), chemical reaction optimization (CRO), the group search optimizer (GSO), the imperialist competitive algorithm, the firefly algorithm, and the artificial bee colony. Most of these newly emerged algorithms also claim to mimic some natural phenomenon.
Despite the criticisms from other fields, we have witnessed fruitful achievements and impressive progress in evolutionary computation. Compared with the past, many benchmark functions and some complex real-world optimization problems have now been solved well by EAs.
Do you think it is still meaningful to develop new evolutionary algorithms? What are the promising directions for EAs?
Dear Guohua Wu,
Yes, there are promising directions of EA research, BUT not based on metaphors for natural phenomena. All these new algorithms are probably NOT very useful in practice. What is needed is research into techniques which can improve optimization capabilities. There is an excellent article by Kenneth Sörensen (attached) which really explains this point.
Best regards, Thomas
Yes. I have worked extensively with at least two very good methods (particle swarm and differential evolution), but there are test functions that defy both (and others too). I made a proposal for a 'Host-Parasite Co-evolutionary Algorithm', but like the alternative algorithms it succeeds on some (numerous) functions and fails on others. For example, the DCS function defies all of them (for dimension > 8 or 10). There is a need for an algorithm that reliably escapes all local optima. There is none as yet.
Quick aside re: local optima ... I incorporate a "Bump" method into DE from my general evolution system (which includes DE). This method randomly 'bumps' values in random directions and can be tied to measures of search stagnation.
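For concreteness, the "bump" idea might be sketched roughly as follows; the function names, rates, and the stagnation test are my own illustrative choices, not the poster's actual implementation:

```python
import random

def bump(population, rate=0.2, scale=0.5):
    """Randomly 'bump' vector components in random directions.

    Illustrative sketch only: `rate` is the per-component probability of a
    bump, `scale` bounds its magnitude.
    """
    for vec in population:
        for i in range(len(vec)):
            if random.random() < rate:
                vec[i] += random.uniform(-scale, scale)
    return population

def maybe_bump(population, best_history, patience=20):
    """Tie the bump to a stagnation measure: bump when the best fitness
    has not improved over the last `patience` generations."""
    if len(best_history) > patience and \
            min(best_history[-patience:]) >= min(best_history[:-patience]):
        return bump(population)
    return population
```

Such a perturbation would typically be called once per generation, after the usual DE mutation/crossover/selection step.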
I developed a simple direct-search, gradient-based 3**n algorithm for general integer and mixed optimization problems that locates the global solution quickly, depending on n.
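The poster's exact algorithm is not given, but a common building block matching the "3**n" description is a greedy descent over the {-1, 0, +1}^n integer neighbourhood; this sketch is purely illustrative:

```python
from itertools import product

def local_search_3n(f, x):
    """Greedy descent over the 3**n integer neighbourhood {-1, 0, +1}^n:
    move to the first improving neighbour, repeat until none improves."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for step in product((-1, 0, 1), repeat=len(x)):
            cand = [xi + si for xi, si in zip(x, step)]
            if f(cand) < f(x):
                x, improved = cand, True
                break
    return x
```

On a separable convex quadratic such as `sum((xi - 3) ** 2 for xi in x)`, starting from the origin, this converges to the integer minimiser; the cost per sweep grows as 3**n, which is why performance depends strongly on n.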
The application of non-traditional methods is not limited to pure optimization problems. They have been applied to more complex problems such as social network analysis. We have been involved in the study of evolutionary algorithms for graph mining problems. From our experience, there is still scope for developing new approaches to solving real-world problems.
As Thomas Stidsen pointed out, the focus should be more on the robustness of the methodology in solving the problem than on spending more effort coming up with new methods.
I view the current trend of devising new methods without much theory or robustness study as actually detrimental to the field of metaheuristics.
Yes, if the new algorithms solve the known problems of the traditional EAs: problems such as the number of control parameters, sound formalization, lack of autonomy, etc. Please see an EA that has only two control parameters and whose evolution process is self-adaptive: http://www.sciencedirect.com/science/article/pii/S0895717710001421
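The linked paper's algorithm is not reproduced here, but self-adaptive evolution of the mutation strength can be illustrated with the textbook log-normal rule from evolution strategies (all names and defaults below are my own):

```python
import math
import random

def self_adaptive_mutate(x, sigma, tau=None):
    """Textbook log-normal self-adaptation (evolution strategies):
    the step size sigma is mutated first, then used to mutate x,
    so sigma evolves along with the solution and needs no external schedule."""
    tau = tau if tau is not None else 1.0 / math.sqrt(len(x))
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    new_x = [xi + new_sigma * random.gauss(0.0, 1.0) for xi in x]
    return new_x, new_sigma
```

Because sigma is inherited and selected together with the solution, the step size adapts to the landscape without the user tuning a decay schedule, which is the point of self-adaptation.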
Dear Guohua Wu,
Yes,
it is still meaningful that new algorithms in the field of evolutionary computation be developed (not necessarily evolutionary algorithms in the strict sense).
No,
it is not a wise idea to mimic natural behavior just for the sake of being close to nature. So, research in the direction of "new evolutionary computation paradigms" is hardly likely to be successful.
On the other hand,
I strongly believe that whenever a real-world problem or an application needs an improvement, or a solution at all, it is a good idea to consider the biological role model as ONE possible source of inspiration.
No guarantee, but there are numerous ways to solve problems that have been inspired by the biological role model.
I fully agree with Thomas Stidsen that the "new algorithms are probably NOT very useful in practice" in their pure, original shape.
But a careful investigation might lead to a re-design of some parts and yield a good, reliable and competitive basis for an algorithm.
e.g.: Particle Swarm Optimization, or C. Reynolds' Boids.
So, keep your eyes open for what biology demonstrates, give new algorithms a chance, but don't forget to compare the results with existing methods.
I wish you success,
and I am looking forward to seeing any new developments.
I guess your question can be slightly reformulated: "Do we need new evolutionary algorithms?"
A short answer is yes, in two senses. First, although EAs have been shown to be very successful in solving many test problems, even "hard" ones, this huge success in academic research has rarely been shared by industry. This clearly shows that new EAs are needed to convince industrial practitioners to employ EAs to solve their problems. So the new EAs to be developed should be able to address the main challenges in solving complex real-world problems.
New EAs may also be needed for understanding biological systems (natural evolution). Suppose, for example, you are interested in understanding why various nervous systems with very different structures have emerged. To this end, most existing EAs won't help. Again, new EAs are needed.
In short, a new EA in name is not important, but a new EA that can address a new challenge, either in real-world applications or in fundamental research, is in high demand.
Hans-Paul Schwefel, in his 1997 paper on 'the future challenges for EC' argues that the more an algorithm models natural evolution at work in the universe, the better it will perform (even in terms of function optimization).
In my opinion, we have many methods, but the mathematical support for them is very limited, and we need it. Here there is a big research area poorly covered up to now.
In my opinion, we still need to develop new evolutionary algorithms, but I think we need to change our methods and innovate in how we model population behavior, drawing inspiration from new models such as social phenomena. For example, in ODMA we present a new population operation that predicts better positions based on past positions. So I believe research on EAs is still open; we just need strong innovation and a properly inspired model to change the way things are done.
I participated in developing improvements to evolutionary algorithms (mainly PSO) as a result of trying to solve some engineering problems. Now I am in industry. There is much potential for EAs to solve problems that have simply been avoided up until now. The biggest roadblock to using all of this research is, as Luis Moreno points out, mathematical support and control over the situations where a particular EA fails. Industry needs to fully understand, mathematically, the limitations and performance constraints, and especially how to avoid problems with real data when using EAs. Both a better understanding of performance limitations and algorithms that inherently provide performance measures as feedback will always be welcome.
Dear Guohua Wu
Many methods, some of which you have mentioned (like ABC, FA, etc.), are claimed to be "new algorithms". In my opinion, calling them "new algorithms" is a misuse. First of all, these methods are usually very simple modifications of some general idea. Therefore there exist groups of methods which might be classified as GA or swarm optimization, but there is no class for Artificial Bee Colony (although many modifications of that method exist).
Proposing methods just because they mimic some natural phenomenon is at least doubtful from the scientific point of view. As one of the responders already mentioned, such methods are often not very useful in practice. In other words, a method will not be effective just because it mimics glowworms or any other creatures. And the number of creatures, physical processes, etc. available to mimic is, of course, enormous.
In my opinion, the most promising direction for EA development is methods which try to learn linkage during the run. This can be done in many different ways (from some point of view, DE uses a very simplified form of linkage learning during its run). If you are looking for an interesting direction in GAs, I may also propose how to effectively learn and use linkage information in a GA:
H. Kwaśnicka, M. Przewoźniczek, "Multi Population Pattern Searching Algorithm: a new evolutionary method based on the idea of messy Genetic Algorithm", IEEE Transactions on Evolutionary Computation, Vol. 15, No. 5, 2011, pp. 715-734.
Best regards
Let me answer the question in part: yes, research on optimization algorithms is still meaningful.
New problems and challenges appear in our world every day, and there exists no single, global solution for all of them (a fact related to the No Free Lunch theorems for optimization).
On the other hand, and as can be inferred from your question, many of us researchers wonder about the utility of so many methods resulting from different sources of inspiration. In my humble opinion, it is really interesting to look at several or many of those sources of inspiration, and at the corresponding interpretations that researchers develop, because there is always at least one idea or comment that is potentially advantageous for your future work. After all, the history of science is a history of trial-and-error and trial-and-success. And in any case, they help you gain a more general understanding of the field, which is always good.
What I personally think is worthy of criticism is the frequent, casual lack or disuse of well-grounded evaluation methodologies for those many methods. Nowadays, "it worked for us" is not enough, nor is "it beats the basic genetic algorithm (the one with the one-point crossover operator) on three functions". It is precisely in this direction that I conducted my research some time ago, but further efforts are still needed.
Dear Guohua Wu,
there is an excellent paper on the topic that has just been published, which gives a detailed overview of the state of the art of EAs in the context of multiobjective optimization problems in water resources. It also sets the scene and gives some ideas for future developments. Here is the citation:
Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J. & Kollat, J., "Evolutionary multiobjective optimization in water resources: The past, present, and future", Advances in Water Resources (35th Year Anniversary Issue), Vol. 51, 2013, pp. 438-456.
Regards,
Thomas
As new problems arise new tools are needed. So the development of new algorithms and software is not expected to stop in the foreseeable future.
Dear all, thanks very much for all your insightful comments and discussions.
From your comments, I can see that:
1) EAs are very useful and still deserve further investigation. New EAs could be beneficial, as we face a huge number of different real-world optimization problems, while according to the No Free Lunch theorems, no single EA is suited to all optimization problems.
2) Being analogous to natural phenomena can be very useful and give us new inspiration for designing new EAs, but it is not necessary; what matters most is an algorithm's performance in dealing with various optimization problems.
3) Mathematical interpretation and analysis of EAs are needed.
4) Learning linkage during the run of an EA can enhance its performance.
5) Well-grounded evaluation methodologies are needed to compare and evaluate different EAs.
6) Theory and robustness studies for EAs are needed.
I think your summary is excellent. From my perspective, points 3), 5) and 6) are the most relevant. I can see the appeal of devising a "new technique", but in my opinion acting on existing EAs (by combining and mixing them with other techniques, by improving their efficiency, or by trying to assess their weaknesses and strong points) would be incredibly more useful.
I agree with Thomas Stidsen that the "new algorithms are probably NOT very useful in practice". Indeed, many EAs do not perform well when used to solve real-world problems, in particular discrete optimization problems. As far as I know, no EA can outperform classical metaheuristics such as SA and TS. So improving EAs for discrete optimization problems to obtain better solutions is still a challenge.
@Defu Zhang: Dear Dr. Zhang, currently I focus my research on continuous optimization. I think PSO and DE were initially proposed for numerical optimization. Yes, I also think TS is one of the most efficient algorithms for discrete optimization. However, from my observations, GA and ACO can be as competitive as SA and TS on some discrete problems.
Yes, we still need new EAs or metaheuristics, provided they are as powerful as the top established EAs, as long as the No Free Lunch theorem still applies. I would say only the very best new methods deserve mention. The trouble is that we cannot ask all researchers publishing new EAs to statistically test and compare their methods against the really top EAs; thus we still see new EAs that merely mimic natural behaviors but are inefficient.
Personally, my main focus regarding EAs is on improving convergence rate and search consistency, and on applying them to real-world problems. We have proposed several hybrid algorithms for specific applications. We also introduced a local search, called an approximate gradient, for performance enhancement of EAs in truss optimisation.
IMO, hybridisation of existing EAs leading to derivative-free optimisers, as well as surrogate-assisted EAs, still needs investigation. Nowadays some researchers have moved on to so-called hyper-heuristics with the purpose of avoiding the No Free Lunch limitations that still apply to traditional EAs. This is also an interesting issue.
These are times when, on the edge of computing and robotics, programming is going to be replaced by meta-programming and later self-programming. That is, no one writes the program that "does it"; one merely writes the program that learns to "do it". In that area there are currently several strands of research in which machines learn more and more by themselves. Besides neural networks and other approaches, evolutionary algorithms are one of the major contenders. Balancing trial vs. error is IMHO the main path to interesting research; another could be the blending of various such learning approaches into more generalized learning automata.
Try to see things as they really are, and try to explain both their advantages and disadvantages. At some point we need good experimental designs to establish the basic behaviors of the various metaheuristic methods, and then serious statistical and mathematical analyses. If one can see the strengths and weaknesses of one specific method when applied to different classes of hard problems, one should see this for other proposed methods too. A newly designed method, or a hybrid one, that can solve an existing hard problem or give a significant improvement is worth the effort.
Evolutionary computation has recently trended toward estimation of distribution algorithms (EDAs), based mainly on the statistical properties of the behavior of each of the genes that determine the structure of a solution or individual. In this sense, Lagrangian evolution algorithms appear, where each gene of an individual is treated as a particle that should behave in a way similar to a vector field.
Before addressing your original question, I'd like to add here that many people seem to understand the NFL theorems for search and optimization slightly wrongly. Wolpert and Macready's theorem essentially states that no search/optimization algorithm is globally best when considering the space of *all possible functions* (including an infinity of random functions). There is nothing preventing an algorithm from being globally better than others for a subset of functions. A simple counter-example to the usual notion that "no algorithm can be better than another" would be a very simple algorithm that always returns the solution x* = [1,1,1,...,1], without so much as looking at the problem. While silly, this simple algorithm is globally best for the (infinite) subset of functions that have their optima at x* = [1,1,...,1]. There's no reason why we couldn't have a globally efficient (or at least nondominated) algorithm for solving, e.g., low-dimension continuous problems in electromagnetic design.
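The counter-example above can be made concrete in a few lines; `constant_algorithm` and `random_search` are illustrative names, and the evaluation budget and search domain are arbitrary choices:

```python
import random

def constant_algorithm(f, dim):
    """Return x* = [1, ..., 1] without ever evaluating f. Silly, yet
    unbeatable on the (infinite) subset of functions whose optimum
    happens to lie at [1, ..., 1]."""
    return [1] * dim

def random_search(f, dim, budget=100, seed=0):
    """Baseline: uniform random search over the grid {0, ..., 5}^dim."""
    rng = random.Random(seed)
    samples = (tuple(rng.randrange(6) for _ in range(dim)) for _ in range(budget))
    return list(min(samples, key=f))

# A function whose optimum lies at [1, 1, 1]: the constant algorithm is
# already optimal, while random search merely approximates the optimum.
f = lambda x: sum((xi - 1) ** 2 for xi in x)
assert f(constant_algorithm(f, 3)) <= f(random_search(f, 3))
```

On the complementary subset of functions (optimum anywhere else), the constant algorithm is of course hopeless, which is exactly what the NFL averaging argument says.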
More to the current point, I do not think there is much space for "new EAs", particularly in the way EA development is mostly done today - poor mathematics, forced metaphors, an almost complete disregard for specific problem characteristics or previous research, all followed by clumsy, non-replicable comparisons that would shame any statistician.
Despite this, I think there's still potential for interesting developments in the field, particularly in the development of domain-specific EAs. For instance, while most of the work currently using DE for routing problems yields ridiculously bad results, there is no reason why it can't do much better: with a suitable space representation (another frequently overlooked point), the development of specific operators that take advantage of the problem structure, and hybridization with local search heuristics known to work well for this specific problem domain, we have been getting some very promising results when compared with exact methods, other heuristics, and EAs.
I definitely agree with Felipe Campelo: The NFL theorem is magnificent from a mathematical standpoint, but utterly useless from a practical standpoint. It does, however, underline an empirical experience shared by most researchers: if your search algorithm is not specialized to a certain optimization problem, it will not perform very well. On the other hand, there IS room for solid research into EAs for specific problem types. Furthermore, there ARE examples of EAs which have achieved state-of-the-art results on well-known optimization problems. Unfortunately, these articles are drowned in an ocean of bad articles about EAs, which is probably the reason for the bad reputation EAs have in Operations Research circles.
We need mathematical proofs for the existing metaheuristic techniques; if you can work in that area, you will make a good contribution to science.
I agree with Felipe Campelo's and Thomas Stidsen's opinions. The NFL theorem is important from a mathematical point of view, but in my opinion it is not the main problem. You can have a wonderful algorithm, but if you don't formulate the problem (the function to optimize) properly, or you don't apply the proper parameters to the method, you get poor results.
Besides, most EAs don't handle noisy problems well, and this is extremely important in most practical applications.
Most work looks for new variations of methods, but the work on mathematical robustness is weak. For example, up to now there are very few solid results on how to determine the initial population.
I am reading the comments of Karam Sallam and Luis Moreno, and both ask for more and better theoretical understanding of EAs. The sad truth is that the theoretical foundation of EAs (and almost all metaheuristics) is very weak. To the best of my knowledge, the best place to start when looking for EA theory is the book "Genetic Algorithms - Principles and Perspectives" by Colin Reeves and Jonathan Rowe. But as Luis Moreno says, the theory is almost useless: the results of the proofs are very weak and typically hold only for very simple algorithms.
An interesting question is this: why is the theory so weak? Is it because of limited research? I doubt that, because I think many, like Luis Moreno, acknowledge the huge need for a better theory; hence many have probably tried, but not succeeded in developing it.
I do not know the answer to that question, but maybe our mathematical tools are not sufficiently advanced to prove theorems for standard EAs applied to real-world problems.
The sad thing is that until we have a better-developed theory, actually designing and developing an EA for a practical problem type is more an ART than a SCIENCE.
Agreed with the previous comments. I just want to make some clarifications about the NFL theorems.
The NFL theorems apply from a mathematical standpoint, considering the space of all possible functions or, more concretely, subsets of functions closed under permutation. Though this suggests researchers should move to designing EAs/algorithms for specific problem types, there is no evidence (IMHO) indicating that there is no room for the design of general-purpose algorithms for problems of practical interest in the real world. In fact, we found precisely such empirical evidence, showing that the design of (partially) general-purpose algorithms for problems of practical interest might be possible, which is not contrary to the NFL theorems.
Consider the following example: if you are given a new problem, which is supposed to have practical interest in the real world, do you expect a standard EA, PSO, DE, ACO, or (almost) anything else (not necessarily adjusted to the problem specifications, but to an intuitive belief about what is good and what is bad in the general field of optimisation) to perform better than random search?
- If your answer is yes, that is at least partial evidence that you believe in (partially) general-purpose algorithms for problems of practical interest.
- If your answer is no, which is fine, you could (not necessarily in this order) 1) apply random search to the problem as a rapid way to get an initial candidate solution (because its performance is expected to be exactly equal to that of the other algorithms), and 2) analyse the problem specifications mathematically and in depth in order to design a specialised algorithm.
Two further comments:
- The NFL theorems are not restricted to EAs but apply to all non-revisiting algorithms (there are other studies for revisiting algorithms).
- (Non-revisiting) hybrid algorithms do not escape the NFL theorems, because in the end they too are non-revisiting algorithms.
In my opinion, when developing novel EAs one must be able to provide a theoretical justification for doing so. Preferably it should be mathematically rigorous, but this is rarely done. On the other hand, there exists no rigorous definition of an "EA". In fact, it feels as if any iterative algorithm could be dressed up as an EA!
EAs can be applied successfully if one incorporates some sort of "intelligence". Despite some theoretical issues, it would be meaningful to think more carefully about what diversification could mean when you feed solutions into a pool. If diversification is measured by a simple metric based on objective function values, you might get lost (see the reputation issue mentioned earlier, e.g., by Thomas), but if diversification is measured, e.g., by means of structural properties, then even existing EAs can be advanced a lot.
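A pool-admission rule based on structural properties, as suggested above, could look like this for bit-string solutions; the function name and distance threshold are illustrative assumptions:

```python
def admits(pool, candidate, min_dist=3):
    """Structural diversification: admit a candidate into the pool only if
    it differs from every pool member in at least `min_dist` positions
    (Hamming distance), regardless of its objective value."""
    def hamming(a, b):
        return sum(ai != bi for ai, bi in zip(a, b))
    return all(hamming(candidate, member) >= min_dist for member in pool)

# With two antipodal 6-bit members, a candidate must sit far from both:
pool = [[0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]]
assert not admits(pool, [0, 0, 0, 0, 1, 1])  # distance 2 from the first member
assert admits(pool, [0, 0, 0, 1, 1, 1])      # distance 3 from both members
```

The point of the design is that admission ignores fitness entirely, so the pool cannot collapse onto many structurally identical solutions that merely share the same objective value.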
Instead of giving my own two cents on the original question, I would like to ask whether there are any serious comparative studies (accompanied by serious statistical analyses) of general-purpose metaheuristics (GA, SA, PSO, ACO, EA, DE, and even more recent and unproven approaches, etc.) applied to as wide a range of optimization problems as possible within the page limits of a single paper. Such a comparative study would be particularly useful for answering the question. From a limited study I have performed, it seems that the "new" additions to the EA field don't even come close to the power of their "older" ancestors...
Also, regarding metaheuristic approaches being better than random search: practical experience says that almost always (in fact, in my experience, always) any (meta)heuristic will perform better than random search, since almost any heuristic is at least a little intelligent and will explore the search space somewhat better than pure chance. Still, this says nothing about the power of metaheuristics.
Finally, regarding special-purpose heuristics for particular problems: in my experience, when it is possible to devise a tailor-made heuristic for your problem based on domain knowledge, the special-purpose heuristic will beat all black-box metaheuristic approaches by large margins (often orders of magnitude). But the problem is, what do you do when you don't have enough domain knowledge, or when the problem is too complicated to analyze to a degree that yields a reasonable heuristic? In these cases, metaheuristics (with sufficient fine-tuning) seem to be the only option...
In my viewpoint, the term "developing a new evolutionary algorithm" is imprecise. The real question is how we find a way of searching and matching that is new in the domain (taking inspiration from the behavior of God's creatures), to be used on similar problems in some broader classes, and that can be attributed directly to the mechanisms existing in the evolutionary algorithm model.
All my respected professors are requested to share their experience at https://www.researchgate.net/post/What_are_the_recently_invented_Evolutionary_algorithms_EAs_Optimization_method_Which_method_you_found_en_effective_method#share
Thank you.
I agree that mathematical understanding is of crucial importance. However, we should not forget that these algorithms are not very useful by themselves; they only become useful when applied to real-life problems. If better understanding is needed to get faster, better, more robust optimization, great. If trial and error and some lucky guess leads to a better algorithm, great. I use these algorithms all the time for complex multi-objective optimization problems, and my intuition tells me that we really are not there yet. Is intuition a good reason to do something? I think so. Is this science or art? Does it matter? The subtitle of the heavily used book Numerical Recipes is "the art of scientific computing". Even older is the famous series by Knuth named "The Art of Computer Programming". The point I am trying to make is that mathematical understanding is important, but an algorithm that works for practical applications should always be the main focus. If that is an art and not science, so be it.
The creation of a new EA depends on discovering new biological phenomena which can help in solving NP problems where exhaustive search is not possible. Not all metaheuristic algorithms are able, in general, to find the global optimum solution. If we look at most current EAs, we see attempts to improve them or to make them adaptive to the problems at hand (genetic programming, single-objective GA, multi-objective GA, honeybee algorithm, artificial bee colony, bee algorithm, virtual bee algorithm). So the easiest thing is to improve the existing EAs with respect to efficiency and performance.
The answer, in my opinion, depends strongly on what you want to get. Note, first, that almost all EA methods are substantially the same. The reason is the flexibility of the operators that you can design. For example, you can design (fairly simply) an ES that works just like an algorithm called COCO search (google it, you will find it :-))! Now the question becomes: what exactly does this COCO search add to our knowledge? If you observe a nice phenomenon in nature that can potentially do the task of optimization, the first thing you have to ask is why it could be helpful or different from the existing methods. What does it have that the others don't? Note that the answer is not "this one is the story of birds and the other is the story of fish". We are after clean and realistic intuitions OR proper mathematical analysis. About the growth: do you mean growth or improvement? If you mean growth, of course there has been growth. I mean, generating new optimization methods that are all almost the same IS growth; but as for substantial improvement, I deeply (and sadly) doubt it. Especially when it comes to real-world problems, people don't even have a clear understanding of what a real-world problem is at all. Let me give a final statement and then I encourage you to take a look at three papers I will point you to. The final statement is: the field needs substantial care, otherwise it is going to be eaten alive (and easily) by its competitors (operations research and mathematics). The reputation of computer science, and particularly EC, is very, very bad in these two communities. Also, note that anything (a field of study, a conference, a publisher, or a person) can survive only if it achieves successes that are NEEDED. Most researchers are just doing research to find problems that they CAN solve, but the field can be improved only if it offers solutions to the problems that NEED to be solved.
The three papers I mentioned are: "Quo Vadis, Evolutionary Computation", "The Emperor is Naked: Evolutionary Algorithms for Real-World Applications", and "The travelling thief problem: The first step in the transition from theoretical problems to realistic problems". Read them very carefully and let me know what you think.
BTW: my field of research is related to PSO (together with some other things) and its weaknesses in continuous spaces. I am writing a new paper, and let me tell you something: this field has been saturated by many, many articles that I would say are useless, and they have been published in good journals (I am not saying mine are good, as I need to publish to be able to defend :-)). One simple thing is changed in the method, it is put through some ridiculous tests, and if the answer was yes (it defeated an unknown method, taken from somewhere, with special thanks to the authors), it is concluded that "the proposed method worked better than other methods", and if the writer is very generous, s/he mentions "on the tested benchmarks". There is NO evidence whatsoever why we should even expect the proposed approach to be good, let alone rigorous analysis. It seems someone just came from space, whispered a new methodology in the writer's ear, s/he tested it and, bingo, it worked: let's have a paper, guys. That's infinitely harmful to the field :-(.
I think the answer is "yes".
All directions are promising, but in my opinion the areas of genetic programming and evolutionary programming can produce nice surprises.
I think that yes, it is meaningful and can be useful to:
- create new algorithms and bring innovation (what is still to be discovered may be better than what is already modeled);
- improve existing algorithms;
- apply existing algorithms to new domains.
Today's evolutionary algorithms reflect breeding rather than natural evolution, since the fitness function is pre-defined, and the resulting population belongs to one and the same species.
I would find it interesting to see a simulation of natural co-evolution of a whole ecosystem of different species (meaning that individuals belonging to different species can not mate with each other). Some species may live in symbiosis, some may compete for the same resources, and some may perhaps be predators or parasites. This would be interesting for studying the possibilities and limitations of biological evolution. It may potentially also be useful in the development of more complex self-learning systems, consisting of several interacting components.
There is always space for studying and developing new optimization algorithms. In fact, each particular problem frequently requires a particular algorithm that needs to be developed. Empiricism (or a heuristic method) is a procedure for solving a particular problem, not the particular problem itself; it skips fundamental stages of the scientific method. Naturally, I recommend the scientific method (which obviously everybody knows), and that requires deep knowledge not only of the problem itself but also of the theoretical basis of the technique to be used (rather than just using existing codes).
My intuition is that the more successful algorithms are those which mimic the process being optimized at a deeper level, in the same way as the representation (encoding) of the problem at the chromosomal level is crucial to the traditional GA. With the No Free Lunch theorems in mind, I see every reason why we should seek newer methods and let them compete with each other in the spirit of evolution.
Mohammad reza Bonyadi pointed to very good articles. These articles give a very good answer to the questions above.
I think it is definitely useful to design new algorithms and improve upon existing ones. I think the directions that are especially promising are the ones that can solve new problems, or solve problems in a way that exploits more knowledge about the problems.
The example I am most familiar with, as it touches upon my own research, is multi-objective evolutionary algorithms (MOEAs). MOEAs are relatively new, and can solve problems in which there is not just one but several objectives to optimize for, which covers a very large class of real-world problems (see, e.g., the book on applications of MOEAs edited by Coello Coello and Lamont from 2004). Furthermore, the most recent algorithms can optimize for many objectives, which is considerably harder than just a few.
The nice thing about this development is that MOEAs can even be used to make single-objective optimization more efficient. When you have one objective but can define "helper objectives" (objectives that are correlated with your main objective and provide, e.g., some detailed information or a heuristic that steers the solution in the right direction), this can help with optimization. Multiobjectivization, as this is called, has been shown to be effective on several problems.
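A minimal sketch of multiobjectivization, assuming minimisation and a user-supplied helper heuristic (both function names are illustrative):

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a dominates b iff a is no worse
    in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def multiobjectivise(f, helper):
    """Pair the true objective f with a correlated helper objective,
    turning a single-objective problem into a bi-objective one."""
    return lambda x: (f(x), helper(x))

# Example: minimise x**2, with |x| as a (perfectly correlated) helper.
g = multiobjectivise(lambda x: x * x, abs)
assert g(-2) == (4, 2)
assert dominates(g(1), g(-2))
```

An MOEA would then rank candidates by dominance on the pair `(f, helper)` instead of by `f` alone, letting the helper provide extra gradient information where `f` is flat or deceptive.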
Hello, Wu! It depends on your problem and point of view. In my opinion, it is still meaningful to develop new evolutionary algorithms. I may suggest you read one of my papers, called "Parallelization of a Modified Firefly Algorithm using GPU for Variable Selection in a Multivariate Calibration Problem". I think that after you read it, you will have some notion about your issue.
I think self-adaptation is a good research direction for EAs in general...
Just as a followup, I strongly recommend reading Kenneth Sorensen's new paper "Metaheuristics: the metaphor exposed" (http://onlinelibrary.wiley.com/doi/10.1111/itor.12001/abstract) as a sobering treatment of the whole "a novel nature-based algorithm" phenomenon...