What is the relationship between the standard deviation and the robustness of compared algorithms in evolutionary algorithms? Can anyone recommend some papers about this relationship? I would be very appreciative.
In order to do a fair comparison of an optimization algorithm with other algorithms, statistics such as the mean, maximum, and minimum over multiple runs are reported.
In the case of evolutionary algorithms, when the obtained objective function values are acceptable, a small standard deviation indicates that the algorithm is more robust: it is able to reproduce the solution with minimal discrepancy and has less dependency on the initial population.
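As a minimal sketch of how these run statistics are typically computed (the objective values below are made up purely for illustration):

```python
import statistics

# Hypothetical best objective values from 10 independent runs
# of a minimization algorithm (illustrative numbers only).
run_results = [0.012, 0.015, 0.011, 0.014, 0.013,
               0.012, 0.016, 0.011, 0.013, 0.014]

mean_val = statistics.mean(run_results)        # average performance
best_val = min(run_results)                    # best run (minimization)
worst_val = max(run_results)                   # worst run
sd_val = statistics.stdev(run_results)         # sample standard deviation

print(f"mean={mean_val:.4f}, best={best_val:.4f}, "
      f"worst={worst_val:.4f}, SD={sd_val:.4f}")
```

A small SD here would mean the runs cluster tightly around the mean, i.e. the algorithm reproduces roughly the same solution regardless of the random initial population.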
You can find many articles in good journals such as Information Sciences and Evolutionary Computation. I have attached one article which presents extensive tables comparing various evolutionary algorithms.
Short answer: when an optimization algorithm produces acceptable results across different runs, the lower the SD, the more robust and reliable the algorithm is. A low SD means that the algorithm produces nearly the same answer in different runs.
The standard deviation should always be as small as possible, and this criterion should be applied to all the algorithms being compared. Kindly refer to statistics books for further clarification. So in your case, a small standard deviation and variance indicate a more robust algorithm.
>>> Short answer: the lower the SD, the more robust and reliable the algorithm is. A low SD means that the algorithm produces nearly the same answer in different runs.
I do not fully agree with the first sentence.
A robust algorithm should be able to systematically yield a satisfactory output across different runs for a given problem. Hence, robustness does, to some extent, imply a small SD.
However, the opposite is not necessarily true, because an algorithm can systematically yield a poor result with a small SD. For example, if the problem has a global optimum that is completely isolated inside a sea of comparatively much poorer local optima, each of which has approximately the same score, then a non-robust algorithm may still converge systematically to one of these local optima, yielding the same poor score across runs with a small SD. Consequently, a small SD is not always a good indication of the robustness/reliability of the algorithm.
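This situation is easy to illustrate with made-up numbers: below, a hypothetical Algorithm A gets stuck in the same poor local optimum every run (small SD), while Algorithm B is noisier but finds far better solutions (assume minimization; all scores are fabricated for the example).

```python
import statistics

# Illustrative best scores from 8 runs of two minimization algorithms
# on the same problem (invented data, not from any real benchmark).
algo_a = [10.1, 10.0, 10.2, 10.1, 10.0, 10.1, 10.2, 10.1]  # low SD, poor mean
algo_b = [1.2, 0.8, 2.5, 0.9, 1.7, 1.1, 2.0, 0.6]          # higher SD, good mean

for name, scores in [("A", algo_a), ("B", algo_b)]:
    print(f"Algorithm {name}: mean={statistics.mean(scores):.2f}, "
          f"SD={statistics.stdev(scores):.2f}")

# Algorithm A has the smaller SD, yet every one of its runs is far
# worse than any run of Algorithm B: a small SD alone does not make
# an algorithm robust.
```

Judging by SD alone would rank A as the more "robust" algorithm, even though B dominates it on every single run; this is exactly why the SD must be interpreted together with the mean (and the acceptability of the scores themselves).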
I really appreciate your answer. Yes, you're right, I agree with you. But in the majority of cases in evolutionary algorithms, it has been shown that a lower SD indicates the robustness of an algorithm.
A small SD rather reflects the consistency of the algorithm in achieving approximately the same score across different runs for a given problem. Robustness further requires that the score be satisfactory (or acceptable, as mentioned above by Dr. Behrouz Ahmadi-Nedushan).
I do agree that, usually, the mean score is not poor when the SD is small. Hence, usually, "consistency" directly translates to "robustness". However, this is not always true as we cannot exclude particular situations for which an algorithm can consistently yield the same poor score across different runs. In that case, consistency would hardly translate to robustness.
The Genetic Algorithm is one type of evolutionary algorithm, based on Charles Darwin's theory of evolution. I have problems analyzing the performance of this method using the average fitness value and standard deviation. Does somebody have references they can share to support me with this, or suggestions for other indices to analyze the performance of the Genetic Algorithm?
Reading the responses provided so far, I think there is a degree of confusion about whether lower standard deviations directly translate to more "robust" algorithms. Robustness, when used in the context of evolutionary optimization, is a term that refers to the overall collective performance of the algorithm (best solution, mean, SD, and number of function evaluations). SD, or algorithmic stability, is only one factor of many. I would agree with H.E. Lehtihet in saying that lower standard deviation values do not automatically mean robust performance, since the EA could be consistently yielding subpar local-optimum results.
I would strongly advise anyone reading this response to consider this fact before writing their conclusions.