The multi-objective genetic algorithm employed can be considered an adaptation of NSGA-II. It is applied to a newly formulated scheduling problem and tested on a set of test problems designed for this purpose.
I suggest you take a look at a review paper in which we deal with this question in detail. Long story short: 1) hypervolume and 2) the unary epsilon indicator. http://joc.journal.informs.org/content/20/3/451
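To make the first of those indicators concrete, here is a minimal sketch (mine, not taken from the paper) of how the hypervolume of a two-objective minimization front can be computed, assuming a user-chosen reference point that is worse than every solution in both objectives:

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. a reference point.
    `front` is a list of (f1, f2) objective vectors, assumed mutually non-dominated."""
    hv = 0.0
    prev_f2 = ref[1]
    # Sweep the points by increasing f1; f2 then decreases along the front,
    # and each point contributes a rectangle not yet covered by the previous ones.
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)], ref=(5.0, 5.0)))  # 11.0

A larger hypervolume means the front both converges better and covers more of the objective space; for more than two or three objectives you will want a dedicated algorithm or library rather than this naive sweep.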
If you are feeling adventurous, Empirical Attainment Functions are also interesting.
I think that Kai-Simon's answer is already complete. I can only add that, if the Pareto set is unknown, probably the best way to proceed is to test your approach on the same problem against other "established" Multi-Objective algorithms.
If I remember correctly, the SHARK library has implementations for both NSGA-II and MO-CMAES: http://shark-project.sourceforge.net/
There is also a Linear Genetic Programming tool that has a multi-objective mode, although it is quite simple (classic computation of fronts): http://ugp3.sourceforge.net/
I am part of the team of the latter, so I might be biased towards it ;-)
As a related issue, look into Ordinal Optimization: Soft Optimization for Hard Problems (with Zhao Qian-Chuan and Jia Qing-Shan), Springer, September 2007, Chapter IV.
I agree with the fellows above. But, considering that GAs are stochastic and you want to test the performance of your algorithm, I believe you have to set up a test suite that runs your algorithm n times for each parameter set, with n as large as you can afford. After the tests, using the metrics mentioned by the others, you will have enough statistics to answer your question. If the results do not lead you to a solid conclusion, then the improvement you proposed is not relevant.
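A rough sketch of such a test harness, assuming hypothetical run_baseline / run_my_variant functions that each perform one independent run and return a quality metric (e.g. the hypervolume of the final front), 30 runs as a placeholder budget, and SciPy for the significance test:

import statistics
from scipy.stats import mannwhitneyu  # non-parametric test, appropriate for stochastic results

N_RUNS = 30  # placeholder; use as many independent runs as your budget allows

# run_baseline and run_my_variant are hypothetical: each performs one
# independent run (different random seed) and returns a quality metric
# such as the hypervolume of the final non-dominated front.
baseline  = [run_baseline(seed=s)   for s in range(N_RUNS)]
candidate = [run_my_variant(seed=s) for s in range(N_RUNS)]

print("baseline :", statistics.mean(baseline),  "+/-", statistics.stdev(baseline))
print("candidate:", statistics.mean(candidate), "+/-", statistics.stdev(candidate))

# A significance test tells you whether the observed difference
# is more than run-to-run noise.
stat, p = mannwhitneyu(candidate, baseline, alternative="greater")
print("p-value:", p)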
There are two main ways to measure the quality of the non-dominated fronts obtained by a multi-objective algorithm:
1) Metrics for convergence (hyper-volume, set coverage, etc.)
2) Diversity metrics (Deb's spacing metric, etc.)
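For the diversity side, here is a minimal sketch of one common formulation of the spacing metric: the standard deviation of each point's distance to its nearest neighbour on the front, so values close to zero indicate an evenly spread front.

import math

def spacing(front):
    # Manhattan distance from each point to its nearest neighbour on the front.
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    mean_d = sum(d) / len(d)
    # Standard deviation of these nearest-neighbour distances.
    return math.sqrt(sum((di - mean_d) ** 2 for di in d) / len(d))

print(spacing([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]))  # 0.0: perfectly even spread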
I suggest reading the paper by Crina Grosan, "Performance metrics for multiobjective optimization evolutionary algorithms" (you can find it by searching on Google). In this paper you will find a brief but interesting summary of this topic.
Read the performance evaluation metrics used for the CEC competitions in 2007 and 2009. All the metrics used there try to measure both the diversity and the convergence of a multi-objective optimizer. The idea, for the benchmark functions used in the competitions (for which the Pareto front is known), is to measure the average distance between the Pareto approximation points obtained by an optimizer (after a given number of function evaluations) and the true Pareto front.
I suggest reading the reports of those competitions, since you can take test functions and evaluation metrics already implemented in C and MATLAB code.
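As an illustration of that average-distance idea (the exact formulas used in the competitions are specified in their reports), a generational-distance-style measure can be sketched as follows, given a sampled version of the true front:

import math

def generational_distance(approx_front, true_front):
    # Average Euclidean distance from each obtained point to the nearest
    # point of the (sampled) true Pareto front; lower is better.
    return sum(min(math.dist(p, q) for q in true_front)
               for p in approx_front) / len(approx_front)

print(generational_distance([(1.1, 3.9), (2.2, 2.1)],
                            [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]))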