There are several ways to validate your solution method. One of the best is to apply a DOE-type statistical analysis and then ANOVA to capture the variation produced in the solutions.
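To make the idea concrete, here is a minimal sketch of the ANOVA step in Python, assuming you have repeated runs of your solver under a few DOE configurations; the run values below are illustrative placeholders, not real data.

```python
# A minimal sketch of the ANOVA idea: test whether mean objective values
# differ across three illustrative solver configurations. The run results
# below are made-up placeholders; replace them with your own replications.
from scipy import stats

config_a = [101.2, 99.8, 100.5, 102.1, 100.9]   # objective values, 5 runs
config_b = [98.7, 99.1, 97.9, 98.4, 99.0]
config_c = [100.1, 101.4, 99.9, 100.8, 101.0]

f_stat, p_value = stats.f_oneway(config_a, config_b, config_c)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A small p-value suggests the configuration (the DOE factor) explains
# a significant part of the variation in the solutions.
```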
The above answer is correct, but it depends on the problem; e.g., optimizing transport routes is different from optimizing warehouses, and there are cases where you may run sample experiments.
The general answer is: you either find exact solutions or you have upper or lower bounds to check against.
For validating your new model, there are several benchmark functions, both constrained and unconstrained, which are widely used in the literature. You may refer to them and solve (optimize) those problems using your new method. These problems are considered NP-hard and are accepted as complex optimization problems. To mention a few constrained problems (from the mechanical engineering field, of course): the welded beam design problem, the pressure vessel design problem, the spring design problem, and so forth. For unconstrained problems, there are the Ackley function, the Shuffle function, the Hyper-Sphere function, etc. Obviously, in case of an improvement, your proposed model may be considered a new optimizer.
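As an illustration, here is a minimal sketch of one of these unconstrained benchmarks, the Ackley function, under its standard parameter values a = 20, b = 0.2, c = 2π; its known global minimum is 0 at the origin, which gives you a reference point for your optimizer.

```python
# A minimal sketch of a widely used unconstrained benchmark, the Ackley
# function (global minimum 0 at the origin). Evaluating your optimizer on
# such functions lets you compare against results reported in the literature.
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    x = np.asarray(x, dtype=float)
    d = x.size
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x**2) / d))
    term2 = -np.exp(np.sum(np.cos(c * x)) / d)
    return term1 + term2 + a + np.e

print(ackley([0.0, 0.0]))   # ~0.0, the known optimum
print(ackley([1.0, -1.5]))  # a non-optimal point
```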
Sorry, I mistakenly gave a wrong answer to your question; I thought you said a new metaheuristic method. In your case, you can validate your obtained solution through a comparative study (if any prior solution exists). If no one has solved it, no problem: just solve it with three or four optimization methods and compare the results. Certainly an experimental study would be better, but we all know that experiments are sometimes impossible, time-consuming, and expensive. In summary, optimize your mathematical model using different optimizers and then compare the obtained results. You may use GA, PSO, SA, etc.
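A hedged sketch of such a comparative study, using SciPy's differential evolution and dual annealing as stand-ins for GA/SA-style optimizers; the objective function here is only a placeholder for your own model.

```python
# Run several stochastic optimizers on the same objective and compare the
# best values found. 'objective' is a placeholder for your own model.
import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

def objective(x):
    # Placeholder objective; substitute your mathematical model here.
    return np.sum(x**2) + 10.0 * np.sin(x[0])

bounds = [(-5.0, 5.0)] * 3

res_de = differential_evolution(objective, bounds, seed=1)
res_sa = dual_annealing(objective, bounds, seed=1)

print("DE best:", res_de.fun)
print("SA best:", res_sa.fun)
# Close agreement across independent methods raises confidence in the
# solution; repeat with several seeds before drawing conclusions.
```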
DOE analysis is suitable for small problems. For larger problems, the optimal solution cannot be obtained by an exact method. How can one verify the algorithm for larger problems?
The relaxed solution is a good idea, but it can only be used with a tight model. If the model is not tight, the lower or upper bound may be far from the optimal solution. What is your view on this?
It is better to avoid heuristic methods as far as possible. With heuristic methods you do get results, of course, but heuristic solutions are always data-dependent, so a strong mathematical conclusion is never possible in such cases. Take, for example, clustering algorithms: all of them are heuristic and therefore not fully mathematical. A certain algorithm may work well for a certain data set, but which algorithm is better than another cannot be decided in general. Therefore, use heuristics only when there is no alternative left.
Use the mathematical model as a specification for your algorithm. You might have to reformulate the model for this purpose. Then verify that the results calculated by the algorithm satisfy the model. See http://baber.servehttp.com/Books/Books.html for information on my relevant books and to download them free of charge. Where the books use the term "program" or "computer program", substitute the word "algorithm" for your purpose.
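For instance, here is a minimal sketch of this verification step, assuming (purely for illustration) that the model imposes classic assignment constraints; substitute the constraints of your own model.

```python
# Check the algorithm's output against every constraint of the mathematical
# model. The assignment-style constraints below are only an illustrative
# stand-in for whatever your model actually requires.
import numpy as np

def satisfies_model(X, tol=1e-9):
    """Check a 0-1 assignment matrix against sum_i X[i,j] = 1 and
    sum_j X[i,j] = 1, the classic assignment constraints."""
    X = np.asarray(X, dtype=float)
    binary = np.all((np.abs(X) < tol) | (np.abs(X - 1) < tol))
    rows_ok = np.allclose(X.sum(axis=1), 1.0, atol=tol)
    cols_ok = np.allclose(X.sum(axis=0), 1.0, atol=tol)
    return binary and rows_ok and cols_ok

print(satisfies_model(np.eye(3)))        # True: a valid assignment
print(satisfies_model(np.ones((3, 3))))  # False: violates the sum constraints
```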
In the future, use the mathematical model as a specification for an algorithm and then design the algorithm to satisfy that specification. Designing something from a specification is usually easier and better than putting it together by trial and error (mostly error), then trying to prove that it satisfies a specification, discovering in the process that it doesn't, and then modifying it (again by trial and error, mostly error) until it does. See my previous response to your question.
Algorithmic correctness. Sometimes achieving a proof of correctness is difficult; if so, verifying correctness experimentally is another option. You want to show that the algorithm computes what you are in fact modelling. This can differ from domain to domain: for example, formulating a combinatorial object may differ from modelling a traffic system in terms of analysis. A proof of correctness is the goal for any algorithm.
If you can solve the mathematical model to optimality, then you can directly compare the solutions of both methodologies. When the problem is large and you can only run the linear programming relaxation of your mathematical model (if it is linear), then it provides a lower (upper) bound on the optimum if you are minimizing (maximizing) the objective function, and hence a lower (upper) bound on the heuristic solution as well. The relative gap is regularly used to measure the difference, as sketched below.
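A small sketch of that gap computation for a minimization problem; the bound and heuristic values here are illustrative.

```python
# Relative gap between a heuristic solution and the LP-relaxation bound,
# for a minimization problem. The numbers are illustrative placeholders.
def relative_gap(heuristic_value, lower_bound):
    """Relative gap (in %) between a heuristic solution and a lower bound."""
    return 100.0 * (heuristic_value - lower_bound) / abs(lower_bound)

lp_bound = 950.0    # objective of the LP relaxation (lower bound)
heuristic = 1000.0  # objective of the heuristic solution
print(f"gap = {relative_gap(heuristic, lp_bound):.2f}%")  # 5.26%
# A small gap certifies the heuristic is near-optimal even when the exact
# optimum is unknown, since the optimum lies between the two values.
```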
The best way to check the model is to implement it and test it on different benchmark instances. Your objective in verifying the model is to see whether it gives a correct and feasible solution or not. You can test it on any known instances of the problem, but benchmark instances are required for checking optimality.
Non-parametric statistical tests have emerged as an effective, affordable and robust methodology for the evaluation of new meta-heuristic and evolutionary algorithm proposals, reaching great popularity in the literature. The design of experiments is a task of paramount importance in the development of new proposals. The validation of new algorithms often requires the definition of a comprehensive experimental framework, including a range of problems and state-of-the-art algorithms. The critical part of these comparisons lies in the statistical validation of the results, contrasting the differences between methods.
I suggest you take a look at the Friedman test, the Aligned Friedman test and the Quade test (see the sketch after the references below). Here are some important references you might want to study:
[1] S. García, D. Molina, and F. Herrera, "A tutorial on using nonparametric statistical tests for multiple comparisons of metaheuristics and evolutionary algorithms," in MAEB, 2012, pp. 455–462.
[2] D. J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, 5th ed. Chapman and Hall/CRC, 2011.
[3] M. Friedman, “The use of ranks to avoid the assumption of normality implicit in the analysis of variance,” Journal of the American Statistical Association, pp. 674– 701, 1937.
[4] M. Friedman, “A comparison of alternative tests of significance for the problem of m rankings,” Annals of Mathematical Statistics, pp. 86–92, 1940.
[5] J. L. Hodges and E. L. Lehmann, "Rank methods for combination of independent experiments in analysis of variance," Annals of Mathematical Statistics, vol. 33, pp. 482–497, 1962.
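As a quick illustration of the Friedman test mentioned above, here is a minimal sketch using SciPy, where each list holds one algorithm's results over the same set of benchmark problems; the numbers are placeholders.

```python
# Friedman test [3,4]: each row position is one benchmark problem, each
# list one algorithm's best objective value on that problem. The values
# below are illustrative placeholders, not real experimental results.
from scipy import stats

alg_a = [0.12, 1.30, 0.05, 2.10, 0.90]   # algorithm A on 5 problems
alg_b = [0.10, 1.10, 0.07, 1.95, 0.85]
alg_c = [0.20, 1.55, 0.09, 2.40, 1.10]

stat, p_value = stats.friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
# If p is small, at least one algorithm ranks differently from the others;
# follow up with post-hoc comparisons as in reference [1].
```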
So, as mentioned before, you have to work on strengthening the linear programming relaxation of the model before launching into a branch-and-bound (or branch-and-cut) framework.
From the meta-heuristic standpoint: we sometimes allow infeasible solutions in order to promote diversification. However, you must keep strong control over this feature, for instance by penalizing infeasibility. Furthermore, you also need to correctly assess feasibility in your problem; this is the reason for my previous comment.
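A hedged sketch of the penalization idea, where the objective and the constraint are illustrative placeholders and the penalty weight is an assumption to be tuned:

```python
# Infeasible candidates are kept in the search, but their objective is
# worsened in proportion to the constraint violation. 'penalty_weight'
# and the constraint below are assumptions for illustration only.
import numpy as np

def penalized_objective(x, penalty_weight=1e3):
    f = np.sum(x**2)                       # placeholder objective (minimize)
    violation = max(0.0, 1.0 - np.sum(x))  # placeholder constraint: sum(x) >= 1
    return f + penalty_weight * violation ** 2

print(penalized_objective(np.array([0.6, 0.6])))  # feasible: no penalty
print(penalized_objective(np.array([0.1, 0.1])))  # infeasible: penalized
```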
I want to give you some concrete advice on your question.
Let us consider a concrete problem, e.g. your paper "Effective heuristics and meta-heuristics for the quadratic assignment problem with tuned parameters and analytical comparisons" (authors: Mahdi Bashiri and Hossein Karimi, Journal of Industrial Engineering International, 2012, 8:6). I placed your model description into the file "Model.doc" (see attachment) as Figure 1 and Figure 2 (the model alone, in larger type for readability).
Advice 1:
In my view, there is a small error in the model description: constraints (2) and (3) must read "sum (1,n) X[i,j] = 1", not just "sum (1,n) X[i,j]" (the "= 1" is missing).
Let us consider your TS (Tabu Search) algorithm as the "most excellent method in computational time". Consider the concrete instance "Lipa80b" (Figure 3 in "Model.doc") with n = 80, runtime = 0.8 s, and RPD (Relative Percentage Deviation) = 23.55%, where RPD = 100% * (TS_OFV - Exact_OFV) / Exact_OFV, TS_OFV is the result of the TS algorithm, Exact_OFV = 7763962 is the optimal result, and OFV stands for Objective Function Value. You set the limit on the number of TS iterations to max = 1000.
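For clarity, the RPD formula above can be computed as follows; 'ts_ofv' here is an illustrative placeholder rather than a value taken from the paper.

```python
# RPD computation as defined above, using the Lipa80b optimum quoted in
# the text; 'ts_ofv' is a placeholder for the value a TS run returns.
def rpd(ofv, exact_ofv):
    """Relative Percentage Deviation of a heuristic OFV from the optimum."""
    return 100.0 * (ofv - exact_ofv) / exact_ofv

exact_ofv = 7763962          # optimal OFV for Lipa80b (from the text)
ts_ofv = exact_ofv * 1.2355  # illustrative value giving RPD = 23.55%
print(f"RPD = {rpd(ts_ofv, exact_ofv):.2f}%")
```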
Question 1:
Can the parameter 'max' influence the TS result?
Can you get a better TS result for max = 10000, 20000, 30000, etc? E.g., can you get RPD