I want to check the sensitivity of the ranking produced by the VIKOR method, where the weights have been calculated through AHP. How can a sensitivity analysis of this integrated AHP-VIKOR methodology be performed? Are there any tools?
You can change the artificial weights obtained by AHP, and it will indeed alter the ranking; however, this is fictitious, since alternatives should not be selected by changing criteria weights but by considering actual criteria values. My suggestion is to determine real criteria weights using entropy; then, yes, altering their values will influence the ranking.
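If you want to try entropy-based weights, here is a minimal sketch in Python (NumPy). The decision matrix is purely illustrative and all criteria are assumed to be benefit-type:

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives (rows) x 3 criteria (columns),
# all criteria assumed to be benefit-type (larger is better).
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
    [275.0, 32.0,  8.0],
])

# Normalize each column so its entries sum to 1 (the usual entropy normalization).
P = X / X.sum(axis=0)

# Shannon entropy of each criterion, with the 1/ln(m) constant so it lies in [0, 1].
m = X.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)

# Degree of diversification: criteria whose values differ more across
# alternatives carry more information and therefore receive larger weights.
d = 1.0 - entropy
weights = d / d.sum()

print("entropy per criterion:", np.round(entropy, 4))
print("entropy weights:      ", np.round(weights, 4))
```

Note that these weights come from the dispersion of the actual performance values, not from subjective pairwise comparisons.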
How can the ranking obtained from the VIKOR method be justified? The ranking varies from method to method: for instance, TOPSIS presents one ranking, AHP gives another, and VIKOR produces yet another. So, how can we draw a conclusion from this?
With your experiment comparing results using the same data, the same criteria weights and different MCDM models, and your more than logical question, you gave me further motives and reasons to maintain what I have been promoting here on RG as well as in papers and international conferences, the most recent one two weeks ago in Ottawa, Canada, sponsored by the International MCDM Society.
Bluntly, as I mentioned at that conference, the way sensitivity analysis is performed nowadays, irrespective of the MCDM model used, produces wrong results that are, in addition, almost useless. You illustrated it very well: using three different methods and the same weights, you got three different rankings. This is inconceivable.
However, the models are not to be blamed for this. The problem lies in using weights that are not appropriate, or, better said, in using criteria weights at all. They should not be used, or at least only weights that are relevant and true, not artificial, should be used.
You really put your finger on the wound.
Your observation is very valuable because it clearly shows the inadequacy of using criteria weights to perform a sensitivity analysis, irrespective of the method used.
As I said before, changing criteria weights, which are artificial, indeed changes the ranking, simply because it is an arithmetic operation.
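A toy illustration of this arithmetic point, with invented, already-normalized scores: swapping two weight vectors reverses a simple additive ranking, with no change at all in the alternatives themselves.

```python
import numpy as np

# Toy example: two alternatives, two criteria, values already normalized to [0, 1].
scores = np.array([
    [0.9, 0.2],   # alternative A
    [0.4, 0.8],   # alternative B
])

for w in (np.array([0.7, 0.3]), np.array([0.3, 0.7])):
    total = scores @ w                 # simple additive weighting
    ranking = np.argsort(-total)       # indices of alternatives, best first
    print(f"weights {w} -> totals {np.round(total, 2)} -> ranking {ranking}")
```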
Regarding why the ranking changes across different MCDM models even when using the same weights, it is due, in my opinion, to the fact that the weights enter each algorithm differently, involving not only multiplications but also additions and subtractions.
For instance, in ELECTRE you use these weights when comparing two alternatives on one criterion, and the alternative that is at least as good receives a credit equal to that criterion's weight (concordance matrix).
In the same model, the discordance matrix is built from the differences between performance values.
Both matrices are then treated as dominance matrices, compared, and a ranking is obtained.
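A minimal sketch of those two matrices, with invented data and weights and assuming benefit-type criteria, could look like this:

```python
import numpy as np

# Illustrative data: 3 alternatives x 3 benefit criteria, plus assumed weights.
X = np.array([
    [7.0, 9.0, 4.0],
    [8.0, 5.0, 6.0],
    [6.0, 7.0, 8.0],
])
w = np.array([0.4, 0.35, 0.25])
n = X.shape[0]

C = np.zeros((n, n))   # concordance: sum of weights of criteria where i is at least as good as k
D = np.zeros((n, n))   # discordance: largest normalized amount by which i is worse than k

scale = X.max(axis=0) - X.min(axis=0)   # per-criterion range used to normalize differences

for i in range(n):
    for k in range(n):
        if i == k:
            continue
        C[i, k] = w[X[i] >= X[k]].sum()
        D[i, k] = np.max((X[k] - X[i]).clip(min=0) / scale)

print("concordance matrix:\n", np.round(C, 2))
print("discordance matrix:\n", np.round(D, 2))
```

Notice that the weights enter only the concordance side, which is one way the same weight vector can act very differently here than in other methods.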
In the PROMETHEE method, each pair of alternatives is compared on each criterion using a preference function, and the resulting preference degrees are multiplied by the criteria weights.
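A minimal sketch of that weighted pairwise comparison, assuming the simple "usual" preference function (1 if strictly better, 0 otherwise) and invented data:

```python
import numpy as np

# Illustrative data: 3 alternatives x 2 benefit criteria, with assumed weights.
X = np.array([
    [7.0, 9.0],
    [8.0, 5.0],
    [6.0, 7.0],
])
w = np.array([0.6, 0.4])
n = X.shape[0]

pi = np.zeros((n, n))   # aggregated preference of alternative i over k
for i in range(n):
    for k in range(n):
        if i == k:
            continue
        pref = (X[i] > X[k]).astype(float)   # "usual" preference function per criterion
        pi[i, k] = (w * pref).sum()          # weighted aggregation

phi_plus = pi.sum(axis=1) / (n - 1)    # leaving flow
phi_minus = pi.sum(axis=0) / (n - 1)   # entering flow
phi = phi_plus - phi_minus             # net flow used by PROMETHEE II to rank

print("net flows:", np.round(phi, 3))
print("ranking (best first):", np.argsort(-phi))
```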
For the TOPSIS method you don't need to use weights, and most probably, if you do, there will be no change, because if you multiply each performance value by a weight, the same increase applies to all performance values in that criterion and their relative differences remain constant. Because TOPSIS is based on Linear Programming, these weights are irrelevant.
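For reference, here is a standard TOPSIS sketch with vector normalization and distances to the ideal and anti-ideal points; the data and weights are invented and all criteria are assumed to be benefit-type:

```python
import numpy as np

# Illustrative data: 3 alternatives x 3 benefit criteria, with assumed weights.
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
])
w = np.array([0.5, 0.3, 0.2])

R = X / np.sqrt((X ** 2).sum(axis=0))     # vector normalization
V = R * w                                 # weighted normalized matrix

ideal = V.max(axis=0)                     # positive ideal point (benefit criteria assumed)
anti = V.min(axis=0)                      # negative ideal point

d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)  # relative closeness to the ideal

print("closeness:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness))
```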
In VIKOR, the best and the worst performance values are found for each criterion. Then, for each alternative, the difference between the best value and its performance value on each criterion is computed. This difference is divided by the difference between the best and worst values, and the resulting ratio is multiplied by the criterion weight.
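A minimal VIKOR sketch following those steps, with invented data and weights, benefit-type criteria assumed, and the usual compromise index Q with v = 0.5:

```python
import numpy as np

# Illustrative data: 3 alternatives x 3 benefit criteria, with assumed weights.
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
])
w = np.array([0.5, 0.3, 0.2])
v = 0.5                                   # weight of the "group utility" strategy

f_best = X.max(axis=0)                    # best value per criterion (benefit assumed)
f_worst = X.min(axis=0)                   # worst value per criterion

# Weighted, normalized distance of each alternative from the best value.
ratio = (f_best - X) / (f_best - f_worst)
S = (w * ratio).sum(axis=1)               # group utility measure
R = (w * ratio).max(axis=1)               # individual regret measure

Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

print("S:", np.round(S, 3), " R:", np.round(R, 3), " Q:", np.round(Q, 3))
print("ranking by Q (best first, lower Q is better):", np.argsort(Q))
```

A simple way to check sensitivity is to perturb the weight vector w by small amounts, rerun this computation, and record whether the ordering by Q changes.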
In AHP, each alternative is compared with the others on each criterion, and the resulting preference value is multiplied by the corresponding criterion weight.
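For completeness, here is a minimal sketch of how the AHP criteria weights themselves are usually derived from a pairwise comparison matrix via the principal eigenvector; the comparison matrix below is hypothetical:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty scale): entry a_ij says how
# much more important criterion i is than criterion j, with a_ji = 1 / a_ij.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal-eigenvector priorities (the standard AHP derivation of weights).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights = weights / weights.sum()

# Consistency ratio, using Saaty's random index for n = 3.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
RI = 0.58
print("AHP weights:", np.round(weights, 3))
print("consistency ratio:", round(CI / RI, 3))
```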
As you can see, because of the different ways criteria weights enter each algorithm, it would be a 'miracle' if the rankings coincided, and in my opinion it does not depend on the nature of the weights, because even if you use entropy-generated weights, the result will probably be the same.
This is one of the reasons why different models, treating the same problem with the same data, yield different results, and it is also the reason why I believe that weights should not be used in MCDM. Of course, this means assuming that all criteria have the same importance, which is not realistic. However, if we run all the models without using weights and analyze the final results, it is likely that these will coincide. At this point the DM's opinion, expertise and know-how can be put to work: because of his knowledge and experience, he could be inclined to assign weights to certain criteria, but he would be doing this based on actual results, not on blind preferences.
Linear Programming works without any weights, so perhaps it could be considered the technique delivering the most suitable solution: not the optimal one, but one that satisfies the DMs.
Again, I want to point out that TOPSIS, considered one of the best models, as well as DEA (Data Envelopment Analysis), widely recognized for determining optimal efficiencies, are both based on Linear Programming. Is this a coincidence? I doubt it.
Your first question was: how can the ranking obtained from the VIKOR method be justified?
I believe that the ranking obtained by any method can be justified by the method itself. Again, the use of criteria weights without any mathematical foundation obviously alters the outcomes, producing results that are not comparable.
As a proof, the SIMUS method, based on Linear Programming and using no weights, utilizes two completely different approaches (simple additive weighting, similar to SAW, and outranking, similar to PROMETHEE), both starting from a Pareto-efficient matrix and producing two identical rankings.