Instead of comparing the rankings of different MCDM methodologies, it is better to compare their processes: how the weights were identified, how many decision-makers were involved. Each methodology has its own way. If you compare the processes, you may end up with some insight into the results or the rankings.
If you compare the processes, you will end up choosing none, because all of them are based on subjectivity, sometimes absurd, as in pair-wise comparisons of criteria.
Of course, you can use objective weights such as those from entropy, but other subjectivities remain, such as choosing a threshold, a veto, or a distance measure among several.
In so doing, these methods distort reality, each in a different way. How can you then expect similar rankings?
In my opinion, the only valid methods are those that work with quantitative and qualitative performance values, the latter coming from experts, statistics, polls, and surveys. Those coming from experts must be based on reasoning, experience, and know-how, not on personal opinions or intuitions.
Once a result is reached, the DM, using his/her expertise and know-how, and based on solid and realistic results, can correct, eliminate, improve, or simply reject the result, or replace the best alternative because it is strongly subject to criteria that have little or no possibility of varying; but he/she cannot modify the initial data, as is done today.
Since you compare AHP and TOPSIS, and I guess you are using in the latter the weights from AHP, it is obvious, at least to me, that the comparison is biased, since you are feeding TOPSIS with the output of AHP.
The sole criterion I use to select a method is whether it satisfies a set of rational properties (think proto-axioms). It has been proven that the Borda Count is the only method that satisfies all of the following (a minimal implementation sketch appears after the list):
1. Complete transitive outcome: the procedure produces a transitive outcome from the judges' transitive rankings (transitive means that if A is preferred to B, notated A≻B, and B≻C, then A≻C).
2. Pareto condition: if all voters prefer A to B, then the outcome of the method also has A≻B.
3. Unrestricted domain: each voter may select any strictly transitive ranking of the alternatives.
4. Intensity of Binary Independence (IBI), also called the Intensity form of Independence of Irrelevant Alternatives (IIIA): the overall ranking of any two alternatives is determined by each voter's relative ranking of them and the intensity of that ranking.
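To make the Borda Count concrete, here is a minimal sketch in Python; the three judges' rankings and the usual n-1-points-for-first scoring convention are illustrative assumptions, not data from this thread:

```python
# Minimal Borda Count sketch. The judges' rankings below are illustrative;
# the scoring convention gives n-1 points to first place and 0 to last.
from collections import defaultdict

def borda(rankings):
    """rankings: strict orderings, each a list of alternatives, best first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, alt in enumerate(ranking):
            scores[alt] += n - 1 - position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

judges = [["A", "B", "C", "D"],
          ["B", "A", "D", "C"],
          ["A", "C", "B", "D"]]
print(borda(judges))  # -> [('A', 8), ('B', 6), ('C', 3), ('D', 1)]
```

Note how every position in every judge's ranking contributes to the score, which is the intuition behind property 4.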
The attached article explores the broad applicability of Borda.
The attached spreadsheet is an example of how TOPSIS and Borda yield dramatically different rankings. I have not seen the theoretical foundation on which TOPSIS relies and have posted a question seeking justification for selecting TOPSIS over Borda.
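For readers without the spreadsheet, this kind of divergence is easy to reproduce. Below is a minimal sketch, assuming a toy 4x3 decision matrix, equal weights, benefit-only criteria, and a textbook TOPSIS (vector normalization), next to a criterion-wise Borda count on the same data; none of these figures come from the attached spreadsheet:

```python
# Textbook-style TOPSIS next to a criterion-wise Borda count on toy data.
import numpy as np

def topsis_ranking(matrix, weights):
    m = np.asarray(matrix, dtype=float)
    v = (m / np.linalg.norm(m, axis=0)) * weights   # weighted, vector-normalized
    ideal, anti = v.max(axis=0), v.min(axis=0)      # ideal / anti-ideal points
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return closeness.argsort()[::-1]                # alternative indices, best first

def borda_ranking(matrix):
    m = np.asarray(matrix, dtype=float)
    ranks = np.argsort(np.argsort(-m, axis=0), axis=0)  # 0 = best per criterion
    scores = (len(m) - 1 - ranks).sum(axis=1)           # each criterion "votes"
    return scores.argsort()[::-1]

X = [[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]]
w = np.full(3, 1 / 3)
print("TOPSIS:", topsis_ranking(X, w))  # the two orders need not coincide
print("Borda: ", borda_ranking(X))
```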
First of all, it appears that you are not aware that Saaty himself, the creator of AHP, was against using fuzzy logic with AHP because, according to him, AHP is already fuzzy.
Second, if you use AHP in conjunction with TOPSIS, COPRAS, or any other method, the result is biased, since you are using the partial output of AHP as their input and then comparing them.
Third, are you sure that you can use AHP in battery storage when, most probably, the different criteria are related, in which case you cannot use AHP?
In my humble opinion, the most important aspect in selecting an MCDM method is to analyze how well it can model and solve the scenario at hand.
I don't think that Borda can invalidate Arrow's theorem, as asserted in one of the papers that you suggested.
If we translate Arrow's theorem to the MCDM field, which in most cases works with subjective and capricious estimates due to the preferences or opinions of the DM, we will without a doubt find that there is indeed a dictatorship, as Arrow says, because decisions are taken that affect people without their being consulted. For me, it is difficult not to agree with his theorem.
As I understand it, Borda deals with rankings without considering the means, methods, or assumptions that were used to reach a result.
It is possible to compare the rankings of the two MCDM methods using the new WS ranking similarity coefficient developed by Wojciech Sałabun and Karol Urbaniak:
Sałabun, W., & Urbaniak, K. (2020, June). A new coefficient of rankings similarity in decision-making problems. In International Conference on Computational Science (pp. 632-645). Springer, Cham.
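For what it is worth, the coefficient is straightforward to compute. The sketch below follows the formula as I read it in the paper (rankings expressed as position vectors, x being the reference whose top positions are weighted most heavily); please verify against the original before relying on it:

```python
# Sketch of the WS rankings-similarity coefficient, as I read the formula
# in Sałabun & Urbaniak (2020); check against the paper before relying on it.
def ws_coefficient(x, y):
    """x, y: positions (1..n) of the same alternatives in two rankings;
    x is the reference ranking, whose top positions weigh most heavily."""
    n = len(x)
    return 1 - sum(
        2 ** (-xi) * abs(xi - yi) / max(abs(xi - 1), abs(xi - n))
        for xi, yi in zip(x, y)
    )

print(ws_coefficient([1, 2, 3, 4], [2, 1, 4, 3]))  # ~0.625
```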
Thank you for your reference to the paper authored by Urbaniak and Sałabun.
It is not my intention to challenge these recognized colleagues, but to ask a simple question:
Why are so many researchers interested in determining the closeness of two rankings? What for? What could they extract from that?
Assume four alternatives A1, A2, A3, and A4 subject to a set of criteria. We use, say, four different MCDM methods and obtain the best alternative as well as the corresponding rankings (R). The four rankings are different: 2-1-4-3, 4-3-2-1, 2-4-1-3, and 3-1-4-2.
Suppose that using Spearman we get for R1 and R3 a rho of about 0.89, a high correlation indeed, but what does it mean? That the two of them have a large similitude. And what do we do with this information? Nothing.
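For reference, such correlations can be checked directly; here is a minimal sketch for the four hypothetical rankings above, written as position vectors and using scipy:

```python
# Pairwise Spearman's rho for the four hypothetical rankings above,
# written as position vectors (entry i is the position of alternative Ai).
from itertools import combinations
from scipy.stats import spearmanr

R = {"R1": [2, 1, 4, 3], "R2": [4, 3, 2, 1],
     "R3": [2, 4, 1, 3], "R4": [3, 1, 4, 2]}
for (a, ra), (b, rb) in combinations(R.items(), 2):
    rho, _ = spearmanr(ra, rb)
    print(a, b, round(rho, 2))
```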
If we knew the real or true best value and ranking, as the paper assumes (something that of course we don't know), we could compare the closeness of each ranking to the real ranking and decide which is best. However, in this case WE WOULDN'T NEED ANY MCDM METHOD, because we would already know the best. Consequently, what is the utility of determining whether a large rho exists between two different rankings when neither a YARDSTICK nor a BENCHMARK exists?
In the conclusion, the authors state that 'the main contribution of the paper is a proposal of the new coefficient of the rankings similarity'.
I reckon that this method may be very important for defining the similarity between two rankings, and it is indeed a valuable contribution, but again, what is that information good for?
It could be that I am missing or ignoring some important consequence. If that is the case, I would appreciate it if you or the authors could explain its relevance to the selection of the best alternative.
I am not saying that it is wrong; I just want to understand its importance in MCDM scenarios.
From my point of view, instead of comparing two different methods at the end of the day, it is better to justify the MCDM technique you are going to use for your problem. It is not logical to compare the results of two MCDM techniques, as they give you different types of results. Of course, you can have experts in the field of your case validate the final ranking for you; however, comparing two completely different techniques would not be a good idea.
I am in complete agreement with you, especially when you say that it is necessary to 'justify the technique you are going to use'. This is so obvious that I don't understand how so many practitioners fail to realize it.
A DM can compare two methods provided that both can model and solve a problem, but not if they differ, not in their respective algorithms, which we assume are sound, but in their structure. For instance, he can compare the rankings of method A and method B on a certain problem if both admit independent or dependent criteria; but if A can work with both while B works with only one, we are obviously comparing bananas with pears. Unfortunately, this is done constantly when comparisons are made between TOPSIS or PROMETHEE and AHP. We cannot rationally expect correct results, only perhaps coincidence, which is not very mathematical indeed.
Therefore, the first principle to analyze is: do the methods share the same attributes?
As another example: can you compare an MCDM method that works only with a linear hierarchy to another that works with a network? We can only guess the reasons for which Dr. Saaty created AHP and, years later, invented ANP. Probably because, when he developed AHP, it suited the military scenarios he was working on, since they have a linear hierarchical structure, as old as the world.
In my opinion, companies initially adopted it because they too had been working in a linear hierarchical structure for centuries; but that was changing, and companies switched to a matrix structure. It appears that even after this shift they continued using the method. Of course, we can't blame AHP, for it was used for something it was not designed for.
Perhaps, years later, Saaty recognised that his method was being improperly used by many companies that did not have a hierarchical structure, and he then developed ANP, which adopts the more appropriate network approach.
However, I disagree with your last paragraph. There are no experts who can validate a result, because neither they nor anybody else knows the 'true' ranking. They can agree on opinions, on reasoning, on know-how, but that does not amount to validation. In addition, if two groups of experts reach different conclusions, which is the right one?
In my humble opinion, the solution could be to compare each ranking with a ranking determined by the evolution of the set of criteria across the alternatives in the original decision matrix, provided that it contains reliable values, and without any weights or human assumptions, that is, REALITY.
The reasoning behind this suggestion is that the evolution of the real data (the criteria) is based on the alternatives, while the alternative rankings are based on the criteria; that is, there is mutual information between them, a very well-known concept in statistics and information theory.
In this way, each ranking can be compared with the 'original ranking' using the Kendall Tau correlation coefficient, and thus the selected ranking should be the one with the highest Tau.
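A minimal sketch of one possible reading of this proposal follows; the aggregation rule used to build the 'original ranking' (an unweighted sum of min-max-normalized benefit criteria), the toy decision matrix, and the two hypothetical method rankings are my assumptions, not a specification from this thread:

```python
# One possible reading of the proposal above: derive a reference ranking
# from the raw decision matrix (no weights), then compare each method's
# ranking to it with Kendall's tau. The aggregation rule and all data
# below are illustrative assumptions.
import numpy as np
from scipy.stats import kendalltau

X = np.array([[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]], dtype=float)
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # min-max, benefit criteria
reference = (-norm.sum(axis=1)).argsort().argsort() + 1       # positions 1..n, 1 = best

method_rankings = {"method_A": [1, 2, 3, 4],   # hypothetical MCDM outputs,
                   "method_B": [1, 3, 2, 4]}   # written as position vectors
for name, ranking in method_rankings.items():
    tau, _ = kendalltau(reference, ranking)
    print(name, round(tau, 2))  # select the ranking with the highest tau
```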
This is only an idea. I would very much like to hear the opinions of my colleagues on RG.