Proposal for an objective binary benchmarking framework that validates each other for comparing MCDM methods through data analytics

Mahmut Baydaş1 , Tevfik Eren1 , Željko Stević2 , Vitomir Starčević3 and Raif Parlakkaya

I have read your article, and my comments are as follows:

1- In the abstract you say “However, because the algorithms of MCDM methods are different, they do not always produce the same best option or the same hierarchical ranking.”

In my opinion this is debatable. If we use mathematical algorithms on the same data and with the same aim, results and rankings should be equal or very similar, because there is only one mathematics. It is like saying that 3^2 should give different results depending on the algorithm used to solve it: multiplication (3 × 3 = 9) or logarithms (antilog(2 × log 3) = 9).
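The arithmetic point can be checked in a few lines of Python: the route (direct multiplication versus logarithms) differs, but the result does not.

```python
import math

# 3^2 computed two ways: direct multiplication, and via logarithms
# (the antilog of 2 * log 3). Different algorithms, same answer.
direct = 3 * 3
via_logs = 10 ** (2 * math.log10(3))

print(direct)               # 9
print(round(via_logs, 10))  # 9.0
```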

Results differ because each method introduces weights and assumptions that have no mathematical support: subjective weights in AHP, thresholds in PROMETHEE and ELECTRE, false assumptions as in the BW method, the choice of distance in TOPSIS, etc.
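The effect of subjective weights is easy to illustrate. The sketch below is a hypothetical example (invented matrix and weights, a plain weighted sum): the same data produce a different winner when only the weights change.

```python
# Hypothetical illustration: the same decision matrix, ranked by a simple
# weighted sum, flips its winner when the (subjective) weights change.
# Two alternatives, two benefit criteria already on a common 0-1 scale.
matrix = {"A": [0.9, 0.2], "B": [0.3, 0.8]}

def rank(weights):
    scores = {alt: sum(w * v for w, v in zip(weights, vals))
              for alt, vals in matrix.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank([0.7, 0.3]))  # ['A', 'B'] -> A wins when criterion 1 dominates
print(rank([0.3, 0.7]))  # ['B', 'A'] -> B wins when criterion 2 dominates
```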

2- On page 2 “Simply put, MCDM methods can be compared based on their ability to relate to real life. This brings to mind the naturally occurring sequences in real life”

100% in agreement. This is fundamental: if an MCDM method is unable to model a certain reality in a problem, it is useless, at least for that problem.

3- On page 2 “The factors that differentiate the ranking results of MCDM methods are the normalization type, assumptions, limitations and threshold value, along with different calculation procedures”

And subjectivity, possibly the most important, though not in the calculation procedures, because there you are using a universal tool: mathematics. The same mathematical principles are used to build a house, an airplane or a car, and even to compute financial performance. An example is entropy, which uses the same principles as thermodynamics, or the design of blades in wind turbines, which follows the same aerodynamic principles used in aeronautics. This is science, very different from guessing and intuition.
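As a concrete instance of that objectivity, Shannon entropy weighting derives criterion weights from the data alone, with the same formula used in thermodynamics and information theory. The sketch below uses an invented 3×3 decision matrix; criteria whose values vary more (carry more information) receive more weight.

```python
import math

# Sketch of Shannon entropy weighting on a small hypothetical matrix.
# Rows are alternatives, columns are criteria.
matrix = [[7, 9, 9],
          [8, 7, 8],
          [9, 6, 8]]

m, n = len(matrix), len(matrix[0])
col_sums = [sum(row[j] for row in matrix) for j in range(n)]
p = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]

# e_j = -(1/ln m) * sum_i p_ij * ln p_ij ; weight_j proportional to 1 - e_j
k = 1 / math.log(m)
e = [-k * sum(p[i][j] * math.log(p[i][j]) for i in range(m)) for j in range(n)]
d = [1 - ej for ej in e]
weights = [dj / sum(d) for dj in d]

# The near-uniform third column gets the smallest weight.
print([round(w, 3) for w in weights])
```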

4- On page 3 “In this study, an “output-based” solution obtained with “data analytics” is proposed as an alternative to a classical “input-based” methodological solution.”

Agreed. In other words, you promote, as I do, the ‘bottom-up’ approach instead of the ‘top-down’ approach, which is even supported by common sense.

5- On page 3 “To gain a more robust insight, a separate comparison metric will be proposed based on the RR findings produced by the MCDM methods”

Are you deleting or adding companies during the study? That is the only thing that can produce RR.

6- On page 3 “On the other hand, comparative analyzes of the methods show that none of the MCDM methods are perfect”

I fail to see the relationship between comparing methods and perfection. No method is perfect.

7- On page 6 “Since normalization distorts the original data in the first decision matrix, it violates the principle of independence from irrelevant alternatives (PIIA)”

Normalization in most forms consists of dividing each performance value by a number derived in various ways: the sum of the values, the largest value, the vector norm, etc. If all performance values in a criterion are divided by the same factor, why is there a distortion, since the relative importance between values is not altered? The performance values for all alternatives are divided by the same number. Where is the violation of independence here?
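The point is easy to verify numerically. The sketch below takes an invented criterion column and checks that sum, max and vector normalization, each being division by a single constant, all leave the relative order of the alternatives unchanged.

```python
# If every value in a criterion column is divided by the same constant
# (sum, max, vector norm, ...), the relative order of the alternatives
# in that column cannot change.
col = [4.0, 9.0, 6.0, 2.0]

by_sum    = [v / sum(col) for v in col]
by_max    = [v / max(col) for v in col]
by_vector = [v / (sum(x * x for x in col) ** 0.5) for v in col]

def order(values):
    # Indices of the alternatives sorted from smallest to largest value.
    return sorted(range(len(values)), key=values.__getitem__)

assert order(col) == order(by_sum) == order(by_max) == order(by_vector)
print(order(col))  # [3, 0, 2, 1] -> same ranking under every normalization
```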

9- On page 6 you mention a procedure based on the Spearman correlation to measure RR.

It appears attractive, but I have my doubts, because the alternatives that are added or removed combine in hundreds or thousands of relationships, and I do not think they can be condensed using a simple measure like correlation. I base my presumption on the fact that removing only one alternative may have a large effect on the ones remaining; imagine removing several at the same time!
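A small sketch of the mechanism under discussion, with invented data: a weighted sum normalized by the column maxima re-ranks the surviving alternatives once one alternative is removed (because the maxima change), and Spearman's rho is then computed between the two rankings of the survivors. This is my own toy construction, not the paper's procedure.

```python
# Hypothetical rank-reversal demo with a max-normalized weighted sum,
# followed by Spearman's rho between the before/after rankings.
matrix = {"X": [10, 1], "A": [6, 6], "B": [5, 7], "C": [2, 3]}
weights = [0.5, 0.5]

def ranking(alts):
    # Normalize each criterion by its column maximum, then weight and sum.
    maxima = [max(alts[a][j] for a in alts) for j in range(2)]
    scores = {a: sum(w * v / m for w, v, m in zip(weights, alts[a], maxima))
              for a in alts}
    return sorted(scores, key=scores.get, reverse=True)

before = ranking(matrix)                                        # with X
after = ranking({a: v for a, v in matrix.items() if a != "X"})  # X removed

# Spearman's rho over the alternatives present in both rankings.
common = [a for a in before if a != "X"]
r1 = {a: i for i, a in enumerate(common)}
r2 = {a: i for i, a in enumerate(after)}
n = len(common)
d2 = sum((r1[a] - r2[a]) ** 2 for a in common)
rho = 1 - 6 * d2 / (n * (n * n - 1))

print(before, after, rho)  # ['B', 'A', 'X', 'C'] ['A', 'B', 'C'] 0.5
```

Removing X changes the maximum of the first criterion, which swaps A and B: a rank reversal that a single rho of 0.5 only partially captures, consistent with the doubt raised above.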

Remember also that either removing or adding alternatives changes the dimensionality of the system, generating different scenarios, since each alternative defines a different space; consequently, the common space containing all feasible solutions of the problem also changes, and therefore the ranking can be dramatically different. In Linear Programming, for instance, you can have hundreds of iterations in which an alternative is deleted but immediately replaced by another that was not in the former solution. In this way you can have thousands of changes, but the number of dimensions is always the same. In my humble opinion, this is one of the reasons why there is no RR in LP.

10 - On page 8 “Accordingly, we used representatives of all MCDM types in this study, including the new methods.”

This is inexact. I do not see in your paper that you used methods like Linear Programming, Goal Programming, Solver (in Excel), LINGO or SIMUS, all based on LP and using different systems, which form the other large classification. All your methods use personal and thus subjective appreciations, something that does not happen in LP.

I hope this helps

Nolberto Munier
