Dear Bartłomiej Kizielewicz, Jakub Więckowski, Andrii Shekhovtsov, Jarosław Wątróbski, Radosław Depczyński, Wojciech Sałabun,

I have read your paper

Study towards the time-based MCDM ranking analysis – a supplier selection case study

These are my comments:

1- On page 1 you mention an ‘optimal solution’; however, there is consensus that in multi-criteria problems an optimal solution does not exist, since you cannot have the maximum benefit and the minimum cost at the same time. It is one or the other; that is why MCDM methods aim at finding a compromise solution, one that tries to reach a balance between the different demands.

2- On page 3 you say: “What is more, in the case of testing the MCDA methods’ performance, it is worth determining the extent to which the obtained rankings are similar to each other”

Is it worthwhile? Why? There is not a single reason to justify this very common line of thinking. Knowing that does not contribute to a reliable result, simply because we do not know what the true result is.

3- On page 4: “AHP and TOPSIS [28], VIKOR [29], ANP [23] or DEMATEL [24] methods proved their effectiveness in the problems of supplier selection and evaluation”

How do you know that they are effective? Just because they deliver a solution? That is what they were designed for. Where is their connection with reality, which is unknown?

4- On page 4: “Both simple fuzzy developments of the MCDA methods based on the triangular or trapezoidal form of membership function (fuzzy AHP…)”

May I remind the authors that Saaty said that fuzzy logic cannot be applied to AHP, since it is already fuzzy? And obviously he was right.

5- On page 5: “The aim was to check the impact of the used method to received rankings and answer how MCDA rankings vary”

And what do you learn from that? Why do you think that such knowledge is useful? I have asked this same question in many, many papers and got no answer… It appears that the similarity of rankings impresses many people, but nobody can say why.

6- On page 5: “The preferences for the set of alternatives are then being calculated based on the obtained rule base”

Fine, and what is that rule?

7- On page 8: “This method is used in the sensitivity analysis of solutions obtained by different MCDA methods”

You are talking about equal weights, and I do not think this is correct. You know that in most sensitivity analyses the criterion with the maximum weight is chosen for variation, something that has no mathematical support, although it is intuitive. It is also incorrect to use only one criterion for variation (the ceteris paribus principle), as the sketch below illustrates.
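To make the point concrete, here is a minimal sketch in Python (the weight vector is hypothetical, not taken from the paper) showing that perturbing a single weight and renormalizing necessarily moves every other weight as well, so a strict ceteris paribus variation of one criterion is impossible:

```python
# Minimal sketch: perturbing one weight and renormalizing changes
# ALL the weights, so ceteris paribus cannot hold.
# The weight vector is hypothetical, not taken from the paper.
weights = [0.40, 0.30, 0.20, 0.10]  # criterion weights, summing to 1

perturbed = weights.copy()
perturbed[0] += 0.10  # vary only the maximum-weight criterion

# Renormalize so the weights sum to 1 again
total = sum(perturbed)
perturbed = [w / total for w in perturbed]

print(perturbed)
# -> [0.4545..., 0.2727..., 0.1818..., 0.0909...]
# Every weight has changed, not only the one that was perturbed.
```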

8- On page 8, formula (14) for entropy represents the entropy of a criterion, but not its mean. You have to divide it by ln(n) to calculate the mean entropy of a criterion, as written below. The same applies to formula (15).
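For clarity, the correction I mean is the standard normalization; a sketch in the usual notation (n alternatives, p_ij the normalized value of alternative i under criterion j; the symbols are mine, not necessarily the paper’s):

```latex
% Normalized (mean) entropy of criterion j, with n alternatives.
% Dividing by \ln(n) bounds E_j to the interval [0, 1].
E_j = -\frac{1}{\ln(n)} \sum_{i=1}^{n} p_{ij} \ln(p_{ij})
```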

9- On page 9 “On its basis, a set of 53 potential supplier evaluation criteria was identified”

Where are they? You only show 8.

10- As I understand it, in Figure 2 you compare, month by month, the Spearman correlation between three methods using entropy-derived values. But in each month you get three rw coefficients, and my question is: between which methods? For instance, what does 0.25 for COMET mean? It is a low correlation relative to which of the two other methods? See the sketch below.
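With three methods there should be three pairwise coefficients per month, each labeled with the pair it compares. A minimal sketch of what I would expect, using the ordinary Spearman coefficient and invented rankings (neither is data from the paper):

```python
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical monthly rankings from three MCDA methods
# (invented for illustration, not taken from the paper).
rankings = {
    "TOPSIS": [1, 2, 3, 4, 5],
    "VIKOR":  [2, 1, 3, 5, 4],
    "COMET":  [5, 4, 1, 2, 3],
}

# Three methods -> three pairwise coefficients, each one
# labeled with the two methods it compares.
for (name_a, rank_a), (name_b, rank_b) in combinations(rankings.items(), 2):
    rho, _ = spearmanr(rank_a, rank_b)
    print(f"{name_a} vs {name_b}: rho = {rho:.2f}")
```

A single number attached to one method, such as 0.25 for COMET, is meaningless without naming the second method in the pair.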

11- On page 16: “research clearly shows significant variability of the month-to-month suppliers of rankings”

Sorry, I don’t understand the meaning of this sentence. Could you please clarify?

As a bottom line, what is the conclusion of this extensive analysis? And mainly, how can it detect the best ranking, which is the main concern?

I hope these comments may be useful.

Nolberto Munier
