
Dear Bartłomiej Kizielewicz, Andrii Shekhovtsov, Jakub Więckowski, Jarosław Wątróbski, and Wojciech Sałabun,

I read your paper:

The Compromise-COMET method for identifying an adaptive multi-criteria decision model

My comments

1-In the abstract you say “which identifies the adaptive decision mode based on many normalization techniques and finds a compromise solution”

And how do you find a unique compromise solution by comparing rankings from several methods?

Are you looking for a composite ranking? But even if you get one, what is it good for? The fact that you got a composite ranking is of no use, because you do not know which is the real ranking.
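To illustrate my question: if a composite ranking is what is meant, a Borda-type aggregation is the usual device. A small sketch with invented rankings (not taken from the paper) shows that it merely produces yet another ranking:

```python
# Hypothetical example: three MCDM methods rank four alternatives A1..A4.
# A Borda-style count is one common way to merge them into a composite.
rankings = {
    "method_1": ["A1", "A2", "A3", "A4"],
    "method_2": ["A2", "A1", "A4", "A3"],
    "method_3": ["A1", "A3", "A2", "A4"],
}

# Borda count: the alternative in position p (0-based) among n gets n - 1 - p points.
scores = {}
for order in rankings.values():
    n = len(order)
    for position, alt in enumerate(order):
        scores[alt] = scores.get(alt, 0) + (n - 1 - position)

composite = sorted(scores, key=scores.get, reverse=True)
print(composite)  # the composite ranking, best first
```

The output is simply a fourth ranking; nothing in the procedure tells us whether it is closer to the "real" ranking than any of its three inputs.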

2 – Page 3 “Effective resource utilization is paramount to prevent irreversible environmental damage”

There is no doubt that resources are paramount, but not only regarding environmental damage. This holds for any resource, be it money, people, water, fuel, etc. Unfortunately, perhaps 99% of the more than 200 MCDM methods assume that resources are infinite, and thus resources are not contemplated. The exceptions are PROMETHEE and Linear Programming (LP).

3- Page 3 “management, and mitigating negative impacts. This underscores the relevance of MCDA methods, which can facilitate selecting optimal decisions that align with sustainability goals.”

Only sustainability goals? In reality, in real projects, all criteria are goals and, consequently, they must have a target.

4 - “adaptive compromise method for decision modeling”.

And what is it? You do not explain, at least briefly, what it means.

5- “Existing methods so far are susceptible to the rank-reversal paradox”

As per my research on RR, it is not a paradox but a natural and random geometrical occurrence. In fact, since you are always working with the same number of alternatives or dimensions, talking about RR appears irrelevant, because you are not adding or deleting alternatives. Normalization may only change the order or position of alternatives, and this is not related to RR, since dimensions are preserved.
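A small numerical sketch of this last point (with invented data, not taken from the paper): with a fixed set of alternatives, merely switching the normalization can swap positions in a weighted-sum ranking, without adding or deleting anything:

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives (rows), 2 benefit criteria
# (columns), equal weights. The numbers are invented for illustration only.
X = np.array([
    [100.0, 10.0],   # A1
    [102.0,  2.0],   # A2
    [101.0,  1.0],   # A3
])
w = np.array([0.5, 0.5])

def rank(scores):
    """Return the alternatives ordered best-first by score."""
    return [f"A{i + 1}" for i in np.argsort(-scores)]

# Min-max normalization: (x - min) / (max - min), column-wise.
mm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
# Sum normalization: each column divided by its total.
sm = X / X.sum(axis=0)

r_minmax = rank(mm @ w)
r_sum = rank(sm @ w)
print(r_minmax)  # ranking under min-max normalization
print(r_sum)     # ranking under sum normalization
```

The two rankings differ (A1 and A2 trade places), yet the set of alternatives never changed: only the mapping of the data did.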

6- “While current approaches offer discrete ratings and compromise rankings for a fixed set of alternatives, they falter when evaluating new alternatives”

Naturally, because by adding or deleting alternatives you are mapping data from a space of, say, 2 dimensions or alternatives, into another of 3 dimensions.

This means that in 2D all feasible solutions of the problem are contained in a planar polygon. When you pass to 3D, the polygon becomes a polyhedron. Therefore, if in the polygon you find, for instance, that A2 > A1, this ranking may or may not be preserved in 3D.

It is easy to see this in its geometrical construction; thus, the act of adding an alternative delivers more information that could alter the original ranking, in the same way that a cube provides more information than an expanded rectangle.

Of course, you may not accept my theory, and in that case, I would be interested to know yours, that is, why adding a new alternative may produce RR.
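The effect itself is easy to reproduce numerically. A minimal sketch, with invented numbers and a plain weighted sum under sum normalization: adding a third alternative changes the column totals, and with them the relative order of the original two:

```python
import numpy as np

def rank(X, w, names):
    # Weighted sum with sum normalization (each column divided by its total).
    scores = (X / X.sum(axis=0)) @ w
    return [names[i] for i in np.argsort(-scores)]

w = np.array([0.5, 0.5])

# Two alternatives, two benefit criteria (invented numbers).
X2 = np.array([[1.0, 9.0],    # A1
               [5.0, 5.0]])   # A2
r2 = rank(X2, w, ["A1", "A2"])
print(r2)  # A2 is preferred to A1

# Add a third alternative: the column sums change, and with them the
# weight each criterion effectively carries in the scores.
X3 = np.array([[1.0, 9.0],    # A1
               [5.0, 5.0],    # A2
               [90.0, 1.0]])  # A3
r3 = rank(X3, w, ["A1", "A2", "A3"])
print(r3)  # A1 now precedes A2: the original pair has reversed
```

Geometrically, the feasible region gained a dimension; numerically, the new row altered the normalization constants, and the A1/A2 order was not preserved.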

7- “Each previous evaluation set or alternative requires recalibration”

What is a recalibration? Do you mean running the software again?

8- “The paper presents the C-COMET method, offering a unique approach to establish adaptive decision models, impervious to the Rank Reversal Paradox”

Were you able to prove this assertion?

9- “method is the Analytic Hierarchy Process (AHP) approach, which is based on mathematical modeling of the relative importance of criteria and alternatives”

I am puzzled, since how can you consider it correct mathematical modelling using AHP when the resulting initial matrix is FORCED to be transitive, irrespective of what the DM estimates?

10- “Therefore mentioned authors proposed a new MCDM approach free of the Rank Reversal Paradox for a safer and more reliable decision”

Interesting, and how can this be done? I do not know what method these authors proposed; if it really worked, by now it would be widely known. In my opinion, this is impossible, because it violates the geometrical principles of working with multidimensional spaces. By the way, I can prove mathematically and with examples what I say regarding RR.

11- “Sequential Interactive Model for Urban Systems (SIMUS)”

I am afraid that this is not exact. SIMUS suffers from RR as any other method; if it did not, my RR theory would be invalid. However, due to its algebraic structure, it does not compare alternatives but selects them, using the economics concept of Opportunity Cost, and ranks criteria in each iteration through a ratio analysis. This could be the reason for its resistance to RR, as I have demonstrated using examples and 66 combinations of adding and deleting alternatives, shown in my book published in 2019 and in its second edition in 2024.

Recently, in actual work, I considered a case starting with 2D and adding one dimension at a time up to 10D. The results clearly show that sometimes the invariance of the ranking is preserved across several dimensions, while in other cases it changes after adding only one additional dimension. Why the randomness? Because it depends on the values of the new input vector and its interactions with the existing vectors. For this reason, nobody can say that a new alternative is better or worse than the existing ones.

In fact, in my actual example, as new alternatives or dimensions are added, the rankings tend to decrease in length, and at the very end, in 10D, there is only one alternative. The reason could be that as we increase the alternatives, each new one incorporates the values of the preceding ones, just as a cube also contains the information of the preceding square. In addition, it appears that the more the dimensions, the larger the amount of information, and this growth is linear. However, adding only one more alternative can make the feasible solution space very complex, and it could be that in 10D it is not possible to determine the coordinates in 10 dimensions due to the complexity of the polytopes.

As a bottom line, I am not saying that my theory on RR can explain everything, but I understand that it helps in understanding the RR issue.

12- On page 6 you say “MEREC or Entropy”, implying that both address the same issue. I disagree.

MEREC works by removing one criterion at a time, then restoring it and removing the next. The procedure is attractive, but in reality, in a set of, say, nine criteria, the method is applied to nine different problems, because in each one only eight criteria are considered instead of the nine in the original problem. Thus, in each run the software works on nine different scenarios.
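A minimal sketch of that leave-one-criterion-out loop, following the published MEREC formulas and using an invented 4×3 benefit-only matrix, makes the point visible: the loop re-solves one sub-problem per criterion:

```python
import numpy as np

# Invented decision matrix: 4 alternatives (rows), 3 benefit criteria (columns).
X = np.array([
    [450.0,  8.0, 54.0],
    [ 10.0, 25.0, 78.0],
    [100.0, 20.0, 85.0],
    [220.0, 12.0, 65.0],
])

# MEREC normalization for benefit criteria: n_ij = min_k(x_kj) / x_ij.
N = X.min(axis=0) / X
m = N.shape[1]  # number of criteria

def performance(mask):
    """Overall performance S_i = ln(1 + (1/m) * sum_j |ln n_ij|) over selected criteria."""
    return np.log(1.0 + np.abs(np.log(N[:, mask])).sum(axis=1) / m)

S_full = performance(np.ones(m, dtype=bool))

# Removal effect E_j: solve the problem m times, each time with one
# criterion taken out -- i.e., m different sub-problems, as noted above.
E = np.empty(m)
for j in range(m):
    mask = np.ones(m, dtype=bool)
    mask[j] = False
    E[j] = np.abs(performance(mask) - S_full).sum()

weights = E / E.sum()
print(weights)  # one weight per criterion, summing to 1
```

Each pass through the loop evaluates a matrix with a different criteria set, so a 9-criteria problem indeed becomes nine 8-criteria scenarios, which is quite different from Entropy, computed once on the full matrix.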

These are some of my comments. I hope they can help.

I am willing to share my findings with anybody.

Nolberto Munier
