
Dear Shervin Zakeri, Prasenjit Chatterjee, Dimitri Konstantas, and Fatih Ecer,

I have read your article:

A comparative analysis of Simple Ranking Process and Faire Un Choix Adéquat method

My comments.

1- In the abstract you say “This paper focuses on a comparative analysis between two multi-criteria decision-making (MCDM) methods, called the Simple Ranking Process (SRP) and Faire Un Choix Adéquat (FUCA)”

What is the comparison for? You do not explain. Even if the two rankings are identical, this similarity proves nothing; it could also be a mere coincidence. Similarity is not related to a reliable answer; there is no theorem or axiom that supports it.

2- On page 2: “1. Defining evaluation criteria that connect system capabilities to objectives. 2. Generating a pool of alternatives for achieving these objectives”

In my opinion, alternatives are chosen first, and then criteria. You can’t define criteria if you don’t first define WHAT they will apply to. Suppose you first select criteria like ‘taste’ and ‘sugar content’; fine, but for what?

If your alternatives are different kinds of fruits and dishes, it makes sense, but if you must decide between building a bridge or a tunnel, these criteria obviously do not apply. As you rightly say, criteria are objectives that must be maximized or minimized; therefore they must have a goal, a quantity that you want to achieve.

For instance, one criterion could be “in two years we aim to increase production by 10%”.

For what? It is not the same to increase the demand for cosmetic products as for electric cars, since investments, personnel, equipment, materials supply, working capital, etc., are completely different in time, money, and probabilities. In car manufacturing, adding a robot may cost more than all the products you need for cosmetics. Therefore, the amounts or goals in the criteria must respond to the alternatives they have to evaluate.

From this point of view, your Figure 2 is correct, since alternative selection precedes criteria selection, thus contradicting what you said before.

However, the same figure says ‘Validating the obtained results’.

How do you validate? There is no possible validation in MCDM, for we never know what the true result is. If we knew, we wouldn’t need MCDM!

The figure is also mistaken in that it all starts with ‘Goal setting’. The establishment of goals, or goal setting, must be done after, not before, the choosing of alternatives, and when you select criteria.

In point 5 “5. Selecting the ‘optimal’ or preferred alternative”

Optimality does not exist in MCDM, and algorithms do not have preferences.

In point 6 “If the final solution is unsatisfactory, collecting additional data and proceeding to the next iteration of multi-criteria optimization”

And how do you know that a solution is unsatisfactory if you don’t have a yardstick to compare it against?

3- Page 3, point 3: “Specify Criteria/Attributes/Performance Indicators: Define the criteria, attributes, or performance indicators that will be employed”

Criteria, attributes and performance values are not the same.

Criteria are objectives to be met. For instance: maximum investment is 5,000,000 euros.

Attributes are the format of the performance values within each criterion. For instance, the values may be very similar or widely dispersed, integer, decimal, binary, etc.; that is, they have attributes or characteristics.

4- Page 5. You speak about the decision-making paradox. It is true, it exists, because selecting an MCDM method is a selection problem in itself.

However, in my opinion, this paradox should not exist, because the best MCDM method is rational, with a solid algorithm, without personal appreciations, and, fundamentally, it is the method that best satisfies ALL the requirements and characteristics of a problem. You cannot err on this. If a method is based on assumptions without mathematical support, uses invented weights, selects criteria at random, or cannot comply with a problem requirement, it is not the best.

5- On page 7 you say that SRP is one of the few methods immune to rank reversal (RR).

In my opinion, it is risky to assert that when we do not know the reason for RR, especially when it sometimes appears and other times does not. In my research on RR, I reached the conclusion that RR is unavoidable because it is a natural geometric consequence of adding or deleting alternatives. And yes, not even SIMUS, the method I created, may escape that fate, although it apparently does.
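To illustrate the geometric point, here is a minimal sketch of my own, with invented data, using plain simple additive weighting rather than SRP or SIMUS: adding a third alternative changes the column maximum used for normalization, and the order of the two original alternatives reverses.

```python
import numpy as np

def saw_scores(X, w):
    # Simple additive weighting: normalize each (benefit) criterion by its
    # column maximum, then take the weighted sum per alternative.
    return (X / X.max(axis=0)) @ w

w = np.array([0.5, 0.5])            # assumed equal weights, for illustration
X2 = np.array([[8.0, 2.0],          # alternative A
               [4.0, 6.0]])         # alternative B
print(saw_scores(X2, w))            # [0.667 0.75] -> B beats A

# Adding alternative C raises the maximum of criterion 2 from 6 to 12,
# rescaling that whole column; A and B swap places.
X3 = np.vstack([X2, [2.0, 12.0]])   # alternative C
print(saw_scores(X3, w))            # [0.583 0.5 0.625] -> now A beats B
```

Nothing about A and B changed; only the geometry of the normalized space did, which is exactly why RR is so hard to eliminate.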

6- Page 7: “Therefore, the first step of the SRP algorithm is to define the criteria weights”

As mentioned before, if that is the first step of SRP, in my opinion it is wrong. You cannot ‘put the cart before the horse’.

7- On page 7: “The first step is to determine the rank of each solution for each criterion”

It is hard to understand how you have a solution in the first step, when you don’t have any.

I guess you wanted to say ‘the performance value of each alternative corresponding to each criterion’.

8- On page 8: “After establishing the initial ranks, the next step involves calculating the weighted ranks. It is implied that each alternative’s rank is assigned a specific weight or importance” (?)

This is confusing. In MCDM, alternative ranks refer to the final scores of each alternative. How is it that you use them here? I think that you should use the MCDM language and definitions. Probably what you are referring to is performance values, not ranks. It is not a ‘rank matrix’ but an initial decision matrix.

9- On page 8: “method. However, the specific calculation or formula for obtaining these scores is not detailed”

And how does the reader know where they come from? It is a simple ordering of alternative scores, from the highest to the lowest.
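To make points 7 to 9 concrete, here is a minimal sketch of the FUCA pipeline as I understand it from the paper’s description, with an invented decision matrix and invented weights: each alternative is ranked within each criterion (rank 1 = best performance value), the ranks are weighted and summed, and the lowest total gives the best alternative.

```python
import numpy as np

# Hypothetical initial decision matrix: 4 alternatives x 3 benefit criteria.
X = np.array([[7.0, 3.0, 9.0],
              [8.0, 5.0, 2.0],
              [6.0, 8.0, 4.0],
              [9.0, 2.0, 6.0]])
w = np.array([0.5, 0.3, 0.2])       # assumed weights, summing to 1

# Step 1: rank alternatives within each criterion (1 = best value).
order = (-X).argsort(axis=0)        # descending, since these are benefit criteria
ranks = np.empty_like(order)
for j in range(X.shape[1]):
    ranks[order[:, j], j] = np.arange(1, X.shape[0] + 1)

# Step 2: weighted sum of ranks; the LOWEST total wins (ties not handled).
scores = ranks @ w
print("rank matrix:\n", ranks)
print("weighted rank scores:", scores)                 # [2.6 2.4 2.9 2.1]
print("ranking (best first):", scores.argsort() + 1)   # [4 2 1 3]
```

Note that the input X is an ordinary initial decision matrix of performance values; the ‘rank matrix’ only appears after step 1.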

10- “… matrix, with each criterion assigned a weight of 0.125.”

Therefore, all criteria have the same importance? Strange.

11- On page 12: “However, its limitations in handling MCDM problems with a smaller number of criteria and alternatives are demonstrated in Table 11, where alternatives No. 1 and No. 3 received identical ranks”

This does not speak well of the method, because a method must be independent of the number of alternatives and criteria used. It also does not justify your assertion that the method is suitable for complex scenarios.
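A tiny illustration, with invented ranks of my own, of why identical totals are almost guaranteed when there are few criteria and equal weights: small integer ranks collide easily.

```python
# Two equally weighted criteria: any pair of alternatives whose
# per-criterion ranks are mirror images ties exactly.
w = [0.5, 0.5]
rank_A = [1, 2]   # best on criterion 1, second on criterion 2
rank_B = [2, 1]   # the mirror image
score_A = sum(r * wi for r, wi in zip(rank_A, w))
score_B = sum(r * wi for r, wi in zip(rank_B, w))
print(score_A, score_B)   # 1.5 1.5 -> identical weighted ranks, a tie
```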

12- On page 13 you say “reduce complexity”.

In my opinion, this is not applicable, because if a scenario is complicated by nature, normally there is nothing you can do to reduce it, since that would mean modifying the original problem. If the problem consists in addressing several scenarios simultaneously, you can’t simplify it by reducing the problem to only one scenario. However, you say that it ensures reliability. How and why?

13- On page 13: “This study and its analytical outcomes offer several benefits”

In my opinion, that assertion should be left to the users, not to the authors.

14- My last question: What do you gain by making a comparison between two methods?

Nothing, because not even a high correlation indicates that the solution is correct.
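To make this last point concrete: a rank correlation such as Spearman’s rho only measures agreement between the two methods, not the correctness of either. A small sketch with invented rankings:

```python
from scipy.stats import spearmanr

# Hypothetical final rankings of six alternatives from two MCDM methods.
ranks_method_1 = [1, 2, 3, 4, 5, 6]
ranks_method_2 = [1, 3, 2, 4, 5, 6]

rho, _ = spearmanr(ranks_method_1, ranks_method_2)
print(f"Spearman rho = {rho:.3f}")   # 0.943: the methods agree strongly,
# yet this says nothing about whether either ranking is the 'true' one.
```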

I hope my comments help.

Nolberto Munier
