
Dear Irik Z. Mukhametzyanov and Dragan Pamucar,

I read your paper

"Thin" Structure of Relations in MCDM Models. Equivalence of the MABAC, TOPSIS(L1) and RS Methods to the Weighted Sum Method

My comments:

1- I am not sure that you can say in the Introduction that, by using several different MCDM methods, you get “a synthesis of solutions, and that it is posited that the reliability of a solution derived from employing a myriad of methods is heightened, rendering it a preferable approach”.

In my opinion, what you get is a set of different solutions in the form of rankings, and there is no mathematical support for your assertion, nor for the claim that “this approach is considered as the resolution of a MCDM problem”. Really, this is a bizarre conclusion. Maybe it is correct, but you have to prove it.

You speak about correlation, but the fact that some methods have similar ups and downs is not a guarantee of reliability, only of similarity. And remember that the slopes may be different, which translates into different distances between scores, as the sketch below illustrates.
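To make this point concrete, here is a minimal sketch with invented scores (the numbers are hypothetical and do not come from the paper): two methods can produce identical rankings, and hence perfect rank correlation, while the distances between their scores differ substantially.

```python
# Hypothetical scores from two MCDM methods for alternatives A1..A4.
scores_m1 = [0.90, 0.70, 0.50, 0.30]   # evenly spaced gaps
scores_m2 = [0.95, 0.60, 0.55, 0.10]   # same order, very different gaps

# Both methods rank the alternatives identically...
rank = lambda s: sorted(range(len(s)), key=lambda i: -s[i])
print(rank(scores_m1) == rank(scores_m2))   # True

# ...yet the distances between consecutive scores disagree sharply.
gaps = lambda s: [round(a - b, 2) for a, b in zip(s, s[1:])]
print(gaps(scores_m1))   # [0.2, 0.2, 0.2]
print(gaps(scores_m2))   # [0.35, 0.05, 0.45]
```

Agreement of the orderings says nothing about agreement of the score differences, which is the similarity-versus-reliability distinction made above.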

2- Your words: “The principal challenges associated with decision-making utilizing the 3M approach, which currently lack definitive resolutions, are as follows:”

i. “Determining the qualitative and quantitative composition of MCDM methods to be included in the list for solving a specific problem.”

What composition are you talking about? All MCDM methods must incorporate the same elements, alternatives and criteria, in any mix, whatever the problem. What they must include are all the characteristics of a given problem, which may differ from one problem to another.

ii. “Establishing a methodology for comparing results obtained from different methods”.

And what is the gain in doing that? This is similarity, and it proves nothing.

iii. “Assessing the significance (weight) of the employed methods.”

Weights of the employed methods? This is the first time I have heard of this. Possibly you refer to criteria weights, which should be the same for all methods.

iv. “Addressing the question of whether methods should be grouped and, if so, how to form these groups”

It depends on what you are looking for, for example sorting or ranking.

v. “Defining a method for synthesizing the solution”

Are you suggesting that there should be some sort of ‘composite ranking’? This is attractive, although I remember reading something on that issue very recently; and in this case, how would it be done?

3- On page 3 the article talks about assessing scores. Scores are unitless values, i.e., a series of numbers. How can you assess them? You say that the rating list reveals the fine structure of the ranking? Really, it is a very interesting concept that I would very much like to know more about. In reality, when you get a score, it is already the result of an assessment, obtained by combining alternatives and criteria using an MCDM method.

4- I am not sure I share your comments about rankings and distances between values. In my opinion, the ranking values of alternatives are not related to distances.

All MCDM methods work on the basis of linear equations constituted by the set of values for all alternatives and for each criterion. They are linear functions.

According to linear algebra, if we have, say, two alternatives and seven criteria, all criteria are considered, but the score for each alternative depends on the intersection of only some criteria. The intersection point gives the scores for both alternatives. This can be easily seen in a simple 2D graph; see the sketch below for one reading of this picture. Distances and weights do not take part in this evaluation. This is the reason why criteria weights, even if necessary for the computation, are irrelevant in the selection of alternatives.
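For what it is worth, here is one possible way to draw this 2D picture as code. It assumes a plain weighted-sum model with two alternatives, two criteria, and invented performance values; under those assumptions each alternative's score is a linear function, and the two lines intersect at a single point.

```python
# Hypothetical performances of two alternatives on criteria C1 and C2.
a1 = (0.8, 0.3)
a2 = (0.4, 0.7)

# Weighted-sum score as a linear function of w, the weight given to C1
# (C2 receives 1 - w), so each alternative traces a straight line.
score = lambda x, w: w * x[0] + (1 - w) * x[1]

# The two lines cross where score(a1, w) == score(a2, w).
w_star = (a2[1] - a1[1]) / ((a1[0] - a1[1]) - (a2[0] - a2[1]))
print(round(w_star, 2))                                     # 0.5
print(round(score(a1, 0.6), 2), round(score(a2, 0.6), 2))   # 0.6 0.52
```

This is only a sketch of the geometry with made-up numbers, not a reconstruction of the authors' method.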

5- On page 7 you say: “… i.e. the method for assessing the significance of attributes (weight of criteria) …”

This statement is incorrect because you equate, as most people do, the concept of criteria with that of attributes, and they are different. A criterion is an objective, a condition that contains a series of performance values.

These performance values can have different attributes or characteristics, such as a large dispersion within the criterion, or a criterion composed of integers, or of decimals, with positive and negative values. These are the attributes that can differ between criteria. For instance, in the same matrix you can have a criterion composed of Boolean values, that is, 0 or 1.

Another criterion may be formed by integers, another by negative decimal values, etc. Likewise, dispersion is an attribute: one criterion may have closely grouped performance values, another widely dispersed ones.

As you can see, attributes are the characteristics of the performance values in a criterion. Weights are the significance or importance of a criterion in comparison with other criteria (see the sketch below).
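A small sketch with invented numbers may help to keep the two notions apart: attributes are characteristics of the performance values in each column (Boolean versus integer versus negative decimals, tight versus wide dispersion), while weights are a separate judgement of each criterion's importance.

```python
import statistics

# Hypothetical decision matrix: one column of performance values per criterion.
matrix = {
    "C1 (Boolean)":           [1, 0, 1, 1],
    "C2 (integers)":          [120, 95, 130, 88],
    "C3 (negative decimals)": [-0.52, -0.11, -0.40, -0.73],
}
weights = {"C1 (Boolean)": 0.2, "C2 (integers)": 0.5,
           "C3 (negative decimals)": 0.3}   # importance: a separate notion

for name, values in matrix.items():
    # Dispersion is an attribute of the performance values themselves,
    # independent of the weight assigned to the criterion.
    print(name, "| stdev =", round(statistics.stdev(values), 2),
          "| weight =", weights[name])
```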

6- On page 8 you write: “In contrast to the ranking list, the rating list reveals the "thin" structure of relationships”.

And what does it mean?

These are my comments, and I hope they help.

Nolberto Munier
