# 177

Dear Bartłomiej Kizielewicz and Wojciech Sałabun,

I read your paper

The pymcdm-reidentify tool: Advanced methods for MCDA model re-identification

My comments:

1- This is the first time I have heard about re-identification in MCDM and, really, I am puzzled.

First of all, what is re-identification?

As far as I can gather, you use the Python library to perform a sort of reverse engineering, in order to recreate, or learn, concepts such as criteria weight values from old projects, perhaps using different techniques. Please forgive me if I have misunderstood. But this implies, as I understand it, that you accept that the rankings of similar projects are correct, and thus you try to recover them.
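If I have read you correctly, the task looks something like the following minimal sketch (my own illustration in plain Python; I am not claiming this is your library's API): given an old decision matrix and the ranking it is accepted to have produced, search for weights under which a simple weighted sum reproduces that ranking.

```python
import numpy as np
from itertools import product

# Hypothetical "old project": a normalized decision matrix (3 alternatives
# x 3 criteria) and the ranking it is accepted to have produced.
X = np.array([[0.8, 0.2, 0.5],
              [0.4, 0.9, 0.3],
              [0.6, 0.5, 0.9]])
observed_rank = np.array([2, 0, 1])   # alternative indices, best to worst

def rank_of(scores):
    """Alternative indices ordered from best (highest score) to worst."""
    return np.argsort(-scores)

# Crude grid search over the weight simplex: keep every weight vector
# under which a weighted sum reproduces the accepted old ranking.
matches = []
grid = np.linspace(0.05, 0.90, 18)
for w1, w2 in product(grid, grid):
    w3 = 1.0 - w1 - w2
    if w3 <= 0:
        continue
    w = np.array([w1, w2, w3])
    if np.array_equal(rank_of(X @ w), observed_rank):
        matches.append(w)

print(f"{len(matches)} weight vectors reproduce the old ranking")
```

Note that typically many different weight vectors reproduce the same ranking, so the recovered weights are underdetermined, which already hints at my concern.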

Immediately two questions come to my mind:

a) Is there any proof that they were correct?

b) It appears that you work with similar projects, but that does not mean that what the stakeholders wished and demanded in the old project can be applied to a new one. Even similar projects from the same company, addressing the same objective, may have different demands and values.

I normally try to avoid mathematical formulas and instead question the rationality of procedures. That is, I deal with facts, not with theories and assumptions, which may be correct only if it can be demonstrated that they are, something that nobody can do.

2- On page 2 you say: “These methods allow structuring the decision-making process by providing tools for formally modeling the decision maker’s preferences”.

In my opinion, this is a very general statement, because there are MCDM methods that do not need the DM’s preferences, which, as you know, rest on non-mathematical foundations, especially in AHP. I do not know your opinion, but as far as I know, there is only one reality, and it is not contingent on intuition or on the differing opinions of DMs. This point has been extensively discussed from the very beginning, possibly in the eighties, and really, I find it incomprehensible that the practice is still in use.

In addition, criteria weights are useless: other than producing a relative ranking among criteria, they play no role in the evaluation of alternatives. Shannon’s theorem is very clear about this. Why, then, are people still even considering them?
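To make concrete what I mean by invoking Shannon (a sketch of standard entropy weighting in plain Python; my own illustration, not from your paper): the information content of a criterion, not its declared importance, determines whether it can discriminate among alternatives at all.

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy weights (assumes strictly positive entries):
    a criterion that barely discriminates among the alternatives
    (a near-constant column) gets a near-zero weight, no matter how
    'important' a DM declares it to be."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                          # column-wise shares
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()

X = np.array([[7.0, 5.0],
              [7.1, 1.0],    # criterion 1 is nearly constant;
              [6.9, 9.0]])   # criterion 2 actually spreads the alternatives
print(entropy_weights(X))    # almost all weight goes to criterion 2
```

A subjectively “important” criterion with a near-constant column contributes essentially nothing to the evaluation of alternatives.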

3- On page 3 you say: “Each MCDA/MCDM method can generate different results depending on how the weights and criteria are defined and the algorithm used to process them”.

It is true that different methods normally produce different results, probably due to the algebraic structure of each one, most of them without mathematical support, let alone reasoning. Therefore, as very often happens, we can get as many rankings as there are methods, and rarely any that coincide. Thus, what are they useful for, even when coming from metaheuristics, since they put the DM back at square one? We need a solution, not a series of solutions.
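A minimal illustration with my own toy numbers (plain NumPy, not pymcdm): even the two oldest aggregation rules, the weighted sum (WSM) and the weighted product (WPM), disagree on the same data with the same weights.

```python
import numpy as np

# Two alternatives scored on two benefit criteria on the same scale,
# with equal weights.
X = np.array([[9.0, 1.0],    # A1: excellent on C1, poor on C2
              [4.0, 4.0]])   # A2: mediocre but balanced
w = np.array([0.5, 0.5])

wsm = X @ w                      # weighted arithmetic mean (WSM)
wpm = np.prod(X ** w, axis=1)    # weighted geometric mean (WPM)

print("WSM scores:", wsm)   # [5.0, 4.0] -> A1 ranked first
print("WPM scores:", wpm)   # [3.0, 4.0] -> A2 ranked first
```

Same matrix, same weights, two methods, two different winners: the DM is back at square one.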

4- On page 3 you refer to the rank reversal (RR) paradox, because it can destroy invariance. Now, why should there be invariance when you add or delete an alternative? Where is the proof of it? Here is an example I have cited many times: all feasible solutions of a problem are contained in a geometric figure or common space, like a rectangle. If you have 2 alternatives subject to many criteria, most probably the solution of the problem is at the intersection of criteria, and this is pure mathematics.

If you add an alternative to the same problem, the common space may now be a polygon, which may give the same result, but most probably not, because you now have a cube that not only incorporates the former solution but adds new dimensions with perhaps other solutions. There is no reason that invariance should be preserved in the new dimension, so RR is the product of a geometric transformation and can be completely random, because it depends on the values of the newly added vector.
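A minimal numeric sketch of this geometric effect (my own toy example, using a simple weighted sum with min-max normalization): adding a third alternative changes the column ranges, that is, the common space, and the original order of A1 and A2 flips.

```python
import numpy as np

def wsm_minmax(X, w):
    """Weighted-sum scores after min-max normalization (benefit criteria)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return ((X - lo) / (hi - lo)) @ w

w = np.array([0.4, 0.6])

# Two alternatives, two benefit criteria.
two = np.array([[3.0, 10.0],
                [6.0,  8.0]])
print(wsm_minmax(two, w))      # [0.6, 0.4] -> A1 beats A2

# Add a third alternative: the column ranges (the common space) change,
# the normalization of A1 and A2 changes with them, and the order flips.
three = np.array([[ 3.0, 10.0],
                  [ 6.0,  8.0],
                  [10.0,  1.0]])
print(wsm_minmax(three, w))    # approx. [0.60, 0.64, 0.40] -> A2 now beats A1
```

Nothing about A1 or A2 changed; only the space in which they are measured did.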

If you want, we can discuss RR at length, either publicly or in private.

5- Page 4: “… difficulties in finding an expert with expertise in multiple fields simultaneously”

I agree 100% with your assertion, because I have been saying the same for years. If the criteria involve, say, 10 different fields, nobody can pretend that the DM is an expert in all of them; even with 10 different experts, the expert in health cannot debate with the experts in engineering, finance, environment, transportation, etc. However, AHP ignores this fact and happily assumes that experts can produce quantitative comparisons in all fields.

All these assumptions, except the eigenvector (EV) method or the geometric mean, lack the minimum mathematical support, let alone reasoning. Unfortunately, this lack of common sense, ignoring reality rather than addressing the problem as it is, and incorporating invented weights as truth, is common in most MCDM methods, and nothing is done to improve it.
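For reference, here is a sketch of the two procedures I except from that judgment, the EV method and the row geometric mean, computed on a toy reciprocal matrix of my own (an illustration, not taken from your paper):

```python
import numpy as np

# A 3x3 reciprocal pairwise-comparison matrix (toy numbers).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Eigenvector (EV) method: the priority vector is the principal
# eigenvector, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
ev = np.real(vecs[:, np.argmax(np.real(vals))])
ev = ev / ev.sum()

# Row geometric-mean method.
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
gm = gm / gm.sum()

print("EV priorities:", ev)
print("GM priorities:", gm)   # the two coincide on fully consistent matrices
```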

These are my comments.

Nolberto Munier
