# 169

Dear JAKUB WIECKOWSKI, BARTŁOMIEJ KIZIELEWICZ, JAROSŁAW WĄTRÓBSKI, and WOJCIECH SAŁABUN,

I read your paper:

A new approach for handling uncertainty of expert judgments in complex decision problems

My comments:

1- You say: “However, relying solely on expert judgment can introduce biases and inaccuracies. Subjective criteria weighting methods attempt to quantify the importance of criteria, but often ignore the uncertainty associated with expert judgment. Their results are presented as a single vector of weights, which may not precisely reflect the actual problem”

Of course you are right: one cannot rely solely on expert judgement, which most of the time is subjective and depends on the DM. Another DM may think differently.

There is no doubt that the DM's ideas, opinions, and know-how are very important, since normally not all criteria have the same importance.

But leaving the determination of such importance to DMs is not adequate. Of course there is uncertainty, because those weights usually reflect descriptive or wished-for values instead of normative ones, i.e., values that are the consequence of reasoning, exchange of information, research, etc. In addition, weights should be applied, if necessary, at the end of the process, that is, when a result based on the original data is known. It is here where the DM is fundamental, not at the beginning of the process.

Normally, these weights are invented values originating in the wishes and moods of a DM, and this has nothing to do with real-life projects. It is true that even the original values may be uncertain, but that can be mitigated by using, for instance, a minimum and a maximum value for each criterion, grounded in realities and statistics; for example, determining the maximum and minimum demand for a product based on trends and statistics. Certainly there is still uncertainty, but it is bounded.

It is hard to see how the method you propose can solve this issue.

When a person A reveals his/her opinion on a certain issue, many factors may intervene: knowledge of the issue, intuition, mood, the influence of one sub-factor over another, pleasant or unpleasant personal memories, etc. From these, person A gives a value, or an average, impossible to explain or justify, but valid for him.

The same holds for person B, who will no doubt have his own value as a function of different memories, knowledge of the issue, etc. Consequently, one wonders whether it is valid to divide the sum of A's and B's values by two, thus making averages of averages, especially when one does not know the number of sub-factors considered in each case.
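The arithmetic objection can be shown numerically. In this minimal sketch (the sub-factor scores and counts are invented for illustration, not taken from the paper), averaging two experts' averages gives a different result from pooling their underlying sub-factor values whenever the experts considered different numbers of sub-factors:

```python
# Two hypothetical experts score the same criterion, each averaging
# a different (and unknown to the analyst) number of sub-factors.
a_subfactors = [8, 6, 7]   # expert A weighed 3 sub-factors
b_subfactors = [4, 5]      # expert B weighed only 2

avg_a = sum(a_subfactors) / len(a_subfactors)   # 7.0
avg_b = sum(b_subfactors) / len(b_subfactors)   # 4.5

# "Average of averages": implicitly weights both experts equally,
# regardless of how many sub-factors each one actually considered.
avg_of_avgs = (avg_a + avg_b) / 2               # 5.75

# Pooled average over all underlying sub-factor values.
pooled = sum(a_subfactors + b_subfactors) / (len(a_subfactors) + len(b_subfactors))  # 6.0

print(avg_of_avgs, pooled)  # 5.75 6.0 -> the two disagree
```

The discrepancy grows with the imbalance between the experts' (hidden) sub-factor counts, which is precisely why dividing the sum of A's and B's values by two is questionable.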

2- “DECISIONS made in real-world situations are complex and require careful analysis to minimize potential risks and maximize the benefits of making optimal decisions”

There are no optimal decisions in MCDM, only results that reflect a balance, a compromise, among all factors. This was established a long time ago (Zeleny, 1974).

3- “Results are compared with those obtained from currently used subjective weighting techniques to verify effectiveness”

Are you suggesting that your results can be validated by comparing them with an existing similar problem?

First, not all problems are equal; and second, how do you know that the other solution is valid, since there is no way to prove validity, whatever method you use? For that you would need a yardstick to compare against, something that nobody has.

4- “The DM experiences, perspectives, and subjective assessments, which can lead to significant differences in the interpretation and evaluation of criteria”

Thank you, you are confirming what I wrote in (1).

5- “Combining these approaches makes it possible to increase the accuracy and flexibility of criteria weights”

I can agree that combining weights with fuzzy techniques may be beneficial, provided that the weights from experts are obtained after a rational process of thinking, investigating, and consulting, and not when using invented weights from AHP. In the first case it makes sense to find a crisper, more realistic value; in the second case it is a waste of time. In the first case there are reasons, even non-mathematical ones based on experience, to support a criterion value. In the second, based only on intuition, what do you have? Nothing.

6- “It operates under the premise of a direct and proportional relationships regarding the significance of the examined decision alternatives”

And on what is this premise based? Is there rational thinking behind it? Why, in a portfolio of different projects, must an alternative such as building a high-rise be linearly related, regarding significance, to an alternative such as decreasing the crime rate?

8- “This approach facilitates the comparison of diverse methods, discerning discrepancies in results”

Fine, but what for? Does it prove that a result is correct? No, it only shows that some rankings are similar, which in MCDM means nothing. There is not a single mathematical axiom or theorem to sustain it.

9- Page 5. “Consistency or close alignment among rankings derived from different methods is imperative for robust decision-making”

Why? Is there any proof of that, or is it only an assumption? I do not see the relation between robustness and ranking closeness. The first is based on the position invariance of the selected activities, which depends on the leeway of certain criteria. If by ranking closeness you mean a high correlation between the rankings of two different methods, it only shows that both rankings follow the same ups and downs; it says nothing about strength.

10- Page 6. “Departing from conventional approaches for determining the positional ranking of decision variants, the fuzzy ranking approach assigns membership degrees to alternatives for each ranking position, highlighting the robustness of the result”

I don't think that fuzzy logic has a role in this; it is, however, important in refining the initial data.

The positional ranking of alternatives is given by any of the MCDM methods. The robustness of a solution is given by its sensitivity to criteria variations.

Strength is thus related to the increase/decrease of a criterion, but more often to a subset of criteria taken simultaneously, and to how much they can vary without changing the solution.

Thus, if alternative D has been selected in first position as the best one, it may keep that position with respect to, say, criterion C1 along its whole range of variation, and thus be very strong if this range is wide enough. However, if it is also subject to criterion C5, for instance, D may be extremely weak, because the smallest change in C5 will cause alternative D to lose its best position.

This is mathematics, not assumptions, and you can check it by running a simple problem on your computer using the Solver add-in in Excel (Data tab). There you can see the selected alternative and ranking, and at the same time learn which set of criteria intervenes in the solution. To determine this range, you need to use SIMUS.
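The asymmetry described above can be reproduced with a toy problem. This pure-Python sketch is my own illustration using a simple weighted-sum aggregation (it is neither SIMUS nor the paper's method, and all scores and weights are invented): alternative D survives a sizeable perturbation of C1 but loses first place under an equal-sized perturbation of C5.

```python
import copy

# Three alternatives scored on two criteria (higher is better).
scores = {
    "A": {"C1": 0.90, "C5": 0.57},
    "B": {"C1": 0.30, "C5": 0.60},
    "D": {"C1": 0.50, "C5": 0.70},
}
weights = {"C1": 0.2, "C5": 0.8}

def best(sc):
    """Return the alternative with the highest weighted-sum score."""
    totals = {a: sum(weights[c] * v for c, v in crit.items())
              for a, crit in sc.items()}
    return max(totals, key=totals.get)

print(best(scores))          # D  (0.66 vs A's 0.636)

# Drop D's C1 score by 0.05: D still holds first place...
v1 = copy.deepcopy(scores)
v1["D"]["C1"] = 0.45
print(best(v1))              # D  (0.65 vs 0.636)

# ...but the SAME-sized drop in C5 dethrones it.
v2 = copy.deepcopy(scores)
v2["D"]["C5"] = 0.65
print(best(v2))              # A  (D falls to 0.62)
```

Exploring which criteria are binding in this way, over ranges rather than one point, is what the Solver/SIMUS exercise above does for real problems.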

I do not need to remind you that there is a theorem supporting this. It is not my invention.

Needless to say, upon request I can demonstrate this with real examples, either theoretically or using graphics, as I have done in one of my books.

The problem, as I see it, is that no MCDM method except Linear Programming considers this joint action of criteria; the others are content with the intuitively appealing practice of varying ONLY ONE criterion, chosen by ITS HIGHEST WEIGHT, which can easily be demonstrated to be FALSE.

These are some of my comments; I hope they can help.

Nolberto Munier
