# 206
Dear Bartłomiej Kizielewicz · Jarosław Wątróbski · Wojciech Sałabun
I read your paper
Multi-criteria decision support system for the evaluation of UAV intelligent agricultural sensors
My comments:
1- Abstract: You say “The results confirm the framework’s effectiveness demonstrating its robustness and stability in decision-making. Sensitivity analysis and comparative studies further highlight its reliability, particularly in addressing rank reversal issues commonly found in existing MCDA methods such as TOPSIS and AHP.”
I am afraid that you are mistaken, for SA (sensitivity analysis) is not able to prove the effectiveness of a framework or to highlight a method or procedure. It is designed to determine whether the solution found, that is, the selected best alternative, is strong and stable.
SA is not related to RR (rank reversal), which, by the way, as per my research, may happen in all MCDM methods, because it does not depend on the method but on the different topologies that are generated when spatial dimensions or alternatives are added or deleted.
2- Very good and precise information in the abstract, clarifying a subject that is unknown to many of us, or at least to me.
3- On page 7 you mention MEREC for criteria evaluation. It is indeed independent of subjectivity, but in my opinion it is a biased method, because in each run it is solving a different problem, since it progressively eliminates one criterion; that is, if there are, say, 9 criteria, each run considers only 8, and each run works on a different matrix.
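To make this point concrete, here is a minimal sketch of the leave-one-out step in MEREC as I understand it, applied to a small invented matrix (the normalization is simplified, so this is an illustration of the mechanism, not the authors' exact procedure):

```python
import numpy as np

# Hypothetical 4 x 3 decision matrix, all benefit criteria (figures invented).
X = np.array([[450., 8000., 54.],
              [380., 9100., 61.],
              [410., 8700., 58.],
              [500., 7600., 49.]])

m, n = X.shape
N = X.min(axis=0) / X                     # simple ratio normalization for benefit criteria

# Overall performance of each alternative with all n criteria present.
S = np.log(1 + np.abs(np.log(N)).sum(axis=1) / n)

# Performance recomputed with criterion j removed: each pass works on a
# different, reduced matrix, which is exactly my objection.
E = np.zeros(n)
for j in range(n):
    keep = [k for k in range(n) if k != j]
    S_j = np.log(1 + np.abs(np.log(N[:, keep])).sum(axis=1) / n)
    E[j] = np.abs(S_j - S).sum()          # total effect of removing criterion j

w = E / E.sum()                           # MEREC-style weights
print(w.round(3))
```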
In the next step you compare rankings using different MCDM methods, and what is the gain in doing that?
None, for me, since a high correlation between the rankings of two methods only denotes that both move in the same direction; it does not mean that either of them is close to reality.
Selecting weights to evaluate alternatives is incorrect, because what really has the capacity to evaluate alternatives is the discrimination of values within a criterion, not the values between criteria; in other words, what is relevant is the content, not the envelope.
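To illustrate what I mean by discrimination within a criterion, here is a tiny invented example using the entropy idea: a criterion whose values are almost identical carries almost no discriminating information, and an objective weighting method such as entropy gives it practically zero weight.

```python
import numpy as np

# Hypothetical matrix: criterion 1 barely discriminates, criterion 2 does.
X = np.array([[100., 3.],
              [101., 9.],
              [ 99., 6.]])

P = X / X.sum(axis=0)                               # share of each alternative per criterion
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # Shannon entropy of each criterion
d = 1 - E                                           # degree of discrimination
print((d / d.sum()).round(3))                       # nearly all weight goes to criterion 2
```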
4- Page 11: “The criterion weights determine their relevance, which is crucial in evaluating alternatives”.
This assertion is not supported by any mathematical theorem, axiom, or common sense; it is simply intuitive.
My justification for this assertion is as follows:
In MCDM the DM is working with linear equations, represented by straight lines in a plane, which, in different manners according to the method, define a space of solutions in which one of them is preferred, as in TOPSIS, where the best solution is the one closest to the ideal point.
When the DM multiplies the original values in the initial matrix by a weight, these values increase or decrease proportionally; that is, there is no relative change within each criterion, since the line is displaced parallel to itself. However, what changes is the position in the plane of a criterion relative to the others, due to the various weights: each criterion is displaced differently, and the original distances between them vary. This may produce a topological change in the common space of solutions, and now the alternative closest to the ideal point in TOPSIS may have changed, which produces a different ranking. It can be seen that weights only modify the original distances between criteria. This is geometry, not evaluation. You may or may not agree with my explanation, but it is rational and mathematical, not intuitive.
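A small numerical illustration of this geometric point, on an invented 3 x 2 matrix with plain TOPSIS (vector normalization, benefit criteria): weighting never alters the order of values within a column, yet the closeness ranking changes.

```python
import numpy as np

def topsis_closeness(X, w):
    """Plain TOPSIS: vector normalization, benefit criteria only."""
    V = (X / np.linalg.norm(X, axis=0)) * w       # weighting = parallel scaling of each column
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

X = np.array([[7., 9.],
              [9., 6.],
              [8., 8.]])

for w in (np.array([0.5, 0.5]), np.array([0.8, 0.2])):
    c = topsis_closeness(X, w)
    print("weights", w, "closeness", c.round(3), "ranking", (-c).argsort() + 1)
```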
Regarding the formation of solution spaces, I am reproducing what AI says about it:
“In Multi-Criteria Decision-Making (MCDM) methods, solution spaces are defined based on the criteria and preferences involved in the decision-making process. Here are some common approaches:
1. Weighted Sum Model (WSM): The solution space is defined by assigning weights to each criterion and calculating the weighted sum for each alternative. The alternative with the highest score is chosen.
2. Analytic Hierarchy Process (AHP): The solution space is structured hierarchically, where criteria and sub-criteria are compared pairwise to determine their relative importance.
NM- That is, the alternative with the highest score is chosen
3. TOPSIS (Technique for Order Preference by Similarity to Ideal Solution): The solution space is defined by identifying the ideal and anti-ideal solutions. Alternatives are ranked based on their distance from these solutions.
4. PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation): The solution space is defined by preference functions that compare alternatives based on criteria.
NM- The best alternative is selected according to compliance with the preference functions
5. CoCoSo (Combined Compromise Solution): This method uses a compromise solution space where alternatives are ranked based on aggregated measures”
NM – That is, it uses an aggregating multiplication rule
NM – Linear Programming (LP)
The best alternative is the one that best satisfies all the criteria, and it is defined by the vertex of the feasible polygon to which the objective-function line is tangent.
As you can see, all methods generate solution spaces, and from there the best solution is extracted by different means, not always following linearity (except in LP), and altered by different procedures. This could explain why the same problem, solved by different methods, gives different rankings.
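As a small check of this, here is an invented matrix on which the Weighted Sum Model and TOPSIS, fed with the same data and the same weights, already disagree on the best alternative (the margins are thin, but the rankings differ):

```python
import numpy as np

X = np.array([[3., 10.], [10., 4.], [7., 7.1]])    # hypothetical alternatives A1..A3
w = np.array([0.5, 0.5])                           # equal weights, both benefit criteria

# WSM: sum-normalize each column, then take the weighted sum.
wsm = ((X / X.sum(axis=0)) * w).sum(axis=1)

# TOPSIS: vector normalization, distances to the ideal and anti-ideal points.
V = (X / np.linalg.norm(X, axis=0)) * w
d_pos = np.linalg.norm(V - V.max(axis=0), axis=1)
d_neg = np.linalg.norm(V - V.min(axis=0), axis=1)
top = d_neg / (d_pos + d_neg)

print("WSM ranking   :", (-wsm).argsort() + 1)     # A2 first
print("TOPSIS ranking:", (-top).argsort() + 1)     # A3 first
```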
5- Page 11 “For the STD, Entropy, and CRITIC approaches, one can see a large difference between the most significant and less significant weights. In contrast, for the MEREC”
This is obvious, because STD, Entropy and CRITIC refer to the solution of the same problem, whereas MEREC solves different problems by eliminating criteria; consequently, there is a different matrix each time.
Table 3 on page 12 shows that Entropy and CRITIC, both scientific methods, display high similarity, because both work with discrimination. In my opinion, CRITIC is more complete, because it determines the STD, or discrimination, and in addition the correlation between criteria; that is, it takes into consideration redundant information in two criteria.
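For the reader, a minimal sketch of the CRITIC idea on invented data: the weight of a criterion grows with its standard deviation (discrimination) and with how little it is correlated with, that is, how little it is redundant with, the other criteria.

```python
import numpy as np

# Hypothetical 5 x 3 decision matrix, benefit criteria (figures invented).
X = np.array([[250., 16., 12.],
              [200., 16.,  8.],
              [300., 32., 16.],
              [275., 32.,  8.],
              [225., 16., 16.]])

R = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))   # min-max normalization
sigma = R.std(axis=0, ddof=1)            # discrimination within each criterion
corr = np.corrcoef(R, rowvar=False)      # correlation (redundancy) between criteria
C = sigma * (1 - corr).sum(axis=0)       # information carried by each criterion
print((C / C.sum()).round(3))            # CRITIC weights
```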
6- “The similarity coefficient of the WS rankings was used to test the similarity of the obtained rankings”
And what is this information good for?
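For completeness, this is the WS rank-similarity coefficient as I understand it from the literature (rankings below are invented): it only measures the agreement between two rankings, weighting the top positions more heavily, and says nothing about how close either ranking is to reality.

```python
import numpy as np

def ws_coefficient(x, y):
    """WS rank-similarity coefficient: differences at the top of ranking x
    are penalised more heavily than differences at the bottom."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    return 1 - np.sum(2.0 ** (-x) * np.abs(x - y) /
                      np.maximum(np.abs(x - 1), np.abs(x - n)))

# Two pairs of hypothetical rankings of five alternatives (1 = best).
print(ws_coefficient([1, 2, 3, 4, 5], [1, 2, 3, 5, 4]))   # swap at the bottom: ~0.97
print(ws_coefficient([1, 2, 3, 4, 5], [2, 1, 3, 4, 5]))   # swap at the top:    ~0.79
```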
7- Page 17 “The phenomenon of rank reversal occurs when the relative ordering of alternatives changes unexpectedly after the addition or removal of an alternative, raising concerns about the stability and reliability of the decision-making process”
You are right about unexpected changes, and this is due to the random nature of RR; that is, it may or may not occur, and this is unpredictable, because it depends on the characteristics of the vector added and its intersections with the vectors of the precedent matrix.
“To address this limitation, we propose a novel approach-COCOCOMET-which is resistant to rank reversal.”
I doubt this. You can perhaps test what you say by successively increasing the number of alternatives; probably you will find that, say, 2, 3 and 4 alternatives indeed leave the ranking invariant, but the ranking may change when you add one more. Why? Because each time you add an alternative to a matrix you increase the amount of information, and the ranking computed with the new alternative does not simply extend the old one, because the information of the previous alternatives is already embedded in the result for the enlarged problem.
As an example, think of a square: it gives you information about two dimensions (2D); add a new dimension and you get a cube (3D), which already contains the information the square gave you. RR is produced by the different topologies that appear with each new addition, and consequently no MCDM method can escape it. I did this exercise expanding from 2D to 10D, and it happens as described.
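To illustrate how easily this happens, here is an invented example with plain TOPSIS (vector normalization, equal weights, benefit criteria): adding a fourth alternative completely reverses the order of the original three.

```python
import numpy as np

def topsis_closeness(X, w):
    """Plain TOPSIS: vector normalization, benefit criteria only."""
    V = (X / np.linalg.norm(X, axis=0)) * w
    d_pos = np.linalg.norm(V - V.max(axis=0), axis=1)
    d_neg = np.linalg.norm(V - V.min(axis=0), axis=1)
    return d_neg / (d_pos + d_neg)

w = np.array([0.5, 0.5])
X3 = np.array([[3., 10.], [10., 4.], [7., 7.]])     # A1, A2, A3
print(topsis_closeness(X3, w).round(3))             # A2 > A3 > A1

X4 = np.vstack([X3, [20., 1.]])                     # the same matrix plus A4
print(topsis_closeness(X4, w).round(3))             # among A1..A3, now A1 > A3 > A2
```

The new alternative changes the normalization constants and the ideal and anti-ideal points, that is, the topology of the solution space, and the old ordering is not preserved.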
8- Page 17. “Generating a ranking of alternatives”
Since when do criteria define alternatives? How can you select criteria if you do not know which alternatives they are going to evaluate? Of course, this approach is equivalent to 'putting the cart before the horse'.
9- Page 20 “Future research should focus on exploring optimization strategies or heuristic approaches to ensure the framework remains efficient for large-scale decision-making problems”
This is interesting. How do you evaluate efficiency? For me, efficiency can be computed by determining to what percentage a criterion is achieved. Remember that a criterion is an objective; therefore, you need to establish a target, a goal to achieve. This can be easily and mathematically done using Linear Programming, which works with targets, but I do not imagine how you can do that with the more than 200 existing MCDM methods. None of these methods, except PROMETHEE, LP and SIMUS, considers resources. Remember that criteria forcefully rely on resources, such as money, manpower, water, contamination allowances, etc., and that the purpose of MCDM is to select an alternative, subject to a set of criteria, that optimizes the use of resources. Consequently, MCDM can also be defined as selecting the alternatives that make the best use of the available resources.
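A minimal sketch of what I mean, with invented figures: in LP the optimum sits at a vertex of the feasible polygon, and the percentage of each resource actually consumed can be read directly, which is one natural way to speak of efficiency.

```python
import numpy as np
from scipy.optimize import linprog

# Invented example: quantities x1, x2 of two sensor packages, maximizing coverage
# subject to two resources (budget and installation hours). linprog minimizes,
# hence the negated objective.
c = np.array([-40., -55.])              # coverage per unit of each package
A_ub = np.array([[300., 450.],          # cost per unit          -> budget limit
                 [  3.,   2.]])         # install hours per unit -> manpower limit
b_ub = np.array([9000., 48.])           # available resources

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
used = A_ub @ res.x
print("plan:", res.x.round(2), "total coverage:", round(-res.fun, 1))
print("resource usage (%):", (100 * used / b_ub).round(1))   # achievement per resource
```

In this invented case the optimum is exactly the vertex where the two resource lines intersect, so both resources are used at 100 percent; with other data the percentages would show the slack of each resource.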
10- Page 20 “The findings indicate that the proposed framework effectively identifies optimal UAV sensors, providing a structured and adaptable approach for agricultural applications”
How do you know that they are optimal? Let me remind you that in MCDM optimality is a myth, since you cannot ask at the same time for the maximum benefit and the minimum cost. You must be looking for a compromise solution, a balance.
11- Page 21 “Furthermore, the study highlights the importance of incorporating multiple evaluation techniques to achieve more reliable and consistent results. Sensitivity analysis and comparative evaluations demonstrate that the proposed model maintains its effectiveness across various weighting scenarios, reinforcing its practical applicability”
You can say that if you wish, but on what grounds? Who says that multiple evaluation achieves more reliable results? Where is the demonstration of this? In my opinion these are only assumptions without any mathematical support.
Again, what is effectiveness? You never defined it.
These are my comments. I hope they can be useful.
Nolberto Munier