# 209
Dear Renzo C. Bertuzzi, Sajid Siraj, and Ludmil Mikhailov,
I read your paper "Sensitivity Analysis Techniques for Enhancing a Decision Support Tool".
My comments:
1- In the abstract you say “In AHP, the relative importance of alternatives and criteria is assessed by using the Pairwise Comparison (PC) method, in which two criteria are compared at a time and the judgments used to elicit preferences of a DM”
In AHP there are no judgments, but preferences based on intuition.
2- “The three major types of SA- numerical incremental analysis, probabilistic simulation and mathematical mode”
Good start; it is the first time I have seen SA in AHP addressed seriously. Just a little observation: remember that SA works with both increments and decrements, not only with increments.
3- Page 3: "In practice, DMs may not feel confident about the provided judgments, for example, the judgments may be of subjective nature, or might have come from a group decision where members have different opinions [18]. In such cases, it is desirable to run a sensitivity analysis (SA) on the results to analyse how sensitive the solution is to the changes in input data".
SA is not designed to strike some sort of balance when different DMs have different opinions. SA is designed to determine whether a solution is strong or not, and this strength depends on the allowable variations of some basic criteria, which are not necessarily the criteria chosen for their weight.
To understand this, consider that a criterion cannot be increased or decreased at will or as much as the DM wishes; it has an upper and a lower limit. As long as the criterion varies within these limits, the alternative's position is preserved; if the criterion is increased or decreased beyond either of these two limits, there is rank reversal, which is different from the RR that occurs when the number of alternatives is increased or decreased.
Consequently, for a criterion that can vary, say, between -0.5 and 0.7, DM values that fall within this interval (increase or decrease) will not alter the ranking. Thus, we can have three DMs with values 0.1, 0.4 and 0.3, and since the three values are within 0 and 0.7, all of them are valid. However, if the criterion calls for maximizing, the worst estimate is 0.1, because even a little variation may produce rank reversal (see the sketch below). And since in AHP the DM knows neither which of the criteria are the basic ones nor the allowable variation of each one, SA in AHP is a futile exercise.
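To make the interval idea concrete, here is a minimal sketch in Python. The limits (0 and 0.7) and the three DM estimates are the hypothetical numbers from the example above; in a real problem these limits would have to come from an analysis of the actual model, which is precisely what AHP does not supply.

```python
# Minimal sketch of the allowable-variation idea, using the hypothetical
# numbers from the example above.

LOWER, UPPER = 0.0, 0.7                         # assumed allowable limits

dm_estimates = {"DM1": 0.1, "DM2": 0.4, "DM3": 0.3}

for dm, value in dm_estimates.items():
    valid = LOWER <= value <= UPPER             # ranking preserved inside the interval
    margin = min(value - LOWER, UPPER - value)  # distance to the nearest limit
    print(f"{dm}: value {value:.1f} -> valid: {valid}, margin to a limit: {margin:.1f}")

# All three estimates are valid, but DM1's 0.1 leaves only a 0.1 margin:
# a small further variation crosses the limit and reverses the ranking.
```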
OAT, also called 'ceteris paribus' in economics, is useless, because it is irrational to vary only one criterion while holding the others constant. All basic criteria, whether calling for max or min, even mixed, must be considered jointly, as the sketch below illustrates.
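The failure mode is easy to reproduce with a toy weighted-sum model (all scores and weights below are hypothetical, not taken from the paper): each weight, varied alone by 0.15, preserves the ranking, yet the same three variations applied jointly reverse it, so OAT reports a robustness that does not exist. Renormalizing the perturbed weights would not change the leader, since that is only a positive scaling.

```python
# Toy weighted-sum example (hypothetical numbers) showing how OAT can miss
# a rank reversal that joint variation of the criteria weights exposes.

scores = {"A": [0.8, 0.5, 0.5],
          "B": [0.2, 0.7, 0.7]}
weights = [0.4, 0.3, 0.3]

def leader(w):
    totals = {alt: sum(wi * si for wi, si in zip(w, s)) for alt, s in scores.items()}
    return max(totals, key=totals.get)

print("Base leader:", leader(weights))                               # A

# OAT: change one weight at a time, holding the others constant.
for k, delta in [(0, -0.15), (1, +0.15), (2, +0.15)]:
    w = list(weights)
    w[k] += delta
    print(f"OAT, weight {k} changed by {delta:+.2f} ->", leader(w))  # A every time

# Joint variation: apply the same three changes simultaneously.
w = [weights[0] - 0.15, weights[1] + 0.15, weights[2] + 0.15]
print("Joint change ->", leader(w))                                  # B: rank reversal
```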
4- Page 3: "A number of studies have reviewed MCDM/AHP software tools from different perspectives; however, no study appears to have focused on SA for evaluating their features".
This is incorrect; the SIMUS method has proposed and applied this since 2011.
5- Page 5: your table should be placed elsewhere, not here, since after it you continue explaining the example whose matrix is posted above.
6- Page 8: "For large problems with several criteria and alternatives, analysing only one element at a time may not provide enough information for making a decision".
Absolutely correct, and not only for large problems. However, if you recognize this, why are you using OAT?
7- Page 10 “The random weights are generated in each iteration, and the overall rankings are calculated for each set of weights”
Not exactly. Iteration is when you use the result of a previous run to improve it in a new run. What you do in simulation is repetition, as the sketch below makes explicit.
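The distinction is visible in the structure of the loop. In the sketch below (a hypothetical two-alternative model), every run draws fresh random weights and nothing is fed back from one run to the next: that is repetition. An iteration would instead update the weights from the previous result.

```python
import random

# Hypothetical decision matrix: two alternatives scored on three criteria.
scores = [[0.8, 0.5, 0.5],
          [0.2, 0.7, 0.7]]

def best(w):
    totals = [sum(wi * si for wi, si in zip(w, row)) for row in scores]
    return totals.index(max(totals))

# Repetition: each run is independent of all previous runs.
wins = [0, 0]
for _ in range(10_000):
    raw = [random.random() for _ in range(3)]
    w = [r / sum(raw) for r in raw]          # normalize weights to sum to 1
    wins[best(w)] += 1
print("Wins per alternative over 10,000 repetitions:", wins)

# An iteration, by contrast, would feed the result back, e.g.:
#   w = update(w, best(w))   # hypothetical update rule; each run depends on the last
```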
8- Page 1, Fig. 3: you present a graphic that probably nobody understands, since you do not even explain it. It gives the same information as the initial matrix does; however, you say nothing about the necessity of using an MCDM method like TOPSIS. What, then, was its purpose?
You say: "It represents the minimum change that must be induced on the weights to cause a rank reversal".
Could you please indicate where this assertion comes from? The minimum change arises when the allowable variation of a basic criterion is near zero; that is, 0.1 is a very small allowable variation that may cause rank reversal. Where is this reflected in Figure 3?
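For reference, under a plain additive (weighted-sum) model the minimum single-weight change that ties the top two alternatives has a simple closed form: if the leader's advantage is D and criterion k separates the pair by gap_k, the tying change is -D / gap_k. The sketch below uses hypothetical numbers and is only my reading of the phrase; if Figure 3 encodes something else, that derivation should be stated in the paper.

```python
# Sketch: minimum single-weight change that ties the top two alternatives
# under a weighted sum (hypothetical numbers; the tie is the reversal point).

scores = {"A": [0.8, 0.5, 0.5],
          "B": [0.2, 0.7, 0.7]}
weights = [0.4, 0.3, 0.3]

totals = {alt: sum(w * s for w, s in zip(weights, row)) for alt, row in scores.items()}
(top, t1), (runner_up, t2) = sorted(totals.items(), key=lambda kv: -kv[1])[:2]
D = t1 - t2                                   # current advantage of the leader

changes = {}
for k in range(len(weights)):
    gap = scores[top][k] - scores[runner_up][k]
    if gap != 0:
        changes[k] = -D / gap                 # change in w_k that equalizes the totals

k_min = min(changes, key=lambda k: abs(changes[k]))
print(f"{top} leads {runner_up} by {D:.2f}; "
      f"minimum change: {changes[k_min]:+.2f} on weight {k_min}")
```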
9- Page 16: I am really puzzled. Why are you interested in the number of rank reversals you can have? What does this really mean, whatever the method used? Thinking about it, since it is done in a simulation, a large number of rank reversals is indeed undesirable, because it means that there are many opportunities for reversal to occur, and I do not think that is ideal. The sketch below shows what such a count does and does not tell you.
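What such a count measures can be seen in a few lines (hypothetical model again): the simulation below reports how often the leader changes under random weights, but the count says nothing about which basic criterion has the narrow allowable interval that actually produces those reversals.

```python
import random

# Hypothetical two-alternative model and base weights.
scores = [[0.8, 0.5, 0.5],
          [0.2, 0.7, 0.7]]
base_weights = [0.4, 0.3, 0.3]

def best(w):
    totals = [sum(wi * si for wi, si in zip(w, row)) for row in scores]
    return totals.index(max(totals))

base = best(base_weights)
reversals = 0
for _ in range(10_000):
    raw = [random.random() for _ in range(3)]
    w = [r / sum(raw) for r in raw]
    if best(w) != base:                      # leader differs from the base run
        reversals += 1
print(f"Rank reversals in 10,000 repetitions: {reversals}")

# The frequency of reversals is measured, but not the identity of the basic
# criterion whose small allowable variation causes them.
```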
Regarding OAT, I would not even consider it, because it is unrealistic. As an example, suppose you are designing a car aiming at low price, high speed, high acceleration, comfort, etc., and assume that you choose speed as the most important criterion. This is useless, because you may design a high-speed car, but not at a low price, and one that does not offer high comfort or good acceleration. All these criteria must be considered at the same time.
When you are in SA, the MCDM method may have selected, say, alternative D; but normally an alternative is affected by several criteria at the same time, and these form a set called 'basic criteria'; the others are irrelevant. If only one of the basic criteria has a very small allowable variation, the alternative's position is at risk, no matter whether the other criteria have a large variation.
Rank reversal in SA depends, as said, on the length of the allowable variation of each basic criterion, and simulation cannot detect that.
These are some of my comments.
Hope they can be of help.
Nolberto Munier