Alessandro Ferrarini Thank you for your very helpful answers. Besides, in my point of view, the purpose of sensitivity analysis is to determine how different values of independent variables (inputs) affect particular dependent variables (outputs), in order to demonstrate the comprehensiveness and robustness of the results.
Thanks for your advice Mahmut Baydaş. What would be a solution when the result of the sensitivity analysis shows that the rankings of the alternatives differ significantly from each other?
In my humble opinion, it is difficult to make out the meaning of the difference you mention unless there is an objective criterion of justification. It is not known which of the MCDM methods is most suitable, and this is a half-century-old problem. One of the people most aware of this problem is Mr. Nolberto Munier. Moreover, I know that he has developed implicit objective criteria. In addition to his answers to questions here on RG, you can benefit from carefully reading the discussions of sensitivity analysis in his scientific articles. On the other hand, which inputs did you change, how did you change them, and what were the results? If you can share a few details, maybe I can help.
Mahmut Baydaş I used the weighted aggregated sum product assessment (WASPAS) with triangular fuzzy numbers, an extension of WASPAS, which is one of the more robust and comprehensive MCDM techniques; it is based on aggregating the weighted sum model (WSM) and the weighted product model (WPM).
In the integrated utility function, the value of "lambda" (i.e., the WASPAS coefficient, or trade-off parameter) is set equal to 0.5 for the initial analysis.
In the sensitivity analysis, the outcomes of the proposed methodology are discussed by varying the coefficient "lambda" over the range from 0 to 1.
The sensitivity results fall into two parts relative to the base case ("lambda" = 0.5): the range of "lambda" values below 0.5 and the range above 0.5.
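For readers who want to see the mechanics, here is a minimal sketch of that kind of lambda sweep. The decision matrix and weights are hypothetical placeholders, and the sketch uses crisp WASPAS rather than the triangular-fuzzy extension actually applied above:

```python
import numpy as np

# Hypothetical, already-normalized decision matrix: 4 alternatives x 3 criteria
# (crisp values; the actual model above uses triangular fuzzy numbers).
R = np.array([
    [0.90, 0.70, 0.60],
    [0.80, 0.95, 0.55],
    [0.70, 0.80, 0.90],
    [0.60, 0.85, 0.75],
])
w = np.array([0.5, 0.3, 0.2])          # hypothetical criteria weights (sum to 1)

def waspas_scores(R, w, lam):
    """Joint WASPAS score: Q_i = lam * WSM_i + (1 - lam) * WPM_i."""
    wsm = (R * w).sum(axis=1)           # weighted sum model
    wpm = np.prod(R ** w, axis=1)       # weighted product model
    return lam * wsm + (1 - lam) * wpm

# Sensitivity analysis: vary lambda from 0 to 1 and record the ranking each time.
for lam in np.linspace(0.0, 1.0, 11):
    q = waspas_scores(R, w, lam)
    ranking = np.argsort(-q) + 1        # 1-based alternative indices, best first
    print(f"lambda = {lam:.1f}  ranking: {ranking.tolist()}")
```

A sweep like this simply shows at which lambda values, if any, the ranking flips; how to judge such flips is exactly the question discussed below.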
Dear colleague, you can write to me via DM. It would be nice if you could provide information about the data of the problem, its purpose, alternatives, criteria and weighting methods.
I am afraid that I do not agree with your comments.
1. Subjective criteria weights are perhaps appropriate for determining the relative significance between criteria. However, they are inappropriate for evaluating alternatives, because they do not indicate the evaluation capacity of each criterion, that is, the discrimination among each criterion's values, which is what is needed to evaluate alternatives (Shannon's theorem); objective weights do provide this (see the sketch after this list).
2. You have to consider all criteria simultaneously, not one by one as you suggest.
3. Sensitivity to data variation is not related to sensitivity to criteria variation.
4. If you get, using method M1, that alternative A3 (selected by that method) is extremely sensitive, and this contrasts with M2, for which alternative A1 is the best, what conclusion can you extract from that comparison?
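To make point 1 concrete, here is a minimal sketch of entropy-based objective weights; the decision matrix is hypothetical, and the point is simply that a criterion whose values barely differ across alternatives receives a near-zero weight, whatever a subjective judgement might say:

```python
import numpy as np

# Hypothetical decision matrix: rows = alternatives, columns = criteria.
X = np.array([
    [100.0, 7.0, 3.1],
    [400.0, 7.1, 8.4],
    [250.0, 6.9, 5.5],
])

# Shannon entropy of each criterion's value distribution (column-wise).
P = X / X.sum(axis=0)                          # proportions within each column
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))

# Objective weight = normalized degree of diversification (1 - E):
# criterion 2 (values 7.0, 7.1, 6.9) barely discriminates, so its weight is tiny.
w_obj = (1 - E) / (1 - E).sum()
print(np.round(w_obj, 4))
```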
I read it thoroughly and, as I understand it, you perform the sensitivity analysis using the lambda coefficient.
I believe that this coefficient could be very useful for determining the partial significance of criteria, but not for sensitivity analysis.
You identify it as a trade-off; if that is so, it is not a weight, and then you cannot use it for sensitivity analysis. Trade-offs are useful for making gain-loss balances between criteria, but not for ranking them. The fact that increasing a criterion by, say, 5% means that you have to deduct that 5% from another criterion or criteria does not mean that the first criterion is better or worse than the others. This comes from the AHP method, where Saaty said that it is 'assumed' that trade-offs are equal to criteria weights.
Even if they were, you cannot use a subjective weight to measure the importance of a criterion for evaluating alternatives. This is because, when evaluating alternatives, the importance of a criterion is given by its information content (Shannon's theorem), and a subjective weight does not convey that capacity.
Thanks for your very helpful explanation. I now understand that "lambda" is useful for making gain-loss balances between criteria but not for their ranking.
Can you suggest some ideas for doing a sensitivity analysis on the change of alternatives in this case?
Besides, can you help clarify the difference between "sensitivity analysis" and "comparative analysis" in MCDM, and when each is needed?
While comparative analysis and sensitivity analysis are different things, they have a common area of intersection and are sometimes confused. Both are about results.
I think sensitivity can be measured by standard deviation. For example, when we increase the number of alternatives, the measurement sensitivity of some methods decreases and their results deviate compared to other methods; in my data, GRA is an example. Sometimes it is the other way around, and some methods are positively affected by an increase in the number of alternatives; TOPSIS is a good example. On the other hand, there are methods that are negatively affected by the number of criteria, and AHP may not be consistent with 15 criteria.
If we leave everything else aside, the sensitivity analysis we do is actually doubtful, because we do not know the truth. Sensitivity analysis would be more meaningful if the correct or appropriate ranking were known. Under these conditions, we try to understand, by indirect means, a phenomenon that the eyes cannot see.
Dear colleague Dang, in these circumstances I think deviations are a criterion for sensitivity analysis. If you are doing a sensitivity analysis, I think you should add other methods, because I see a benchmarking problem in your work. For example, you can compare the mean of the PROMETHEE, VIKOR, TOPSIS and SAW methods with the lambda-based WASPAS results, look at the deviation, and interpret it. I hope you will get satisfactory results.
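A rough sketch of the deviation check suggested above could look like the following; the rank vectors are made-up placeholders standing in for the outputs of PROMETHEE, VIKOR, TOPSIS, SAW and the lambda-based WASPAS, since running those methods is beyond the scope of the sketch:

```python
import numpy as np

# Made-up rank vectors (rank of each of 5 alternatives) standing in for the
# outputs of several MCDM methods; replace with your real results.
ranks = {
    "PROMETHEE": [1, 2, 3, 4, 5],
    "VIKOR":     [2, 1, 3, 5, 4],
    "TOPSIS":    [1, 3, 2, 4, 5],
    "SAW":       [1, 2, 4, 3, 5],
    "WASPAS":    [2, 1, 3, 4, 5],   # e.g. the lambda = 0.5 ranking
}
R = np.array(list(ranks.values()), dtype=float)   # methods x alternatives

# Mean rank of each alternative over all methods, and how far the methods
# disagree on each alternative (standard deviation per alternative).
mean_rank = R.mean(axis=0)
sd_per_alt = R.std(axis=0)
print("mean rank per alternative:", mean_rank)
print("SD per alternative:       ", np.round(sd_per_alt, 2))

# Deviation of the WASPAS ranking from the mean of the other methods.
others = np.array([v for k, v in ranks.items() if k != "WASPAS"], dtype=float)
dev = np.abs(np.array(ranks["WASPAS"]) - others.mean(axis=0))
print("WASPAS deviation from consensus:", np.round(dev, 2))
```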
You are right: comparative analysis and sensitivity analysis (SA) are different things.
Of course, both refer to results, but in a different sense.
The first looks for similarities among the rankings of different methods, while the second looks for the degree of sensitivity of the best alternative chosen.
I do not understand your concept of measuring sensitivity analysis. Measuring what? If you refer to the measure of the sensitivity of the best solution, it can be measured by knowing the range of variation of the intervening criteria.
I do not think that the standard deviation (SD) has any role in SA. SD has a fundamental role in determining the capacity of criteria for evaluating alternatives, but it does not participate in SA. When you increase the number of alternatives, what sometimes happens is that not all alternatives receive scores, because not all alternatives can comply simultaneously with all criteria.
When you say that the results deviate when comparing several methods, I am not sure that this is true, since I do not see any reason for it if the new set is subject to the same set of criteria as before. What in my opinion can happen is that some original scores corresponding to the former alternatives are eliminated, because the new alternatives may comply better with all criteria.
I don’t know about GRA and I don’t see that TOPSIS is an example. Could you explain why?
I prefer not to talk about AHP, because you know my opinion about this method, where decisions are made on the basis of inventions without any support.
As a matter of fact, if we assume that an alternative ranking is the best, although we do not know if it is the truth, we know exactly the reliability of that ranking, provided we consider which criteria are involved in that result and the limits within which each criterion is allowed to vary; and, by the way, you can SEE it using the total utility curve for each objective. I have illustrated this many times, with examples, and it is also in my books. It is easily done using SIMUS for multiple objectives.
Again, I am afraid that SD has nothing to do with this. However, if prior to the SA we compute the SD of each criterion, we will be able to know which is the most significant criterion, but that does not mean that we know which criteria participate and which do not.
I understand that you can, of course, compare the SD of each ranking corresponding to the different methods; that is not a problem, but what do you do with those different values?
I have a similar approach, based on an unproven hypothesis: computing the entropy weights, it is reasonable to think that the project with the lowest entropy, and therefore the highest information content, must be considered the best, because it gives the maximum amount of information about the whole system.
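To make that hypothesis concrete, a minimal sketch could compute the Shannon entropy of each alternative's normalized score profile and flag the lowest-entropy (highest-information) one; the score matrix is hypothetical, and this is only one reading of the idea, not a validated procedure:

```python
import numpy as np

# Hypothetical normalized performance scores: rows = alternatives, cols = criteria.
S = np.array([
    [0.70, 0.10, 0.20],
    [0.34, 0.33, 0.33],
    [0.50, 0.30, 0.20],
])

# Shannon entropy of each alternative's score profile (row-wise).
P = S / S.sum(axis=1, keepdims=True)
H = -(P * np.log(P)).sum(axis=1) / np.log(S.shape[1])   # normalized to [0, 1]

best = int(np.argmin(H))   # lowest entropy = most concentrated information
print("entropy per alternative:", np.round(H, 3))
print("candidate best alternative (0-based index):", best)
```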
Thank you for your very detailed explanation. I now really understand the difference between comparative analysis and sensitivity analysis: "The first looks for similarities among the rankings of different methods, while the second looks for the degree of sensitivity of the best alternative chosen."
About AHP, as Mahmut Baydaş mentioned, it is not easy to achieve consistency with a large number of criteria. That is why I only used it to determine the weights of the criteria (not for ranking the alternatives).
I also thank you for your suggestion about the "entropy weighting method". I will check it in my model.
According to Dang's statement, "In the sensitivity analysis, the 'lambda' coefficient value is varied over the range from 0 to 1, and the results of the proposed methodology are discussed." He stated that the deviation between 0 and 0.5 is high and that he obtained more stable results between 0.5 and 1. Mr. Munier, I agree with the theoretical points you have explained, but what is your solution for the different 'lambda' values in this case?
Thank you for considering that I can answer your question.
However, I do not have an answer, because I do not know in detail the method our friend Thanh posed.
I already mentioned to Thanh that it is improbable that the selected alternative holds beyond lambda = 0.5, because most probably the intervening criteria decrease their lambda as the lambda of one criterion is changed; their margins of variation then decrease simultaneously, until a moment when the selected criterion ceases to be important for the best selection, and a new criterion becomes responsible, most probably producing a change in the initial ranking.
This can be clearly seen when the original straight line of a certain objective changes to another straight line with a lower slope, corresponding to a lower marginal cost, thereby originating a convex curve, which represents the utility curve.
Trying to answer your specific question, and always according to my reasoning, it does not appear natural that the same procedure produces changes of alternatives for lambda values below 0.5, which agrees with what I said above, and then suddenly remains constant for 0.5 < lambda < 1. I would check this.
I am attaching the resulting curves produced by SIMUS/IOSA, published in one of my books, for solving a problem where the best alternative was subject to two criteria, C3 and C5, and their simultaneous variation. The curve on the left is from increasing criteria C1 and C5 at the same time; the one on the right is for decreasing them.
Observe that the curve on the left shows a straight line which, at a certain value, changes to form a convex curve.