Whether a method is better or worse than others should be decided through evaluation of its results. In the MCDM context, how can we compare and evaluate the results of different MCDM methods (e.g., AHP, ANP, SAW, TOPSIS) in a specific context, such as the application of decision-making techniques to supplier selection?
MCDM, Supplier Selection
Dear Hamzeh
You post an interesting and frequently asked question, which unfortunately has no answer, at least from my point of view.
As I have mentioned several times in RG, to evaluate the goodness of a result produced by any method you need to compare it with the 'true' result, which is of course unknown.
If it were known, we would not need MCDM. It would be like using an algebraic formula: we are certain that if we introduce some values into the formula, the result will be correct and probably unique.
This is the reason why I suggest not using the expression 'I obtained good results using the XXX method': you don't know what the best solution is. In such a case a result is just a result, most probably better than one obtained by intuition, but nothing else.
As an example, suppose that you have to decide between building a bridge or a tunnel to cross a river, and your model selects the tunnel. Once built, reality will say whether the decision was right or not. For instance, if traffic increases in the future, maybe the tunnel is not prepared to handle it, because it has no direct ventilation, and there would then be an increase of emissions in the confined tunnel space, something that does not happen on a bridge.
However, this is something that should have been considered in the planning stage; that is, it was necessary to add a criterion regarding contamination and its limits.
In my opinion, there is an indirect way to test a model's selection, and it is related to the effect that a future action, such as the mentioned increase in traffic, may have on the best solution suggested by each model.
From there you may perhaps decide that the alternative selected by model YYY is better because it is less risky than the solution suggested by model XXX.
In the case of supplier selection, I guess that you can perform a similar analysis, examining results regarding, for instance, the effect of delays in supplies, increases in prices, or eventual problems with transportation. Again, these aspects should probably have been included as criteria when designing the initial matrix, and because of that you can possibly make a selection among models, since some of them may not accept limiting resources, too many criteria, or criteria correlation.
Dear Adis
I believe that what you propose is an excellent procedure, especially using Spearman
Maybe too late for this, but we have compared a set of MCDM methods in terms of the rankings they produced.
A comparative analysis of multi-criteria decision-making methods. Blanca Ceballos, María Teresa Lamata, David A. Pelta. Progress in Artificial Intelligence, November 2016, Volume 5, Issue 4, pp. 315–322.
Fuzzy Multicriteria Decision-Making Methods: A Comparative Analysis. Blanca Ceballos, María Teresa Lamata, David A. Pelta. International Journal of Intelligent Systems, Volume 32, Issue 7, pp. 663–753, 2017.
Regards
David
Dear Hamzeh
I have read the excellent and well-documented paper mentioned by David Pelta. They use the Spearman correlation coefficient to evaluate three different methods.
The procedure is correct from my point of view. They use the Spearman correlation coefficient to compare rankings. In my modest opinion, the Kendall rank correlation coefficient is a better choice, because it was designed precisely to compare ranks, which is the essence of the exercise.
I also believe that while this procedure, using either Spearman or Kendall, may allow comparisons, it says nothing to determine which method is the best. To this effect, our group at the University of Valencia has developed a method that allows indirectly reaching a conclusion.
If you or somebody else is interested I can send a copy of our proposed method
Read the following: http://ieeexplore.ieee.org/document/7521387/
Article Comparison of three multicriteria decision-making tools
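As an illustration of the two coefficients discussed here, the sketch below computes both for a pair of hypothetical rankings; the rankings themselves are invented for illustration, and ties are not handled for simplicity.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs (no ties assumed)."""
    pairs = list(combinations(range(len(r1)), 2))
    conc = sum(1 for i, j in pairs if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0)
    disc = sum(1 for i, j in pairs if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0)
    return (conc - disc) / len(pairs)

def spearman_rho(r1, r2):
    """Spearman's rho computed from squared rank differences (no ties assumed)."""
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

rank_xxx = [1, 2, 3, 4, 5, 6]   # ranking of 6 alternatives by method XXX (hypothetical)
rank_yyy = [1, 3, 2, 4, 6, 5]   # ranking of the same alternatives by method YYY (hypothetical)
print(round(spearman_rho(rank_xxx, rank_yyy), 4))   # 0.8857
print(round(kendall_tau(rank_xxx, rank_yyy), 4))    # 0.7333
```

Note that the two coefficients differ for the same pair of rankings: Kendall counts pairwise inversions directly, which is why it is often preferred for comparing ranks.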
Dear colleagues,
First of all, I sincerely appreciate your guidance and answers, they were very helpful.
Dear Pro. Munier
I would appreciate it if you could please share with me the proposed method that you mentioned, and any document that introduces or gives an overview of the Kendall rank correlation coefficient, at the following email address:
Dear Dr. David A. Pelta
I would appreciate it if you could please share with me a copy of your second mentioned paper, because I cannot download it.
Fuzzy Multicriteria Decision-Making Methods: A Comparative Analysis Blanca Ceballos, María Teresa Lamata, David A. Pelta International Journal of Intelligent Systems. Volume 32, Issue 7, pp. 663–753, 2017
Dear Thanh
Thank you for the good papers that you shared with me.
Thank you
Dear Hamzeh
The paper you request is number 282, which you can see here in RG.
If the same result is not obtained by different MCDM methods for a decision-making problem, how can we conclude? Please send answers/research papers to my email.
Dear John
If starting with the same problem, using mathematical tools and aiming at the same objective, the results are different, we must conclude that there is something that perturbs the process.
It is like using the sum of 2+2+2+2 or using the multiplication 2 x 4. The processes are different but the result is the same. Of course it is more complicated in MCDM.
In my opinion, the main culprit is subjectivity. You can use different methods, but at the end the result should be very similar. However, if each method uses different weights, different assumptions and different approaches, don't be surprised if results are different.
In my opinion, we must start all methods with the same decision matrix and process it without any subjectivity added.
When we get a result that is mathematically correct, the DM is then able to use his expertise and knowledge and, if the need arises, modify the initial matrix. This is the time for subjectivity, not at the beginning.
I am convinced that if all methods followed this procedure, and if each DM made his own modifications in each method after the results are obtained, the results would probably be equal or very similar.
My reasoning is that in this way all DMs start by analyzing results that are mathematically sound; thus all start on solid ground, and then they are able to make modifications according to their own experience and knowledge.
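As a minimal sketch of starting all methods from the same decision matrix with no subjectivity added, here is one (hypothetical) matrix processed with equal weights by two classic methods, SAW and TOPSIS; all numbers are invented for illustration, and only benefit criteria are handled.

```python
import math

def saw(matrix, weights):
    """Simple Additive Weighting: max-normalize each benefit criterion, then weighted sum."""
    maxes = [max(col) for col in zip(*matrix)]
    return [sum(wt * x / m for x, m, wt in zip(row, maxes, weights)) for row in matrix]

def topsis(matrix, weights):
    """TOPSIS closeness coefficients, benefit criteria only."""
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*matrix)]
    v = [[wt * x / nm for x, nm, wt in zip(row, norms, weights)] for row in matrix]
    best = [max(col) for col in zip(*v)]    # positive ideal solution
    worst = [min(col) for col in zip(*v)]   # negative ideal solution
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_neg = math.sqrt(sum((x - w_) ** 2 for x, w_ in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

X = [[7, 9, 9],    # alternative A1 (hypothetical performances)
     [8, 7, 8],    # alternative A2
     [9, 6, 7]]    # alternative A3
w = [1 / 3, 1 / 3, 1 / 3]   # equal weights: no subjectivity added
print([round(s, 3) for s in saw(X, w)])
print([round(c, 3) for c in topsis(X, w)])
```

On this particular matrix both methods happen to produce the same ordering, which is in line with the argument above; with other data and with subjective weights they can easily diverge.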
We can make comparisons between methods in the same application. Each method is used for a specific problematic (selection, ranking, outranking), so the choice of a method depends on the decision problem to be solved.
Dear Abdelkaber
You certainly can make comparisons between the results of methods for the same problem. And what do you get out of it?
Nothing
It is true that each method should be used in accordance with a specific problematic, and from that point of view ELECTRE and PROMETHEE are very well suited, because they have different approaches and objectives. However, outranking is not a specific problem, but a procedure used by ELECTRE and PROMETHEE and partially by SIMUS.
The selection of a method should be based on how well it is able to represent reality. In my opinion, this is the main issue. You can use an elegant method such as TOPSIS or VIKOR, but if they are unable to represent reality, you are solving a fictitious problem, not the real one. Where, then, is the benefit?
The selection of the method will be easier in subsequent applications.
Dear Abdelkader
On what grounds?
If you compare the results of two methods for the same problem, and they are different, how can you use that comparison in a future application?
Sorry, I don't understand
Dear Prof. Nolberto Munier
Thank you for your answer to the issue I am facing in MCDM.
For research collaboration, I want to send my CV and publications to you.
Please send your contact details to [email protected] or to my WhatsApp number +91-9444119988.
With Regards
John Rajan
Dear John
Thank you for your letter.
At present my research is based on ways to improve the MCDM process, making it more reliable, by representing more faithfully real-world scenarios.
I am also very keen in improving sensitivity analysis procedures.
If you are in a similar line, I will be very glad to collaborate with you.
My address is: [email protected]
My WhatsApp number is also available, as well as my Skype address.
Many thanks for the valid question and many responses. As noted, a few similar questions have been posted and replies on this forum. In particular, many thanks to Nolberto Munier who provided comprehensive responses to these posts.
We developed an MS Excel program that incorporates 10 different MCDM methods. It is described in the following paper (which is available as open access from the journal website). Our MS Excel program is available free of cost by writing a mail to [email protected].
Wang Z. and Rangaiah G.P., Application and Analysis of Methods for Selecting an Optimal Solution from the Pareto-Optimal Front obtained by Multi-Objective Optimization, Industrial & Engineering Chemistry Research, 56, 560-574 (2017).
Dear Gade
Thank you for your kind words. I really appreciate it
There are a lot of people collaborating in this forum, and all of us can learn from the others.
My main concern is that there is no MCDM method that can cope with most of the characteristics of the real world. I tried to ameliorate that with SIMUS, which is based on Linear Programming, and its software is available to anybody with no restrictions.
Dear Nolberto Munier
Many thanks for your thoughtful response. Glad to learn that SIMUS is available to interested readers. Will you be able to send me a copy of SIMUS (and related papers) to me at [email protected]?
Dear Hamzeh,
You can find some comparison methods in these papers:
Article Analysing Stakeholder Consensus for a Sustainable Transport ...
https://www.researchgate.net/publication/328227792_Sustainable_Urban_Transport_Development_with_Stakeholder_Participation_an_AHP-Kendall_Model_A_Case_Study_for_Mersin
Article Sustainable Urban Transport Planning Considering Different S...
Article A SYNTHESIZED AHP-SPEARMAN MODEL FOR MENSURATION THE SEGREGA...
Best regards,
Sarbast
Dear Gade
Of course, I will send you the software. In the tutorial you will see very different cases
Many thanks, Nolberto Munier. I look forward to receiving the software to my email: [email protected].
Dear Hamzeh,
First, the selection of the method depends on the problem at hand. In case many methods have been applied, the results must be validated against the realities on the ground. The method that gives results closest to reality will be selected.
Refer the following:
https://www.emerald.com/insight/content/doi/10.1108/BIJ-10-2017-0281/full/html
I think the MODM output results are completely incompatible with those of MADM. But there is a method for unifying the various MADM output results, and I can help you.
Dear Abdelkader
What you say is logical; however, how do you know which method is closer to reality if you don't know reality?
Consider that if you knew it, there would be no need to perform an MCDM analysis.
I think that there is an indirect way to decide if the solution reached is, if not validated, at least representative of a true result. I have tested it with real-life scenarios.
Assume that you have to select between two alternatives, both for the same purpose, and you got A>B.
If you perform a RATIONAL sensitivity analysis, that is, one based not on the criterion that has the largest weight, because that is useless, but only on those criteria that define the best solution, which is the logical procedure, you may find, for instance, that some of those criteria do not have any margin for variation.
It means that the best solution is not stable, since any small variation in one of those criteria may invert the solution, that is, B>A.
Since it is assumed that your example corresponds to a real-world problem, you can check whether this happens in reality, that is, whether that criterion is known for having large variations in prices or is approximately stable.
For instance, using an MCDM method you have selected for export the most convenient product among the several that you manufacture, depending on criteria such as the international price of similar products, the price of the competition, or both at the same time.
If the sensitivity analysis shows that any of the criteria that define your selection, not the others, have no margin for variation, a change in one of them will most probably reverse the selection.
If these variations HAPPEN IN THE REAL WORLD, for instance in the price of oil, then your MCDM model appears to represent reality, and you had better consider another product.
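The 'margin for variation' idea can be sketched numerically. Below, two hypothetical alternatives are scored on two benefit criteria with a weighted sum, and the weight of criterion 1 is scanned downward to see how far it can move before the preference A > B is reversed; all values are invented for illustration.

```python
def margin_to_reversal(perf_a, perf_b, w1_base, steps=1000):
    """Return how much the weight w1 can decrease before the A > B preference flips.

    perf_a, perf_b: performances of A and B on criteria C1 and C2 (weighted-sum model).
    If the preference never flips in [0, w1_base], the whole range w1_base is returned.
    """
    for k in range(steps + 1):
        w1 = w1_base * (steps - k) / steps
        score_a = w1 * perf_a[0] + (1 - w1) * perf_a[1]
        score_b = w1 * perf_b[0] + (1 - w1) * perf_b[1]
        if score_a <= score_b:          # preference inverted (or tied): B >= A
            return w1_base - w1
    return w1_base

# Here A beats B only while w1 > 0.5, so from the base weight w1 = 0.6 the
# margin is small (about 0.1): the best solution is not very stable.
print(margin_to_reversal([0.9, 0.4], [0.6, 0.7], 0.6))
```

A small margin like this one is exactly the warning signal described above: a modest real-world change in that criterion could invert the ranking.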
Selection of one of the Pareto-optimal (non-dominated solutions) involves weighting methods and selection/ranking methods. Our latest MS Excel program (version 2) has 8 weighting methods and 14 selection methods. User can use any of them as well as give his/her own weights for objectives. For more details, see Wang Z., Parhi S.S., Rangaiah G.P. and Jana A.K., Analysis of Weighting and Selection Methods for Pareto-Optimal Solutions of Multi-Objective Optimization in Chemical Engineering Applications, Industrial & Engineering Chemistry Research, vol. 59, pp. 14850-14867 (2020). Our user-friendly program is available free of charge to interested readers of this posting, by sending an email to [email protected].
Today, MCDM methods are mostly evaluated on their abilities and capacities. But it is also important to make comparisons over ranking results. When you know where you are going, how you get there becomes an important issue.
Refer to the following article, Table 2.
Article Agriculture supply chain risks and COVID-19: mitigation stra...
Will give some ideas.
Regards,
RS
This paper presents a comparative analysis of these two methods in the context of supplier selection decision making. The comparison was made based on the factors: adequacy to changes of alternatives or criteria; agility in the decision process; computational complexity; adequacy to support group decision making; the number of alternative suppliers and criteria; and modeling of uncertainty.
https://www.sciencedirect.com/science/article/abs/pii/S1568494614001203
Article AN OBJECTIVE CRITERIA PROPOSAL FOR THE COMPARISON OF MCDM AN...
Dear Hamzeh
I suggest reading my paper number 336 in RG, which addresses that issue and offers a possible solution
Dear Do Duc
I could not access the paper you mention, but I can't understand the relationship between the GINI index and the evaluation of different rankings. Could you please explain?
Thank you
I agree with Nolberto that the comparison is of no use if we do not know the real solution.
However, in some cases, the comparison of rankings or even (vector) compatibility is an asset.
Please see our recent paper published in ESWA, proposing a new aggregation method for group AHP. Kendall and Spearman rank correlations were used to indicate the efficiency of the creation of the new consensual group preference vector, and the Garuti index was applied to compute vector compatibility between the individual preference vectors and the created consensual group vector. The paper also presents interesting simulations to verify that the new aggregation approach performs better than the traditional methods AIP WAMM and AIP WGMM or the recently emerged CPVP technique. Article Comparing aggregation methods in large-scale group AHP: Time...
The results can be compared through the weights obtained from the various MCDM approaches. One has to interpret them well by decoding the obtained weights.
Dear Szabolcs
Thank you for your contribution and the attached paper.
You rightly stated that comparison between rankings is of no use, since we don't know the actual or real answer, but you also said that it can sometimes be useful, and you referred the reader to the attached paper, which I perused.
My question is: OK, assume that it is possible to use an index that can improve comparability; I still fail to understand how it can help us determine which MCDM method, or which ranking, is closest to reality, which is our objective.
Where is the link between each ranking and the real scenario ranking that would allow us to make comparisons? The fact that ranking A is very close to ranking D does not necessarily imply that one of them is the closest to the 'true' ranking.
If we can find such a link, then, we can correlate each ranking with the certain one, using, for instance, Kendall Tau.
In my opinion, there is a way, based on the fact that the initial decision matrix, completed with reliable data (and this discards AHP), gives us hints of what the result could be. The support for this hypothesis lies in the fact that both the real data and the data processed by an MCDM method share mutual information, although in different configurations.
That is, in the initial decision matrix, criteria performance values depend on alternatives, and in processing, alternatives are selected according to criteria; i.e., there is mutual dependence.
Using Venn diagrams, with one circle representing the data and another circle representing the results, there could be an intersection or common space between the real data and a solution found using an MCDM method.
If we can find that common space and measure it, I believe that we could have a valid procedure that measures the mutual dependence between two sets of variables, that is, how much one set of variables tells us about the other set.
Since each set has its own entropy, we could also define it as determining the joint entropy.
This is not mine, it is Information Theory.
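To make the Information Theory idea concrete, here is a minimal pure-Python sketch of mutual information between two discretized variables; the 'data' and 'result' labels below are entirely hypothetical and only illustrate how shared information between two sets can be measured.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits: sum over observed pairs of p(x,y) * log2(p(x,y) / (p(x) p(y)))."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts of X
    py = Counter(ys)             # marginal counts of Y
    return sum(
        (c / n) * math.log2((c / n) * n * n / (px[x] * py[y]))
        for (x, y), c in pxy.items()
    )

# Hypothetical discretized 'real data' states vs. labels from a method's ranking:
data   = ["hi", "hi", "lo", "lo", "hi", "lo"]
result = ["A",  "A",  "B",  "B",  "A",  "B"]
print(mutual_information(data, result))   # fully dependent sets share 1.0 bit here
```

When the two sets are independent the measure drops to zero, so it behaves exactly like the 'common space' in the Venn-diagram picture: the larger it is, the more one set tells us about the other.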
I would very much appreciate it to have your comments either positive or negative, both are valuable
Dear Nolberto,
thank you so much for your reaction, I appreciate it.
My response and paper also referred to a specific case, in which we investigated the efficiency of the aggregation method, and in this case, concordance and compatibility measures are useful. The global (or consensual) preference vector should reflect the individual preference vectors as much as possible. If one aggregation method provides higher (aggregated) concordance and compatibility between each individual preference vector and the consensual vector, then it can be stated that this is a better approach for aggregation.
Your proposal refers to investigating the efficiency of the decision-making method itself (either individual or group evaluations). Actually, I like it very much; I am just not sure whether it has been done already. As I recall, Saaty conducted this type of research in one of his early papers (sorry for not providing the reference). It was about soft drink consumption in the US, and the AHP scores reflected the real soft drink consumption quite well. However, I cannot remember another example, so your suggestion could be a nice research direction.
Szabolcs
Dear Szabolcs
Thank you for your answer and explanation
I understand that your purpose was not aimed at determining the closest ranking to reality, but you were addressing a specific case regarding aggregation.
Now, coming back to the problem of
“How to compare the evaluation results of different MCDM methods?”
posted some time ago by our colleague Hamzeh Mohammd Alabool, I am glad that you like the procedure I proposed. An early version is in RG, under code 336 in my profile.
You said that Saaty conducted this type of research, and I wouldn't be surprised, knowing Saaty's bright mind and assuming that he detected this problem when other methods appeared; unfortunately, I did not find any reference on the Web about it, which of course does not mean that it does not exist, or that Saaty did not publish it.
Do you remember that Saaty proposed a problem consisting, if I remember correctly, of determining the relationship between the sizes of different geometrical forms?
I learned about it when I was doing my master's, and I remember well that I told my professor that in my opinion this example did not prove that AHP can give the right response, and that in addition it was biased, because we were given a picture or visual data of the different forms, and this influenced our judgment.
Sorry, I don't remember what my professor's answer was, but it certainly did not convince me.
I bring this up because I find it similar to what Saaty did in the past, as you mentioned.
It appears obvious that Saaty, or somebody else, knew the result, and if AHP actually found the correct answer, it could have been because the researcher had first-hand information about reality, and thus preferences based on intuition may have been influenced by the real facts.
When I challenged my professor, I already had some experience with MCDM, and my observation was grounded on the fact that in MCDM problems we never have a hint of the answer, because we only have a matrix with data, maybe with hundreds of alternatives and criteria, and of course, we don’t know the result. If we knew we wouldn’t need MCDM methods!
It is for me evident that to answer the question of what method better addresses a scenario, we need to work with two spaces, one related to the scenario and the other related to the rankings, and find if there is an intersection.
If both are independent, there is no way that our solutions, solved by whatever MCDM method, can represent reality.
This is also the reason why I find it incomprehensible, to say the least, that in the AHP method the DM's preferences can be transferred to the real world.
As I said, I will be glad to share my findings with any researcher and try to solve this problem that has been around for decades.
Regards
Nolberto
We have proposed five statistical measures in two different papers. They analyze the MCDM results in various aspects. Both papers are under review and I’d be glad to share them here after the publication.
Dear Sherwin
The only way to compare two procedures is to see how well they approach reality; however, since we don't know reality, what is the purpose of comparing different methods?
Suppose that methods A and B show a high correlation, measured by Pearson, Spearman, Kendall, or any other correlation method.
It only proves that for a certain problem they quite agree in the result. Since the purpose of MCDM is to find a compromise solution, what is that comparison good for?
Does this comparison show that this is the best solution?
In my humble opinion, what should be done is to compare the rankings of different MCDM methods with a hidden ranking of the problem. That is, find a way to make a real problem 'speak for itself', revealing its internal structure, and get a ranking from it. The MCDM method that best approaches this revealed ranking could be the best.
This information is contained in the data; we should try to extract it.
I imagine the human body, the most complicated and at the same time the most perfect 'machine'.
Physiology is the science of life, and it explains how the different 'parts' of the human body work and are related. I guess that physiology assigns a 'weight' to each of the 78 organs. Probably organs like the heart, the lungs, the skin, the kidneys, and the pancreas have the highest weights, because we can't live without them, although perhaps the kidneys and lungs have a little less weight, because we can live with only one of them.
Thus, if we consider the 78 organs as 'alternatives' subject to many criteria related to life, we could theoretically apply MCDM and determine a ranking of organs, albeit most probably with little difference in weights, because all organs are necessary.
Probably this is a silly example, but I believe that it illustrates what I want to say. Considering what we know about the conditions for being alive, it is probable that we can determine this score, starting probably with the heart, since we can live without arms, legs, or nails, but not without the heart, lungs, skin, and others.
Thus, if we can perhaps make our bodies 'talk', we could in the same manner make a problem 'talk' and reveal its 'physiology'.
Dear Prabjot Kaur
Referring to your answer on July 21, please provide details of your 2 papers.
If possible, please summarize the possible answers from your 2 papers for the benefit of researchers following this question.
Thank You.
Dear Prabjot
I second what Gade suggests. I asked the same some time ago.
You mention your papers but don't publish them
Suggest reviewing this paper for MCDM methods:
https://www.sciencedirect.com/science/article/abs/pii/S092583881732827X
Dear Ali
I already sent you my comment on that paper. Did you receive it?
Dear Jerbi
Simulation is an excellent tool, but to use it you need certain data about all the elements intervening and interacting in a project. The same happens with all MCDM methods.
We don't have that.
In my humble opinion, the solution is to somehow compute the evolution of the real data and compare it with the rankings of the different MCDM methods. The ranking closest to that evolution would be the best.
Dear Nolberto
What if we simulate the possible entries of a decision-making process and obtain a much larger-scale pattern than in reality? In AHP, for instance, the dimensions of the simulated matrices or vectors are restricted (from 2 to 9), and the possible entries of the matrices come from the well-known Saaty scale, so there is a great chance for a valid simulation.
Certainly, as you mentioned, the objective here should not be the comparison with real data or decisions. However, if you want to test how a proposed method or technique behaves in many decision-making cases, simulation is a great (if not the best) tool. Due to the large number of cases achieved by simulation, I am more convinced about the efficiency of a technique than by the comparison with real data because the latter might be due to coincidence. Of course, efficiency should be very precisely defined, in my latest research it was the correlation between the individual and global preferences within a group of decision-makers.
Dear Szabolcs
1- Well, simulation could be a good solution if we could jointly simulate all the characteristics of a problem and their interactions; however, it is difficult for me to understand how we could have a larger pattern than reality, since we don't know reality.
That is, it is impossible for us to consider all the components of reality; we know some, but probably not all of them. In addition, in real-world problems, matrices are normally much larger. I have worked (using SIMUS) with matrices of 120 projects and 12 criteria. Since there are no 'adjustments' to obtain transitivity, it takes less than 3 minutes to solve such a problem.
If we make, say, 5000 simulations, most probably we can get a convergence of results, but what does it mean? What is it good for?
If we know the true result, it is a different problem, because we have something to use as a yardstick. As you know, this is the procedure used in supervised AI.
I understand that the 2-9 restriction is a convenience based on the psychologist George Miller's hypothesis, which refers to capacity limits in information processing, suggesting that capacity is limited to about seven units, plus or minus two. Not too practical indeed.
2- Yes, the Saaty fundamental scale is very well known, but also criticized, though not by me, since I never discussed it. This is the first time that I do.
As you know, it is based on two psychophysical laws, from Weber and Fechner, addressing the relationship between stimulus and response. In reality, the purpose of the Saaty scale is to approximate a ratio between two criteria, thus involving two values, by an absolute integer from the Saaty fundamental scale. I recognize that it is an ingenious ploy.
I have three questions:
1- Is it valid to apply these laws when the ratio is completely arbitrary, reflecting the appraisal of the DM regarding importance, and taking the dominated value as the unit?
2- Is it valid to apply the Weber-Fechner equations to convert stimuli into responses?
I don't think that saying that quality is preferred to price is a stimulus. It is simply a value reflecting an intuition.
Saaty equates pair-wise comparisons with the neuronal system. However, there the stimulus is an emotion produced by something objective, like the birth of a son, or subjective, like observing a beautiful sunrise, and it fires a neuron whose response is pleasure.
In pair-wise comparisons it is a number got from nowhere; it does not have any effect on your body or spirit.
Saaty, in Chapter 2 of his paper 'Fundamentals of the Analytic Hierarchy Process', says: 'We assume that the stimuli arise in making pairwise comparisons of relatively comparable activities'. It is only an assumption, and by translating it using Weber-Fechner he tries to convey the notion that his scale is scientifically supported.
3- There is another problem related with the number 9. Saaty explains that it is not a limit, but in cases of large stimuli, the responses are incorrect.
Therefore, I doubt that this scale offers a good chance for valid simulation.
4- Regarding what you say about how a model behaves, I don't think that is the point. We are trying to get reliable solutions independently of how a model behaves, and in my opinion that is impossible, because we don't know which is the 'true' result.
5- Regarding the efficiency of a technique, indeed it could be a coincidence. True, but it is precisely what we are looking for, albeit I would be uneasy and doubtful about a perfect correlation, or even 0.9.
I would say that a correlation of about 0.75 would be a good indicator, and I would use the Kendall Tau correlation coefficient instead of Spearman.
Regards
Nolberto
Dear Nolberto,
please read my paper published in Expert Systems with Applications journal: Duleba, S., Szádoczki, Zs: Comparing aggregation methods in large-scale group AHP: Time for the shift to distance-based aggregation. Expert Systems with Applications, 196, 116667.
This paper gives the response to all your questions hopefully.
Szabolcs
Dear Szabolcs
Thank you for giving me access to your paper.
From the title I realized that it is not related to the aspects we have been discussing, let alone to answering my questions. Notwithstanding, I read it, although not in depth.
I am not trying to avoid this important issue; I simply don't see the relation to what we have discussed.
A little digression:
When I developed SIMUS I also allowed it to work with groups.
However, I don't need to aggregate anything. In SIMUS, each criterion is analyzed by each member of the group, and each one is free to express approval or disapproval regarding its attributes. The criterion under study is then modified according to the different opinions, considered simultaneously: for instance, changing some values, adding a new criterion, or considering for the same criterion both lower and upper levels, also simultaneously; that is, modifying the initial decision matrix. To make it clearer, the new matrix contains the comments of all members of the group, as well as the silence of those who don't see anything wrong in the original matrix.
As you know, in Linear Programming, if the problem is feasible, there is always an optimal solution, showing not only the scores of the alternatives but the OPTIMAL value of the whole system. SIMUS takes advantage of this. This optimal value, which is the sum of the products between the objective factors and the corresponding scores of the alternatives, is identified as Zj.
Once all experts agree that their observations have been registered in the new matrix, the software is run again and yields, or not, a new solution that contemplates all opinions simultaneously, not by their sum, but by considering the influence of each one on the problem.
This Zj is then compared with the former, that is, before the opinions (Zj-1).
If Zj > Zj-1 when the criterion calls for maximization, all opinions are accepted. If the criterion calls for minimization, the new solution is accepted if Zj < Zj-1.
As you can see, after the ‘n’ experts freely express their opinion, the system computes the answer.
When this is done, the second criterion is examined in the same way, and the same for the others, until all of them have been analyzed.
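The Zj acceptance rule described above can be sketched in a few lines. This is a hypothetical illustration of the rule, not the SIMUS software itself, and the objective values are invented:

```python
def accept_group_revision(z_new, z_prev, maximize=True):
    """SIMUS-style acceptance test for a revised decision matrix.

    After the group's observations are registered and the LP is re-run,
    the new objective value z_new is accepted only if it improves on
    z_prev in the direction the criterion calls for."""
    return z_new > z_prev if maximize else z_new < z_prev

# Hypothetical objective values before and after the group's revisions
print(accept_group_revision(128.4, 121.0, maximize=True))   # True: accepted
print(accept_group_revision(97.2, 95.5, maximize=False))    # False: rejected
```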
A very important issue to remember is that the Zj values, whatever they are, are optimal, according to the Linear Programming theorem, provided, of course, that the problem is feasible.
As you can see, the process is completely different from what is done with other MCDM methods.
If you are interested, I can send you the section of one of my books where this is explained, step-by-step in a real problem.
Coming back to your article, I don't see how, as you hope, it answers my questions. As a matter of fact, it does not answer any, since it is not related to our debate. I don't really understand how you thought your article was related to our discussions.
Best regards
Nolberto
Dear Nolberto,
we were discussing the possible role of simulation in AHP. If you read the paper in depth, you will see that it contains simulations of entries, as I mentioned before. You are insisting that simulations have no meaning in AHP because they cannot reflect reality, so the comparison is always pointless (as the original question of this topic suggests).
As you can read in the paper, simulation is a great tool to reach thousands or millions of possible cases of evaluations. Currently, we are examining over 7 million cases of pairwise comparison matrices filled with the Saaty scale and with CR less than 0.1. They do not reflect reality, but they demonstrate possible evaluations on the Saaty scale with tolerable inconsistency, and thus possible responses to decision problems.
If you read the paper thoroughly, you can see that the comparison of the outcomes makes sense because they examine the vector compatibility and rank correlation of the individual vectors and the aggregated global preference vector.
Friendly regards,
Szabolcs
Dear Nolberto,
I am very curious about your response to the preference transitivity issue. If we consider only one aspect at a time (as AHP does), do you think the preferences are transitive? If so you accept a merit of AHP.
Friendly regards,
Szabolcs
Dear Szabolcs
1- Back in the 90s researchers started with simulation in AHP; thus, it is not new.
I am not an expert in simulation, but I don't think it represents the AHP method.
2- OK, assume that we have the weights generated by 20,000 simulations and there is a certain coincidence in, say, 46 runs. What does it mean? It is like having the data posted by 46 different DMs whose rankings are very similar. Granted, it is a good indication that it is the solution of the problem, but then you are back at square one, since you must select one among the 46 results.
3- First of all, do these simulations consider criteria independence?
If they do, the procedure is incorrect, because you can't partition the problem.
Saaty said that this is a system, with which, of course, I agree.
I believe I mentioned that Triantaphyllou, in a paper published in the 90s, was perhaps the first to assert that partitioning, as AHP does, is incorrect, and this is very simple to demonstrate.
4- In my opinion, one of the worst drawbacks of AHP is that weights are determined without considering the alternatives they must evaluate. You can rightly compare quality and price, using a common denominator like personal satisfaction, and establish your preference, but you can't apply that preference to everything, because for some applications, options, or alternatives you may change this preference, based precisely on the characteristics of each alternative. Do your simulations consider this aspect?
5- And yes, I said that simulations cannot reflect reality, and I hold that assertion, and I posted why.
6- By the way, this was not the original question of this topic. The question was: “How to compare the evaluation results of different MCDM methods?”
7- Simulation is a fantastic tool, but, in my opinion, not for evaluations. I read somewhere some time ago that experiments have been performed in this respect, and the results were not encouraging when compared with other MCDM methods. Results from simulation were very different from those of PROMETHEE or TOPSIS, even when the latter were close to each other.
8- Let me remind you that CR
Dear Szabolcs
Referring to your question about my assertions on preferences, I don't understand what you mean. Could you please clarify it?
Anyway, we are talking about preferences when we consider a pair, not only one criterion.
Dear Nolberto,
thank you for your questions; let me respond to them briefly.
You mentioned that simulation is not new in AHP and that many researchers have conducted them, so for me that means it might be worth running them.
Also, please note that the objective of the simulation is important, and it seems you do not consider the real objective and the process of the simulation presented in the paper. The code is described, so please study it if you have time. You will see that CR is not the product of the simulation. You will also see that the proposed method is the novelty, and its efficiency is proved not only by such a low number of matches as you mentioned (46 cases out of 20,000??) but by around 60-70%. As you will see in the code, a comparison is also included regarding the original question of the topic.
Regarding your question number 4, your most serious concern about AHP, I do not agree with you at all. You can evaluate criteria without even knowing the alternatives. If I want to buy a car, I can set up my criteria weights without seeing the assortment. Having seen the possible cars, I can evaluate them based on my weights of the criteria.
Getting back to preference transitivity, please take a look at the example written in one of my previous comments. You evaluate decision elements pairwise, but from ONE aspect at a time; this is the merit of AHP. In this case, the preferences have to be transitive.
Let me wish the SIMUS method as huge a success as AHP has already achieved. Also, I would be happy to read a paper of yours in a top journal falsifying any points of the AHP method. I know you already have a book about it, but we all know that the real success would be one or more top journal papers against AHP.
Thank you for the nice conversation.
Friendly regards,
Szabolcs
SD . You mentioned that simulation is not new in AHP and that many researchers have conducted them, so for me that means it might be worth running them.
NM . I am not discussing the utility of simulation, far from it. It is a magnificent tool, and of course it is worth trying.
SD . Also, please note that the objective of the simulation is important and it seems you do not consider the real objective and the process of the simulation presented in the paper. The code is described so please study it if you have time.
NM . I have read the manuscript again, and I find that your objective is to demonstrate the advantage of one aggregation system over another. Fine, but what does it add to our discussion? And what is 'the code'? This word is not in your manuscript.
By the way, I notice that in your example you can't use AHP, since Time available as well as Speed are clearly related to Directness. The more direct a route, the higher the speed and the less the travel time. I am surprised that you, a known AHP expert, did not realize it; however, I am not surprised that the reviewers of your paper did not notice it. This is something I have been insisting on constantly. Many reviewers either don't know about it or, if they know, let it pass, perhaps as a minor problem. No wonder there are thousands of papers using AHP incorrectly, thus increasing the number of publications.
By the same token, I don't see that there is a linear hierarchy here.
SD . You will see that CR is not the product of the simulation. You will also see that the proposed method is the novelty, and its efficiency is proved not only by such a low number of matches as you mentioned (46 cases out of 20,000??) but by around 60-70%. As you will see in the code, a comparison is also included regarding the original question of the topic.
NM . As a matter of fact, remember that the CR is the ratio between the CI and the RI. The latter, as you know, is the average CI of random matrices generated by simulation.
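For readers following this exchange, the CR computation being discussed can be sketched as follows. This is a simplified illustration that estimates the principal eigenvalue by power iteration; the 3x3 pairwise matrix is invented:

```python
def consistency_ratio(matrix):
    """Approximate Saaty consistency ratio CR = CI / RI for a reciprocal
    pairwise comparison matrix, estimating lambda_max by power iteration."""
    n = len(matrix)
    # Saaty's random consistency index RI for matrix sizes n = 1..10
    RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]
    v = [1.0] * n
    for _ in range(100):  # power iteration toward the principal eigenvector
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    # Estimate lambda_max as the average of (A v)_i / v_i
    Av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(Av[i] / v[i] for i in range(n)) / n
    CI = (lambda_max - n) / (n - 1)
    return CI / RI[n - 1]

# A hypothetical 3x3 Saaty-scale matrix (reciprocal, mildly inconsistent)
A = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
print(consistency_ratio(A))   # roughly 0.03-0.04, i.e. below the 0.1 threshold
```

A perfectly consistent matrix (where every entry a_ij equals w_i / w_j) gives lambda_max = n, hence CI = 0 and CR = 0.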
SD . Regarding your question number 4, your most serious concern about AHP, I do not agree with you at all. You can evaluate criteria without even knowing the alternatives. If I want to buy a car, I can set up my criteria weights without seeing the assortment. Having seen the possible cars, I can evaluate them based on my weights of criteria.
NM . A preference must be related to what is evaluated; as somebody said, "You can't ask a blind person to evaluate a picture."
By the same reasoning, a preference can't apply to everything. In an MCDM problem your alternatives may pertain to different activities. For instance, to establish a 5-year plan, a City Hall has many different, unrelated projects, like sewage, improving defenses against floods, improving domestic waste collection, building a city highway, creating public playgrounds for children in city squares, social projects, environmental projects, health projects, etc. Some criteria apply to them all, while others do not.
Do you think that establishing that the cost criterion is 3 times more important than the benefits criterion applies equally to all alternatives?
For instance, it may apply to creating spaces for children vs. building a hospital; obviously, from the point of view of minimum cost the first appears preferable, because a hospital is far more expensive. But what happens if you apply the same preference to children's playgrounds and works to avoid flooding in an area of the city? It is obvious that here the benefit is much more important than the cost.
Even if you apply the same preference to the car example, and you have three models to choose from, you may select the cheapest car; but, after test driving the three of them, you realize that it is better to spend a little more and buy a car that offers greater comfort. Therefore, you change your preferences, and now benefits are preferred to cost.
You can see that it does not make sense to have a constant preference.
SD . Getting back to preference transitivity, please take a look at my example written in one of my previous comments. You evaluate decision elements pairwise, but from ONE aspect at a time; this is the merit of AHP. In this case, the preferences have to be transitive.
NM – That is a DEMERIT of AHP. As I told you before, this procedure is called OAT (one at a time), or ceteris paribus in Economics, that is, varying one criterion while keeping the others constant, instead of AAT (all at the same time). The first violates systems theory, which establishes that you can't partition a system, but AHP insists on it.
SD . Let me wish as huge success to SIMUS method as the AHP has already achieved. Also, I would be happy to read your paper in a top journal falsifying any points of the AHP method.
NM – My friend, I don't take offense when you say that I falsify points of AHP. Days ago you said that you admit that I have strong arguments, and now you say that they are false? Can you explain your dichotomy?
Thank you for your wishes, but SIMUS will never even remotely attain what AHP has achieved.
It is different, and even uses a different decision matrix, in contrast to all other methods, and it does not use weights, something really strange for many people.
In addition, people want easy ways to solve their problems, which is quite logical, and thus good methods like PROMETHEE, ELECTRE, TOPSIS, and SIMUS don't have the popularity that AHP enjoys. They demand reasoning, analysis, and rationality. Why bother, when AHP offers to solve a problem mechanically and having fun? And users also know that if they are not consistent, there is always a fantastic formula that solves the issue, telling them what to do. Most of them have no idea how the Eigen Value method works, and they don't care.
SD - I know you already have a book about it but we all know that the real success would be one or more top journal papers against AHP
NM – That is correct; I wrote a book about AHP drawbacks, published by Springer. The manuscript was reviewed by 3 reviewers, and it took me about four months to convince them, until they accepted my points.
What is the difference between publishing a book and a paper? It is a little book, only 130 pages, all of them addressing AHP drawbacks, except a few pages where I had to add something good about the method (and it has some), forced by the reviewers under the argument that it is not good business to publish a book with a negative title.
It is published as a paper book and as an ebook. Thanks to the latter, and because of Springer metrics, I can know at any time how many people have consulted the book.
As of today, there are 3868 accesses and 17 citations.
SD -Thank you for the nice conversation.
NM - I reciprocate, my friend; I enjoyed it.
Nolberto
Dear Researcher
I invite you to read and evaluate the article "A Systematic Review of the Applications of Multi-Criteria Decision Aid Methods (1977-2022)", https://doi.org/10.3390/electronics11111720. Please leave your like and comments on the article's page if you find it interesting. If possible, reply on Twitter.
Best Regards,
Prof. Dr. Marcio P. Basilio
Federal University Fluminense
I think we can't say that one method is better than another in general. But we can say that a method is better than another in a specific domain. To do the comparison, we must verify the results obtained by every method against those of the ground truth.
Thank you.
Abdelkader
True, there is no MCDM method better than another in general. But this is a classical and very naive statement.
The method to use, unless it is a simple problem, depends on the characteristics of the problem; consequently, perhaps we can say that the importance of one method over others depends on the type of problem.
In my opinion, the method that can model most of the characteristics of a problem is, without a doubt, much better than all the others, simply because it can solve the problem while the others cannot.
If you have a problem with dependencies between alternatives, and out of the more than 100 methods there is only one that can model this aspect, it is by far the best for that type of problem.
I recommend using what are known as the Spearman indices.
Christian
The Spearman correlation between two MCDM methods will indicate how close they are, and even if they are very close, it does not mean that their rankings represent reality.
What is that information useful for?
Dear Elio
Thank you for your support.
What I still don't understand is why some practitioners insist on this praxis, which only demonstrates that two MCDM methods have a high correlation in their rankings when treating the same problem.
There is a complete disconnection between these results and reality, mostly because, as you point out, we do not know the real outcome. To say nothing of when both are based on the same collection of invented weights.
Your sentence "For instance, I have very rarely seen a discussion on the transparency of the methods or on the workload required from the analyst and stakeholders to obtain meaningful (or useful) results" is fundamental in emphasizing 'transparency' in MCDM methods.
Curiously, there was more transparency and common sense in the very first methods, like ELECTRE and PROMETHEE, than in subsequent methods, the worst, by far, being AHP and ANP.
In my humble opinion, the most rational and transparent is TOPSIS, and of course the granddad of all methods, Linear Programming, with 100% transparency, where nothing is assumed.
This coincides with what I have been saying for years: our MCDM methods are based on mathematical considerations and some wild assumptions, and unfortunately they are only poor approximations aiming at representing a real problem.
For instance, considering that subjective weights, a product of the mind of the DM, are applicable to reality is equivalent to comparing the high correlation between two methods to reality. And this is true even considering objective weights.
Your last paragraph is certainly illustrative.
Answering your first question: of course not. A high correlation only indicates that in the graphic representation there are some coincidences in the ups and downs of both rankings.
And what does it mean for our purpose of looking for a solution? Nothing.
In my opinion, the only way to produce a method that may replicate real results is by getting rid of weights of any kind, while still considering criteria relative importance, as LP does. Not for nothing did one of its creators, Leonid Kantorovich, receive the Nobel Prize in Economics in 1975!
And also by getting rid of unfounded assumptions, for instance that in ANP feedback exists, or that trade-offs are equivalent to weights, and so on.
Thank you for your contribution
Nolberto
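Since TOPSIS is singled out above as among the more rational and transparent methods, a minimal sketch of its mechanics may help readers follow the debate. This is a hypothetical illustration with invented numbers, not any particular software package:

```python
from math import sqrt

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch: vector normalization, weighting, distances to
    the ideal and anti-ideal solutions, closeness coefficient.
    benefit[j] is True when criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights
    norms = [sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    anti = [min(V[i][j] for i in range(m)) if benefit[j]
            else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = sqrt(sum((V[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical alternatives, two criteria: a benefit and a cost
decision = [[7, 200], [9, 350], [8, 260]]
scores = topsis(decision, weights=[0.6, 0.4], benefit=[True, False])
print(scores)  # higher closeness coefficient = better compromise
```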
Dear Alabool
Your question intrigues me. It is a real challenge to answer it.
From my viewpoint, the answer depends on the kind of problem. For example:
1. Is it an additive compensatory decision situation? Or is it an outranking one?
2. Is it a ranking problem? Or is it a choice problem?
...
...
In my opinion, the choice of the method should take into account the kind of problem.
Dear Helder
In my opinion, the fact that a method is largely compensatory suggests that it is incorrect for a certain problem, because it means that increasing one criterion produces a proportional decrease in others, and that may not be true. For instance, if you increase a criterion such as price, it may imply that demand would decrease proportionally, which is incorrect, due to the well-known non-linear relationship between supply and demand.
I believe that using outranking or any other process you can get a result; however, since it is impossible to compare it with the 'true' value, it is irrelevant what method you use, provided, of course, that it is representative of the real project (most methods are NOT), rational (some are irrational), and works with real objective data and with know-how and statistical estimates of qualitative data.
Choice and ranking problems have different aims. The first asks for determining the best solution, while the second aims at defining a ranking. Most MCDM methods do both simultaneously.
Without a doubt, the type of problem defines the method.
Dear Professor Nolberto Munier, first of all I greet you and hope you are very well; for some time I have been following your work and your answers to every question in which you take part.
I would like to ask you something, and I hope for an answer based on your experience rather than on what the literature says.
If I had a problem situation where I must rank a group of alternatives and then make the best decision among them, and the following methods were proposed to me, which would you recommend, from "most recommended" to "least recommended", and why?
The methods are:
AHP
FAHP
TOPSIS
MODIFIED TOPSIS
ELECTRE
VIKOR
Thank you for your kind reply
Dear Christian
Thank you for your question.
My answer is that it depends on the type of problem.
1- If it is a personal problem, or one of hiring personnel for a company, I would recommend AHP, because despite its many flaws it reflects what a person wants, and the consequences of the selection, good or bad, fall on that person.
From a technical point of view, AHP lacks mathematical foundation, except for its use of the Eigen Value method.
Everything else is imagination and, of course, the result may change if another person performs the same analysis. The main problem with the method, from my point of view, is that if the promoters ask for an explanation of it, there is none, since it is an irrational method, being based on intuitions. Consequently, the decision-maker may find himself in trouble if the promoters ask him to justify the results.
On the other hand, and this is a characteristic, not a flaw, of the method, the criteria have to be independent, something that most AHP users ignore or omit out of convenience.
It has software that is very easy to apply, and paid.
2- If the problem is more complicated and there is dependence between criteria, ANP may be adequate, although it is difficult to explain, and it is also based on evaluating pairwise comparisons between criteria, as in AHP, which is an absurdity. If there are many criteria, this means arduous work for the decision-maker, and this also applies to AHP.
It has software that is easy to apply, although laborious, and paid.
3- FAHP was not recommended by the very creator of AHP and ANP, Thomas Saaty, since he specifically postulated that AHP and ANP are already fuzzy by nature. This is another thing that most decision-makers ignore or prefer to ignore.
4- TOPSIS is possibly the best of these methods; although it also has some technical limitations, it works with actual data. It has the advantage that it can use weights derived from entropy, which are therefore objective.
It has software that is easy to apply and, I believe, paid.
5- MODIFIED TOPSIS: I am sorry, I do not know it.
6- ELECTRE is a very good method, based on outranking, very rational and with several different approaches, so you have to select the one that suits you best. It is complex.
It has software; I never used it, so I don't know whether it is easy or not, and I believe it is paid.
7- VIKOR is very similar to TOPSIS, and some consider it technically superior.
I believe it has software.
8- Another very good method is PROMETHEE; for me, the best of all, for its rationality. It can also work with objective weights, and it is very good for sensitivity analysis, which it performs graphically.
It has software that is very easy to apply, and paid.
9- If the problem is complicated or very complicated, you can use SIMUS, which uses no weights of any kind and is also very simple.
It has software that is easy to apply, and free.
I hope I have answered your question. I remain at your disposal for any other concern, or if you need help with your problem.
Cordially
Nolberto
Nolberto Munier, wow, I appreciate your answer; it is very valuable to me. I will look into that SIMUS method, which I do not know yet, but I will trust your experience.
If you wish, I can send you the software, which includes an extensive tutorial.
Nolberto Munier, I would appreciate it, professor, if it is no trouble; my email is [email protected]
Please confirm whether you received the software that I sent some days ago.
How to compare the evaluation results of different MCDM methods?
In my opinion, the golden criteria for a healthy selection among them should be:
1- The method with low compensatory efficiency is fairer.
2- The method with a lower degree of Rank Reversal generation is more consistent and reliable.
3- The method that correlates better with any external factor/anchor/reference point (as may exist in real life) is better.
4- Attention should be paid to the normalization type used by an MCDM method. In my opinion, this is the innocent-looking but most easily deceived step in the MCDM calculation process.
5- Particular attention should be paid to the structure of the data (in the initial decision matrix) that an MCDM method uses in the calculation.
6- Attention should be paid to the data distribution of the MCDM final score results (their standard deviation and entropy can be calculated).
7- Fuzzy-based and crisp-based MCDM results should be compared. This can give you an idea of which type of data will be more efficient.
Selecting the best MCDM method is definitely a difficult area of expertise, because there are more than 200 types of MCDM and more than 10 types of normalization and weighting methods. Moreover, it is possible to use threshold values and preference functions. Also, for the initial decision matrix there can be many data types from real life. In such complexity, only software can make a fair comparison using the above criteria. Moreover, there is so much data that we can call it big data, so artificial intelligence has to learn from it and teach us here. For example, company financial data and country economic performance data are completely different types; in my opinion, the MCDM type and normalization type selected for them should also be different. In short, thousands of combinations should be tried on this big data, and artificial intelligence should decide the best MCDM method.
What will the result be? The MCDM method that adapts best to the conditions is the best.
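Point 4 above, on how the normalization type can quietly change results, can be illustrated with a small sketch. The cost column below is invented; note how an outlier compresses the min-max values while vector normalization preserves the ratios between alternatives:

```python
from math import sqrt

def minmax_norm(col):
    """(x - min) / (max - min): maps the column onto [0, 1]."""
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def vector_norm(col):
    """x / sqrt(sum of x^2): preserves the ratios between values."""
    length = sqrt(sum(x * x for x in col))
    return [x / length for x in col]

# One hypothetical cost column for four alternatives; 800 is an outlier
cost = [200, 250, 300, 800]
print(minmax_norm(cost))   # the outlier squeezes the first three toward 0
print(vector_norm(cost))   # ratios such as 250/200 are preserved
```

Two methods that differ only in this step can therefore rank the same matrix differently, which is why the normalization choice deserves the scrutiny the post above asks for.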
Nolberto Munier, good afternoon and greetings; I hereby confirm that I have received the software, thank you very much. I will be reviewing it and will very probably use it in my doctoral thesis.
Truly, thank you very much; I have learned so much from you all.
Dear Christian
It is my pleasure and, I presume, the same for all members of RG.
We try to help, as you can help others.
Dear Nolberto, what do you think about fuzzy sets? Do you think fuzzy-based MCDM or crisp-based MCDM is more successful? Moreover, would you prefer fuzzy logic or Aristotelian logic?
thanks
Dear Mahmut
I am not very fond of fuzzy, especially when used in MCDM.
In my opinion, it is far more important that a problem be well represented through a mathematical model than to use fuzzy, which most of the time starts from arbitrary low and high values.
One of the reasons why I don't think fuzzy is adequate in MCDM is that criteria are addressed separately, when they should be treated jointly. That is, a crisp value is determined for criterion C4 without considering whether that value has an effect on other criteria.
Thus, in the way it is done at present, it is for me a waste of time.
Regarding your last question, I don't have the faintest idea about philosophy; however, I would be inclined towards Aristotelian dialectic, which encourages discussion and dialogue, but without subjectivity. This is the reason I am against pairwise comparison and then using arbitrary values as input to fuzzy.
I ask: What for? Where is the logic? What is the purpose of finding better values using fuzzy, when the initial values depend on the mood of a person?
Now, if you determine low and high values for each criterion based on reasoning, consultation, analysis, and research, fuzzy can probably help.
In a fuzzy set, each object has a degree of membership. In the classical approach, an object is either a member of the set or it is not. In fuzzy logic, the membership degree of an object can be any of limitless values in the range (0, 1). Fuzzy logic is used in many scientific, medical, and technological applications in daily life. For example, fuzzy logic underlies the principles of creating artificial intelligence applications.
Compared to classical logic, fuzzy logic is said to offer more lenient, flexible, and more modelable solutions, especially for complex non-linear real-life applications. Fuzzy eliminates the drawbacks of making firm judgments and shows that grading is more acceptable and applicable.
In this context, we can think a little more deeply to answer the question of whether crisp or fuzzy is better for MCDM. Finally, objectivity is not just about precision or crisp numbers. Maybe fuzzy is more objective for most situations, right? Gray tones consist of thousands of intermediate colors as an alternative to black and white. Doesn't reality already have a structure with more alternatives? I suggest you examine subjective methods such as AHP, SWARA, and BWM in a comparative way in terms of classical logic and fuzzy logic.
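The graded-membership idea described above can be illustrated with a triangular fuzzy number, a common choice in fuzzy MCDM. The "moderate cost" set and its bounds below are hypothetical:

```python
def triangular_membership(x, low, mid, high):
    """Degree of membership of x in a triangular fuzzy number (low, mid, high):
    0 outside [low, high], rising linearly to 1 at mid, then falling back to 0."""
    if x <= low or x >= high:
        return 0.0
    if x <= mid:
        return (x - low) / (mid - low)
    return (high - x) / (high - mid)

# Hypothetical fuzzy set "moderate cost" spanning 100..300 with its peak at 200
print(triangular_membership(150, 100, 200, 300))  # 0.5 -- partial membership
print(triangular_membership(200, 100, 200, 300))  # 1.0 -- full membership
print(triangular_membership(350, 100, 200, 300))  # 0.0 -- outside the set
```

This is the "gray tones" picture in miniature: instead of a cost being either "moderate" or not, it belongs to the set to a degree between 0 and 1.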
Dear Mahmut
MB - In a fuzzy set, each object has a degree of membership. In the classical approach, an object is either a member of the set or it is not. In fuzzy logic, the membership degree of an object can be any of limitless values in the range (0, 1). Fuzzy logic is used in many scientific, medical, and technological applications in daily life. For example, fuzzy logic underlies the principles of creating artificial intelligence applications.
NM- Thank you for the lecture on fuzzy sets; I already knew what you say.
I believe that there is a dichotomy here: if the membership is limitless, how can it be bounded between 0 and 1?
MB- Compared to classical logic, fuzzy logic is said to offer more lenient, flexible, and more modelable solutions, especially for complex non-linear real-life applications. Fuzzy eliminates the drawbacks of making firm judgments and shows that grading is more acceptable and applicable.
NM- If fuzzy is used based on reasonable, documented, analyzed, and researched values for the low, middle, and high levels, I agree with you.
Unfortunately, very often fuzzy is used as a procedure that can produce crisp values out of garbage input, when it comes from AHP or from assumed values without any foundation. The result, logically, is garbage.
MB- In this context, we can think a little more deeply to answer the question of whether crisp or fuzzy is better for MCDM.
NM- Again, if the crisp values inputted into fuzzy come from a rational source, of course fuzzy can help MCDM, but not if the sources are invented or arbitrary values.
MB- Finally, objectivity is not just about precision or crisp numbers. Maybe fuzzy is more objective for most situations, right? Gray tones consist of thousands of intermediate colors as an alternative to black and white.
NM - No, objectivity is not about precise crisp values. It derives from considering reality. It comes from observation, research, and analysis, not from invented figures that change according to the mood of the DM.
MB-Doesn't reality already have a structure of more alternatives?
NM- Normally, reality is unique; what you can have is several ways to reach it.
MB- I suggest you examine subjective methods such as AHP, SWARA, and BWM in a comparative way in terms of classical logic and fuzzy logic.
NM- I examined the methods you mention a long time ago. The three of them have arbitrariness in common.
AHP: Based on intuitions and false assumptions, like conveniently assuming that trade-offs and weights are the same, or that Saaty's fundamental table is derived from the Weber-Fechner Law, by equating intuition values to stimuli, which are completely different things.
SWARA: "The relative importance and the initial prioritization of alternatives for each attribute are determined by the opinion of the decision maker, and then, the relative weight of each attribute is determined." Alireza Alinezhad (2019).
It is more rational because it is based on the opinion of the DM, which can be backed up by analysis, experience, and knowledge. It makes sense, but it is still subjective, and thus dependent on whoever makes the appraisal.
BWM: Based on choosing the best and the worst criterion. But based on what point of view: environmental, technical, social, etc.?
If you disagree with my answers, please rebut them with your reasons, or just give me a reason to accept your defense of these methods.
There are a number of ways to compare the evaluation results of different MCDM methods.
It is important to note that there is no single best way to do so: the most suitable approach depends on the specific MCDM methods being compared and on the specific application.
It is important to weigh all of the relevant factors when comparing the evaluation results of different MCDM methods. By doing so, you increase the likelihood of selecting the most appropriate method for your needs.