One of the earliest uncertainties in applying an MCDM model is deciding which one to use, considering that there are more than two dozen. The best known are AHP, ANP, SAW, MAUT/MAVT, TOPSIS, PROMETHEE, ELECTRE, VIKOR, WASPAS, SIMUS, MACBETH, MOORA, GOAL PROGRAMMING, LINEAR PROGRAMMING, etc.
Most of them also admit fuzzy logic.
I believe it will be very instructive for this forum to have inputs from many different practitioners about the model they selected, but far more interesting will be to learn the reason for their choice.
This question is not intended to benefit, degrade or judge any model, let alone to criticize any practitioner. My only interest is to gather some reliable data about the reasons people select a certain model.
I believe we can gather useful information. Just as a hint, reasons could be the following.
Because:
1. I have the software
2. It is the only model I know
3. It is very easy to implement
4. According to the literature it is the most popular model, so there must be a reason for that; even if I don't know it, I trust this preference
5. I feel confident in its capacity to model my problem
6. I heard that it is very good
7. I think that it models my reality very nicely
8. I think it is the best because.........................
9. It is easy to understand
10. It gives me the possibility to input my own experience
11. My project was a simple one, so I did not need sophisticated software
12. I am not aware of the characteristics of each model.
13. I have a friend who used............ and he is happy with it
14. It is a question that I have asked myself many times
15. My university computer department has this model
16. My professor suggested I use the................model
17. It fulfils my needs
18. All models are the same, they are inaccurate
19. I need to perform a sensitivity analysis
20. Other reasons. Specify please
Thank you very much for your participation, it will help us all.
Hello,
To decide which MCDM method to use, I ask a few questions about the nature of the multiple criteria problem at hand:
1. Is the number of alternatives small enough to make two by two comparisons?
No => the multiple criteria problem must be transformed. What transformation will be used (secondary objectives are converted to constraints / a single combined objective function is developed / all objectives are treated as constraints)?
Yes => Do all criteria have qualitative data?
2. Are the data of the problem deterministic ('crisp') or not?
3. As decision maker, do I want to take all criteria into account?
No => there must be a hierarchy among the criteria
Yes => Can the decision maker express differences in importance of the criteria on an ordinal scale, on a ratio scale, or is it impossible to express this?
The answers to the above questions lead to a classification of MCDM methods. More details can be found in the following publications:
-> Why Don't We KISS? A contribution to close the gap between real-world decision makers and theoretical decision-model builders. (book) https://www.researchgate.net/publication/262971801
-> Multicriteria Analysis - Some gear and guidelines to climb a huge mountain (Technical report) https://www.researchgate.net/publication/263009935
In each class of the above classification, several MCDM methods can be found. Other reasons can then be used to pin it down to one method (one important reason for me is: do I have software to perform the MCDM method?).
By using this classification I avoid using an MCDM method that is not really suited for the multiple criteria problem/situation at hand.
Wim De Keyser
NM. Hi Wim
Thank you very much for your contribution.
This is the kind of response I was looking for.
I may or may not agree with some of your statements, but you are certainly the first researcher to provide a comprehensive and easy-to-follow 'pathway', with logical questions, for people who want to select an MCDM model and usually don't know where to start. I believe many people will benefit from your approach
WDK. To decide which MCDM method to use, I ask a few questions about the nature of the multiple criteria problem at hand:
1. Is the number of alternatives small enough to make two by two comparisons?
No => the multiple criteria problem must be transformed. What transformation will be used (secondary objectives are converted to constraints / a single combined objective function is developed / all objectives are treated as constraints)?
NM Thus, are you referring to the size of the problem alternative-wise? That is, if there are too many alternatives you say that it is not convenient to make pair-wise comparisons? If this is the concept, I agree with you that this should possibly be the first question for a practitioner to ask himself. It will rule some methods out.
In any MCDM model, criteria and objectives are the same thing; this permits the different arrangements you mention. Perhaps you could clarify this response for our colleagues' benefit, ideally with an example.
WDK Yes => Do all criteria have qualitative data?
NM Yes, of course this is a paramount question, but I don't understand whether you are referring to quantitative data, qualitative data, or both mixed. In my opinion, it is not common to find a problem with only quantitative data; however, there are many with only qualitative data, and almost all with a mix of them.
Therefore, I gather that the reason of your question is to make sure that your chosen model must be able to handle any of the three conditions. Isn’t it?
WDK 2. Are the data of the problem deterministic ('crisp') or not?
NM Since you use fuzzy logic terminology I am not sure if you are referring to deterministic (quantitative) or uncertain (qualitative). I gather that in this second case they can be considered ‘crisp’ from the fuzzy point of view, after defuzzification.
WDK 3. As decision maker, do I want to take all criteria into account?
No => there must be a hierarchy among the criteria
NM. I agree, but not with a hierarchy. There may be criteria you are not interested in, or you may be interested in them all. In my opinion the second choice is better, because you are evaluating your alternatives using all available evaluators, not based on their relative importance
WDK Yes => Can the decision maker express differences in importance of the criteria on an ordinal scale, on a ratio scale, or is it impossible to express this?
NM. No, it is not impossible to express the importance of criteria. What is important is to give this importance its appropriate value. Importance obtained from pair-wise comparison can certainly give an estimated weight to each criterion, but these weights are useless for qualifying criteria when assessing alternatives, because alternatives are evaluated according to the discrimination among the performance values for each criterion, not according to the criterion's relative importance.
Remember that you can easily determine actual weights through different methods, NOT using artificial ones from pair-wise comparisons. There are procedures based on ratios, on entropy, or on other reliable concepts where subjectivity is absent. You can apply them to SAW, PROMETHEE, ELECTRE, TOPSIS, etc.
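As a minimal sketch of one such objective procedure - Shannon-entropy weighting, a standard technique - the code below uses an invented decision matrix purely for illustration; the numbers are not from any real problem:

```python
import math

# Invented decision matrix: rows = alternatives, columns = criteria.
# Column 0 has nearly identical values; column 1 discriminates strongly.
matrix = [
    [7.0, 3.0],
    [7.0, 9.0],
    [7.1, 5.0],
]

def entropy_weights(matrix):
    """Entropy-based weights: criteria that discriminate more between
    alternatives receive larger weights, with no subjective input."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)  # normalizes entropy to [0, 1]
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        diversification.append(1.0 - e)  # degree of diversification
    s = sum(diversification)
    return [d / s for d in diversification]

w = entropy_weights(matrix)
# The near-constant first criterion receives a weight close to zero,
# regardless of any subjective importance one might have assigned it.
```

These weights can then be plugged directly into SAW, TOPSIS, or similar methods in place of pair-wise comparison weights.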
WDK The answers to the above questions lead to a classification of MCDM methods. More details can be found in the following publications:
-> Why Don't We KISS? A contribution to close the gap between real-world decision makers and theoretical decision-model builders. (book) https://www.researchgate.net/publication/262971801
NM. I tried to open it, but it only shows the contents, not the whole text.
-> Multicriteria Analysis - Some gear and guidelines to climb a huge mountain (Technical report) https://www.researchgate.net/publication/263009935
WDK In each class of the above classification, several MCDM methods can be found. Other reasons can then be used to pin it down to one method (one important reason for me is: do I have software to perform the MCDM method?).
NM Naturally, but not enough
WDK By using this classification I avoid the use of a MCDM method which is not really suited for the multiple criteria problem/situation at hand.
NM. No doubt about it
In my opinion it appears that you do not consider how well an MCDM method replicates reality. For instance, where are the limits or thresholds for criteria? I don't think we can evaluate, say, five alternatives considering their cost if we do not have an idea of how much the entrepreneur is willing to spend; or we can use an ROI criterion, but we need to know the minimum level the entrepreneur is willing to accept.
I believe there are many other things to consider, for instance relationships between criteria, insufficient funding, etc.
I have a proposal: you, I, and whoever from the forum wants to participate (the more the better) can work together on this subject via email, and when we all reach an agreement the results can be published in RG for everybody to improve, criticize or accept. I think it would be a healthy contribution towards a better MCDM.
I believe that thanks to RG this is a golden opportunity for everybody to participate, gathering a lot of information from different practitioners with different points of view, and taking advantage of their experience.
Your response, as well as that of whoever wants to participate in this effort, will be appreciated
Nolberto Munier
Dear Nolberto;
When I first faced a multiple criteria problem that I had to solve, I looked at the literature and selected the most preferred or most presented one. I also checked whether it was applicable to my problem or not.
Then I studied other ones and tried to apply other methods.
I have a lifelong research project: I try to find all of the MCDM methods and apply them to the same problem to understand them better.
In this lifelong research, some important items, following your hinted reasons, are as follows.
"1- I have the software"
When I have free MCDM software, it is easy for me to apply and use it. Otherwise it is costly (time, money, etc.)
"3. It is very easy to implement"
The difficult methods make the application difficult.
"4. According to literature it is the most popular model used, and then there must be
a reason for that, even if I don't know it, but I trust this preference"
"5. I feel confident in its capacity to model my problem"
"9. It is easy to understand"
"10. It gives me the possibility to input my own experience"
"16. My professor suggested me to use the................model"
"17. It fulfils my needs"
If you prepare a questionnaire during your research, I will be happy to attend the survey.
Have a nice day
Best Regards
Hello Nolberto,
With the first question "1. Is the number of alternatives small enough to make two by two comparisons?" I do refer to the size of the problem alternative-wise.
If the answer to this first question is 'No', the multiple criteria problem must be transformed. Possible transformations are:
- secondary objectives are converted to constraints (eg. Single Objective Approach)
- a single combined objective function is developed (eg. Utility functions, Method of Zionts and Wallenius,...)
- all objectives are treated as constraints (eg. Goal Programming, Reference Point Method,...)
If the answer to the first question is 'Yes', I look if the data is only qualitative, only quantitative or is a mix.
Several MCDM methods can handle qualitative data (eg. Numerical Interpretation Method, ELECTRE I, ARGUS,...) while other MCDM methods can handle quantitative data (eg. ELECTRE III, PROMETHEE,...).
There are not a lot of MCDM methods that can handle a mix of qualitative and quantitative data (the only one I know is Analytical Mixed Data Evaluation Techniques). A lot of researchers like to transform qualitative data into quantitative data (so-called cardinalisation) so they can use an MCDM method that requires quantitative data, but I prefer not to do so, because cardinalisation adds properties to the data which are initially not there (like a measurement unit and/or an absolute zero). I do sometimes transform quantitative data into qualitative data (meaning that I will not use certain properties that are initially present) so that I can use an MCDM method that handles qualitative data.
Question 2 refers to deterministic and non-deterministic data. MCDM methods that can handle non-deterministic data are e.g. the Outranking method under uncertainty and the Stochastic extension of PROMETHEE.
On the subject of the impossibility of expressing the importance of criteria, impossibility should be read as 'no a priori information'. MCDM methods like the Protrade method and Strange do not require the decision maker to express the importances of the criteria from the beginning.
The classification is a rough tool to narrow down the number of MCDM methods that can be applied to the MCDM problem at hand. It is not the aim to identify 'the best' MCDM method. The classification narrows it down, most of the time, to a handful of MCDM methods or less. When several MCDM methods are still possible, other considerations must be made to select the appropriate one.
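This narrowing-down can be pictured as a simple filter over a table of method attributes. The sketch below is purely illustrative: the attribute assignments are assumptions drawn from the method names mentioned in this discussion, not an authoritative taxonomy.

```python
# Illustrative sketch: narrowing down candidate MCDM methods by answering
# the classification questions. The attribute values are assumptions based
# on the discussion above, not an official classification.
METHODS = [
    {"name": "ELECTRE I", "data": "qualitative", "deterministic": True},
    {"name": "ELECTRE III", "data": "quantitative", "deterministic": True},
    {"name": "PROMETHEE", "data": "quantitative", "deterministic": True},
    {"name": "Stochastic PROMETHEE", "data": "quantitative", "deterministic": False},
]

def narrow_down(data_type, deterministic):
    """Keep only the candidate methods compatible with the answers
    to the classification questions."""
    return [m["name"] for m in METHODS
            if m["data"] == data_type and m["deterministic"] == deterministic]

candidates = narrow_down("quantitative", True)
# Other considerations (e.g. software availability) then pick one
# method from the remaining candidates.
```

The point is not the table itself but the workflow: answer the structural questions first, and only then weigh softer preferences among the surviving methods.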
With kind regards,
Wim De Keyser
Dear Burak
Thank you for your response. As I understand it, you identify the questions that most appeal to you.
"1- I have the software"
When I have free MCDM software, it is easy for me to apply and use it. Otherwise it is costly (time, money, etc.)
Naturally, but it all depends on the nature of the problem you have to solve. You can't apply any model to solve any problem. Well, you can, but it is not wise.
"3. It is very easy to implement"
The difficult methods make the application difficult.
In my opinion model difficulty is not an issue. The problem is that some models require more thinking and more work from the user to provide the data requested. From your response, I believe there is another question that I omitted: does this project warrant the use of sophisticated models, or is a simpler model enough? As an analogy, it is like asking whether to use a 60-ton mining dump truck for a bungalow excavation or, inversely, a town delivery truck for a mining operation.
"4. According to literature it is the most popular model used, and then there must be
a reason for that, even if I don't know it, but I trust this preference"
I believe this reasoning may be understandable. However, I think the user should take notice of the type of problems that popular models are employed for. If I am a DM and my problem is of a personal or corporate nature, where personal appreciation and preferences are paramount, AHP is probably the most popular and the best model; but if you are solving a river basin problem with hundreds of alternatives, thousands of criteria and many conditions, AHP is not fit for the job.
"5. I feel confident in its capacity to model my problem"
Fine, but check the capacity the model has to represent your problem
"9. It is easy to understand"
Well, in Project Management there are several models for planning a job. Top-notch tools such as 'Primavera' (Oracle) and 'Project' (Microsoft) cost thousands of dollars, and you have to take a course to understand them. However, the rewards are high, and that is the reason they are used: they give you a lot more information than 'over-the-counter' models, for instance resource balancing. The same happens in MCDM. There are models that are very easy to understand, but that easiness has a price. The more difficult models usually have a far larger capacity and give a lot of information in comparison with the easy ones; most importantly, in my view, they represent reality more faithfully.
"10. It gives me the possibility to input my own experience"
This is very true. However, if you have been hired as a DM, I don't think the stakeholders will be happy with your experience in several models unless you can explain to them the advantages, pros and cons of each one.
"16. My professor suggested me to use the................model"
Of course; however, let us consider how much experience this professor has in actually working with MCDM models outside the academic field.
"17. It fulfils my needs"
This is, in my opinion, your best answer and your best choice, because it means that you have studied the problem and selected the model that fulfills your needs, regardless of whether it is easy, difficult, cheap or expensive.
If you prepare a questionnaire during your research, I will be happy to attend the survey.
Thank you so much, I take your word for it, but don't overestimate me; I might have some experience and knowledge, but for sure there are many people who surpass my know-how. We can't afford to miss them; consequently, it must be a joint effort. Of course, I can start working on that questionnaire, but it has to incorporate the contributions of many people.
Thank you for your contribution and goodwill
Nolberto
Dear Mr Love
I am of course aware of Dr. Zavadskas' prestige and work, since I follow his publications; as a matter of fact I have requested his opinion about some of my ideas a couple of times, the last one just two weeks ago
The problem is that most MCDA users lack an appreciation of utility and weights. They automatically assume that MCDA is the right solution for every decision-making problem even medical problems. An interesting case is the economic evaluation of alternatives, where practitioners assign weights to cost relative to effectiveness. In a 1994 paper, “Using values in OR,” Prof. Keeney writes “I do not want some administrator to give 2 minutes thought to the matter and state that pollution concentrations are 3 times as important as cost.” Unfortunately, in practice even Keeney does it.
This is my answer to Mr. Klaus Goepel's letter, sent to me regarding my questions. I consider it so important that I answer it here, because I understand it interests everybody
Dear Klaus
Thank you so much for answering my 'Calling for reflexion on actual MCDM and suggestions'
KG. “that model xx was successfully applied to solve a problem”
This can be said, why not? It does not imply that the model solved the problem, but only that model xx was applied as a tool to support the solution to the problem.
NM. True, but I was pointing at the adverb 'successfully'. My question is again: how can anybody assert that it was successful? I think you agree with me that this is a very subjective opinion.
KG. “results are debatable because the process does not consider reality, however they continue using it”
Of course results are debatable. In practice, following a formal process of a MCDM is in many cases (and organisations) already a big step forward, and the results of the process always need to be discussed, they must be debatable. Finally, people, and not the MCDM make the decision!
NM. You are taking the word ‘debatable’ as ‘discussion’; in this sense you are right, we cannot take a result at its face value. A model is just a tool; we can’t blindly accept its results. It provides information and paves the way for discussions, but it is for the DM to make a decision.
But my expression is not related to a healthy discussion; it deals more with uncertain, disputable, controversial issues, etc., because if the model does not consider reality, the results are dubious, to say the least.
KG. 5. Be able to consider in a portfolio of projects, different starts and finishes, annual percentages of completion, compliance with annual budget, etc.
Should we really mix project management (time, money, resources) and MCDM? Of course money time and resources could be a part of the criteria for a decision, but percentages of completion, compliance with annual budget etc.?
NM. I think we should. If not, how can you select projects in a portfolio when they are not in the same state? This happens all the time; I am talking about reality, not suggesting a computer or academic exercise. As an example, a construction company usually has a five-year portfolio of projects and a corresponding budget, and it would be erroneous to consider that all of them start at the same time. What happens with portfolio projects that are already under construction? They necessarily have to be considered, together with their annual percentages of completion, in order to have funds assigned and available when needed.
We necessarily have to establish annual percentages of completion because you cannot use part of your five-year budget as a ‘lump sum’ for each project. That lump sum varies for each project in each year. Since the company probably does not have the full amount in a strongbox, this is the only way a financial analysis can be carried out.
If you like, I can send you an actual example of this kind.
KG. 7. Be able to work with positive or negative values for alternatives' performance, as well as with integer and decimal values, and with a mix of maximization, minimization and equality criteria,
Positive or negative values and integer or decimal is not so important in my opinion, but mix of maximization and minimization yes!
NM. Suppose you are to determine the best environmental indicators related to some criteria. Experts can analyze each indicator-criterion pair and determine whether a relationship exists. This relationship can be positive or negative, and then negative values need to be considered. The problem is that, as far as I know, no MCDM method considers this circumstance. Again, I can send you an actual example. Of course, a mix of maximization and minimization, as well as equalities, is paramount; otherwise you do not properly model a problem.
KG. 8. I don’t agree fully; there will always be a certain limitation. It has to be reasonable and practicable. If you mean a limitation to 10 alternatives, yes. But 100 or 200?
NM. It is not the DM that establishes a limitation on criteria; it is the scenario.
If we are dealing with a serious problem, it is not a matter of reasonability or practicability; it depends on the problem.
A simple problem may have as many as, say, 10 criteria, because after a thorough study it is agreed that they represent the different conditions that alternatives must meet. But there are problems so complex that they need hundreds of criteria. I also have an actual example for you, although performed not by myself but by MIT, a long time ago.
KG. 11. See 5.
NM. I don’t believe that reflexion 11 is linked with reflexion 5.
The former relates to the resources you have and their availability; the latter deals with relationships between projects.
KG. Will 12 and 13 ever be possible?
NM. In my opinion they should, provided that we eliminate subjectivity at the beginning of the process.
KG. A model and its outcome will always depend on the person modelling the reality.
NM. Yes, but that person must be able to model reality. A model must perform based on actual facts, not on assumptions or preferences. I am not saying there must be no participation of the DM; I am saying that the DM must work with the data he has, and when the model arrives at a result, that is the time for the DM to make as many modifications as he considers necessary, because now he is modifying results obtained from reliable data, not from preferences. This is especially noticeable at the stage of performing sensitivity analysis.
That is, in my opinion, human influence must always be present, but not at the beginning; rather at the end, correcting what is not right. Just think of this analogy: in designing airplanes or cars, a mock-up is made following strict mathematical parameters and possibly some qualitative assessment. But it is when the mock-up is built and tested in the wind tunnel that the engineers can see the deficiencies and make whatever corrections they believe necessary. In my opinion, it is similar in MCDM.
KG. And the results will depend on the model: how closely it describes reality. Like in physics, sometimes the first linear approximation is sufficient to describe what you see; sometimes you need to take into account 2nd and 3rd order terms to get a sufficiently precise description of what you see.
NM. Exactly, you can consider the second and third order by linearizing the corresponding criteria vectors.
KG. From my point of view, one important aspect I couldn't find in your paper:
Whatever process or method is developed, it must be practical and user-friendly, in the sense that DMs don’t need to understand the mathematical background or theory behind it.
NM. Very true; just as you don’t need to know how a car engine works to drive it, but you do need to know how to use it and what data to input, such as speed, acceleration, load, braking distance, etc.
KG. Therefore it has to be “translated” into the “language” of DMs, for example “what-if scenarios”, “consensus” for group decisions, or clear indications of uncertainties in proposed decisions and the limits of the model or method used. Don’t expect them to select transfer functions or discuss eigenvalues, etc.
NM. I agree a hundred percent. However, you were saying above that the DM's opinion has to be considered; therefore, how else is the model to know the form of the relationship between two alternatives when considering a criterion pair-wise, as in PROMETHEE? Like it or not, the DM has to be aware of it.
KG. Hope my feedback is helpful.
NM. I would say it was so helpful that I considered putting it in RG for everybody to take notice of your points.
Thank you very much Klaus, and please continue expressing your ideas
Nolberto
Dear Edouard Kujawski
Your statement:
The problem is that most MCDA users lack an appreciation of utility and weights. They automatically assume that MCDA is the right solution for every decision-making problem even medical problems. An interesting case is the economic evaluation of alternatives, where practitioners assign weights to cost relative to effectiveness. In a 1994 paper, “Using values in OR,” Prof. Keeney writes “I do not want some administrator to give 2 minutes thought to the matter and state that pollution concentrations are 3 times as important as cost.” Unfortunately, in practice even Keeney does it.
I imagine you refer to the time when selection was based solely on cost/benefit analysis.
Regarding Prof. Keeney's quote, you are right. That is what AHP does, and many people see that procedure as natural and reliable
Dear Nolberto:
You are right on! But, my objection with MCDA practitioners goes beyond application to cost/benefit analysis. It applies to any additive sum of criteria when the weights are interpreted as importance measures of criteria independent of alternatives. I consider the swing weight method to be theoretically sound.
Ed
Dear Edouard. Thank you for your response.
Your sentence
'It applies to any additive sum of criteria when the weights are interpreted as importance measures of criteria independent of alternatives'
Needless to say, I fully agree with you and with your sharp (and accurate) bolded sentence. It appears that not many practitioners realize this, or, if they do, maybe they don’t care, even though it is not good practice.
On criteria weights - I have said it many times - this constitutes a fallacy in the methods that use weights, regardless of their sound mathematical structure and principles, and I have even discussed this issue with Saaty. My reason for rejecting them is that these weights are derived from pair-wise comparisons between criteria, and they DO represent subjective importance between criteria, BUT THEY ARE USELESS FOR QUALIFYING CRITERIA FOR ALTERNATIVES EVALUATION, because criteria must be qualified according to the amount of information they provide regarding alternatives.
For illustration purposes, allow me to propose a trivial example, which I hope may enlighten those colleagues and practitioners who normally use pair-wise comparison criteria weights to select alternatives.
Assume we stop at a car dealer to purchase a car; he has several models from the same maker or even from different makers. So, there are many models, or alternatives, to choose from.
For our selection we consider these criteria (with assumed weights denoting the relative importance between them): cost (0.48), speed (0.23), fuel consumption (0.20), and comfort (0.09). Starting with cost, suppose we are told that in the range we are interested in, all cars have the same or extremely close costs. It is obvious that this criterion is irrelevant for selection - even though it was dubbed the most important because of its relative weight - since the costs are very close, and selection based on them is meaningless or impossible.
Then we have to use other criteria that show discrepancies in values. Suppose that fuel consumption shows large variations between models; consequently, this criterion will be much more important than cost for evaluation, even though its weight has a relatively much lower value. It is clearly seen, as you say, that criteria must be related to alternatives (models).
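A tiny numerical check of this point, using the example weights above and invented, normalized performance values: when a criterion's values are identical across the alternatives, it adds the same constant to every weighted-sum (SAW) score, so it cannot change the ranking no matter how large its weight is.

```python
# Weights from the car example; performance values are invented and
# assumed already normalized to [0, 1]. Cost is identical for both cars.
weights = {"cost": 0.48, "speed": 0.23, "fuel": 0.20, "comfort": 0.09}
cars = {
    "A": {"cost": 0.5, "speed": 0.9, "fuel": 0.4, "comfort": 0.7},
    "B": {"cost": 0.5, "speed": 0.6, "fuel": 0.9, "comfort": 0.5},
}

def saw_score(car):
    """Simple Additive Weighting: weighted sum of performance values."""
    return sum(weights[c] * car[c] for c in weights)

scores = {name: saw_score(car) for name, car in cars.items()}
# Both scores contain the same 0.48 * 0.5 cost term, so the ranking is
# decided entirely by the criteria that actually discriminate
# (speed, fuel consumption, comfort), despite cost's large weight.
```

Changing the cost weight from 0.48 to any other value (and renormalizing) shifts both scores by the same amount and leaves the ranking untouched, which is exactly the irrelevance described above.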
Regarding the swing method, I must confess my very poor understanding of it. However, it strikes me that it depends on the opinion of the DM. It is true that its reasoning considers the close relationship with alternatives' performance and that in some way discrimination is considered; thus, in my opinion, it is similar to having preferences. Again, my opinion is that of somebody with only a veneer of knowledge of the method, so it is probably worthless.
Numerical weights, determined by pairwise comparison or not, are used in most MCDM methods as trade-offs between the obtained 'preference' on the criteria. Taking the example of buying a car: one unit of 'preference'/'utility'/... on one criterion (e.g. 'comfort', with weight 0.09) can be exchanged for how much 'preference'/'utility'/... on another criterion (e.g. 'speed', with weight 0.23)? Answer: 0.39 (= 0.09 / 0.23)
In my experience, users are surprised when they realise that there is a compensating effect between criteria by means of the weights. Most of them cannot really express numerically the exchange rate between two criteria (e.g. comfort and speed). In fact, the only numerical trade-offs (or exchange rates) that users understand and accept are the exchange of current monetary value for future monetary value (cf. Net Present Value). Expressing the importances of the criteria on an ordinal scale is easier for the user and better accepted. My point of view is that this ordinal input from the user should not be transformed into numerical values (most of the time on a ratio scale) but should be used as is, by selecting an MCDM method that can handle importances on an ordinal scale.
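The exchange-rate reading of normalized weights can be made explicit in a couple of lines (weights taken from the car example above; purely illustrative):

```python
# Weights from the car example. In an additive model, the ratio of two
# weights is the implied exchange rate between the criteria.
weights = {"cost": 0.48, "speed": 0.23, "fuel": 0.20, "comfort": 0.09}

def exchange_rate(give, take):
    """How many units of 'preference' on criterion `take` compensate
    for giving up one unit on criterion `give` in an additive model."""
    return weights[give] / weights[take]

rate = exchange_rate("comfort", "speed")  # 0.09 / 0.23, about 0.39
```

Seeing these implied rates written out is often what surprises users: few decision makers would state directly that one unit of comfort is worth 0.39 units of speed, yet an additive model with those weights assumes exactly that.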
Dear Waldemar
Yes, pair-wise comparisons are useful, even in MCDM. I have stated several times that, in my opinion, AHP is a good method for trivial and corporate problems, because pair-wise comparisons can be safely made when they affect the people doing the analysis, as in personal decisions or in hiring new people for a company.
But it is a different story when the scenario involves thousands of people who will suffer or enjoy the consequences of the project. It is absurd to think that a DM, or a group of them, can take decisions on behalf of millions of people. As K. Arrow defined it, that would be a dictatorship.
Some months ago, when I was writing in RG about this issue, I came across Arrow's theorem by chance.
For me it is the mathematical proof that I am right about not using pair-wise comparisons for everything, especially when people are involved.
Even if I mentioned it in RG nobody came and told me that I was mistaken, however, I had my doubts (I am not a mathematician) about using that theorem. For that reason when somebody like you with a profound mathematical knowledge, thinks the same as I, I was pleasantly surprised because it reinforces my beliefs.
Thank you Waldemar, you made my day.
Dear Nolberto,
I only use AHP because of its strong scientific foundation. Its 1-9 Fundamental Scale of Absolute Numbers is the key to ensuring that an AHP model produces relative priority numbers on a ratio scale. The theory has been validated rigorously. Validations of the 1-9 scale showed the ability of our brain to make estimations. Many market-share estimation models produced numbers close enough to the real data. Structuring in a network model gave better estimations than a hierarchy. There are other validation models as well. The AHP with its ratio scale has been proven mathematically to remove the impossibility in Arrow's theorem, i.e. the one that arises when ordinal comparison numbers are used.
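For readers less familiar with the mechanics being debated here, this is a minimal sketch of how AHP derives ratio-scale priorities from a 1-9 pairwise comparison matrix via the principal eigenvector. The judgments in the matrix are made up for illustration; they are not taken from any validation study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on the 1-9 scale for three
# criteria; A[i, j] = how strongly criterion i is preferred over j.
# (Made-up judgments, for illustration only.)
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Relative priorities = normalized principal eigenvector (a ratio scale).
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w = w / w.sum()

# Saaty's consistency ratio; RI = 0.58 is the random index for n = 3.
n = A.shape[0]
lambda_max = vals.real[k]
CR = ((lambda_max - n) / (n - 1)) / 0.58
print(np.round(w, 3), round(CR, 3))  # CR < 0.10 is conventionally acceptable
```

The resulting weights are relative (they sum to 1), which is precisely why the discussion above about trade-offs between criteria applies to them.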
Dear Wim
Of course, I agree with you about trade-offs between criteria weights, and that happens all the time if they are normalized. My point is not about the procedure but about the quality of the weights. If they are obtained by pairwise comparison, they are arbitrary and allow for absurd conclusions such as 'Speed is 2.55 times more important than Comfort'.
Linguistic or ordinal variables make far more sense. You can say 'For me, speed is more important than comfort', and it is perfectly logical. The problem is: how can we introduce these linguistic variables into a numerical matrix?
I fully agree with you that these variables should not be converted into cardinal values, and I think that perhaps using fuzzy logic can, if not solve the problem, at least improve the reliability.
This is a very significant point that I should have added to my 'Reflections', since many MCDM problems have a very important content of subjective criteria.
Thank you for your contribution
Dear Kirti
KIRTI. The main reason for this interchange is to show colleagues the merits and demerits of each model and to learn from others; consequently, please let me comment on your assertions. They do not imply any criticism of your preferences; however, as you rightly emphasize from your point of view the strong features of AHP, I believe it is convenient to express mine as well. This approach of offering different opinions can give our colleagues both sides of the coin, which I believe is beneficial; then it is for them to decide. Thank you for your understanding.
KP. I only use AHP because of its strong scientific foundation
NM I know that you are an internationally recognized expert in AHP, and therefore I am surprised by your words about its strong scientific foundation, when the main criticism of AHP is precisely its lack of mathematical foundation, as stated by many reputable researchers in various papers. Are they all wrong?
You also surely know that during the early 1990s there was a heated literary debate between Saaty and many researchers, which materialized in publications about this lack of mathematical foundation. Dyer fiercely criticized AHP, affirming that it is a flawed procedure leading to arbitrary rankings, which prompted Harker and Vargas, staunch defenders of AHP, to publish a book in which they try to refute those opinions. I am not taking any side in this, but obviously, generating such intense debates means that the procedure is not crystal clear, and I have not seen that with PROMETHEE, ELECTRE, TOPSIS or any other model.
NM. To validate means to prove that something is correct (Cambridge Dictionary). This is a subject that I have addressed several times here in RG, but I never got an answer. I wonder: how can you validate something, any method, if you do not know what the true result is? To validate something you need a reference; in MCDM models you don't have that reference.
I am always willing to learn and I will appreciate it if you can elaborate about other validation methods.
NM. The 1-9 scale, whose validity I do not dispute, is simply an invention of Dr. Saaty based on stimulus-response psychology theory. As far as I know, psychology is the scientific study of mind and behaviour; it has produced theories and procedures that are extremely useful for understanding people's behaviour, but it is not an exact science, and it cannot guarantee that these theories are true or the best, nor that they apply to all people. As an example, remember that what Dr. Freud proposed a century ago is no longer totally accepted.
NM. I don't think that the ratio method invalidates Arrow's theorem. Whatever method you use, you cannot replace what the majority of people want. It is not a matter of mathematics but of common sense.
Can a ratio scale demonstrate that the decision to displace more than 1 million people to build the Three Gorges Dam in China was correct, when people were against it and were nevertheless forced to move? That is exactly what Arrow said; it is equivalent to dictatorship. Probably you have read here in RG that I had a personal experience on this same issue, when, because of a wrong decision taken by 'experts', a multibillion-dollar project in Canada was halted after hundreds of millions of dollars had been invested. It would be interesting to hear what the 'experts' told the stakeholders after these very heavy losses.
Do you remember the Challenger disaster in the USA, which killed seven astronauts and stopped NASA projects for years? It was the product of a bad decision, made even though the decision-makers were warned that there was potential for disaster because of defective O-rings.
May I remind you about the Bhopal disaster in your country, or the Vajont dam catastrophe in Italy, both producing thousands of deaths each?
In both cases, individual or group expert decisions prevailed over collective decisions and warnings, and the results are there for everybody to see. And yes, according to these results you can validate that the decisions taken were erroneous.
KP. Structuring in a network model gave better estimation than a hierarchy.
NM. Of course, and it could have been the reason for Saaty to develop ANP.
NM. In your paper ‘Criteria for evaluating group decision-making methods’ you formulate the following question:
KP. ‘Should a decision analyst primarily support a client’s decision process as it is or should he reshape it and teach the client how to make a decision in another way?’
NM. Very interesting. Thus, the DM becomes a salesperson trying to 'sell' his/her idea, instead of being a neutral analyst, by trying to swing the opinion of the client in his/her favor. It is interesting to notice that in your question you don't even consider that the client might be right and the DM mistaken, which means that the DM considers himself infallible. I have three questions about this:
1. How do you know that you are right and the client wrong when, for instance, you two are debating whether an urban highway project would better run at street level (an alternative that your client does not agree with) instead of being an elevated highway? You have your arguments, of course, but your client also has his. However, he has an advantage over you: he has to live with his decisions while you don't, and most probably he has arguments based on his daily experience living in the area that you don't. Your statement about 'teaching' the client does not surprise me, because it is approximately the same answer that I got from Dr. Saaty when I raised the same issue.
2. My second question is: What happens if you can't convince the client? Do you override his opinion? Or might you perhaps conclude that he is right and that your views had better be reviewed?
3. My third question is: How are you going to reshape the opinion of thousands of people?
Your responses will be greatly appreciated
Hello Norbert,
I strongly believe in the use of ordinal (or linguistic) variables to express the importances of the criteria, because that is the kind of input the client/user can give. I agree, there are not a lot of MCDM methods that really do this (some claim they do, but behind the curtains they cardinalise). That is why, in the past, I myself developed an MCDM method that does (among other things) just that: the ARGUS method (ARGUS stands for: Achieving Respect for Grades by Using ordinal Scales only). It requires more input and interaction between the analyst and the client/user, but it does not require input that the client/user cannot give or that makes him uncomfortable. Other MCDM methods that require an order between the criteria (and do not cardinalise) are MELCHIOR and the Numerical Interpretation Method. If anyone knows other methods, please let me know.
Wim
I am also convinced that linguistic variables play a fundamental role in MCDM, and from that point of view ARGUS seems extremely interesting for determining importance between criteria in a rational way, instead of using arbitrary weights. Could we get more information about your model?
Regarding 'cardinalization' of linguistic variables, perhaps I didn't express myself very well, because in my former comments I was thinking about converting people's appreciations, but relative to the alternatives' performance values, and for that reason I mentioned fuzzy logic.
Is there any way that ARGUS can be applied for this purpose?
I read some time ago that there is a ratio method that can produce weights for criteria based only on endogenous data, although I don't think that it deals with linguistic variables. I don't remember where I read that, but give me some time to look through my records of publications related to this subject. If I can find it, I will pass it on to you.
Hello Nolberto,
In the ARGUS method, all input coming from the client/user is considered as ordinal values. More information can be found in the article on RG:
https://www.researchgate.net/publication/262971990_Argus_-_A_New_Multiple_Criteria_Method_Based_on_the_General_Idea_of_Outranking
In the Linked Data of that article you can find a limited version of the ARGUS software ;-) so you can get a better feeling of what input is required.
Wim
Thank you for sending us your paper.
I tried to open it, but a message from Acrobat appears saying that there is an error.
Hello Nolberto,
I have no problem accessing and downloading the paper following the RG link...
Maybe try accessing the paper not by following the link but through a publications search (search keywords: argus outranking) on RG?
Dear Wim
Thank you for your answer.
I was not able to download your paper as a PDF, as was my intention, but I had the opportunity to read it in RG.
The clarity of your explanations about ordinal scales really impressed me, and I like your comparison between preferences in AHP, PROMETHEE and ELECTRE. Believe me, it will allow people to better understand these two different classes of preferences, which I have never seen elsewhere.
Coming to your example, I understand, according to your Table 12 and Figure 3, that Site 2 is the 'best'; however, I wonder how you determine the ranking.
What really surprises me is that Site 1 is placed last, because according to the network in Figure 3 three sites dominate it, while Site 1 dominates no other site.
The reason for my surprise is that when you examine Table 5, Site 1 has the lowest cost by a fair amount, as well as the lowest resistance from the population; however, it has 19% lower storage capacity than Site 2. Just considering these values at a glance, a prior analysis would probably say that the selection will be between Site 1 and Site 2; however, Site 2 ranks highest and Site 1 lowest.
I made this elementary appraisal (it is not even an analysis) before knowing your result, which is something I always do in small problems, and the fact that Site 2 strongly outranks Site 1 calls my attention. Could it indicate that the assessments of the criteria weights should be reviewed?
Needless to say, my comments above do not convey any criticism to your work. I just want to share with you my ideas.
Hello Nolberto,
In the example presented in the ARGUS paper, site 1 ends up last in the ranking because site 2, site 4 and site 5 are 'outranking' site 1. Looking at the evaluations (in Table 5) one can get the impression that site 1 is not that bad after all. In combination with the expressed preferences (Tables 6-10), the position of site 1 is a lot 'weaker'. E.g. the evaluations show that site 1 is better on the criteria 'Cost' and 'Resistance Population' compared to site 5. But the difference in 'Cost' is, for the decision maker, not large enough to make a difference (he is indifferent between the two values), and the same goes for 'Resistance Population'. In other words, for the decision maker, site 1 and site 5 are equal on the first 3 criteria and site 5 is better on the last criterion, taking the expressed preferences into account. I know, in a lot of MCDM examples the smallest difference gives an advantage, but when indifferences expressed by the decision maker are taken into account, you can have a different story. I always use the example of the criterion 'top speed' (in the example of buying a car). I do not care if a car has 160 km/h or 180 km/h or 200 km/h as 'top speed', because I respect the Belgian speed limit (max 120 km/h on the highways ;-) and I never drive on the highways in Germany (some parts have no speed limit). Another decision maker, who does go to Germany regularly, can have preferences for higher values on the criterion 'top speed'.
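The indifference idea in the 'top speed' example can be sketched in a few lines. The threshold and the speed values below are illustrative only; they are not taken from the ARGUS paper.

```python
# Sketch of a pairwise comparison with an indifference threshold, as used
# in outranking methods: a difference smaller than the threshold gives no
# advantage to either alternative. All values are illustrative.
def compare(value_a: float, value_b: float, threshold: float) -> str:
    """Compare two values on a benefit criterion; return which
    alternative is preferred, or 'indifferent'."""
    if abs(value_a - value_b) <= threshold:
        return "indifferent"
    return "a" if value_a > value_b else "b"

# Top speed in km/h, with a wide indifference band for a driver who
# never exceeds 120 km/h anyway:
print(compare(180, 160, threshold=50))  # indifferent
print(compare(250, 160, threshold=50))  # a
```

This is how a 'smallest difference wins' rule turns into 'small differences do not count', which is exactly what changes the position of site 1.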
Dear Wim
Yes, the difference between Site 1 and Site 5 is small in both cost and resistance from the population, and most probably the DM's decision was right. But remember that in so doing he is considering, ceteris paribus, only a part of the problem, not the whole project. The performance values of both sites need to be compared with the other sites simultaneously, and then the trade-offs between them will probably tell a different story.
You know that I believe that pairwise comparisons should not be used in MCDM because, in my opinion, they distort reality by replacing it with personal preferences. What would happen if another DM thought differently? Or, what would happen if we worked with the values we have, considering of course a scale for qualitative criteria? Wouldn't it be interesting to investigate?
Naturally, this procedure should not imply the absence of the DM. He or she is the most important element in the MCDM process. What I suggest is that once the DM gets results using an MCDM method (any of them), he is in a position to evaluate them thoroughly, and at that moment he has the opportunity to make changes, based on his preferences and know-how. In so doing, the DM would be examining a result based on agreed quantitative and qualitative performance values (the initial table), not on preferences, which are subjective.
As a matter of fact, I did this exercise using your example, but without weights and preferences, just data, employing the SIMUS model, and got the following result: 1-2-4-3-5-6, with these respective scores: 0.25 – 0.25 – 0.25 – 0 – 0 – 0,
with Sites 1, 2 and 4 getting the same score. Of course, I used my own scale for both radioactive waste and resistance from the population, and minimized the cost, radioactive waste and resistance criteria while maximising storage.
Doing an elementary sensitivity analysis by varying the performance values for resistance from the population, and assuming that these changes come from the DM, I got this result: 1-2-5-3-4-6, with these respective scores: 0.25 – 0.25 – 0.17 – 0.08 – 0 – 0.
You can see that the ranking for the two best places holds.
Of course, this proves nothing; however, it somehow appears to confirm that the first-glance overview of Table 3 was correct regarding the importance of Site 1.
If instead of maximizing storage I minimize it (in reality I am in no position to say which is correct), the result is: 2-1-5-3-4-6, with these respective scores: 0.83 – 0.25 – 0.25 – 0.08 – 0 – 0.
I know, in a lot of MCDM examples, the smallest difference gives an advantage, but when indifferences expressed by the decision maker are taken into account, you can have a different story.
With your example of speeds in Belgium and in Germany, you gave the best example I can think of, and it demonstrates how a DM's personal preferences or bias influence the result. This is the reason why I am against it. It is OK for trivial problems like this but not for serious projects.
Of course, I can send you the screen captures of these three results.
Wim, please continue with your valuable contributions.
Dear Nolberto,
Thank you for your comments and questions. I apologize for the delay of this response. I am actually an AHP practitioner (using AHP in my consulting projects) who was very lucky to get the opportunity to understand its theory relatively well by working closely for many years with Prof. Saaty himself, as his PhD student and supporting his research. AHP was developed using a "construction approach", which means that, as when constructing a building in general, there needs to be a fit between the building and its foundation. It is a priority measurement approach with its axioms as its foundation. I studied many (if not all) of its criticisms and concluded that their ideas don't fit the AHP foundation. That doesn't mean that their ideas are wrong; to me it simply means that they have another theory in mind.
My understanding of AHP has been formed by its philosophy that "It is better to be approximately right than precisely wrong" (hence there is a distinction between being fully consistent and being coherent) and "Objectivity is an agreed upon subjectivity" (which means that you cannot simply aggregate the judgments of 2 people, let alone thousands)
Validating AHP always involves comparing the outcome of an AHP validation model with the actual numbers (derived using a scientific formula or from data).
I am sorry if I gave the impression that I use AHP as a salesperson. I am not. I use AHP as a means for collaboration, not for selling ideas. In my consulting projects, I simply use AHP to facilitate their decision-making process. In some cases, when the outcome of their AHP models was not the same as their initial 'jump to conclusion' judgment, I simply needed to facilitate them 'communicating' with the model to understand why they differed. Sometimes the model needed to be modified because they found that it didn't represent their understanding well.
I don't have the need to convince a client based on my personal opinion because it is their life, not mine. I appreciate that there would be differences, but it usually is irrelevant for the client's situation. I expressed my ideas by asking questions to widen their views or sharing my knowledge or experience for them to consider. It is ok if they don't agree with me, as long as they are happy with their final conclusion.
Your third question is very interesting to me. AHP measures the 'sense of priority', meaning that it is a tacit and personal thing. With AHP, you can't meaningfully aggregate inhomogeneous judgments, even of only two people. Alignment of ideas needs some learning that 'changes opinions', so it can only be achieved by conversation, which AHP can support, for a specific goal/objective. The questions to address first would be, e.g.: Do we see a need to reshape the opinions of thousands of people? Why? For what purpose and what kind of outcome? What kind of actors are they in the problem/issue (e.g., decision makers, experts, victims)? It would surely need a complex design combining facilitated conversations, surveys, and the construction of AHP models. It might not be practical or feasible, but I don't see it as impossible. One needs to understand the theory reasonably well to ensure coherence between the process design and the objective in mind, and to make sure not to fall into a 'number crunching' operation. I think it is a PhD, or even beyond, level of research.
I hope I addressed your questions adequately. Thanks for posing the initial question, which I think is very important.
Dear Kirti
Your comments are in bold.
Yes, I knew that you had the privilege of working with Prof. Saaty, and for that reason I am not surprised at the similarity of your respective answers to the same question.
AHP was developed using a "construction approach", which means that, as when constructing a building in general, there needs to be a fit between the building and its foundation.
The quality of a foundation depends to a large extent on the materials used for its construction, and I don't think that AHP uses the right materials.
It is a priority measurement approach with its axioms as its foundation. I studied many (if not all) of its criticisms and concluded that their ideas don't fit the AHP foundation. That doesn't mean that their ideas are wrong; to me it simply means that they have another theory in mind.
Yours is a very elegant way of saying that the criticisms can't be explained by AHP theory!
Most criticisms are not ideas but mathematical demonstrations of AHP's flaws. Could you explain why my criticism of AHP for not 'representing' the will of thousands of people does not fit in AHP? Why do neither you nor any of AHP's defenders challenge my assertion that it is wrong to take decisions for others? Why don't you give me material that proves that I am wrong? You, other practitioners, and Dr. Saaty included, just deny what I say, period!
For instance, what are your arguments in defence of the 1-9 scale when there are many scholars who say that it is not appropriate and, what is more, demonstrate it? In case of doubt, check what our colleague Waldemar Koczkodaj says; he has more than abundant published papers and experience to sustain his opinions.
Look at Edouard Kujawsky when he mentions Dr. Keeney, an indisputably recognized authority, who said, when criticizing the ratio method:
“I do not want some administrator to give 2 minutes thought to the matter and state that pollution concentrations are 3 times as important as cost.”
You mention that other people have another theory. I, and I believe you too, have a very clear objective for MCDM: we need a model that represents a certain scenario as faithfully as possible, condensed in the initial matrix. I think that you can agree on that; from there, you can use different models to select the best option.
My understanding of AHP has been formed by its philosophy that "It is better to be approximately right than precisely wrong" (hence there is a distinction between being fully consistent and being coherent) and "Objectivity is an agreed upon subjectivity" (which means that you cannot simply aggregate the judgments of 2 people, let alone thousands)
You are using Warren Buffett's famous remark out of context; he is referring to investing in the stock market or buying companies, not to selecting project alternatives subject to a set of constraints. He is working in a completely uncertain and variable environment; you are not. Most of the time you have quantitative and reliable information, plus other, subjective information coming from surveys and backed up by statistics.
If two or more people believe the same about something, why can't you add their opinions? If they don't, why can't you average their responses? Have you never met with friends deciding which restaurant to go to for dinner, with everybody having his/her own preferences? What do you do then? You synthesise this information with a vote.
Validating AHP always involves comparing the outcome of an AHP validation model with the actual numbers (derived using a scientific formula or from data).
I addressed this same subject in my first answer to you, asking for your response. You didn't give one.
I don't know how many times I have asked this question of different people: how can you validate something if you do not know whether the result is correct or not? Could you please explain it to me, because every time I ask this question I receive no answer?
Coming from you, what you say about a scientific formula is strange. In an algebraic formula you replace the unknown 'x' with a value and you get the corresponding answer, regardless of the quality of the value you insert, since the formula does not discriminate whether the value is correct or not. Therefore, the procedure is correct, but maybe the data is not. The same happens in MCDM models: the procedures are normally correct, but they do not analyze the quality of the data, and then the result is just a mechanical product.
I am sorry if I gave the impression that I use AHP as a salesperson. I am not. I use AHP as a means for collaboration, not for selling ideas.
Well, when you ask about the dichotomy between accepting what the client says and trying to modify his/her thinking, what is it? As far as I know, a DM is not a counsellor, as you apparently suggest.
In my consulting projects, I simply use AHP to facilitate their decision making process.
Excellent; if you are a consultant, that is part of your job, so you either act as a consultant for a company helping the DM or act as the DM. But in my opinion a consultant does a poor job when suggesting ill-advised measures.
In some cases, when the outcome of their AHP models was not the same as their initial 'jump to conclusion' judgment, I simply needed to facilitate them 'communicating' with the model to understand why they differed.
Sometimes the model needed to be modified because they found that it didn't represent their understanding well.
You are again playing as a consultant and from that point of view I congratulate you, but that is not what we are discussing here.
What is it that they didn’t understand well, the model or the problem?
I don't have the need to convince a client based on my personal opinion because it is their life, not mine. I appreciate that there would be differences, but it usually is irrelevant for the client's situation. I expressed my ideas by asking questions to widen their views or sharing my knowledge or experience for them to consider. It is ok if they don't agree with me, as long as they are happy with their final conclusion.
I agree 100% with you; however, it again looks as if you are portraying your role as a consultant. I have also been a consultant for many years, so I perfectly understand your position, but here we are not discussing consultancy ethics or behaviour; we are discussing the rationality of an MCDM model. Just out of curiosity, how many times have you advised your clients to consider other models, or do you always stick to AHP? Advising on the right model to use is your duty as a consultant, but apparently, whatever the nature of the problem, you recommend AHP.
Your third question is very interesting to me. AHP measures the 'sense of priority', meaning that it is a tacit and personal thing.
I don’t understand which third question you refer to; however, are you saying that your ‘sense of priority’ involves using your intuition, which could be a product of your tacit knowledge?
With AHP, you can't meaningfully aggregate inhomogeneous judgments, even of only two people.
Does it mean that two people have to be on the same wavelength? Of course you can't aggregate them, but from their two divergent opinions you can extract conclusions.
Alignment of ideas needs some learning that 'changes opinions', so it can only be achieved by conversation, which AHP can support, for a specific goal/objective.
And why the necessity of aligning ideas? It seems to me that you want to replicate in MCDM very old and respectable religious practices that try to align people with what their leaders believe is good and correct. Not in comparison, of course, but you are certainly aware of the catastrophic and criminal results of Nazi policies in aligning people's beliefs in WWII.
Your comments insinuate that people must not feel differently about some aspect, and again, how are you going, by the magic of AHP, to align the ideas of 10,000 people?
Don't you think that a much better way is to consult people by performing a survey and extracting conclusions from it? It rests on the law of large numbers and the Gaussian distribution. It is the same as the system for electing a president: the candidates run extensive and costly campaigns, with a lot of speeches, trying to convince people and demonstrate that what they say is right and convenient for the country and for them, but it is the will of the people, measured in a single number, the number of votes, that matters.
The questions to address first would be, e.g.: Do we see a need to reshape the opinions of thousands of people? Why? For what purpose and what kind of outcome? What kind of actors are they in the problem/issue (e.g., decision makers, experts, victims)?
It would surely need a complex design combining facilitated conversations, surveys, and the construction of AHP models. It might not be practical or feasible, but I don't see it as impossible.
In other words, you recognize that it is an impossible task; politicians have known it from the beginning of time.
One needs to understand the theory reasonably well to ensure coherence between the process design and the objective in mind, and to make sure not to fall into a 'number crunching' operation. I think it is a PhD, or even beyond, level of research.
With due respect, Kirti, you act the same as most people defending AHP. You offer only comments but no solutions, nicely elaborated sentences, but no rational explanations. AHP is very simple, I grant you that; so what is there to understand? We are not talking about Einstein's Theory of Relativity. I do fully agree with you, however, that we must not fall into number crunching, which unfortunately is what many people using MCDM models do.
A PhD, or even beyond, level of research?
Well, my PhD thesis, years ago, addressed in essence this very problem.
I hope I addressed your questions adequately. Thanks for posing the initial question, which I think is very important.
You have been very kind in commenting on my prior remarks, although you have not answered them.
Thank you very much for your valuable contribution, and please don't stop; we can always learn from others.
Nolberto
Dear Prof. Munier:
how are you going by the magic of AHP to align ideas of 10,000 people?
"Aligning the ideas of 10,000 people" is a problem. I don't think any MCDM technique has this "magic".
--> The way to do so, if required, is through social processes such as the "Technology of Participation" (ToP) by the Intl. Culture Association (ICA).
Best Regards, Chakradhar
Dear Chakradhar
Of course there is no 'magic'. It was just a lively way of speaking in response to Kirti's comments about 'aligning ideas'.
Regarding your mention of ToP: it is defined as a set of structured facilitation methods that help groups think, talk and work together.
That is not the case here. For MCDM we need to know only what people think about a project, and we need them to tell us what benefits and/or disadvantages they expect, that is, pure information, data.
We are not working with them or lecturing them; we are not teachers or prophets trying to convert people to our ideas. We only need to know what they think and what their preferences are. We don't need to align ideas.
Once we collect this information, we can feed our model. As easy as that.
Thank you for your contributions. In my goal of producing a better MCDM procedure, irrespective of the model used, I need as many comments, criticisms and responses as possible. All of them are very valuable inputs, because they let me know things that I am not aware of and different points of view, and naturally they tell me when I am wrong in such and such an aspect, which of course is possible.
I believe that this way of working and interchanging ideas about a very specific topic, using this RG space, can be considered a sort of virtual workshop aiming at improving something, in which I am only a moderator.
Dear Waldemar
I believe that in MCDM it is not a matter of aligning ideas, and we don't need that either. It is a matter of gathering data to understand how people think about something. If 65 % of people say that Project A is preferred, 22 % say that they are inclined toward Project B, and 13 % prefer Project C, we have a clear signal of what the universe of people thinks of each project, and that is what we need. They are not voting for a unique project; they are expressing their preferences for each one.
When I mention Arrow's theorem, I link it with the fact that a person, or a reduced group of persons, can't decide for a universe of persons. I wish I could work with you on the philosophy of Arrow's theorem, but in all honesty, I am not technically prepared for that.
Hello Nolberto,
When more than one decision maker is involved, I prefer to use the term Group Decision Making (GDM) instead of MCDM, but I have noticed that a lot of people do not make a distinction. A lot of GDM approaches use an MCDM method as an engine, which leads again to your question "which model did you use and why?"
My approach is not to use an MCDM method as the core of a GDM method, but to let each decision maker (whether there are only a handful or 10,000) come up with his own ranking of the alternatives (how he gets there, and which MCDM method he used, is for me not important or relevant). When all individual rankings are known, I determine a group ranking that fits 'best' all the given individual rankings.
The reasoning is rather simple:
- a measurement of 'fit' between a given ranking and one other ranking (e.g. the ranking of one of the decision makers) is Kendall's rank correlation coefficient
- a measurement of 'fit' between a given ranking and several other rankings (the rankings of all the decision makers) is the median of the rank correlation coefficients between the given ranking and each individual ranking of the decision makers
- the problem of finding a ranking that fits 'best' the individual rankings of the decision makers can now be reduced to an optimisation problem: find the ranking with the highest median rank correlation coefficient with the given rankings of the decision makers.
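The three steps above can be sketched in a few lines of Python. This is only an illustrative sketch, not the AURORA software: it brute-forces all permutations instead of using AURORA's branch and bound algorithm, so it is feasible only for a handful of alternatives, and the three decision makers' rankings at the bottom are invented for the example.

```python
from itertools import permutations
from statistics import median

def kendall_tau(r1, r2):
    """Kendall rank correlation between two strict rankings.
    Each ranking maps alternative -> rank position (0 = best)."""
    items = list(r1)
    n = len(items)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = items[i], items[j]
            s = (r1[a] - r1[b]) * (r2[a] - r2[b])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def consensus_ranking(rankings):
    """Find the ranking whose *median* Kendall tau with the
    individual rankings is highest (brute force over all
    permutations; AURORA uses branch and bound instead)."""
    alts = list(rankings[0])
    best, best_score = None, -2.0
    for perm in permutations(alts):
        cand = {alt: pos for pos, alt in enumerate(perm)}
        score = median(kendall_tau(cand, r) for r in rankings)
        if score > best_score:
            best, best_score = perm, score
    return best, best_score

# Invented example: three decision makers rank alternatives A, B, C
dms = [
    {"A": 0, "B": 1, "C": 2},
    {"A": 0, "B": 2, "C": 1},
    {"B": 0, "A": 1, "C": 2},
]
group, fit = consensus_ranking(dms)  # group ranking and its median tau
```

Note that, as the post says, the cost of the search grows with the number of alternatives (permutations), not with the number of decision makers, who only enter through the median.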
The AURORA method is a GDM method that uses this approach (the algorithm used to find the best group ranking is a branch and bound algorithm; note that the time complexity of this algorithm is determined by the number of alternatives, not by the number of decision makers). You can find a demo version of the AURORA software as a data link of the RG publication "Why don't we KISS!?" (you can always ask me questions about the use of the software or about AURORA).
https://www.researchgate.net/publication/316880634_Why_don%27t_we_KISS_-_software_CD-ROM
https://www.researchgate.net/publication/4728970_A_branch_and_bound_algorithm_to_construct_a_consensus_ranking_based_on_Kendall%27s_t
Dear Theo,
This is my response to your earlier request of comment as promised.
I consider myself an AHP practitioner, not a mathematician nor even an academic (although I was a lecturer in an MBA program). My interest in learning its mathematics is limited to improving my understanding of the theory so that I can 'speak the AHP language' reasonably well.
So, regretfully I must say that I failed to understand the context that prompted you to the argument, hence I am not able to give you a meaningful comment on your mathematical conjecture. The way I see it as a practitioner, AHP is a 'special language' for articulating a tacit sense of relative priority in somebody's mind and making it explicit reasonably accurately. A thoughtful and well-informed individual could be expected to provide a set of judgments that gives an eigenvector with a low inconsistency. The inconsistency level is simply feedback about the level of coherency in the set of judgments, and hence about how well the resulting eigenvector represents the actual/tacit relative priority being measured. To me, improving accuracy (reducing the inconsistency level) cannot be done mathematically. It needs the decision maker to review and revise his judgments. So, I don't understand what 'the principal right eigenvector minimizes the AHP inconsistency measure' means for the purpose of 'making this tacit sense of order explicit reasonably well'.
My understanding of the geometric average is only in the context of aggregating the judgments of a group of people. Given the same pairwise comparison question, a group of individuals may give different sets of responses, i.e., different numbers from the AHP's 1-9 fundamental scale. Only relatively homogeneous judgments give a meaningful aggregation value, hence the AHP's distance of ratio scales measurement. Here I also fail to understand what 'the geometric row mean' represents in your conjecture.
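For readers following this exchange, the two quantities under discussion can be put side by side numerically. The sketch below uses an invented 3x3 pairwise comparison matrix (not anyone's actual data) and computes both the principal right eigenvector priorities and the geometric row mean priorities, together with Saaty's consistency ratio; for a nearly consistent matrix the two priority vectors come out very close.

```python
import numpy as np

# Invented 3x3 pairwise comparison matrix on Saaty's 1-9 scale;
# A[i][j] expresses how strongly alternative i is preferred to j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
n = A.shape[0]

# Principal right eigenvector -> AHP priority vector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w_eig = np.abs(eigvecs[:, k].real)
w_eig /= w_eig.sum()                 # normalize to sum to 1

# Geometric row mean -> the approximation mentioned in the conjecture
w_geo = np.prod(A, axis=1) ** (1 / n)
w_geo /= w_geo.sum()

# Saaty's consistency index and ratio (random index RI = 0.58 for n = 3)
lambda_max = eigvals.real.max()
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58                       # CR < 0.1 is conventionally acceptable
```

As the post notes, a low CR only tells the decision maker that the judgments are coherent; lowering it further is a matter of revising judgments, not of mathematics.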
I apologise for not being able to give you a meaningful comment as you might hope for. I can only wish you the best with finalising this research.
Wim
WDK. When more than one decision maker is involved, I prefer to use the term Group Decision Making (GDM) instead of MCDM, but I have noticed that a lot of people do not make a distinction. A lot of GDM approaches use an MCDM method as an engine, which leads again to your question "which model did you use and why?"
NM. Well, even though I agree with you, the use of an MCDM method does not preclude GDM. In my opinion GDM is one of the options.
I have seen a couple of methods that use GDM together with MCDM. AHP is one of them, and if I am not wrong WASPAS, or a similar method developed by Zavadskas, also allows for GDM; SIMUS also contemplates it, but based on a completely different concept.
WDK. My approach is not to use an MCDM method as the core of a GDM method, but to let each decision maker (whether there are only a handful or 10,000) come up with his own ranking of the alternatives (how he gets there, and which MCDM method he used, is for me not important or relevant). When all individual rankings are known, I determine a group ranking that fits 'best' all the given individual rankings.
NM. I believe that it is a very interesting procedure and for me it makes a lot of sense, and I also understand that each DM starts with the same set of alternatives and criteria, that is, with the same modelling.
If this is the case, ranking comparisons make sense. However, if each DM builds his own model, I don't think the rankings can be compared, because they may correspond to different problems.
In this second case it appears that each DM is allowed to have his own idea of reality, when in fact there is only one. Therefore, one could use pair-wise comparisons as in AHP and ANP, while another uses reliable quantitative values, and maybe another one doesn't use any method at all and relies on his knowledge.
WDK. The reasoning is rather simple:
- a measurement of 'fit' between a given ranking and one other ranking (e.g. the ranking of one of the decision makers) is Kendall's rank correlation coefficient
NM. I agree, and I understand that you are assuming that there could be as many rankings as DMs.
For me the procedure looks sound and mathematically correct. However, Kendall's rank correlation, as well as Spearman's and Pearson's, works with random variables, and I don't see that this is the case here, when it is assumed that the DM may employ an MCDM method, or none; in any case it seems to me that he is not acting at random. Please correct me if I misunderstood the method or simply if I am wrong.
Thank you so much for your offer about the use of AURORA.
Best regards
Nolberto
I guess that many times your choice of model is affected by your familiarity with the models and with the nature of the problem itself. For example, MCDA tools are well known, and they vary in terms of usability according to the logic of the problem at hand.
I created a decision support system for contractor selection using an assortment of MAUT, AHP, and PROMETHEE II. The model also introduced bounds to the PROMETHEE method which is a new approach.
I'd be glad if you'd like to take a look at the paper and share some thoughts. The paper is called "A deterministic contractor selection decision support system for competitive bidding"; I can make the full text available through my profile.
Michael
Could you send me your paper? I could not find it in your profile.
Dear Mudassar
Thanks for your response
You have selected two very good methods, since at least one of them (PROMETHEE) allows for considering resource availability.
I am curious, and I imagine that it could be very helpful for many people in RG, if we could have a brief description of your problem, of how you prepared the initial decision matrix, and of which way you used to get the weights.
By the way, have you also performed a sensitivity analysis?
Dear Abteen
Thank you for answering my question.
There is no doubt that your experience in MCDM plays a fundamental role in giving a coherent answer.
What immediately transpires from your comments is that you can't use the same MCDM method for every kind of problem. I have also said that many times; however, many people are not aware of this fact.
Another very important aspect of your answer is that the most important task is not to select a method to solve a problem but to gather the data, and I would add that the data must represent as faithfully as possible the scenario that you are modelling.
Of course, your last paragraph obviously refers to pair-wise comparison, and according to some researchers the way a question is posed may lead to different performance values, and thus different results. My experience in this is very limited, but common sense indicates that it may be true.