In my specific research (on the energy-efficiency potential of envelope constructions) I have applied three different MCDA methods to evaluate my goal function (Analytic Hierarchy Process (AHP), Grey Relational Analysis (GRA), and Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)) and I got three different results, and those results do not correlate well with each other. For instance, in my case the largest correlation was between the GRA and TOPSIS (weights by the entropy method) rankings: 0.962. On the other hand, the correlation between AHP and GRA is lower: 0.792. But unfortunately, the key contradictory fact is the ranking of the alternatives: there is no obvious leader across all of the above-mentioned techniques...
Frankly, I have no idea what to do to improve the results of the research so that my conclusions can be generalized for the decision-making process.
I'll be very grateful for any idea or advice related to solving this problem.
Yuriy Biks, the "problem" you describe is one that we encounter very often when it comes to MCDA methods! Though it sadly isn't discussed nearly as much as it should be in the environmental/engineering literature. The fact is that true decisions are always subjective, and therefore different methodologies can lead to different results. We go over this in detail in our paper on environmental decision support using MCDA methods, which I think can be very pertinent to you (especially section 4.2.2 "selecting a decision support method" and section 4.2.5 "dealing with variable results"):
Article Developing successful environmental decision support systems...
I know that it's probably not the answer that you are looking for, but, to quote a section from the above paper: "However, many have concluded that these differences between MCDM methods can be beneficial to the decision-makers, forcing them to look at the problem through various lenses and giving them a better understanding of the situation and thus the reason for choosing an alternative." (Walling and Vaneeckhaute, 2020)
I personally think that your results are extremely interesting and truly reflect the nature of decision-making. Oftentimes, we (engineers/scientists) tend to want to find the "optimal" solution, believing that one alternative must be objectively better than the others. But in the case where we are truly facing a decision-making problem, as seems to be the case in your situation, there is no optimal solution! There are just different solutions that each have their benefits and drawbacks, and the ranking of these solutions depends on what criteria are preferred and what methods are implemented.
Now, I know that this doesn't help when seeking to make a recommendation (e.g. a client probably wouldn't be happy if they asked you to help with their decision and you told them that, following your analysis, there is no clear solution), but that's the whole point!!! If you can easily and clearly find a better solution, then you're likely dealing with a very easy decision or an optimization problem instead. Focus should instead be placed on understanding why some alternatives may outperform one another in different situations/using different methods. What is causing method 1 to be preferred by TOPSIS, while AHP recommends method 3? Based on this information, can I reformulate the decision or can I make my own conclusions on which method I should prefer? Are there factors that you might not have included that might be pertinent? There are many, many places that you can move forward from here. As I've stated, this is a super interesting place to be in; just open yourself to the variability and you can basically go anywhere with this analysis!
On a more practical note, I do recommend that you make sure that your decision criteria are properly divided and representative of your situation, so that you aren't experiencing any kind of bias (discussed in section 4.2.5 of our paper). It is possible that some criteria might be weighted non-proportionally or be over-represented, which can lead to unbalanced weighting or can even impact the normalization procedure used by these methods (e.g. splitting and asymmetry biases). Hierarchical methods (like AHP) can also fall victim to bias due to the hierarchy: if, for example, you placed the same criterion at "level" 1 instead of 3, it can end up being weighted differently (usually more heavily) by users.
I hope this helps!
Cheers,
Eric
Yuriy,
I would caution you in the use of AHP in any research application, as it is fatally flawed as a decision analysis method. Unfortunately, AHP is prone to rank reversal with regard to a DM's preferences, and may result in suboptimal decisions. I would recommend using methods adhering to the rules of axiomatic decision analysis, such as multi-objective utility theory or multi-objective value theory depending on your preference, and then comparing your GRA and TOPSIS results to the output from these models.
Nicholas
Of course you are right, but don't forget that in utility theory you assign utility values according to YOUR point of view, which may be different from mine.
Dear Eric
EW- the "problem" you describe is one that we encounter very often when it comes to MCDA methods! Though it sadly isn't discussed nearly as much as it should be in environmental/engineering literature. The fact is that true decisions are always subjective,
NM- I believe that most personal and some corporate and political decisions have a very large subjective component, but they are not completely subjective. In my opinion, in the industrial field it is the opposite: most decisions are based on actual, objective data and very few on intuition. A car company doesn't select a site for a new factory based on its feelings, but on a plethora of facts that are clearly objective, such as land price, taxes, suitable manpower, poor communications, etc.
EW- and therefore, different methodologies can lead to different results.
NM- This is absolutely true, but we have to differentiate subjectivity in the method from personal subjectivity. The first can be found, for instance, in AHP when it assumes that trade-offs can be used as weights. The second may be found in practically all methods, for instance in PROMETHEE when deciding which preference function to use.
EW-We go over this in detail in our paper on environmental decision support using MCDA methods, which I think can be very pertinent to you (especially section 4.2.2 "selecting a decision support method" and section 4.2.5 "dealing with variable results"):
Article Developing successful environmental decision support systems...
I know that it's probably not the answer that you are looking for, but to quote a section from the above paper: "However, many have concluded that these differences between MCDM methods can be beneficial to the
decision-makers, forcing them to look at the problem through various lenses and giving them a better understanding of the situation and thus the reason for choosing an alternative." (Walling and Vaneeckhaute, 2020).
NM- I perused your paper, but I fail to see how there could be a better understanding of the situation. If all methods start from the same initial decision data, it is hard to see how this knowledge can be acquired.
Sincerely, I also fail to understand how uncertainty of results can be beneficial. In addition, the paper recommends solving a problem by different methods. What for? What conclusion can you draw from 3 or 4 different rankings of the same problem? What information can you extract from them?
Even if two of them have a high correlation, what do you make out of it?
Unfortunately, Section 4.2, when enumerating the different ways the methods operate, does not mention the most important of them all, which is Linear Programming and Goal Programming, the pioneer methods in MCDM and the only methods that can give optimal solutions, if they exist.
EW- I personally think that your results are extremely interesting and truly reflect the nature of decision-making. Oftentimes, we (engineers/scientists) tend to want to find the "optimal" solution, believing that one alternative must be objectively better than the others.
But in the case where we are truly facing a decision-making problem, as seems to be the case in your situation, there is no optimal solution! There are just different solutions that each have their benefits and drawbacks and the ranking of these solutions depends on what criteria are preferred and what methods are implemented.
NM- Again, there could be optimal solutions, although possibly not for multiple objectives; Goal Programming can do that to some extent.
Don't you think that if a problem has different solutions, the DM is back to square one? In addition, how do you measure benefits and drawbacks? Which is the benefit that a method doesn't contemplate? Resources?
However, while I write this, I think that there is a grain of truth regarding benefits and drawbacks. If one method gives a ranking of A>B>C while another yields B>C>A for the same problem, it seems that A is the best according to one method, from the mathematical point of view. When analyzed by the DM, it could be that it is no longer the best considering another, intangible point of view, for instance delivery time, which is essential in this project, and neither is the second best, B. Examining the other ranking, it could be that B is much better than A.
This analysis, which is no more than sensitivity analysis, but applied simultaneously to several methods, can be a way to differentiate them and choose the best method.
EW- Now, I know that this doesn't help when seeking to make a recommendation (e.g. a client probably wouldn't be happy if they asked you to help with their decision and you told them that, following your analysis, there is no clear solution), but that's the whole point!!!
NM- Exactly, and also, I think that he wouldn't be happy either when asking the DM and hearing in response that some input came from his intuition. Again, multiple or parallel sensitivity analysis could be the answer.
EW- If you can easily and clearly find a better solution, then you're likely dealing with a very easy decision or an optimization problem instead.
NM- I don't think that you can talk of optimality in multi-criteria problems.
EW- Focus should instead be placed on understanding why some alternatives may outperform one another in different situations/using different methods.
NM- Yes, that is a good suggestion, provided that you have the same yardstick to measure them across different methods; in addition, you have to define the most important alternatives, which is precisely the purpose of MCDM.
EW- What is causing method 1 to be preferred by TOPSIS, while AHP recommends method 3?
NM- You can compare rational methods such as TOPSIS with PROMETHEE, but you can't compare one of those methods with AHP, which is completely irrational.
EW- Based on this information, can I reformulate the decision or can I make my own conclusions on which method I should prefer? Are there factors that you might not have included that might be pertinent? There are many, many, places that you can move forward from here. As I've stated, this is a super interesting place to be in, just open yourself to the variability and you can basically go anywhere with this analysis!
NM- You are answering your own question with your second one, which agrees with what I said above about using sensitivity analysis.
Regarding your last paragraph, apparently you forget that you can't make variations at will. They must be documented. If you are selecting electronic equipment, and the method selected A>B>C, the DM doesn't need to accept this result blindly. For instance, he may have knowledge about some past deficiencies with equipment A.
EW- On a more practical note, I do recommend that you make sure that your decision criteria are properly divided and representative of your situation, so that you aren't experiencing any kind of bias (discussed in section 4.2.5 of our paper). It is possible that some criteria might be weighted non-proportionally or be over-represented, which can lead to unbalanced weighting or can even impact the normalization procedure used by these methods (e.g. splitting and asymmetry biases).
NM- If you use objective weights from entropy, they represent the capacity of each criterion to evaluate alternatives (Shannon's theorem), and yes, it could be that there are unbalanced criteria, but that unbalance is a consequence of the real data input, and normally there is nothing you can do about it. Therefore, there is no non-proportionality or over-representation. The importance of the criteria does not depend on the DM but on the data.
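For readers who want to check this themselves, here is a minimal sketch of the entropy weighting NM refers to; the 4 x 3 matrix is a made-up example, not Yuriy's data:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via Shannon entropy.
    X: (m alternatives) x (n criteria) matrix of positive performance values."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                          # column-wise share of each alternative
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion, in [0, 1]
    d = 1.0 - E                                    # divergence: higher = more discriminating
    return d / d.sum()                             # normalized objective weights

# hypothetical decision matrix (rows: alternatives, columns: criteria)
X = np.array([[7.0, 430.0, 0.28],
              [6.0, 410.0, 0.31],
              [9.0, 395.0, 0.30],
              [8.0, 450.0, 0.26]])
print(entropy_weights(X))  # criteria that discriminate more between alternatives get larger weights
```

As the code shows, the weights come entirely from the dispersion of the data itself; no DM preference enters anywhere.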
EW-Hierarchical methods (like AHP) can also be victim to bias due to the hierarchy, if, for example, you placed the same criterion in "level" 1 instead of 3, it can lead it to being weighed differently (usually more importantly) by users.
NM- There is no doubt about it. The hierarchy structure has been obsolete for decades.
Thank you for your very good contribution
Dear Pr,
Please have a look at this article (Section 3). I am the author, and I can offer much more help.
Title: Performance of multicriteria decision making methods: study and cases
Authors: Moncef Abbas; Zhor Chergui
Abstract: In this paper, the problematic of MCDM methods evaluation is treated, we study in particular the behaviour of the criteria (tests) measuring the performance of these methods. Indeed, a critical study toward the series of tests proposed by Triantaphyllou is carried out. On this basis, some impossibility results are defined. In addition, a new assessment test is proposed.
Dear Eric! Thanks a lot for your answer, but I really couldn't find the "main point" in your post. I believe that it hypothetically could be found through a detailed review of your paper. Best regards, Biks Yuriy
Dear Yuriy
Since the creation of MCDM methods, several decades ago, the problem of different results using different methods, all of them treating the same problem, has always been present.
The reason, in my understanding, is, among others, subjectivity. We must start by assuming that most methods have sound mathematical algorithms, and when subjectivity is applied it is based on reasoning. AHP is the exception, because it does not have any mathematical support except the use of eigenvalue analysis, and this is only good when the number of alternatives is ≤ 3. It is a method based on intuition, on strange assumptions without any grounds, and on others that contradict logic.
There are excellent methods such as PROMETHEE and TOPSIS; however, both have their drawbacks, especially in relation to weights. You used entropy weights for TOPSIS, which is the correct procedure. However, in TOPSIS you have subjectivity when deciding which type of distance to use.
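To make the distance point concrete, here is a minimal TOPSIS sketch in which the Minkowski order p is an explicit parameter; the matrix, weights and criterion directions below are hypothetical:

```python
import numpy as np

def topsis(X, w, benefit, p=2):
    """Closeness coefficients for TOPSIS (higher = better).
    X: m x n decision matrix; w: weights summing to 1;
    benefit: True where larger values are better;
    p: distance order (2 = Euclidean, 1 = Manhattan) -- the subjective choice."""
    V = (X / np.linalg.norm(X, axis=0)) * w              # vector-normalized, then weighted
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = (np.abs(V - ideal) ** p).sum(axis=1) ** (1.0 / p)
    d_neg = (np.abs(V - anti) ** p).sum(axis=1) ** (1.0 / p)
    return d_neg / (d_pos + d_neg)

X = np.array([[7.0, 430.0, 0.28],
              [6.0, 410.0, 0.31],
              [9.0, 395.0, 0.30]])
w = np.array([0.4, 0.35, 0.25])
benefit = np.array([True, False, True])
for p in (1, 2):
    print(p, np.argsort(-topsis(X, w, benefit, p)) + 1)  # the ranking can change with p
```

On the same data, switching p can reorder the alternatives, which is exactly the subjective element being discussed.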
I am not very familiar with GRA, so my opinion on it is worthless.
The main problem in comparing rankings is not whether the rankings from three methods have a decent Kendall tau coefficient of correlation, because that does not help. It certainly indicates that they can give very similar results, but it does not mean that those results are close to the 'real' result, which of course we don't know.
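For reference, the Kendall tau NM mentions can be computed directly; the three rank vectors below are invented for illustration:

```python
from scipy.stats import kendalltau

# hypothetical rank vectors for five alternatives (1 = best)
ahp = [1, 2, 3, 4, 5]
gra = [2, 1, 3, 5, 4]
topsis = [2, 1, 4, 5, 3]

for name, (r1, r2) in {"AHP vs GRA": (ahp, gra),
                       "AHP vs TOPSIS": (ahp, topsis),
                       "GRA vs TOPSIS": (gra, topsis)}.items():
    tau, _ = kendalltau(r1, r2)
    print(f"{name}: tau = {tau:.3f}")
# a high tau only says the methods agree with each other,
# not that either is close to the unknown 'real' ranking
```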
In my opinion, the only way to compare results is, for each method, to run it over a large number of different data sets for the same scenario, compute the trends of its results, and relate them to the actual trend of the data across all scenarios.
This data is often quantitative and reliable, but it can also be qualitative and still reliable if obtained through sound procedures such as statistics or surveys. Consequently, this data is real, and it is then a way to compare our mathematical results against it. The closer, the better.
With some colleagues, we applied this hypothesis, considering 72 scenarios, all referring to the same alternatives and with the same criteria, but with different values per scenario; we then determined how the 72 scenarios considered together develop, and contrasted that result with the results from two MCDM methods.
There was a strong coincidence with one of the methods. The paper will be published shortly.
A simple table can be built with two columns belonging to the two methods and a third one corresponding to the actual evolution of the criteria that lead to a result, which, again, we don’t know, but we DO KNOW that this actual data is responsible for the result.
The hypothesis supporting this procedure is that the MCDM method with most coincidences with actual factors is the one which may produce the best results. That is, it is an indirect measure.
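A rough sketch of that indirect comparison, assuming each scenario yields a score per method plus some observed indicator of the actual outcome (all numbers invented):

```python
import numpy as np
from scipy.stats import kendalltau

# per-scenario scores of the best alternative under two methods,
# and an observed indicator of how reality actually evolved
method_a = np.array([0.71, 0.64, 0.69, 0.75, 0.60])
method_b = np.array([0.55, 0.80, 0.40, 0.66, 0.52])
observed = np.array([0.70, 0.62, 0.68, 0.74, 0.59])

for name, scores in [("method A", method_a), ("method B", method_b)]:
    tau, _ = kendalltau(scores, observed)
    print(f"{name}: trend agreement tau = {tau:.2f}")
# the method whose trend tracks the observed data most closely
# is the candidate proxy for the unknown 'true' result
```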
As an analogy, there are an infinite number of factors that indicate when a person is in perfect health (if such a thing exists). It is something intangible. However, we have different methods to keep a healthy heart, a proper weight, healthy skin, a bright mind, etc., and if we make different combos of these measures, the combo that best approaches a healthy individual in all aspects is probably the best.
There is another, much simpler approach: compare results from one MCDM method with those of a method not using any type of weights, thresholds or assumptions, that is, a method that works only with actual facts, such as SIMUS. It appears that if a problem is solved by this proxy method and by any other methods, the one that best correlates with the proxy could be the best.
Dear Nicholas! If I got your advice right, you think that AHP absolutely couldn't be used as an assessment tool for MCDA problems under any circumstances. Did I get your point right?
If so, why can we find a huge number of AHP application examples, as advertised on the Saaty Wikipedia page?
As far as I know, the AHP technique is quite an appropriate tool for the evaluation of MCDA problems... (maybe in the social sciences it is more acceptable than in engineering?)
Nevertheless, I appreciate any relevant advice, and thank you for your reply.
Best regards, Biks Yuriy
Dear Nolberto! Thank you for your comprehensive and well-grounded answer. I understand that there is no easy path to the truth...
The main conclusions that I draw from your answer are:
1. There is no "right" technique to prove the legitimacy of my MCDA tool, except perhaps multi-scenario analysis with the same data.
2. AHP is the black sheep in the flock of MCDA assessment techniques.
3. The SIMUS method is to be studied for further implementation in my practice.
4. To be continued...
Am I close to the "right" way?
Dear Pr.Nolberto Munier
You can get my paper from this link,
http://www.inderscience.com/offer.php?id=87821
Best regards
Dear Yuriy
Thank you for your answer, which is 100% in line with what I said.
Please send me your email; I would like to send you the SIMUS software, of course, if you think that it can help you.
By the way, it is true that AHP has a record of being the most used method in MCDM.
From my point of view, it is because it has the ability to make people believe that they can solve complex problems just by intuition. It appears that mathematics, engineering and economics don't have any place in solving these problems.
In my opinion, AHP is good for trivial problems, and I honestly don't believe that social science problems, which are normally complex, can be solved by AHP. One of the reasons is that it would then be necessary to assume that social problems can be represented by, and limited to, a linear hierarchy.
Yuriy Biks
There can be a problem with data normalization in TOPSIS. The second issue is the weight vector. You can check the impact of these components in MCDA problems here: Article Are MCDA methods benchmarkable? A comparative study of TOPSI...
If you know Python, you can use this repo: https://gitlab.com/shekhand/mcda
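As a quick illustration of the normalization issue (a hypothetical matrix, unrelated to that repo):

```python
import numpy as np

X = np.array([[7.0, 430.0, 0.28],
              [6.0, 410.0, 0.31],
              [9.0, 395.0, 0.30]])

vector = X / np.linalg.norm(X, axis=0)                        # classic TOPSIS normalization
minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
linear = X / X.sum(axis=0)                                    # linear-sum normalization

# the three normalized matrices differ, so the weighted distances to the
# ideal point -- and possibly the final ranking -- can differ as well
print(vector[0], minmax[0], linear[0], sep="\n")
```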
Dear Wojciech! Thank you for the great opportunity to check my results from an additional point of view. Unfortunately, I am not confident in my programming skills, so I'll try to find a way to approach my problem with the proposed article.
Dear Yuriy
Perhaps we can help you if we know the nature of your problem
Dear Nolberto Munier
I always keep listening to your discussions about MCDM techniques, and especially AHP. I would like to share with you and other MCDM researchers our new MCDM method, 'FDOSM', which was published recently. Your comments are really important. Please find it at the following link:
https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22322
Dear Mr. Albahri
Thank you for your message.
Following your suggestion, I used the link you provided for your paper; unfortunately, I can read only the abstract.
It appears from it that a main problem in most MCDM methods is aggregation, and I believe that this is correct.
You talk about FDOSM, which I don't know, and which works using fuzzy logic, something in which I am not proficient and have no experience.
For that reason, I am unable to make a recommendation.
This problem occurs frequently in MCDM: which calculation method is the best? In that case, do the solution ranking with quite a few methods and select the solution which appears the maximum number of times with the same ranking.
Dear Sumit Mishra! Thank you a lot for your answer. To apply your approach, I'd have to conduct a lot of ranking procedures with the same criteria for a variety of alternatives. I have some doubts about this "training" set of alternatives for my research. Nevertheless, thank you for sharing.
Dear Sumit
Yes, it is a good procedure because it is reasonable; as a matter of fact, I followed it some time ago.
That is, you 'fabricate' a new ranking with the most common coincident alternatives for the first position, the second, and so on.
However, it does not solve the problem of determining which is the best method, since the ranking you created is fictitious because it does not correspond to any of the methods tested.
In my approach I compared by correlation that ranking with the ranking of another method that I assumed could be considered a proxy of reality.
The method I use as proxy was SIMUS.
Why it?
Because reality does not use any subjective data or undocumented assumptions.
The real result is based on maybe hundreds of factors that intervene in many different ways.
Thus, reality is independent of any anthropogenic contribution, and in this respect it is somehow similar to SIMUS where there are no weights or assumptions, since all data is objective.
My reasoning is based on this similarity. Consequently, any MCDM method which is 100% objective could be considered a proxy, but to the best of my knowledge there is no such method which, in addition, could incorporate all or most of the characteristics of complex scenarios.
I also agree with Yuriy in that you would need a lot of ranking procedures
You are right, Sir Nolberto! What I am suggesting is not creating a fabricated ranking but selecting the method which is the sole and true representative of the pack of methods used.
Yuriy is right that this suggestion involves working with quite a few methods, but then there is no universally accepted single method which by default represents all, although there are a few which are most famous.
In MCDM, we are creating an unbiased index which, in conjunction with assigned weights and constraints, can give us the right set of leaders in a pack through ranking. The results of different methods will give different values, but we are only interested in knowing the most unbiased ranking in a pack, which in general cannot be compared so easily.
Hence the use of quite a few famous methods, to remove the ranking bias of the finally selected method as much as possible.
Dear Sumit
As I understand it, your aim is to design a single method that encompasses the results of other methods?
I fail to understand how you can do that
Your second paragraph is true.
Your third paragraph is intriguing. You mention assigning weights: for what? And wherever you use them, where do you get the weights from?
You also use the word 'unbiased'. If you use subjective weights they are not unbiased.
If you use objective weights, they are common to all methods, because the criteria are the same for all of them.
You also say that you assign constraints. In my opinion, they can't be assigned at random; they must be adjusted to the alternatives.
Could you please tell us which are the most famous methods to remove bias?
Dear Nolberto Sir,
What I am suggesting is the use of a few famous methods like WSM, AHP, TOPSIS, SIMUS, etc. to arrive at rankings and, by comparison, choose the method whose ranking is most similar to those of its peer methods.
The term unbiased is used in the sense that we are finally choosing a method which does not vary much from a few peer-group methods.
In the third paragraph I was trying to give a generic glimpse of any MCDM method, but I probably could not elaborate it clearly, and complicated it with the use of terms like weights and constraints.
Dear Sumit
OK, you compare results or rankings from three methods A, B and C, with rankings of say 5 alternatives each; what is the gain?
Assume that A and C coincide in 2 or 3 alternatives or even both suggest the same best alternative. What does it mean?
Remember that in the first three methods that you mention you are probably using the same weights, derived from AHP. That is, you are using AHP weights in TOPSIS and in WSM.
In my opinion that is biasing, because you are using the output of one method (AHP) as an input of another method (TOPSIS), and therefore establishing an artificial link between these two methods. Do you think that under this circumstance both results are independent?
If this reasoning is correct, do you think that there is no bias? The same applies to WSM or other methods using weights.
Dear Sir,
I am suggesting using only the rank comparison of a few famous methods to arrive at the finally shortlisted method.
Values of one method cannot be compared to values of another method.
A method ranking: 12345
B method ranking: 12543
C method ranking: 42315
D method ranking: 21345
Therefore, A is the shortlisted method out of peer methods.
Method B agrees with A only on the 1st and 2nd ranks, but not on the later ranks, compared with the peer group.
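A sketch of this shortlisting rule applied to the four rankings above (reading '12345' as the rank vector of five alternatives):

```python
import numpy as np
from scipy.stats import kendalltau

rankings = {"A": [1, 2, 3, 4, 5],
            "B": [1, 2, 5, 4, 3],
            "C": [4, 2, 3, 1, 5],
            "D": [2, 1, 3, 4, 5]}

# mean Kendall tau of each method against its peers
for m in rankings:
    taus = [kendalltau(rankings[m], rankings[o])[0]
            for o in rankings if o != m]
    print(m, round(float(np.mean(taus)), 3))
# shortlist the method with the highest mean agreement with its peers
```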
I applied grey relational analysis for machining process improvement. Please go through the video: https://www.youtube.com/watch?v=9leOLn_2Dds&t=217s
I suggest that you try to analyze the sensitivity of the results obtained by each method to small changes in the initial data and preferences of the decision maker. Perhaps as a result of such an analysis, several of the most promising alternatives will be identified.
Vladislav Shakirov Thank you for your answer; your proposal could be accepted as one of the possible ways of validating "the best alternative". I intend to use the IOSO software for multicriteria optimization for each type of my assemblies first, and then to compare the results with the TOPSIS and GRA methods as objective ones.
I meant the following. Methods can show different ranking results, but varying the parameters may change the ranks of the alternatives. Thus, each method will produce some set of better alternatives (e.g. 2-3 alternatives... how many alternatives are you considering?). If these are intersecting sets, then a better alternative can be found. If this does not happen, then you can carry out the analysis with another method, PROMETHEE, for example. Or maybe it is worth reformulating the problem or revising the composition of the criteria.
Maybe following my article will be helpful:
J. Gogodze, Ranking-Theory Methods for Solving Multicriteria Decision-Making Problems, April 2019, Advances in Operations Research, 2019
DOI: 10.1155/2019/3217949
Dear Yuriy
I fully agree with Vladislav's suggestion. In reality, Sensitivity Analysis (SA) can help a lot by indicating which method has the best alternative. If one method shows that alternative A is the best, followed by alternatives B and C, and the SA shows that A is very sensitive to the variation of certain criteria, then it will probably be better for the DM to choose alternative B.
Dear Vladislav
I agree with your concepts, but I don't understand what it is that you call 'intersecting sets'.
Could you please clarify?
Thank you
Nolberto Munier
Dear Mr. Nolberto,
Thank you for your answer and example. This is exactly what I meant. I will add one more example. If the first method finds the best alternative to be A, but as a result of varying the initial data or preferences alternative B becomes the best, then a set of promising alternatives is formed, consisting of A and B. Next, we perform a sensitivity analysis on the results obtained by the other methods. Suppose we have obtained sets of promising alternatives B, C (second method) and F, B (third method)... The intersection of these sets is alternative B. It can be argued that alternative B is the best in this case. Of course, I have described a very simple case. In reality, there may be no such alternative. But as the range of parameter variation in the sensitivity analysis increases, an intersection of the sets must eventually appear.
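In code, the intersection Vladislav describes is a one-liner over the per-method sets of promising alternatives (his example values):

```python
# promising alternatives that survive sensitivity analysis, per method
method1 = {"A", "B"}
method2 = {"B", "C"}
method3 = {"F", "B"}

robust = method1 & method2 & method3
print(robust)  # {'B'}: the alternative every method keeps under perturbation
```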
Thank you Vladislav for your clarification
Well, for me it makes sense; that is, if two or more methods show preference for alternative B after SA, it appears reasonable to assume that it is the best.
However, in my opinion, the problem is not with the method you propose, but with how you vary the corresponding criteria. If this variation is based on choosing the criterion with the largest weight and then increasing it, as is done at present, the effort is useless, for three reasons:
1st. There is no guarantee that the criterion with the highest weight is the correct one to increase; maybe the best is the one with the lowest weight, and this has been demonstrated.
2nd. A subjective weight is not really a weight, and it is not good for evaluating alternatives, since it is not a weight but a trade-off. It is different with objective weights, which are suitable for evaluating alternatives (Shannon's theorem).
3rd. It is necessary to increase or decrease all intervening criteria (not necessarily the whole set) and to do that simultaneously, not one at a time, and this is not done at present.
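A minimal sketch of the simultaneous-variation idea in point 3, using a plain weighted sum as a stand-in scoring function (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_stability(score_fn, X, w, trials=1000, noise=0.05):
    """Fraction of random weight perturbations that keep the same winner.
    All weights are jittered simultaneously (not one at a time) and renormalized."""
    base = int(np.argmax(score_fn(X, w)))
    hits = 0
    for _ in range(trials):
        wp = w * (1.0 + rng.uniform(-noise, noise, size=w.shape))
        wp /= wp.sum()
        hits += int(np.argmax(score_fn(X, wp)) == base)
    return hits / trials

wsm = lambda X, w: (X * w).sum(axis=1)   # weighted-sum score on normalized data
X = np.array([[0.9, 0.4, 0.7],
              [0.8, 0.9, 0.3],
              [0.5, 0.6, 0.9]])
w = np.array([0.5, 0.3, 0.2])
print(rank_stability(wsm, X, w))  # close to 1.0 = robust first place
```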
Dear Yuriy Biks; You can choose the PROMETHEE method because it gives both positive and negative rankings at the same time, and finally you can get a net ranking if you want. In addition, if you have a data set that serves as an example for your work, you can first try the methods on this data set to find the most suitable method for your application. Moreover, I suggest you review again the data set of criteria and alternatives and the weights of your criteria.
Dear Feride
I agree with you regarding PROMETHEE.
However, I don't understand your sentence: "you can first try the methods through this data set to find the most suitable method for your application".
I believe that you mean finding the method that can best model his problem.
If so, you are right; however, most, if not all, methods are only able to model the most elementary characteristics of the project, such as alternatives and criteria, and in some very few cases, such as PROMETHEE, resources.
What about other characteristics such as precedence, impact, simultaneity of maximums and minimums, several scenarios, special conditions, dedicated weighting and many more?
Sure, the user can review his data, but that probably has nothing to do with the methods' inability to model even slightly complex scenarios.
Dear Nolberto Munier; Because of the advantages of the PROMETHEE method, I offered it as a suggestion. But it is the decision maker who actually determines the method. You are right, the choice of method may vary depending on the alternatives, criteria, priorities and many situations. However, if different results are obtained when trying different methods, this is not a simple multi-criteria decision-making problem. I agree with you that not every method can solve complex problems. So, in a multi-criteria decision-making problem, the decision maker should basically distinguish the methods according to what he wants and what he cares about.
Dear Feride
Thank you for your answer, with which I am in complete agreement.
However, it appears that I did not express myself very well.
It goes without saying that the DM has to select the method to use according to her/his needs, no doubt about it. But the point is that whatever method she/he chooses, many aspects of real scenarios CAN'T BE MODELLED, because most methods do not have that capacity.
I suggest you look at my profile in RG, papers 300a and 300b. There I propose a matrix to find the most appropriate MCDM method when the scenario has different types of characteristics. I counted 55 of them, and gave examples.
Of course, not all of these characteristics apply to a scenario; however, most of them CAN'T BE MODELLED by any of the different known MCDM methods. This is my point: if a scenario can't be modelled even in PROMETHEE (one of the best), or in TOPSIS (another excellent one), or in ANP (using a network), then it can't be solved.
Observe that if we have, say, 15 characteristics in a complex scenario, there are only two methods that can handle it: Linear Programming (LP) and SIMUS (because it is based on LP).
I would be happy to discuss this issue further if you are interested.
Dear Nolberto Munier; thank you for the answer and explanation. I understand you better now. I have examined your work; I think it is a study that will shed light on such problems. But I'm trying intuitionistic fuzzy logic to look at things from a different angle. It has been observed that it gives better results in all applications because it includes membership, non-membership and, most importantly, the degree of hesitancy. What I have observed in studies is that intuitionistic fuzzy logic gives more sensitive results in multi-criteria decision making. I wonder what you think about this topic.
Dear Feride
I am afraid that I don't understand you. You stated first that you were concerned with Yuriy's question, which referred to correctly evaluating problem results in MCDM.
Now you say that you are working with intuitionistic fuzzy logic to look at things from a different angle. Yes, but in so doing you are rightly trying to improve the quality of the information that comes from the alternatives, and then you can't talk about evaluation.
Evaluation, for me, is to improve as much as possible the ability of criteria to evaluate or qualify alternatives, and sincerely, I don't see the relation with intuitionistic fuzzy logic, where you improve the data contained in each criterion.
May I remind you that evaluation in MCDM is related to the discrimination of values in a criterion, and I don't see that the fuzzy procedure can improve it.
In addition, you talk about better results using fuzzy logic. My question is: how do you know that the results are better? What are, for you, sensitive results?
Dear Nolberto Munier; Thank you for your comments. Yes, you are right, we have moved somewhere different from the purpose of the problem. My answer to Yuriy Biks's main question is: with the PROMETHEE method, he can pass his data through the decision-making process again. Also, when choosing one of the multi-criteria decision-making methods, I think he should do a sensitivity analysis, as you and Vladislav Shakirov discussed above. On the subject of intuitionistic fuzzy logic: I think a single data entry is not enough in data sets; membership, non-membership and hesitancy data should also be revealed. By the way, I think our mutual discussions will deflect the question from its purpose. But if you are interested in fuzzy logic and intuitionistic fuzzy logic, I would be happy to discuss them with you in private.
Dear Feride
Sensitivity analysis (SA) is mandatory in any MCDM solution. It is not only a matter of uncertainty, since it is more related to robustness of the best alternative selected. It depends on the interactions between alternatives and criteria and among criteria.
I have a lot of respect for fuzzy logic, but really I am more concerned about its use. You can use a lower and a higher value simultaneously in an MCDM method without going intuitionistic.
I have said this many times: Fuzzy logic is great to have more accurate data, and very useful, PROVIDED THAT YOU HAVE WHERE TO USE IT.
For instance, there is a scenario where one of its characteristics is that alternative A, for whatever reason, must precede alternative G, and you have the data corresponding to a criterion such as 'construction'.
You can use intuitionistic fuzzy logic to get the most exact performance values for this criterion.
My question is: if the MCDM method that you are using, say TOPSIS, can't incorporate or model that precedence in the decision matrix, what is the utility of having refined fuzzy values?
I really appreciate your kind offer, but the fact is that I am neither proficient in nor too interested in fuzzy logic. I am more interested in modelling, in structures, not in numbers.
Dear Feride Tugrul !
Thank you a lot for sharing the idea, but I also didn't get your point: "... if you have a data set that is an example for your work, you can first try the methods through this data set to find the most suitable method for your application." If my decision, produced by several techniques, has resulted in different rankings, what idea would you propose for further investigation? That is the point...
I am also interested in using the fuzzy logic approach in my case.
Dear Yuriy Biks; Thank you so much for your comment. I will try to express myself. I hope I can make a small contribution to your problem. If you get different results when you try your data set with different methods, and you are looking for a different way despite sensitivity analysis, I suggest you experiment with expressing your data as fuzzy numbers or intuitionistic fuzzy values. Also, if you are positive about the fuzzy logic approach in your work, you can try to transform your data into fuzzy numbers or intuitionistic fuzzy values. In my work, I use the intuitionistic fuzzy logic approach because the data are expressed better. There are many ways to express your data in fuzzy logic or intuitionistic fuzzy logic. If you plan to work with fuzzy logic, you need to express your data with the degree of membership and the degree of non-membership. If you plan to work with intuitionistic fuzzy sets, then you need to express your data in terms of membership, non-membership and degree of hesitancy. It is difficult for decision makers to determine the degree of hesitancy, especially in intuitionistic fuzzy sets. When you encounter such problems, I suggest you take advantage of 'controlled sets'.
Dear Nolberto Munier; Thank you for your answer. I understand you. What I want to say is that if there is hesitancy, I recommend using intuitionistic fuzzy sets. If there is no hesitancy in your data set, you do not need to use them. As for your question: I have not encountered such a problem in the application areas I have worked on until today. If you want to express your data with fuzzy values to model according to the method you use, there are many recommended approaches for this. Obviously, there was always some hesitancy in the applications I worked with, and that is why intuitionistic fuzzy sets found a solution to my problem. However, if there is a situation that cannot be modelled, it should be investigated; speaking for myself, I have never encountered such a problem before. I hope this was an enlightening answer to your question.
Dear Feride
Thank you for your detailed explanation; however, you make reference to 'my problem', and I wonder what problem you are referring to.
I am simply saying that if, for a certain problem, you use different MCDM methods and you get different rankings, there is no way to say which is the best.
I don't see or don't understand what relation may exist between this problem and intuitionistic fuzzy sets, or even hesitancy. I am not saying that you are wrong; of course, my knowledge of fuzzy logic is very, very limited, and perhaps this is the reason why I can't understand what you say.
If you can solve this problem with fuzzy logic it will be a very great advance.
Dear Nolberto Munier; Thank you for the answer. I suggest using the fuzzy logic approach; this is just a suggestion. And if the results are consistent across all methods, we will find a good solution, as you said. I hope Yuriy Biks reaches the desired result.
Dear Feride
Yes, I understand that
My question is: what is the relation between using fuzzy logic and the disparity of results using different methods?
Even if you use fuzzy logic, you will be using the same data in all methods. What changed?
Maybe I am not able to see your proposal properly, so please enlighten me. It could very well be that there are aspects that I have not seen, but that you detected.
Dear Djamal
The final decision may depend on the DM only when he/she uses his/her expertise to analyze the result obtained objectively. In that circumstance, he may correct, modify or even reject the result, because he is working from solid results, not from preferences.
I wonder how the DM knows which is the most important criterion. Care to explain?
I disagree regarding the claim that in a multi-objective problem there are many solutions, because the purpose of a method is precisely to find a compromise solution that best satisfies the criteria; therefore, there must be one solution.
Remember that your starting point is to reach an agreement among all participants to decide which criteria they must consider. Once that is settled, there is a solution, not optimal but feasible, provided that it complies with all criteria. If that were not the case, no MCDM method would work.
You say that a solution should be chosen from all feasible solutions. My question is, how do you know which they are?
Your last sentence is correct, PROVIDED that the preference of the DM is based on reliable results, that is, results obtained objectively. The reasoning is simple: if the result is the consequence of using preferences that alter the original data, there is no guarantee that with another DM the result would be identical.
However, may I remind you that your answer and mine are not what Yuriy asked?
Dear Feride Tugrul! Thank you for your explanation. I do know that in my specific case, where no big data sets are available, the implementation of a fuzzy logic apparatus with powerful model learning from test and training data sets (the strong side of artificial networks, which could be integrated into my model; for instance, the back-propagation-of-error technique...) is quite doubtful, as far as I know... Moreover, I do not have the output value of my objective function for such a learning procedure, if I got your point of view right...
In my case, there are certain output values of my modelling operations.
Actually, I fully agree with the opinion of Nolberto Munier, mentioned before, about the absence of an appropriate technique for the DM in the case of result uncertainty.
To the thought of Djamal Eddine Ghersi:
"... Therefore, in this stage of decision-making, the final solution is chosen based on the preference of the decision-maker." I also think so; it is obvious. But in the general case, the decision-maker has to make a compromise between the obtained results. The point is, whose results can we trust more? If, in general, they are different, as was noticed in the question title... Thank you a lot for your opinion, Djamal Eddine Ghersi.
I absolutely agree with your sentence "Consequently, there is not a single final optimal solution, like in the single-objective problems".
But I believe, and am convinced in my assumption, that there could be one or several techniques which could definitely help clarify the above-mentioned "problem", judging by the high feedback activity here...
Yuriy
Your sentence:
"The point is, whose results can we trust more? If, in general, they are different, as was noticed in the question title..."
Well, this is the point. If we have diverse rankings on the same problem using different MCDM methods, which one should we select? It is an old question.
We are assuming that all methods are rational and mathematically sound, therefore the results should be sound.
In this optimistic case, suppose that we have three different rankings, as follows:
A-B-C
B-A-C
C-B-A
I have a suggestion for this problem: We can use Sensitivity Analysis (SA).
In so doing, the DM can examine each ranking and determine how strong the first, or best, project selected is.
Suppose that the DM starts with ranking A-B-C.
In examining project A, which is the best, he may find that it is very sensitive to small variations of some criteria, or that the corresponding criteria are not allowed to take any variation, which makes A very vulnerable; or he may find that it is indeed very strong, because those criteria can vary within a relatively wide range.
If it is very sensitive to criteria variations, it is obvious that the corresponding ranking is not acceptable, but it also depends on the relative importance of the criteria, since not all criteria determine the first alternative. Or it could also be that the second-best alternative (B) is not sensitive, and then B, or even C, can replace A at the first level.
Let's clarify this concept: The DM finds that A depends on criteria C2, C9 and C13. Regarding C2 and C13 there is no problem, since they can vary amply without altering the ranking, but not C9, which happens to be a far more important criterion than the other two. Then C2 and C13 are irrelevant, and what is really significant is C9.
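One way to operationalize this, sketched here with a plain weighted-sum score as a stand-in (not any specific MCDM method; all values hypothetical), is to scan each criterion's weight for the smallest change that dethrones the current winner:

```python
import numpy as np

def critical_variation(X, w, j, score_fn, step=0.01, max_rel=0.9):
    """Smallest relative change of weight j that flips the best alternative.
    Returns the signed relative change, or None if the winner always survives."""
    base = int(np.argmax(score_fn(X, w)))
    for rel in np.arange(step, max_rel, step):
        for sign in (1.0, -1.0):
            wp = w.copy()
            wp[j] *= 1.0 + sign * rel
            wp /= wp.sum()
            if int(np.argmax(score_fn(X, wp))) != base:
                return sign * rel
    return None

wsm = lambda X, w: (X * w).sum(axis=1)
X = np.array([[0.9, 0.4, 0.7],
              [0.8, 0.9, 0.3],
              [0.5, 0.6, 0.9]])
w = np.array([0.5, 0.3, 0.2])
for j in range(len(w)):
    print(f"criterion {j}: winner flips at relative change {critical_variation(X, w, j, wsm)}")
# criteria with small critical changes play the role of C9;
# criteria with wide safe ranges play the role of C2 and C13
```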
Then the DM examines the other two rankings and follows the same procedure.
In my understanding, he should select the method that gives the strongest first alternative.
I believe that using SA is a reasonable procedure.
The trouble is that with the procedure we have at present to perform SA, it may be useless, because criteria are selected according to their weight, usually determined artificially, by intuition, coming from AHP, and considered independently of other criteria, which violates Systems Theory and has been questioned by several researchers.
This is one of the reasons why I say that the rankings have to be obtained using real values: no weights, no assumptions, ONLY REALITY, leaving the DM to apply his expertise and know-how to reliable results, not to data altered using weights, unless they are objective weights.
The advantage of the proposed system is that it is based on mathematical data and complemented by the DM's judgement.
My question is now, how many MCDM methods work with reliable data and how many of those may identify, out of the whole set of criteria, those that are responsible for the ranking?
If you want, let's simplify the situation with an analogy. 3 points are given to the winning team in football. The loser is assigned 0 points. Whether the winning team wins 10-0 or 1-0 does not affect its score. If the rules were different, for example, the team with the most goals could be the champion. Essentially, both methods are mathematical approaches. But which one is right? It is a fact that the team that earns a lot of points and the team that scores a lot of goals may be different. If the aim of MCDM is to choose the best alternative, these alternatives may differ according to the method.
Moreover, if both kinds of criteria (global-sum methods or outranking methods) are important to you, the MCDM selection becomes much more difficult.
Dear Yuriy;
Please have a look at my publications when you have some free time.
I also try to get the same vectors with different methods. In my experience, it is mathematically possible, but not so appropriate when human perception is taken into consideration.
Actually, at my first R&D stage, I tried to get the same ranks. After a while, a reviewer of my paper recommended focusing also on the vectors, not only the ranks or orders.
That made the decision problem more difficult mathematically, though not psychologically (mathematical psychology, perception).
In any case, it is mathematically possible.
Hence, during my Ph.D. study and afterward, I added, and still use, a final decision step:
"Consensus"
That "Consensus" step may solve the problem of different ranks and vectors of different methods.
It tries to measure the satisfaction of the decision at the end.
Also, maybe more important, or at least as important as the "Consensus" step, is developing your model.
Before developing a model, the first thing is to find, decide on and select your features or factors.
You should focus on them.
Then their relations. Then your methods and models.
My publications that can give some inspiration and idea to you are as follows:
My PhD Thesis: A New Generic Method for Large Investment Analysis in Industry and an Application in Shipyard - Port Investment
https://polen.itu.edu.tr/handle/11527/8261
Please see the flow diagram in my thesis.
Then my publications
Saracoglu B.O., "Analytic Network Process vs. Benjamin Franklin’s Rule To Select Private Small Hydropower Plants Investments", MedCrave Group LLC, July 26, 2018, (https://medcraveebooks.com/view/Analytic-network-process-vs-Benjamin-Franklin's-rule-to-select-private-small-hydropower-plants-investments.pdf)
Tercan E., Saracoglu B.O., Bilgilioglu S.S., Eymen A., Tapkin S., "Geographic information system based investment system for photovoltaic power plants location analysis in Turkey", Environmental Monitoring and Assessment, 2020, Vol. 192:297, No. 192:297, DOI: 10.1007/s10661-020-08267-5 (https://link.springer.com/article/10.1007/s10661-020-08267-5)
Saracoglu B.O., "Location selection factors of concentrated solar power plant investments", Sustainable Energy, Grids and Networks, 2020, Vol.22, No.100319, DOI: 10.1016/j.segan.2020.100319 (https://www.sciencedirect.com/science/article/abs/pii/S2352467719305600?via%3Dihub)
Saracoglu B.O., Ohunakin O.S., Adelekan D.S., Gill, J., Atiba O.E., Okokpujie I.P., Atayero A.A. "A Framework for Selecting the Location of Very Large Photovoltaic Solar Power Plants on a Global/Super Grid", Energy Reports, 2018, Vol.4, 586-602, DOI: 10.1016/j.egyr.2018.09.002 (https://www.sciencedirect.com/science/article/pii/S2352484717303232)
Ohunakin O.S., Saracoglu B.O., "A Comparative Study of Selected Multi-Criteria Decision Making Methodologies for Location Selection of Very Large Concentrated Solar Power Plants in Nigeria", African Journal of Science, Technology, Innovation and Development, 2018, Vol.10, No.5, 551-567, DOI: 10.1080/20421338.2018.1495305 (https://www.tandfonline.com/doi/full/10.1080/20421338.2018.1495305)
Saracoglu B.O., "Location Selection Factors of Small Hydropower Plant Investments Powered By SAW, Grey WPM & Fuzzy DEMATEL Based On Human Natural Language Perception", International Journal of Renewable Energy Technology, 2017, Vol.8, No.1, 1-23, DOI: http://dx.doi.org/10.1504/IJRET.2017.080867 (http://www.inderscience.com/info/inarticle.php?artid=80867)
Saracoglu B.O., "A PROMETHEE I, II and GAIA based approach by Saaty’s subjective criteria weighting for small hydropower plant investments in Turkey", International Journal of Renewable Energy Technology, 2016, Vol.7, No.2, 163-183, DOI: http://dx.doi.org/10.1504/IJRET.2016.076094 (http://www.inderscience.com/info/inarticle.php?artid=76094)
Saracoglu B.O., "A Qualitative Multi-Attribute Model for the Selection of the Private Hydropower Plant Investments in Turkey: By Foundation of the Search Results Clustering Engine (Carrot2), Hydropower Plant Clustering, DEXi and DEXiTree", Journal of Industrial Engineering and Management, 2016, Vol.9, No.1, 152-178, DOI: http://dx.doi.org/10.3926/jiem.1142 (http://www.jiem.org/index.php/jiem/article/view/1142)
Saracoglu B.O., "A Comparative Study Of AHP, ELECTRE III & ELECTRE IV By Equal Objective & Shannon’s Entropy Objective & Saaty’s Subjective Criteria Weighting On The Private Small Hydropower Plants Investment Selection Problem In Turkey", International Journal of the Analytic Hierarchy Process, 2015, Vol.7, No.3, 470-512, DOI: 10.13033/ijahp.v7i3.343 (http://ijahp.org/index.php/IJAHP/article/view/343)
Saracoglu B.O., "An Experimental Research of Small Hydropower Plant Investments Selection in Turkey by Carrot2, DEXi, DEXiTree", Journal of Investment and Management, 2015, Vol.4, No.1, 47-60, DOI: 10.11648/j.jim.20150401.17 (http://www.sciencepublishinggroup.com/journal/paperinfo?journalid=179&doi=10.11648/j.jim.20150401.17)
Saracoglu B.O., "An AHP Application In The Investment Selection Problem Of Small Hydropower Plants In Turkey", International Journal of the Analytic Hierarchy Process, 2015, Vol.7, No.2, 211-239, DOI: 10.13033/ijahp.v7i2.198 (http://www.ijahp.org/index.php/IJAHP/article/view/198)
Saracoglu B.O., "An Experimental Research Study On The Solution Of A Private Small Hydropower Plant Investments Selection Problem By ELECTRE III/IV, Shannon’s Entropy & Saaty’s Subjective Criteria Weighting", Advances in Decision Sciences, 2015, Article ID 548460, DOI:10.1155/2015/548460. (https://www.hindawi.com/journals/ads/2015/548460/)
Saracoglu B.O., “Selecting Industrial Investment Locations In Master Plans of Countries”, European Journal of Industrial Engineering, Vol. 7, No.4, 416-441, DOI: 10.1504/EJIE.2013.055016. (Indexed in Science Citation Index Expanded, Thomson Reuters) (http://www.inderscience.com/offer.php?id=55016)
These publications have many electronic supplementary materials.
They can help you very much.
Please also have a look at my other publications. They also explain these kinds of things and present my recommendations.
I try to use all these methods and apply them to my robots and platforms.
Of course, it takes time.
My robots and platforms are in artificial intelligence and machine learning topics and are integrated with MCDM.
Whenever you want to discuss anything please feel free to contact me.
Good luck.
One more thing: the term "correctly" has a powerful meaning. It is very difficult to say that something is done correctly. I think in your publications you should consider finding another word in English.
Wish you good luck. Great research topic.
You can find my thesis on "Thesis Center" too.
CoHE Thesis Center | Home (yok.gov.tr)
https://tez.yok.gov.tr/UlusalTezMerkezi/giris.jsp
I hope that the consensus step will help you and give you some inspiration and ideas to solve your problems.
I would like to give some examples about this question.
Please see my publications.
These figures are from my publications.
Figure 7 Comparative model in this study
Saracoglu B.O., "Multiobjective Evolutionary Algorithms Knowledge Acquisition System for Renewable Energy Power Plants", MedCrave Group LLC, May 17, 2019 (https://medcraveebooks.com/view/Multiobjective-evolutionary-algorithms-knowledge-acquisition-system-for-renewable-energy-power-plants.pdf)
Figure 11 The final ranks of the experimental research case study
Saracoglu B.O., "An Experimental Research Study On The Solution Of A Private Small Hydropower Plant Investments Selection Problem By ELECTRE III/IV, Shannon’s Entropy & Saaty’s Subjective Criteria Weighting", Advances in Decision Sciences, 2015, Article ID 548460, DOI:10.1155/2015/548460. (https://www.hindawi.com/journals/ads/2015/548460/)
Figure 18. Comparative model in this study
Saracoglu B.O., "A Comparative Study Of AHP, ELECTRE III & ELECTRE IV By Equal Objective & Shannon’s Entropy Objective & Saaty’s Subjective Criteria Weighting On The Private Small Hydropower Plants Investment Selection Problem In Turkey", International Journal of the Analytic Hierarchy Process, 2015, Vol.7, No.3, 470-512, DOI: 10.13033/ijahp.v7i3.343 (http://ijahp.org/index.php/IJAHP/article/view/343)
I hope those publications explain the question.
For the features (factors, measures etc.)
I would like to give some examples about the problem.
Please see my publications.
These figures are from my publications.
Fig. 6. Grey ISM MICMAC on PEST and its extensions clusters in preliminary screening stage.
PO: Political, EC: Economic, TE: Technological, EN: Environmental; C1: Global Horizontal Irradiation (GHI), C2: Governments supergrid integration policy, C3: Supergrid business climate and conditions, C4: HVDC & HVAC electrification grid infrastructure, C5: Land use, allocation and availability, C6: Geological conditions, C7: Political, war, terror & security conditions, C8: Topographical conditions, C9: Climatic conditions, C10: Water availability conditions, C11: Natural disaster/hazard conditions.
Saracoglu B.O., Ohunakin O.S., Adelekan D.S., Gill, J., Atiba O.E., Okokpujie I.P., Atayero A.A. "A Framework for Selecting the Location of Very Large Photovoltaic Solar Power Plants on a Global/Super Grid", Energy Reports, 2018, Vol.4, 586-602, DOI: 10.1016/j.egyr.2018.09.002 (https://www.sciencedirect.com/science/article/pii/S2352484717303232)
Figure 9 Clustering on digraph with relations
Figure 10 Clustering visualisation
Saracoglu B.O., "Location Selection Factors of Small Hydropower Plant Investments Powered By SAW, Grey WPM & Fuzzy DEMATEL Based On Human Natural Language Perception", International Journal of Renewable Energy Technology, 2017, Vol.8, No.1, 1-23, DOI: http://dx.doi.org/10.1504/IJRET.2017.080867 (http://www.inderscience.com/info/inarticle.php?artid=80867)
Fig 4 Simple clustering visualization
Saracoglu B.O., "Location selection factors of concentrated solar power plant investments", Sustainable Energy, Grids and Networks, 2020, Vol.22, No.100319, DOI: 10.1016/j.segan.2020.100319 (https://www.sciencedirect.com/science/article/abs/pii/S2352467719305600?via%3Dihub)
I also presented the tools, the software and the layouts that I used when I prepared those figures.
All readers can click on the links and access the tools.
For example
Fruchterman Reingold layout in Gephi
http://gephi.github.io/
Yifan Hu layout in Gephi
http://gephi.github.io/
Dear Burak Omer Saracoglu!
First of all, thank you for your willingness to help with the issue. As I read from the previously shared thoughts of colleagues from the community, my problem has no obvious solution. Moreover, every researcher proposes his own way of finding the appropriate direction to move in. I'll read your papers in my free time, for sure. It seems to me that you can help with focusing on the analysis of results from different MCDA methods. From your materials, I see that you are quite familiar with the different MCDA techniques.
I have to admit that I appreciate it very much. As for me personally, thanks to this opportunity, the right way to move forward might actually be found...
I know that there is a huge number of MCDA techniques today, and each one fits a specific problem or data set.
I believe that your contribution to finding the way to my issue is really big.
As you noted, "One more thing: the term 'correctly' has a powerful meaning. It is very difficult to say something correctly. I think in your publications you should consider finding another word in English." I'll try to formulate this term in other words in my further publications.
Thank you for the opportunity to cope with this great research topic in my field of investigation!
Dear Yuriy;
Please feel free to contact me whenever you want.
Please also share your new publications with me.
I can learn new things from your publications.
I focus on developing my robots and platforms.
Your publications may give me some new ideas.
Good luck with your research
Have a nice day
Dear Yuriy
There is NO WAY to evaluate results from any MCDM methods.
To evaluate something you need to have a yardstick or something to compare to.
This should be the 'true' solution, but since it is always unknown, there is no way to make this comparison
As an alternative, I would synthesize this problem as follows. If you have a problem solved by three different methods, and you get different rankings, do this:
1. Verify which of the three methods best models the scenario, and from the beginning discard the method with the poorest modelling. For instance, almost for sure there are resources involved; if one of the three methods doesn't contemplate them, then it is useless, because all resources are limited.
2. Choose the method that uses subjective weights the least. The more weights of this type, the greater the chance of not representing reality.
As you can see, there is no mathematics involved here, only common sense.
Dear Yuriy,
One way is through sensitivity analysis. You change the weights of the criteria and observe the impact on the ranking. A qualitative evaluation is also possible; experts could give their opinions about the final results.
I would say the best method is the one that allows the most complete modeling of the problem.
Since most methods are rational and supported by mathematics, the method that best models a scenario is the best.