In recent years, different MCDM methods have been developed based on different theories (utility theory, prospect theory, regret theory, ...). What will be the hot areas of research in MCDM in the near future?
Dear Mehdi
Of course, I don't know what the future of MCDM will be, although it probably lies in improving data reliability.
However, in my opinion, MCDM has been going on for decades without acknowledging that models have to adjust to reality, and performing sensitivity analyses that are useless.
We have today the same principles as 40 years ago, and I have not heard a single voice proposing to change them.
Therefore, as long as MCDM methods continue with the same incomplete and unrepresentative modelling of real scenarios that we know today, I guess the future is not very bright. In that case, even identifying hot areas for research will be a futile exercise, since we will not be able to address them properly, or, as is done today, we will only imagine that we did.
Dear Yang
There are already many combined methods; however, since they keep the original formats and procedures, their contribution is probably nil, because they don't address the main problem, which is representing reality.
Dear Yang
Providing hybrid methods can be useful for solving some problems. But it seems that in developing these methods, creativity and innovation do not play a significant role. It's as if many of these methods are presented to increase the authors' citations.
Dear Nolberto
I totally agree with you. Like other scientific fields, the dominant paradigms and real world needs should be taken into account in this regard. As you mentioned, our thinking in the field of decision-making requires fundamental changes. I think we have to find tools that represent reality and at the same time are user-friendly and have low computational complexity.
Dear Mehdi
I couldn't have expressed it better!
The good news is that we already have the tool: Linear Programming. Because it is based on inequalities, and is therefore able to make comparisons, it can do that job, probably not representing a scenario in full, but by far better than current methods. I have many examples solved by LP that represent a complex scenario more faithfully.
I apply MCDM to engineering tasks such as process operation, parametric design, and technology generation (also called process planning). In these areas, the important thing, in my opinion, is to generate solutions close to the desirable ones according to the criteria that are calculable in the model used. Since every engineering task is a component of another, larger one, we are dealing with multiple-level, multiple-criteria tasks, for which desired values should be adopted at multiple levels for the multiple objectives pursued at each level. As the output of the system, solution populations are required at each level, for two different reasons: to reconcile solutions of tasks at the same level, and to rectify the decision-makers' evaluation (and reorder solutions) according to the real objectives, helped by tools such as simulation procedures, graphical representation, and the knowledge of the deciders that could not be included in the models. Very frequently engineers have prior knowledge that allows them to assume desirable values for the criteria. Otherwise, they can obtain a first approximation by finding the ideal decision for each criterion.
To generate solution populations close to the desirable ones, heuristics are very useful.
I think this is the way to search for solutions to engineering tasks. At least, I am devoted to solving these problems, working on both basic and applied research.
Dear Jose
JAR
I apply MCDM to engineering tasks such as process operation, parametric design, and technology generation (also called process planning).
In these areas, the important thing, in my opinion, is to generate solutions close to the desirable ones according to the criteria that are calculable in the model used. Since every engineering task is a component of another, larger one, we are dealing with multiple-level, multiple-criteria tasks, for which desired values should be adopted at multiple levels for the multiple objectives pursued at each level.
NM.
Instead of desired values for each level, would it be possible to establish desired values for the final result? Because, as I understand it, you can't have an idea of how well a result matches your wishes until you have the result.
I believe, and have said it many times, that the DM must use his knowledge by inputting perhaps new criteria importance values or performance values, after he gets a result.
In this case, perhaps the partial values that the DM is getting for one level do not satisfy him, and then this is the opportunity to change them.
However, as I understand it, you can fix a priori, for instance, a given amount of money or man-hours or resources as the values you want at each level by clustering alternatives, and in this way the final result, which the DM may or may not like, will be a consequence of matching those different levels.
I am not very familiar with parametric design, so probably I am talking nonsense, but this is the way I see it.
I know, for instance, that this is the way aeronautical engineers work: they design a mock-up and test it in the wind tunnel. They get different results for different parts of the simulated aircraft, and then they can vary some parameters to adjust the results to their wishes.
JAR.
As the output of the system, solution populations are required at each level,
NM.
I don’t understand what you mean by ‘populations’
JAR.
for two different reasons: to reconcile solutions of tasks at the same level, and to rectify the decision-makers' evaluation (and reorder solutions) according to the real objectives, helped by tools such as simulation procedures, graphical representation, and the knowledge of the deciders that could not be included in the models.
Very frequently engineers have prior knowledge that allows them to assume desirable values for the criteria. Otherwise, they can obtain a first approximation by finding the ideal decision for each criterion.
To generate solution populations close to the desirable ones, heuristics are very useful.
NM.
I agree
JAR.
I think this is the way to search for solutions to engineering tasks.
At least, I am devoted to solving these problems, working on both basic and applied research.
NM
Very interesting field indeed. Thank you for your contribution; we can always learn from others.
Dear Claire
Where is the answer?
You only give a reference to Dr. Aykan.
Dear Nolberto
Yes, Linear Programming is an appropriate tool for solving problems. Also, some decision-making techniques have been developed based on LP, such as the SIMUS method and the Best-Worst Method. As we know, there are four basic assumptions on which linear programming works; these are proportionality, additivity, continuity and certainty. To the best of my knowledge, some of these assumptions are violated in real-world problems. So, do you think LP has enough ability to model reality?
Mehdi
Yes, Linear Programming is an appropriate tool for solving problems. Also, some decision-making techniques have been developed based on LP, such as the SIMUS method and the Best-Worst Method.
As we know, there are four basic assumptions on which linear programming works; these are proportionality, additivity, continuity and certainty. To the best of my knowledge, some of these assumptions are violated in real-world problems. So, do you think LP has enough ability to model reality?
NM.- Dear Mehdi
First of all, I am very glad to receive your response. In the four or five years that I have been on RG, you are the only colleague who has somehow challenged what I said, even though I have asked for it many times.
Thank you for your input, I wish many people could follow your example. This is the only way we can learn from each other.
Even though I believe that LP is the best method, it does not mean that it can be used in MCDM as it stands. The reason is that LP has two serious drawbacks. One of them is being mono-objective, and therefore it can't represent problems with multiple criteria, which are probably the rule rather than the exception. When there is only one objective, it can yield the optimal solution, if it exists, as has been done since its creation during WWII until these very days, when thousands of companies use it daily.
The second drawback is that LP works only with quantitative criteria, where there is always an independent term, and therefore it generally can't accept qualitative criteria such as 'improve people's welfare', because there is no independent value, goal or limit.
This is the reason why SIMUS, based on LP, was developed: it can work with as many objectives as desired and, as a matter of fact, it is recommended to use as many objectives as criteria, even in the hundreds. The other advantage of SIMUS is that it can work with any mix of quantitative and qualitative criteria, even without independent terms.
Of course, the price to pay for these advantages is that it does not provide optimal solutions but satisfactory ones, as all MCDM methods do. The other big advantage is that it uses no weights of any kind, although the relative importance of criteria is computed by an algebraic ratio. In addition, LP does not produce rank reversal.
Now, coming to your answer
You are absolutely right; however, there is one more assumption, and it is 'finite choices'.
There is no doubt that many of these assumptions are violated in practice.
And answering your specific question about whether LP has enough ability to model reality: yes, I believe it does, if there is only one objective, and this ability is due to its structure. It works with inequalities, and because of that it is able to reproduce many situations, with a clear advantage over methods that don't use this approach.
However, as I pointed out above, the fact that it can't work with multiple criteria imposes a severe limitation. Consequently, I believe that methods based on LP that have the ability to work with multiple criteria definitely have the capability to handle real-world situations. SIMUS is one of them, and perhaps there are others, possibly even better ones, which I don't know.
Regarding the assumptions that you mentioned, I am including the definition of each one, although they are not mine but taken from the Web.
Let’s examine each one:
1. Proportionality: The basic assumption underlying linear programming is that any change in the constraint inequalities will produce a proportional change.
Normally, if you have, say, a cost related to a product, it is assumed to be constant, independently of the quantity purchased. Obviously, this is incorrect, because the relationship between cost and quantity is not linear, according to the law of supply and demand.
The solution, as I see it, is to linearize the criterion with the cost performance factors, for instance by dividing the non-linear curve into three or four segments; within each interval we then know that the price is constant, and we can build the decision matrix with all of these linear data and apply the Simplex. Therefore, for a problem with 3 different products, dividing each curve into three segments, we will have 9 alternatives.
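To illustrate the linearization idea, here is a minimal Python sketch that splits a hypothetical non-linear unit-cost curve into segments with a constant (average) unit cost each, so that every segment can enter the LP decision matrix as a separate column; the cost function and the number of segments are assumptions for illustration only, not part of any particular method.

import numpy as np

def segment_unit_costs(quantities, unit_cost, n_segments=3):
    # Split a non-linear unit-cost curve into intervals and assign each
    # interval a constant (average) unit cost, so each interval can be
    # treated as a separate LP alternative with linear data.
    edges = np.linspace(quantities.min(), quantities.max(), n_segments + 1)
    segments = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (quantities >= lo) & (quantities <= hi)
        segments.append(((lo, hi), unit_cost(quantities[mask]).mean()))
    return segments

# Hypothetical quantity-discount curve: unit cost falls with volume
q = np.linspace(1, 300, 300)
cost = lambda x: 10.0 / np.sqrt(x) + 2.0
for (lo, hi), c in segment_unit_costs(q, cost, 3):
    print(f"segment [{lo:.0f}, {hi:.0f}]: constant unit cost of about {c:.2f}")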
2. Additivity: The assumption of additivity asserts that the total profit of the objective function is determined by the sum of the profits contributed by each product separately. Similarly, the total amount of resources used is determined by the sum of resources used by each product separately. This implies that there is no interaction between the decision variables.
NM - The first part is correct, since the objective function is the sum of the products of the amount selected by the method and the corresponding coefficient in the objective function, and that is what LP does.
Regarding resources, I disagree: there is no relationship between alternatives only if you don't specify one, but you can always express the relationships existing between several alternatives by simply adding each one as a criterion with the corresponding operator.
For instance, say that alternative A must precede alternative B for whatever reason; you model it by creating a criterion where A > B. Or, if alternatives C and D are inclusive and both need to be selected, you put a 1 in the column of alternative C and a 1 in the column of alternative D, then use the '=' operator and put a 2 as the independent term. In this way you are indicating to the software that BOTH alternatives MUST be selected.
If only one of them must be chosen, you use the '<' operator and put 1 as the independent term, thereby instructing the software to consider only one of them.
If you need to indicate that alternatives G and F are exclusive, that is, it is one or the other, you put '=' and the number 1.
In my opinion, this is one of the biggest advantages of LP's use of inequalities, and one of the reasons why it can recreate any scenario more accurately.
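As a minimal sketch of how such relationship constraints can be written for a generic LP solver, here is a Python example using scipy.optimize.linprog; the five alternatives, their benefit and cost coefficients, and the budget limit are hypothetical, and this only illustrates the constraint rows described above, not SIMUS itself.

import numpy as np
from scipy.optimize import linprog

# Five hypothetical alternatives A..E, each selected at a level between 0 and 1.
# Objective: maximize total benefit (linprog minimizes, so the sign is flipped).
benefit = np.array([4.0, 3.0, 5.0, 2.0, 3.5])   # illustrative values
c = -benefit

# Equality row: alternatives C and D are inclusive -> x_C + x_D = 2
A_eq = np.array([[0, 0, 1, 1, 0]])
b_eq = np.array([2.0])

# Inequality rows:
#   shared budget:                 4A + 2B + 5C + 3D + 1E <= 12
#   at most one of A and B chosen: x_A + x_B <= 1 (the '<' with 1 case above)
A_ub = np.array([[4, 2, 5, 3, 1],
                 [1, 1, 0, 0, 0]])
b_ub = np.array([12.0, 1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 5, method="highs")
print("selection levels:", np.round(res.x, 3))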
3. Continuity: Another assumption of linear programming is that the decision variables are continuous. This means a combination of outputs can be used with fractional values along with integer values.
For example, suppose 5 2/3 units of product A and 10 1/3 units of product B are to be produced in a week. In this case, the fractional amount of production is taken as work-in-progress and the remaining part is produced the following week. Therefore, a production of 17 units of product A and 31 units of product B over a three-week period implies 5 2/3 units of product A and 10 1/3 units of product B per week.
NM - Not necessarily; you can model the problem asking for integer values. This is incorporated in Solver, which is an add-in of Excel. As a matter of fact, you can choose one of three different formats for the results: fractional, integer or binary.
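Outside Excel the same point holds. As an illustration under stated assumptions, scipy.optimize.linprog (SciPy 1.9 or later) accepts an integrality argument that asks the HiGHS solver for whole-number decision variables; the production data below are hypothetical.

import numpy as np
from scipy.optimize import linprog

# Hypothetical plan: maximize profit from products A and B
profit = np.array([3.0, 5.0])
c = -profit                                   # linprog minimizes

A_ub = np.array([[2.0, 4.0],                  # machine hours
                 [3.0, 2.0]])                 # labour hours
b_ub = np.array([14.0, 12.0])

# integrality=1 for every variable requests integer values from HiGHS
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              integrality=np.ones(2), method="highs")
print("integer production plan:", res.x)      # whole units of A and B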
4. Certainty: Another underlying assumption of linear programming is certainty, i.e. the objective function coefficients and the coefficients of the constraint inequalities are known with certainty. Values such as profit per unit of product, availability of material and labour, and requirements of material and labour per unit are known and given in the linear programming problem.
NM - Yes, this is true for quantitative values, and it relates to what I said above concerning LP's drawbacks.
5. Finite choices: This assumption implies that the decision-maker has a finite set of choices, and that the decision variables assume non-negative values. The non-negativity assumption is true in the sense that the output in a production problem cannot be negative. Thus, this assumption is considered feasible.
Thus, when solving a linear programming problem, these assumptions should be kept in mind so that the best alternative is chosen.
NM - I agree 100 %. No doubt about it.
Dear Nolberto
Many thanks for your comprehensive and informative answer.
Many problems can be addressed with MCDM; it is usually applied to projects when we are confused and cannot reach a conclusion. In such complex scenarios, MCDM is used to summarize all the pros and cons of any specific task or project. We have done such a case study; maybe it is interesting for you to read. Article Transportation Planning through GIS and Multicriteria Analys...
What will be the hot areas of research in MCDM in the near future? In my opinion, with the accelerated development of technology there will be several 'hot' areas to be developed in MCDM, but I believe that decision-making in Industry 4.0 will be something important to take into account.
Dear Armando
No doubt about what you say, but in my opinion, until researchers and practitioners realize that they are assuming scenarios which do not represent reality, there is no gain in further analytic development or application.
They should first try to model reality as closely as possible, not what they think it is; otherwise, what is the gain, as at present, in working with scenarios that are a poor representation of the real ones?
Remember the old saying; "garbage in, garbage out"
You can read these articles:
Article An introduction to Prospective Multiple Attribute Decision M...
Article Multiple attribute decision making (MADM) based scenarios
https://www.mdpi.com/2071-1050/10/12/4451
Dear Sarfaraz
I believe that it is difficult to answer your question because we don't know which new tools and techniques will be developed in the future for MCDM.
In my opinion, at present, all MCDM methods fail from the very beginning since the initial decision matrix does not usually represent reality, and therefore, no matter the method used and all refinements, they are applied to fictitious scenarios.
Therefore, what is the point of applying sophisticated techniques to improve data if these data do not represent the problem we want to solve?
Dear Dr. Munier,
What is your comment about the decision matrix?
Do you mean future-based data?
Dear Hamidreza
No, I don't mean data, but methodologies and tools.
It is impossible for us to know the future, so we don't know which new tools will be discovered. In decision-making, the only tools available until the early 1950s were cost-benefit analysis, the Internal Rate of Return and the Net Present Value, all of them unknown in the 1800s.
At that time, only economic aspects were considered, and aspects such as society and the environment were not even considered, let alone sustainability.
Most probably they were known, but until the early 1950s there was no way to consider them.
It all started with Linear Programming in 1948 with the Simplex algorithm by Dantzig, and the ELECTRE algorithm years later by Roy, followed by PROMETHEE, AHP and TOPSIS, and with the great contribution of fuzzy analysis by Zadeh.
Thus, around the mid-1950s it was impossible for us to know the new tools. In my opinion we are at present in a similar position, because we still can't model reality, and we don't know whether in the future there will be new tools for that.
There is going to be a shift from the techno-economic approach to the STEEP (Social, Technical, Economic, Environmental and Policy) approach. It will make MCDM results more acceptable among decision-makers. Also, the use of crisp and linguistic values will dominate decision-making processes, especially in the science domain.
See the attached paper that offers a comprehensive method of aggregating rankings from different sources.
As per my observation, some of the key issues which need to be addressed properly and will become future trends in decision-making are:
1. Handling uncertainty in decision making.
2. Accurate prediction of risk, safety level, etc.
3. Common consensus in group decision-making.
Dear Manoj
I agree, especially with uncertainty.
However, I believe that accurate prediction of risk is problematic, since risk has a probabilistic nature, and we can't speak of accuracy in probabilities.
With respect to your third point, that is easier, provided we can determine how different decisions improve upon former decisions. This can be done, and is being done.
For me, the most important aspect will be to improve MCDM's capacity to interpret reality. In my opinion, if we can't do that, as at present, we are solving fictitious scenarios, because they are not related to the real scenarios we want to solve.
A method that has been proven mathematically to be true based on a set of rational assumptions is valid. All others are suspect. Frequently methods are chosen on an ad hoc basis out of ignorance or because the outcome fits a preconceived notion of what should be correct. See attached for a more detailed discussion.
Theories should be developed to handle decision problems that involve IA issues and big data.
Dear Melfi
The acronym IA has many meanings. Could you please tell us which of them you are referring to?
Dear Nolberto
My apologies to all of you.
I meant AI = Artificial intelligence
Dear Melfi
I agree with you
In fact, Linear Programming is used in Artificial Intelligence
Refer to the following:
https://www.emerald.com/insight/content/doi/10.1108/BIJ-10-2017-0281/full/html
Dear Raja
I couldn't access this file.
Maybe you could email it to me: [email protected]
Thank you
I am with Dr. Melfi. Multicriteria Decision must have qualitative and quantitative methods based on AI technologies.
Dear Rajeev
Well, it may be as you say, but remember that to apply AI you need abundant data.
Yes. For AI/ML, data volume improves the accuracy of prediction. Deep Learning + MCDM combination may be the next step..
Regarding the future of MCDM, in my opinion, there is no future if we persist in using or refining methods based on false principles and assumptions. The proof of this assertion is that at present there are close to 100 different MCDM methods, and the number continues to grow, and of course, as has been mentioned many times, they rarely coincide in the result for the same problem.
Why? Because most methods use pair-wise comparisons and arbitrary assumptions.
I don't think that a method based on pair-wise comparison and subjective 'weights' for criteria is right.
It is only a picture of how the DM sees the problem, which exists only in HIS/HER mind and is not related to reality.
Of course, I am especially referring to AHP/ANP, since other methods such as PROMETHEE, ELECTRE and TOPSIS use the DM's reasoning (absent in AHP/ANP), based on real data, to define thresholds, probability distributions, vetoes and distances.
Of course, he/she can be mistaken, but at least the DM decides something based not on intuition or mood, but on knowledge and know-how.
Pair-wise comparison may be useful for trivial or personal problems where in reality a comparison between criteria makes sense as in the very well-known case of selecting a car, a restaurant or a movie, and where the results affect the same person taking the decision.
However, selecting alternatives in complex scenarios is a completely different issue, because a DM can't assume how hundreds of thousands or even millions of people will benefit from or be hurt by a project. Of course, a survey among the people potentially affected by each project is the best procedure, and that information, which is real, must be input into the model without modification.
Forget deciding which criterion is better than another, let alone putting a value on that preference.
Input the information as it is received from the survey; in this way the mathematical model will work with valuable subjective data and with reliable objective data, and will give a result according to it.
With this result, the DM will be able to analyse by using his/her experience, know-how, and common sense, if the solution reached appears to be the best. In so doing, the DM can modify the ranking, add or delete criteria and alternatives and even reject the whole result, but he/she is doing this based on a mathematically correct result, on a solid base, not on his/her preferences.
The same applies to sensitivity analysis. The DM has to perform it based on solid facts and values, not grounded on the assumed variation of an arbitrarily selected criterion while keeping the others constant.
I suggest practitioners think about this, and as I have said many times: if somebody does not agree with what I said,
just contradict me, with arguments of course, either publicly or privately. My email is: [email protected]
Dear Amin
Thank you for your message and for introducing me to OPA. I perused it and I reckon that it is interesting; since you suggest that it can be useful for me, I feel obliged to read it carefully.
I found some aspects with which I am not sure I agree, as follows:
1. If you work with a group, you say that you have to select its members, but you don't say the procedure you use for that. You realize, of course, that selecting experts is an MCDM problem in itself.
2. Then you allow the experts to rank criteria, based on what?
3. Then you allow the experts to rank alternatives, based on what?
I am particularly against decision-making based on subjectivity of any kind, but most especially on values that come from the intuition of the experts. I am however in agreement with you when the experts consider each criterion separately, but that selection must be based on something, for instance, each expert must analyze each alternative regarding compliance with a set of pre-defined criteria.
Subjectivity is indeed very important; it is mandatory and can't be absent in MCDM scenarios, but it must be applied at the right moment: not at the beginning of the process, rather at the very end, once a result based on quantitative data and reliable statistical subjective data is obtained.
Then, the experts can use their expertise and knowledge to modify, change, add, delete or simply reject the mathematical result.
I disagree with your treatment of sensitivity analysis, because you use arbitrarily determined weights that are not fit for this task. I reckon that most methods use this procedure, which is wrong, because the importance of criteria is properly measured only with objective weights, based not on preferences or opinions but on real data, such as entropy weights.
In Section 5 you say that AHP is a statistical method. Sorry it is not.
In Section 5.3 the paper says that the DM has determined the weights for VIKOR, TOPSIS and PROMETHEE. How?
As a bottom line, in my opinion, the OPA method uses direct appraisal of criteria, which is good, instead of pair-wise comparisons, but it also uses too much subjectivity, in reality much more than is used in other MCDM methods. Since you put a lot of emphasis on experts' opinions, what would happen if another group thought differently? Whom are you going to believe, the first group or the second one?
Obviously, in a location problem for instance, the result can't be a function of what a group of experts thinks or prefers; however, their role is clearly understandable in projects such as producing a movie, staging a musical play, or hiring new personnel for a company.
Real-world projects, at least most of them, are made of very quantitative issues such as prices, water availability, land use, government actions, manpower, taxes, availability of raw materials, equipment efficiency, personnel expertise, etc., and of course include some qualitative issues such as people's acceptance of the project, legal aspects, variable environmental regulations, depletion of the country's natural resources, etc. It is not a matter of opinions or preferences that can be left to experts; it belongs to the realm of engineering, economics, social issues, the environment, social capital, etc.
Of course, this is only my opinion, which may or may not be shared.
In most engineering problems there is a very high number of possible solutions, so the evaluation of alternatives can be limited to just a small number of them. On the other hand, in most cases the engineer wishes to approach some criterion values that are desirable for him, or for a group of DMs. For this reason, I like to minimize the normalized Tchebyshev distance between the output of the optimization decision and the desirable values, without weight coefficients.
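A minimal sketch of this selection rule, assuming a matrix of candidate solutions (rows) evaluated on several criteria (columns) and a vector of desirable target values; the normalization by each criterion's range is one common choice and may differ from Jose's exact formulation.

import numpy as np

def chebyshev_pick(matrix, targets):
    # Pick the candidate whose normalized Tchebyshev (maximum) distance to
    # the desirable values is smallest; no weight coefficients are used.
    matrix = np.asarray(matrix, dtype=float)
    targets = np.asarray(targets, dtype=float)
    ranges = matrix.max(axis=0) - matrix.min(axis=0)
    ranges[ranges == 0] = 1.0                      # avoid division by zero
    deviations = np.abs(matrix - targets) / ranges
    cheby = deviations.max(axis=1)                 # worst deviation per candidate
    return int(np.argmin(cheby)), cheby

# Hypothetical candidates evaluated on three criteria, plus desirable values
candidates = [[120, 0.82, 35],
              [100, 0.90, 40],
              [140, 0.75, 30]]
best, d = chebyshev_pick(candidates, targets=[110, 0.88, 32])
print("closest candidate:", best, "distances:", np.round(d, 3))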
Dear Jose
I agree, as long as you know what the output corresponding to the optimum is.
If you know it, you don't need any MCDM method.
In addition, in MCDM it is very seldom that there is an optimal decision, because of contradictory criteria.
This is my reasoning, unless I did not interpret your writing correctly.
Dear Nolberto:
I understand you. I never liked AHP, TOPSIS, VIKOR, ELECTRE, PROMETHEE and so on, for many reasons. For me, it is necessary to find populations of close-to-optimal solutions. Considering that the model is always an approximation of reality, that the utility function is also an approximation, that the solved problem is always part of a broader one (the optimum of the part in general does not coincide with the optimum of the whole), and other reasons, according to the Approximate-Combinatorial method (of a Russian scientist), solving real problems requires finding successive solution populations for increasingly complex tasks. I would like to read some of your articles. If possible, please send them to [email protected] or [email protected]. In my ResearchGate profile you can find many of my publications. If you need any other information, I am ready to share it.
I hope you will not have problems with COVID-19
1) It is necessary to review the epistemology of multicriteria theory and practice adjusting both the theory and the decision-making processes.
2) Study properties, advantages, and disadvantages of building scenarios with high uncertainty to apply multicriteria.
3) Explore the degree of satisfaction experienced by multi-criteria users based on the results obtained from their application.
Dear Melio
Thank you for your message
Regarding your three points:
1. I agree with it. The Dictionary defines it as ‘the theory of knowledge, especially with regard to its methods, validity, and scope. Epistemology is the investigation of what distinguishes justified belief from opinion’
That is, in our field of multiple criteria it is fundamental that a user knows about the validity and scope of a certain MCDM method. This is, in my opinion, the problem, since people use a method because it is easy, it is clear, they have the software, or they know of another problem that used the same method, and they consider that enough proof.
Unfortunately, that procedure is inadequate, because no method is fit to solve every kind of problem. I interpret 'validity' precisely as this capacity to address a problem, not as the fact that the solution reached is valid, as many practitioners assert.
However, the most important aspect is scope. I interpret it as the ability of a method to model all the characteristics of a scenario. In my opinion, THIS IS THE PROBLEM we have nowadays with our MCDM methods. NONE of them is capable of considering all the characteristics of a particular scenario, and out of more than 100 MCDM methods only one or two can handle most characteristics of a problem.
99% of methods are concerned with data reliability, which of course is very important, trying to find the best procedure to determine weights, and discussing mathematical issues, but none of them address the main problem, which is HOW TO MODEL A SCENARIO PROPERLY. They don't even determine whether a scenario is feasible…
If a method can't model most characteristics of a scenario, what is the purpose of improving data reliability? People are solving a scenario that is nonexistent, or that exists only in their minds, but that has no relation to the real one. This is my problem and my fight in RG and in my books.
I am trying to make people aware that current methods are unable to solve real-world scenarios and that we need to develop new ones. In that respect, we are using methods that are 40 or 50 years old, devised in the stone age of informatics. Seventy years ago we were more advanced, since the only objective was to determine profit and there were appropriate tools to manage those scenarios.
Today's methods are unable to solve even slightly complex problems, and I know that I am making a lot of enemies with my assertion. The difference is that I can prove what I say with reasoning and real-world examples, and have done so many times, but nobody has come along to prove the opposite.
2. In my opinion it is not a matter of building scenarios. Normally, we select a part of them, as in a location problem, but their characteristics are normally not of our doing. Uncertainties about government policies, exchange rates, weather, strikes, shortages of necessary inputs, etc., come with the scenario, even if we did not think about or presume them; we don't fabricate them.
We can't evaluate advantages and disadvantages independently, because those are used to evaluate alternatives. We can say that an advantage of location A is the quality of its manpower, but we have to link it with the natural disadvantages of the site, for example if its harbour can't operate in winter and then exports are very limited.
3. My friend, have you ever found a practitioner who is dissatisfied with what he/she did? Haven't you read practitioners loudly telling the world that they got a satisfactory result? Please, don't ask how he/she knows it, since there is no proof. Maybe the only merit was to input data into the model, press the start key, and voilà! The right result is there. What this person really did was feed an algorithm with data and then notice that the algorithm yielded a result, which is what it was developed for. Too bad that perhaps he got a nonexistent result because the scenario was infeasible. Who cares! The important thing is to get published.
No doubt, for sure he will be satisfied.
I think that multicriteria models should converge on a common point, in axiomatic terms. That is, despite their increasing use to solve practical problems, we are still at the beginning of the path until we reach a conceptual consensus in the literature. However, we have to take into account that they are tools that try to escape the classic and inflexible models used until now in the decision area. All the proposed models have their strengths and weaknesses, but they seem to be an interesting alternative for shaping the variables that are involved in a complex decision process.
Dear Fernando
What kind of axiomatic terms can we reach when all models rest on personal subjectivity, sometimes based on reasoning, as in PROMETHEE and TOPSIS, and other times based on personal preferences and intuitions, as in AHP and ANP?
Sorry, but I don't understand what this sentence of yours means: 'we reach a conceptual consensus in the literature', although perhaps I can understand it if you mean having a common idea about MCDM methods. Nowadays all methods are solving artificial scenarios, because the real ones are not reproduced in the modelling.
When we reach a consensus about that, perhaps we can start talking about how to improve MCDM. Why don't we start by recognizing that we need to make sure that a project is feasible? It is as elementary as that.
You are right when saying that we have to escape from classic and inflexible models.
Don't you think that we should first try to determine which the variables of a problem are and what their relationships are? That is, to know and understand the problem and see how it can be modelled.
Dear Jose Arzola
I agree with the fact that we need to find solutions close to the optimum. It could be done if we knew what the optimum is, and in that case we wouldn't need any MCDM method. That is, it is a paradox.
Decision-making is not pure mathematics. Every project is always a system, and in a system there are no local optimal solutions, only global ones. You can optimize the use of a resource, say water. However, if you don't have a reliable source of water, that former conclusion is useless. And you undoubtedly agree with that when you say that 'the optimum of the part in general does not coincide with the optimum of the whole'.
I am honoured that you want to read some of my articles. Really, it would be a little cumbersome for me to make a selection of them, so I suggest you download the articles you want from RG.
Yes, I know that your articles are in your RG profile; I read some of them. Thank you for your good wishes, which I return.
Nolberto
I wish you good health under the Caribbean sun.
Dear Nolberto,
Yes, I'm referring to having a common idea about MCDMs. I'm sure variables and their relationships are a key issue in MCDMs. But, it seems to be a factor that concerns more the building of the problem than the multicriteria method itself, although the reliability of the results depends on the quality of the variables selected.
Yes, they are, unfortunately not many people realize that.
You are absolutely right about building a problem, and if it is accepted in a journal, so much the better.
Yes, the reliability of the results depends on the quality of the variables selected. The fact that many of those values are fabricated via personal preferences is secondary.
Mubashar
Could you explain what your comment means?
Extension of what?
What different sets?
Can the integration with SDSS (called MC-SDSS) be a valuable contribution to lowering the subjectivity? (I'm new to this field, so I apologize for my lack of deeper knowledge.)
Sandra
Using statistics for data is always good.
I don't know the particular system that you mention. Again, data reliability is very important, but if the method used can't represent or model reality, we can have the best data and we would still be solving perhaps only an approximation or a simplification of it.
Thank you very much Nolberto
The method is kind of a link between non-spatial data (the MCDM component) and spatial data (GIS or SDSS). At least, this is my understanding of the subject so far, but I'm still learning a lot every day. I'll keep data reliability in mind. =) Thank you again.
Dear Sandra
Thank you for your answer
As far as I know, the link between GIS and MCDM is not new, since there are some papers published about it. I myself am publishing a journal paper on the selection of a route for a highway in Africa, using GIS and SIMUS.
The novelty of my approach is that I don't use any weights, unlike other methods based on AHP and ArcGIS, which use subjective weights and therefore don't represent reality.
If you are interested, we can discuss this issue in depth.
Dear Nolberto Munier :
When I talk about successive approximations I mean using, at each iteration, a model closer to the behaviour of the calculable objective and constraint functions than the one used in the previous iteration. In the case of high-dimension models, including multi-level, multi-criteria tasks that consider subjective DM criteria and constraints, and possibly using simulation and graphical representation as complementary models, the final populations must be submitted for evaluation using modern methods for small-dimension models, such as AHP, TOPSIS and so on.
Best wishes,
Dear Jose
Sorry, I don't understand.
If you are using an iterative method, what is the 'model' closer to the behaviour of the 'calculable' objective?
What 'final population' are you speaking about?
What are high and small dimension models?
Evaluation using AHP?
AHP can only evaluate simple personal and corporate problems
Dear Nolberto:
It is demonstrated, within the framework of the Approximate-Combinatorial method, that the optimal solution of a complex problem can be found among the solutions of another, simpler problem that approximates it and whose objective function values differ by no more than a certain magnitude. Most optimization tasks constitute subtasks of a broader one, so it is generally necessary to reconcile decisions at a level higher than the solved task; that means the task being solved is an approximation of the broader one. For this reason, for reconciling solutions in complex discrete systems, it is generally necessary to generate not only an 'optimal' solution but an optimal population of solutions close to the optimal one. Among these solutions, the higher-level system must select the most convenient one. On the other hand, there is a group of methods that help to find the best compromise decision among a set of variants, such as:
1. Outranking based: ELECTRE and PROMETHEE.
2. Preference relations based: the Analytic Hierarchy Process (AHP), including its fuzzy variant known as FAHP, and Multi-Attribute Utility Theory (MAUT).
3. Distances based: TOPSIS,
4. Linear relations based methods.
All the existing techniques follow these steps:
1. Identifying objectives: that should be specific, measurable, agreed and realistic;
2. Identifying solution options for achieving the objectives: once the objectives are defined, the next stage is to identify options that may contribute to the achievement of these objectives;
3. Identifying all the criteria to be used to compare the options, and their sources (models), in order to compare the different options' contributions to meeting the objectives. This requires the selection of criteria that reflect performance in meeting the objectives, meaning that it must be possible to assess (at least in a qualitative manner) how well a particular option is expected to perform in relation to each criterion.
4. Options analysis: requires human-machine mechanisms to properly aggregate the different performance scores, and an adequate ranking of the options.
5. Selecting solution options: the final stage of the decision-making process consists in the selection of the most convenient option(s). This can be seen as a separate stage because none of the available techniques can incorporate every judgment into the formal analysis. At this stage some solution options can be modified at will by the decider(s).
These methods can be applied to tasks with a relatively small number of possible solutions, because it is necessary to do some calculations for every one of them; so, it is not possible to apply them directly to tasks with millions, or an infinite number, of alternatives. They can be used just for reordering the final solution populations.
I hope you understood me.
Best wishes,
Arzola
Dear Jose
Thank you very much for your explanation but, to be honest, I still don't get it, although it is clearer than before.
Probably, it is a matter of semantics: I understand your mention of multiple objectives, because that is a reality, as well as criteria. But what is it that you call options?
In addition, it is very well known that with multiple objectives there are normally no optimal solutions.
I am afraid that I am not familiar with the Approximate-Combinatorial method that you mention, although the principle sounds logical, and I guess that it is what you call ‘iteration’.
My question is, how do you know that its value is an approximation to the optimal if, even if it exists, you don’t know it? Remember that we are not dealing with physical problems where you may know the true result, but that is not the case in MCDM. How do you determine the magnitude of the difference?
What are optimization tasks?
I guess that when you talk about populations you are referring to the results of simulation. In that case, what is the optimal population?
I agree that there are methods that help to find the best compromise solution.
What do you mean by identifying solution options?
Sorry, but I can’t understand what you express in section 3, let alone sections 4 and 5. You say that you select solution options, but you don’t explain what options are.
Of course, I am not saying that what you say is wrong, simply that it is very difficult to follow your process, at least for me. In my humble opinion you are trying to equate engineering problems, where you have parameters such as efficiency, effectiveness, yields, etc., which are known values or at least have limits, with MCDM problems where none of these exist and which, in addition, involve social, environmental, political, economic, production, etc. conditions.
Please, correct me if I am wrong.
Best regards
Nolberto
Dear Nolberto:
Please download my book 'Sistemas de Ingeniería' from my profile. There you can find the 'método aproximatorio combinatorio' (Approximate-Combinatorial method). Please read that section and my explanation will become much clearer.
Best wishes,
I made a mistake; it is better to read 'Selección de Propuestas', from 1989. The edition does not have high quality, but the content does.
Dear Jose
Thank you for your answer
I have read the attached file and in principle, as I understand it, you determine an approximate solution using a heuristic MCDM method, which for me is reasonable and possible.
Then you determine an average of the multiple solutions and establish the 'α' that measures the difference between the average and the optimum. My question is: how do you know what that optimum is?
In engineering problems, the process is very applicable since you normally know that optimum or the real value, and what you are doing is to calibrate your model, but in MCDM scenarios you don’t know it, and calibration is impossible.
It appears to me that you are trying to apply a perfectly logical engineering procedure to an MCDM process; that is, if you have an engineering problem and you want to develop a method to represent it, you can try different mathematical methods and compare results. In MCDM scenarios, with what are you going to compare the average?
Perhaps I did not understand your procedure or I read it wrongly.
Your answer to this point will be appreciated
Regarding my books, especially the last four ones, they don’t treat engineering problems but MCDM ones.
Regarding your books, just tell me where I can find some of them and I will try to read them. However, it seems that you and I are in different disciplines, so don't be surprised if I don't read books that address subjects in which I am not interested.
I would very much like to talk with you, in Spanish of course, using WhatsApp.
My address in it is 1-613-770-7123. We both are in the same time zone
Dear Nolberto:
I am talking about complex multiple-objective (and probably multilevel) optimization problems, linked or not to engineering. Part of the objectives (criteria) may be calculable and another part subjectively assessed.
I do not do calibrations. I work on the mathematical modeling of complex problems (within engineering, because I must move in an environment I understand). But the problem is the same for any high-dimension (possibly multilevel) MCDM scenario.
I find solution populations for a model that approximates another, more complex one (the original model), desirably using an exact optimization method, although heuristic methods can be used to obtain at least approximate solutions. In the general case both models (the approximate and the original, more complex one) have multiple objectives (including subjectively determined ones). It thus becomes possible to solve complex problems by successive approximations: in the successive iterations, more and more complex tasks are solved, but within the previously obtained solution populations.
The difference in the objective evaluations between the first and the last solutions in the ordered population is less than or equal to some parameter α (a vector of parameters in the presence of multiple objectives). If the solution method used is exact, then it is possible, in theory, to find all the α-optimal solutions (those solutions that differ from the optimal one in the objective function values by no more than α). The weak point is that the value of α is not known in advance, but, given an α value, the optimality conditions (or the evaluation of how close a found solution is to the optimum) are given by the 'método Aproximatorio-Combinatorio' of V. R. Jachaturov (a scientist of the Russian Academy of Sciences). You can find a good explanation of this method, in Spanish, with the associated theorems, in 'Selección de Propuestas', Section 3.3, page 40.
At this moment I am working on the case where the more complex problem is a high-dimension MCDM problem that includes quantifiable and subjectively determined objectives. Reasoning according to the Approximate-Combinatorial method, once I have the population of α-optimal solutions of the optimization problem considering just the quantifiable indicators, the optimal solution must be found by classical MCDM methods within this population. Of course, there is some complementary theoretical work to do.
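This is not Jachaturov's method itself, but as a rough illustration of what an α-optimal population looks like, here is a brute-force Python sketch on a hypothetical knapsack-style task: it enumerates all feasible selections and keeps every solution whose objective value is within α of the best one.

from itertools import product

# Hypothetical task: pick items maximizing value subject to a weight limit,
# then keep the alpha-optimal population, i.e. all feasible solutions whose
# value differs from the best by no more than alpha.
values = [9, 7, 6, 4]
weights = [5, 4, 3, 2]
limit = 9
alpha = 2

solutions = []
for x in product((0, 1), repeat=len(values)):          # enumerate selections
    if sum(w * xi for w, xi in zip(weights, x)) <= limit:
        solutions.append((sum(v * xi for v, xi in zip(values, x)), x))

best = max(v for v, _ in solutions)
population = [(v, x) for v, x in solutions if best - v <= alpha]
print("best value:", best)
print("alpha-optimal population:", population)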
Within the framework of the Selection of Proposals method, developed by myself,
I demonstrated that the optimal solution of a system can be found among the α-optimal solutions of its elements (the α parameter value is the same for each and every subsystem). This last property is clear because any system is a subsystem of another, more complex one that is common to all the system's elements. 'Selección de Propuestas' is devoted to the explanation of this method. It will be a pleasure for me to talk by phone with you, but I propose that we first deepen our mutual understanding, so as to have a really useful talk.
Dear Nolberto:
My name is José Arzola Ruiz (and not Luis Arzola), so where JA is indicated, those are my comments in the attached document.
Best wishes,
José
Dear Nolberto:
Evidently I did not express myself clearly enough: I said that I am sure that a small group of people understood the Selection of Proposals method, because these people worked with me on the method and its applications. About the rest of the world, including those who bought and read my books and articles, I cannot say whether they understood it or not. It is not as difficult as Einstein's theory of relativity or Planck's quantum theory!
Jachaturov's theories are understood by all the scientists who worked in the department where he was the head. About the rest of the world I can't affirm anything. At least I understood his method and have been applying it for many years.
I established this interchange with you after reading two of your books, considering that we could complement each other and that you would be willing to read at least 4 or 5 pages of a 124-page book of mine (even though it was not published by Springer). I am sure you are capable of understanding it, but without reading a little it is impossible to understand! Of course, it is not as difficult as Einstein's theories.
I have also worked, and am still working, on research applications with many people, including 20 PhD students who defended under my supervision, and evidently we understood each other. Otherwise it would have been impossible to reach results.
I continue studying ways of integrating the methods you are using with those I currently apply, reading your books and those of other authors, and I will surely incorporate them into my research approach, with or without your help.
Excuse me for the time lost, and best wishes to you.
Dear Jose
The only method that I use is LP; it is more than 70 years old and is taught in most schools of engineering, economics and management around the world.
My method SIMUS is only a simple application of LP in its primal and dual problems.
I don't know about you, but I write for engineers, practitioners and students, and therefore try not to use mathematical language but reasoning, with lots of real-world examples. I don't write for people who need to be proficient in higher mathematics to understand what I say. I write for people who are too busy to start reading pages and pages of mathematical formulas.
Probably this is the reason why my books are in more than 1000 university libraries in more than 85 countries, and my ebooks have had tens of thousands of downloads.
I don't think that we wasted our time. I learnt from you, for which I am grateful, but obviously we are in different universes regarding fields and scope.
I thank you for the opportunity to interchange ideas
Warmest regards
Nolberto
Dear Saad
I agree with you 100 %.
Pair-wise comparisons were never rational; they are something that nobody can defend, let alone justify.
The opposite is LP, where every single path is rational, understandable and clear.
However, LP, the oldest MCDM method, has two problems: it addresses only one objective and does not work with qualitative criteria.
SIMUS solved those drawbacks, and it is expected that newer MCDM methods based on LP can be even better.
I fully agree with Melfi Alrasheedi. I think that an additional source of evolution for MCDM will be the use of Artificial Intelligence with expert functions. You can take part in the discussion in this regard:
https://www.researchgate.net/post/AI_as_expert_for_Analytic_hierarchy_process
Dear Vadym
Well yes, it may be. The problem is that we can't forecast the future with the tools that we have today
Yes, dear Nolberto Munier , I fully agree with you. We can't forecast the future.
My humble opinion is that MCDM does not promise us an optimal solution today, but it will get closer to the optimum in the future. We can predict that 'improvement' studies on MCDM will reach clearer results in the future. Today, much effort is being made to improve the quality of qualitative data in the MCDM matrix. This is an issue I don't know well, but I think more important issues are still waiting for solutions. We can summarize them as follows: choosing the appropriate normalization method; choosing the appropriate weighting technique; choosing the appropriate MCDM method. I am confident that there will be important developments in these matters. It is also possible that the 'rank reversal' problem will be solved completely. On the other hand, I expect that a satisfactory solution can be found for developing the best MCDM method, which has been an open problem for half a century. Moreover, I think that MCDM methods will be used in many dimensions of daily life, especially in artificial intelligence applications.
Dear Mahmut
The problem, as I see it, is that we will most probably have better MCDM methods in the future; however, how will we know that we are getting to the optimum, or close to it, if we don't know it?
How can we say that we are close? With respect to what?
You speak about choosing the appropriate normalization method. How do we know which it is?
The best weighting technique? We already know which it is: entropy (see the sketch after this reply).
The most appropriate MCDM method? How do we choose it?
However, on this point I am with you since, in my opinion at least, it is the method that best models reality and solves it. A good measure of the best MCDM method is its ability to represent a real scenario.
Regarding rank reversal, there are already methods that are immune to it.
I am not qualified enough to comment about MCDM in AI applications; however, I know for sure that some studies and tests have been done in that respect.
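For readers who have not used entropy weights, a minimal sketch of the usual Shannon-entropy computation on a decision matrix (rows are alternatives, columns are criteria); the data are hypothetical and the formula is the commonly published one, not tied to any specific software.

import numpy as np

def entropy_weights(matrix):
    # Objective criteria weights from the Shannon entropy of the
    # column-normalized decision matrix (rows = alternatives).
    X = np.asarray(matrix, dtype=float)
    P = X / X.sum(axis=0)                       # share of each alternative per criterion
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)   # 0*log(0) treated as 0
    E = -plogp.sum(axis=0) / np.log(m)          # entropy per criterion, in [0, 1]
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()

# Hypothetical 4 alternatives x 3 criteria (positive, benefit-type values)
M = [[250, 16, 12],
     [200, 16, 8],
     [300, 32, 16],
     [275, 32, 8]]
print(np.round(entropy_weights(M), 3))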
In my humble opinion, normalization, weighting and MCDM method selection are the most important problems in today's MCDM world, because these methods can be chosen arbitrarily by the DM and there is no control. Imagine that you have more than 100 MCDM methods, 10 normalization options and 10 weighting options. If you are calculating MCDM problems with at least 20 alternatives and 5 criteria, like me, you will encounter hundreds of ranking combinations. These combinations offer dozens of 'best alternatives'. Choose, like and buy! :) It is a cause for concern for the DM. Yes, dear Nolberto, we don't know the best practices, but since not all of these combinations can be the best, one must be the best, right? Or at least one is the most suitable. Until now, the methods have been compared among themselves according to their abilities and capacities, but the results have not been compared against any direct or indirect verification mechanism. In summary, I would like to state that, unfortunately, these chronic problems have been inherited by future researchers without any solution.
Each method has its own advantages and disadvantages. In this context, it is inevitable that the methods give different results for the same problem. This does not mean that the methods are bad. Fortunately, there are some aggregation methods, like the Copeland and Borda methods, used to achieve an acceptable ranking. With their help, a combined ranking can be obtained.
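For anyone who wants to try them, here is a minimal sketch of Borda and Copeland aggregation over rankings produced by different MCDM methods; the three input rankings are purely illustrative.

from collections import Counter
from itertools import combinations

def borda(rankings):
    # Borda count: in a ranking of n items the first gets n-1 points,
    # the second n-2, ... the last 0; points are summed over all rankings.
    n = len(rankings[0])
    score = Counter()
    for r in rankings:
        for pos, alt in enumerate(r):
            score[alt] += n - 1 - pos
    return sorted(score, key=score.get, reverse=True)

def copeland(rankings):
    # Copeland: for each pair, the alternative preferred by the majority of
    # rankings gets +1 and the other -1; ties give 0 to both.
    alts = rankings[0]
    score = {a: 0 for a in alts}
    for a, b in combinations(alts, 2):
        wins_a = sum(r.index(a) < r.index(b) for r in rankings)
        wins_b = len(rankings) - wins_a
        if wins_a != wins_b:
            winner, loser = (a, b) if wins_a > wins_b else (b, a)
            score[winner] += 1
            score[loser] -= 1
    return sorted(score, key=score.get, reverse=True)

# Three hypothetical rankings of alternatives A-D from different MCDM methods
R = [["A", "B", "C", "D"],
     ["B", "A", "C", "D"],
     ["A", "C", "B", "D"]]
print("Borda:   ", borda(R))
print("Copeland:", copeland(R))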
Immunity to the rank reversal problem can be produced. For example, the 'rank reversal' problem has been eliminated by imaginatively maximizing/minimizing the current ideal values in some methods such as TOPSIS. But in this case, we now have a different MCDM method that produces a different ranking. So, a genetically modified TOPSIS! In my opinion, when you take a step in favour of eliminating RR, you produce results against MCDM authenticity. So, there are only trade-offs. The main source of the rank reversal problem for MCDM methods is the normalization methods. Another issue is this: Mr. Munier, you said entropy is the best weighting method. We have no validation criteria on this issue, and I am not sure. It is true that it is good in terms of capability and capacity. But is that so in terms of results? As you always ask, let me ask you: the best according to what? Regarding what?
I don't know the Copeland and Borda methods well. I will gladly review articles on this topic.
Dear Faih
I agree with you, we can't say that a method is bad or good. In my opinion all are good.
The problem is not the method, but the practitioner.
If a DM selects a method that is not appropriate for his problem, obviously the result will most probably be different from that obtained using another method.
For instance, it is a fact that most projects need resources, and that these are limited. How many methods, out of more than 100, do you know that consider resources?
A question: what is, for you, an acceptable ranking? Remember that you don't know what the 'true' ranking is, or even whether it exists.
Dear Nolberto,
I agree with you. The truth is that MCDM makes it easier for us to make decisions in the face of difficult problems.
Dear Mahmut
NM – Thank you for your opinion. This is the type of discussion that can help our colleagues
MB- Immunity to the rank reversal problem can be produced. For example, the rank reversal problem has been eliminated by imaginatively maximizing/minimizing the current ideal values of some methods such as TOPSIS. But in this case we now have a different MCDM that produces a different ranking. So, a genetically modified TOPSIS!
NM- Well, the fact is that we must work with reality, not with imagination, and in addition I can't understand how, even using imagination on the TOPSIS ideal values, we can overcome rank reversal (RR). I would appreciate your explanation.
MB -In my humble opinion, an MCDM that keeps its originality and has no rank reversal problem could be produced in the future.
NM- I am glad to report that a method that does not produce RR was invented more than 70 years ago, and it is Linear Programming.
In a book of mine explaining SIMUS, which is based on LP, I tested it 66 times, not only deleting or adding one alternative but several at the same time, and even using two identical alternatives. In none of those 66 cases was there RR.
If you want, I can send you a copy of that research of mine.
In the same book it is also explained why LP algebraically can’t produce RR.
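For readers who want to reproduce this kind of check, here is a small sketch of the test procedure (not the 66 cases from the book): rank the full set, delete one alternative at a time, and verify that the relative order of the survivors is unchanged. The scoring function below is only a placeholder weighted sum standing in for SIMUS or any other ranking procedure, and the data are hypothetical.

```python
# Sketch of a rank reversal check: delete one alternative at a time and test
# whether the surviving order matches the original ranking.

def rank(alternatives, matrix, weights):
    # Placeholder scorer: plain weighted sum over the performance values.
    score = {a: sum(w * v for w, v in zip(weights, matrix[a])) for a in alternatives}
    return sorted(alternatives, key=score.get, reverse=True)

def preserves_order(full_ranking, reduced_ranking):
    # The reduced ranking must equal the full one with the deleted items skipped.
    return [a for a in full_ranking if a in reduced_ranking] == reduced_ranking

matrix = {"A": [8, 6], "B": [7, 6], "C": [5, 4], "D": [3, 2]}  # hypothetical data
weights = [0.5, 0.5]

full = rank(list(matrix), matrix, weights)
for deleted in matrix:
    reduced = rank([a for a in matrix if a != deleted], matrix, weights)
    print(deleted, "removed ->", reduced,
          "| order preserved:", preserves_order(full, reduced))
```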
MB- Otherwise, there are only tradeoffs.
NM- I agree. If you use trade-offs by weighting criteria, I believe they can alter the ranking, since you are balancing gains and losses between criteria, which of course apply to alternatives. If you add or delete one of them, the balance may change, thus generating RR.
MB-The main source of Rank reversal problem for MCDM methods is normalization methods.
NM- Not in my humble opinion. I believe that when you add or delete an alternative you certainly change the problem, but that does not mean that the pre-existing ranking must also change.
If in a problem you have the ranking A>B>C>D and you delete, say, C, there is no reason to change the pre-existing relations among A, B and D. Even if an alternative E that you add is the best, you will get E>A>B>C>D.
Please allow me to put a very simple analogy. If you have a bowl with pears, oranges, apples and grapes, and you rank them as per your preferences regarding sweetness or taste, the ranking will not change if you remove the grapes, or if you add a new fruit. If the new fruit is more preferred than say oranges, then there will be an alteration in the ranking expressing that preference, but there is no reason by which the other preferences must change.
In my opinion, RR is due to different causes depending on the method. For instance, in AHP, for me it is natural that it happens when you add or delete an alternative, because the method keeps the weights of the criteria constant.
If you apply these weights to a method such as PROMETHEE, it may produce a change in the ranking. In TOPSIS, if you add or delete an alternative you may alter the maximum or minimum value of each criterion; consequently, the results are different.
Why is there no RR in LP? Because in each iteration the new alternative is chosen according to the economic principle of opportunity cost, which is computed for each alternative independently; thus it does not matter if you add or delete alternatives, and the pre-existing ranking is not altered.
I really don't understand how normalization methods can be responsible for RR. In most methods, the normalized value is obtained as a function of the other values in the criterion, using the sum of all values, the largest value, or the Euclidean formula. However, if you use max-min, most probably there will be differences, because this scheme is based on differences, while the other methods are based on products and divisions; the small illustration below shows the four schemes side by side.
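As a side illustration of the normalization schemes just mentioned, the snippet below applies them to one hypothetical criterion column; the values are invented and serve only to show how the schemes differ.

```python
# For illustration only: four common normalization schemes applied to one
# hypothetical criterion column. The first three are ratio-based (products and
# divisions); max-min is based on differences, so it behaves differently.
import numpy as np

x = np.array([3.0, 5.0, 8.0])

by_sum    = x / x.sum()                          # fractions adding up to 1
by_max    = x / x.max()                          # largest value becomes 1
euclidean = x / np.sqrt((x ** 2).sum())          # vector (Euclidean) normalization
max_min   = (x - x.min()) / (x.max() - x.min())  # rescaled to [0, 1] by differences

for name, values in [("sum", by_sum), ("max", by_max),
                     ("euclidean", euclidean), ("max-min", max_min)]:
    print(f"{name:9s} {np.round(values, 3)}")
```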
MB-Another issue is, Mr. Münier, you said Entropy is the best method of weighting. We have no validation criteria on this issue and I am not sure.
NM- Yes, we have: The Shannon Theorem
MB -It is true that it is good in terms of talent and capacity. But is that so in terms of results?
NM – Talent and capacity for what? Remember that the complement of entropy (s), that is (1 – s), measures the amount of information produced by the discrimination in criteria performance values, and it is independent of the purpose of each criterion. A minimal sketch of this calculation follows.
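This sketch assumes the usual Shannon-entropy formulation on a column-normalized decision matrix; the data are hypothetical, and the benefit/cost direction of each criterion is deliberately ignored, which is exactly the point made above.

```python
# A minimal sketch of entropy-derived criteria weights on a hypothetical
# decision matrix (rows = alternatives, columns = criteria, all values > 0).
import numpy as np

def entropy_weights(matrix):
    X = np.asarray(matrix, float)
    P = X / X.sum(axis=0)                          # fractions per criterion column
    m = X.shape[0]
    s = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion, in [0, 1]
    d = 1.0 - s                                    # degree of discrimination (1 - s)
    return d / d.sum()                             # weights proportional to information

matrix = [[70, 3, 12],
          [55, 9, 11],
          [80, 6, 13]]
print(np.round(entropy_weights(matrix), 3))
# criteria whose values discriminate more among alternatives get larger weights
```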
MB- As you always ask, let me ask you: According to what is it the best? Regarding what?
NM – Regarding the capacity of criteria to evaluate alternatives. Thus, you can rank criteria significance as per their relative capacity. When criteria are ranked according to their subjective weights, or trade-offs, remember that those weights have been obtained without considering the alternatives they have to evaluate. First drawback.
Second drawback: They are considered constant, which may be false
Third drawback: They are assigned arbitrary values
Fourth drawback: They may be modified according to a formula, and then the DM who established those comparisons in good faith has to modify them. Is that logical?
Fifth and most important drawback: Weights are not fit to evaluate alternatives, because they were built WITHOUT considering them.
Now my request: Could you show me just a single drawback of entropy derived weights?
In general, the authors argue that the rank reversal problem stems from normalization. On the other hand, any MCDM algorithm's own rules can also be a reason for rank reversal. For example, the rank reversal problem in TOPSIS is mostly associated with the positive and negative ideal values. A rough solution is to stabilize the PIS and NIS values to avoid rank reversal; thus alternative addition or removal will not cause rank reversal. In addition, changing the normalization is another strategy that has been applied: authors have in the past used min-max instead of vector normalization. Changing the normalization method of an MCDM is an innocent choice for me and of course it can be recommended. But fixing the PIS and NIS solutions means playing with the original factory settings of TOPSIS. So you are playing with the genetics of an MCDM to solve the rank reversal problem, and you realize that you have created a completely different MCDM.
Mr. Munier, the RR problem is either absent or, to a certain extent, inherent in an MCDM. If there is no RR in SIMUS, congratulations. According to my observations, some outranking methods that do not need normalization and use a preference function instead do not experience the RR problem. There are also methods that report minimal RR. It should be emphasized here that RR is a concern for the DM, because RR is paired with inconsistency, which creates a small amount of doubt about reliability and validity. In the meantime, RR detection is done by adding and deleting alternatives, not criteria.
You said, Mr. Munier, that you do not understand how normalization methods might be responsible. This is actually a very classic and well-known subject. You can try it yourself: select a known MCDM and compare the individual RR results with the five normalization methods. The logic can be explained as follows: some methods may be very sensitive to adding or deleting a new alternative, so much so that this creates a high level of RR. The basis for this is that normalization methods cause a certain amount of information loss.
Regarding entropy, yes, Shannon's theory of information is one of the most important discoveries of the 20th century. Integrating it into MCDM for the weighting factors proved clever and very useful. It is a fact that it is a talented and capable method, but hopefully the next generation of researchers will find out whether it is the best. For now, we cannot measure how well entropy affects the ranking results, because we do not have any direct or indirect verification criteria. Thank you for your invaluable contribution.
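For illustration, here is a sketch of the experiment described above ("compare the individual RR results with the five normalization methods"): a plain weighted-sum scorer stands in for "a known MCDM", alternatives are deleted one at a time, and rank reversals are counted per normalization. The data, weights and scorer are hypothetical, so the counts only show the mechanics of such a test, not a general result.

```python
# Sketch: count rank reversals under different normalization schemes when
# alternatives are deleted one at a time (hypothetical data and scorer).
import numpy as np

def normalize(X, how):
    if how == "sum":     return X / X.sum(axis=0)
    if how == "max":     return X / X.max(axis=0)
    if how == "vector":  return X / np.sqrt((X ** 2).sum(axis=0))
    if how == "min-max": return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def ranking(X, w, how):
    scores = normalize(X, how) @ w          # weighted sum as a stand-in MCDM
    return list(np.argsort(-scores))        # indices ordered best to worst

X = np.array([[70., 3.], [55., 9.], [80., 6.], [60., 8.]])  # 4 alternatives x 2 criteria
w = np.array([0.6, 0.4])

for how in ["sum", "max", "vector", "min-max"]:
    full = ranking(X, w, how)
    reversals = 0
    for drop in range(len(X)):
        keep = [i for i in range(len(X)) if i != drop]
        reduced = [keep[j] for j in ranking(X[keep], w, how)]
        if [a for a in full if a in keep] != reduced:
            reversals += 1
    print(f"{how:8s} rank reversals when deleting one alternative: {reversals}")
```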
Dear Mahmut
NM- Sorry for the delay in answering you.
MB- In general, the authors argue that the rank reversal problem stems from normalization. On the other hand, any MCDM algorithm's own rules can also be a reason for rank reversal. For example, the rank reversal problem in TOPSIS is mostly associated with the positive and negative ideal values. A rough solution is to stabilize the PIS and NIS values to avoid rank reversal.
NM - Could you give me the names of the authors that assert that RR is produced by normalization?
Could you please explain this sentence of yours: "any MCDM algorithm's own rules can also be a reason for rank reversal"? Sorry, but for me this is cryptic.
NM- Maybe. I do not discuss that, because I do not know the process you are talking about.
MB- Thus alternative addition or removal will not cause rank reversal.
NM- Well, that is new for me. Most researchers, such as Belton and Stewart, Triantaphyllou, Dyer and many other authors, assert that this is the main reason.
MB- In addition, changing the normalization is another strategy that has been applied: authors have in the past used min-max instead of vector normalization. Changing the normalization method of an MCDM is an innocent choice for me and of course it can be recommended.
NM- Sorry, you lost me! Changing the normalization method of an MCDM? How can that be, when normalization is part of the MCDM?
MB- But fixing the PIS and NIS solutions means playing with the original factory settings of TOPSIS. So you are playing with the genetics of an MCDM to solve the rank reversal problem, and you realize that you have created a completely different MCDM. Mr. Munier, the RR problem is either absent or, to a certain extent, inherent in an MCDM.
If there is no RR in SIMUS, congratulations.
NM- Thank you, but it is not my doing. It is due to the genius of Dantzig, who created the Simplex algorithm.
MB- According to my observations, some outranking methods that do not need normalization and use a preference function instead do not experience the RR problem. There are also methods that report minimal RR. It should be emphasized here that RR is a concern for the DM, because RR is paired with inconsistency, which creates a small amount of doubt about reliability and validity. In the meantime, RR detection is done by adding and deleting alternatives, not criteria. You said, Mr. Munier, that you do not understand how normalization methods might be responsible. This is actually a very classic and well-known subject.
NM- RR paired with inconsistency?
Please don't be offended, because that is not my intention. But I notice that you mention and refer to concepts without ever explaining them. For instance:
Why is RR paired with inconsistency?
Which are the methods that report minimal RR?
A small 'amount' of doubt? What are you talking about?
This is actually a very classic and well-known subject? Who says that?
MB- You can try it yourself: select a known MCDM and compare the individual RR results with the five normalization methods. The logic can be explained as follows: some methods may be very sensitive to adding or deleting a new alternative, so much so that this creates a high level of RR. The basis for this is that normalization methods cause a certain amount of information loss.
NM - Information loss? Why?
MB- Regarding entropy, yes, Shannon's theory of information is one of the most important discoveries of the 20th century. Integrating it into MCDM for the weighting factors proved clever and very useful. It is a fact that it is a talented and capable method, but hopefully the next generation of researchers will find out whether it is the best.
NM- Well, this is true, but until they find a better method, we can be sure that entropy is a very good one,
simply because it is rational.
Dear Munier,
The reason for RR is basically the use of normalization, which changes when alternatives are added or deleted. Please see Mufazzal, S. and Muzakkir, S. M. (2018). I emphasized before that the degree of RR also changes with the normalization type. Please see Aires and Ferreira (2016). For those who want a comprehensive literature review on the relationship between the RR problem and normalization, from past to present, please see Aires, R. F. D. F., & Ferreira, L. (2018).
Is it a loss of information? Yes, I think a very small amount is lost when converting the data with normalization. If one looks carefully, it can be seen that different normalization types do not produce the same ranking.
The reason for RR in the TOPSIS approach is the choice of the PIS and NIS solutions, in addition to normalization. I suggest the following study on this:
Please see García-Cascales, M. S. and Lamata, M. T. (2012).
I hope it will be useful.
RR is of course not synonymous with inconsistency, so it does not mean inconsistency. On the other hand, it evokes inconsistency, and RR is incorrectly mapped to inconsistency. It is even a cause for concern. I may have been misunderstood on this, because some MCDM methods give a high RR rate. Fortunately, there are methods that do not have RR problems. We also observe that some methods, such as SAW, give a very low RR, which can be called minimal.
Dear Mahmut
Thank you for your answer.
You recommend a series of papers that support the idea that RR is produced by normalization. I have not read them, but then I ask: why does RR also occur in TOPSIS, in which, according to you, if memory does not fail me, normalization is not necessary? And why does it not happen in LP, where normalization is mandatory?
Now, why does normalization mean a loss? Could you explain?
If you have an equation like 3x1 + 5x2 + 8x3 = 16 and you normalize by dividing everything by 16, you get on the left-hand side three fractional coefficients whose sum is 1, which is the same value as on the right-hand side. Where is the loss? And the loss of what? Certainly not in the amount of information, since that depends on the discrimination of the values, which is not altered if you divide every quantity by the same number. The small numeric check below illustrates this.
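This is only a tiny numeric check of that claim, using the entropy measure of discrimination discussed earlier: dividing every value by the same constant (16 in the example) leaves the fractions, and hence the entropy, unchanged.

```python
# Check that dividing all values of a criterion by the same constant does not
# change the fractions p_i = x_i / sum(x), and therefore not the entropy either.
import numpy as np

def entropy(x):
    p = np.asarray(x, float) / np.sum(x)
    return -(p * np.log(p)).sum() / np.log(len(p))

original = [3.0, 5.0, 8.0]
scaled   = [v / 16.0 for v in original]    # the normalization in the example above
print(entropy(original), entropy(scaled))  # identical values: no information lost
```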
If you read the García-Cascales and Lamata paper (by the way, I have met them both), they use in TOPSIS the Euclidean formula for the mandatory normalization of TOPSIS, which according to you is not necessary. Because of that normalization, they consider the new values of the existing alternatives in relation to the new one, and for me that is right. However, they develop a modification of TOPSIS; therefore, the analysis is not valid for the original method developed by Hwang and Yoon.
Regarding inconsistency, it would be useful to know what you mean by that term. Some authors use it when there are two restrictions that cannot be met at the same time.
What is inconsistency for you?
Maybe the word inconsistency is not the right word for RR.
In my humble opinion, RR depends on the methodology of each method.
For instance, in AHP it could be because, no matter whether you delete or add alternatives, the criteria weights are always kept constant, since they are not based on the alternatives. Consequently, in the second step of the method a new set of alternatives is evaluated and its preferences are multiplied again by a constant vector, when in reality they should be multiplied by a new vector determined as a function of the new set.
In PROMETHEE it could be because you compare alternatives as per their respective performance values for a criterion, multiply the winner by the criterion weight and then add them up. For me, it is natural that if you delete or add alternatives the sum will be different.
In TOPSIS, there is a similar problem.
Why does RR not happen in Linear Programming? Because each alternative is evaluated by its opportunity cost; thus, if a new alternative is added, it is evaluated according to this principle, and as a result it could be the best or the worst.
In the first case it will go to the head of the ranking, and in the second case to its tail.
BUT, AND THIS IS IMPORTANT, IT DOES NOT AFFECT THE EXISTING RANKING, BECAUSE IT DOES NOT ALTER THE RESPECTIVE OPPORTUNITY COSTS.
If the original ranking is A>B>C>D, a new alternative E could be the best, and in this case the ranking will be E>A>B>C>D, or A>B>C>D>E if it is the worst. Or it may be better than B, and in this case the ranking will be A>E>B>C>D. This is not RR, but the logical consequence of the added alternative being the best, the worst, or somewhere in between. Contrary to RR, observe that the original ordering did not change: A could never be last, nor could C come before B.
This is the reason why there is no RR in SIMUS, which is based on LP.
Dear Munier,
The details of the answers you were looking for are in the four studies I suggested earlier. You said you had not read them. If you have time, I recommend that you evaluate the views on the violation of the "Principle of Independence from Irrelevant Alternatives" (PIIA), which these studies identify as the cause of RR.