I was thinking of recommending the same publication suggested by Dewan, written by an authority like Lootsma.
Speaking very generally, the American School, pioneered by Saaty with AHP, is based on the utility theory developed by Keeney and Raiffa, which assigns importance as a function of the utility derived from the preference of one alternative over another. Normally, this measure of utility is subjective, and thus it depends on the DM's mood and intuition.
The European School, pioneered by Roy with ELECTRE, also compares alternatives, but based on their respective values for each criterion; it relies on outranking, computing differences of values. It does not depend on the DM, although it also creates thresholds of equality and superiority in the comparison. There is some subjectivity here in the determination of those thresholds.
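To make the idea of thresholds concrete, here is a minimal sketch in Python of a threshold-based pairwise comparison on a single criterion, in the spirit of ELECTRE-type outranking; the threshold values q (equality/indifference) and p (superiority/preference) are purely illustrative assumptions, not taken from any published study.

```python
# Threshold-based pairwise comparison on one criterion (illustrative thresholds).
def compare(value_a, value_b, q=0.5, p=2.0):
    """Classify the preference of alternative a over b on a single criterion."""
    diff = value_a - value_b
    if abs(diff) <= q:        # difference smaller than the equality threshold q
        return "indifference"
    if diff >= p:             # difference exceeds the superiority threshold p
        return "a strictly preferred to b"
    if diff > q:              # between the two thresholds
        return "a weakly preferred to b"
    return "b preferred to a"

print(compare(7.3, 7.0))      # indifference (difference 0.3 <= q)
print(compare(9.0, 6.5))      # a strictly preferred to b (difference 2.5 >= p)
```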
In my personal opinion the European School is superior to the American School.
The European Outranking school (PROMETHEE, ELECTRE, FUCA, ORESTE, etc.) does not need normalization first; instead, it uses an alternative converter, which is ranking-based. All alternatives are compared with each other in pairs for each criterion, and a general ranking is made from the total scores obtained there. Alternatives conclude too many benchmarks, and it is no longer possible to confuse their ranking positions with each other. As a matter of fact, another feature of this school is that some of its methods have low compensatory efficiency, and I think they produce less rank reversal (RR). Moreover, its sensitivity is weaker and its consistency is more stable. Entropy and SD distribution of scores produced by some species are very high. This indicates that they produce an order that is more meaningful and has a higher amount of information.
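For clarity, here is a minimal sketch of the ranking-based conversion I mean, using an invented 3 x 2 decision matrix: every alternative is compared pairwise with every other one on each criterion, and the win counts replace the raw values, so no normalization of units is required.

```python
# Rank-based "alternative converter": pairwise win counts per criterion
# replace the raw values, so the columns never need to be normalized.
matrix = [
    [250, 8.4],   # alternative A (two benefit criteria, different units)
    [180, 7.1],   # alternative B
    [300, 9.0],   # alternative C
]

n_alt, n_crit = len(matrix), len(matrix[0])

# win_counts[i][j] = how many alternatives i beats on criterion j
win_counts = [[0] * n_crit for _ in range(n_alt)]
for j in range(n_crit):
    for i in range(n_alt):
        for k in range(n_alt):
            if i != k and matrix[i][j] > matrix[k][j]:
                win_counts[i][j] += 1

totals = [sum(row) for row in win_counts]   # overall score per alternative
print(win_counts)   # [[1, 1], [0, 0], [2, 2]]
print(totals)       # [2, 0, 4] -> C ranks first, then A, then B
```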
The most important problem for distance-based methods (TOPSIS, VIKOR, EDAS, ARAS, CODAS, etc.) is to choose the right and appropriate normalization type. The initial decision matrices vary according to the field of application. The kurtosis and skewness properties are different, which affects their normalization performance. In other words, for an MCDM method, you may need to change the normalization type according to the data of the problem. For example, for CODAS, when you use MAX conversion in economics, it is efficient, but if you use it in finance, the results can be ridiculous. The best of this school is VIKOR, if you ask me. Its equations are powerful and sophisticated. However, classical min-max normalization may not always give accurate results for VIKOR. TOPSIS is the most widely used of all MCDM methods. I think it is a good method, but it can produce a lot of RR. It can be said that tying the whole system of equations to the PIS and NIS is its weak point. I think VIKOR is more successful because it tolerates this. It seems that methods other than VIKOR have some problems with RR generation, compensatoriness, selection of normalization, and sensitivity.
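To illustrate why the choice of normalization matters, here is a small sketch, again with invented numbers, showing how three common normalization types transform the same criterion column differently before any distance-based method ever sees it.

```python
import numpy as np

col = np.array([120.0, 95.0, 60.0, 110.0])          # one benefit criterion

max_norm    = col / col.max()                               # "MAX" conversion
minmax_norm = (col - col.min()) / (col.max() - col.min())   # classical min-max
vector_norm = col / np.sqrt((col ** 2).sum())               # vector normalization (common in TOPSIS)

print(np.round(max_norm, 3))     # [1.    0.792 0.5   0.917]
print(np.round(minmax_norm, 3))  # [1.    0.583 0.    0.833]
print(np.round(vector_norm, 3))  # [0.607 0.48  0.303 0.556]
```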
Simple weighted addition or multiplication methods (SAW, WSM, WSA, MEW, WASPAS, etc.) are the oldest, simplest, utility-oriented methods. In fact, their unnormalized form represents the primitive MCDM methods. Nowadays, however, these methods are widely used in real-life ranking applications such as personnel recruitment and the ranking or selection of countries, universities, cities, and students. These methods are highly compensatory: for example, a very high score on one criterion can easily dominate an alternative's overall score. This invites a tendency to laziness, such as "work hard for one criterion and stay mediocre on the others". Anyway. Finally, I would like to comment on AHP, but it is a very subjective and complicated method, so I am saving a long review for later.
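As a quick illustration of this compensatory effect, here is a minimal sketch with invented scores and equal weights, where a lopsided profile outscores a balanced one under a simple weighted sum (SAW/WSM).

```python
weights  = [0.5, 0.5]
balanced = [7.0, 7.0]    # decent on both criteria
lopsided = [10.0, 4.5]   # excellent on one criterion, mediocre on the other

def saw(scores, weights):
    # Simple additive weighting: weighted sum of (already commensurable) scores.
    return sum(w * s for w, s in zip(weights, scores))

print(saw(balanced, weights))   # 7.0
print(saw(lopsided, weights))   # 7.25 -> the lopsided profile wins overall
```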
It is always a pleasure to read your messages and consider your ideas. However, this message took me aback.
I don’t think that you are right regarding European methods not needing weights, but don’t you think that you should explain your assertions, when experience says that not all criteria have the same significance?
“Alternative converter, which is ranking-based”. Could you explain what that is? Have you thought that ranking is an output and weights are an input? That is, input precedes output.
“Alternatives conclude too many benchmarks.” Sorry, I don’t understand this; could you please explain?
“Moreover, its sensitivity is weaker and its consistency is more stable.”
What does it mean that sensitivity is weaker? And what is consistency, and why is it more stable?
“Entropy and SD distribution of scores produced by some species are very high. This indicates that they produce an order that is more meaningful and has a higher amount of information”
But if you say that they don’t need weights, why do you bring in entropy- and standard deviation-based weights?
“The kurtosis and skewness properties are different, which affects their normalization performance”.
Kurtosis? We are talking about linear problems. Why do you introduce probabilities here?
I don’t think that AHP is complicated. It is based on false assumptions, but it is one of the easiest MCDM methods; maybe you are referring to ANP, which is indeed complicated, although I agree with you that AHP is very subjective, precisely because of its non-mathematical assumptions.
Dear Nolberto, thanks for your kind words. I have always appreciated your methodological considerations, although I sometimes disagree with you.
I didn't say that European Outranking methods don't need weights; I said they don't need normalization methods. In fact, I never made an assessment about 'weighting'. I didn't even use the word 'weighting'. There are similar situations in your other evaluations. For example, I used the Entropy and SD methods not for weighting, but as tools for evaluating the final scores of the MCDM methods. Another example relates to the European Outranking school's use of a "rank-based (outranking) alternative converter". Here, I shared my finding that 'ranking' is used as an alternative to normalization of the criteria. It has nothing to do with weights. If I had been referring to the MCDM final ranking, your assessment might have been correct. We are talking about transforming inputs, not outputs.
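To be concrete, here is a minimal sketch of what I mean by using Entropy and SD as evaluation tools: they are computed over the final score vector that an MCDM method produces, not over the criteria. The score vectors below are invented purely for illustration.

```python
import numpy as np

def score_statistics(final_scores):
    """Normalized Shannon entropy and standard deviation of a final score vector."""
    s = np.asarray(final_scores, dtype=float)
    p = s / s.sum()                                     # scores as proportions
    entropy = -np.sum(p * np.log(p)) / np.log(len(p))   # normalized to [0, 1]
    return round(float(entropy), 3), round(float(s.std()), 3)

flat_scores   = [0.26, 0.25, 0.25, 0.24]   # alternatives barely separated
spread_scores = [0.45, 0.30, 0.15, 0.10]   # alternatives clearly separated

print(score_statistics(flat_scores))
print(score_statistics(spread_scores))
```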
The reason these methods were called 'Outranking' in the first place is that they evaluate the raw data in the matrix according to the alternatives' mutual advantages. That is, the initial decision matrix data is transformed differently compared to the other schools.