Dear Tran Van Dua,

I have read your paper

Combination of design of experiments and simple additive weighting methods: a new method for rapid Multi-Criteria Decision Making

My comments

1- In my opinion, considering all possible combinations of alternatives (DOE) is a very good idea. However, I do not think it is correct to use each experiment as a criterion with the corresponding alternative values and then apply SAW, which considers each criterion independently. I think that all DOE experiments should be considered simultaneously. Let me give an example to support this assertion.

Assume two alternatives A1 and A2 and four criteria, as follows:

C1= Maximum value of A1 and minimum value of A2

C2=Minimum value of A1 and maximum value of A2

C3= Maximum value of A1 and maximum value of A2

C4= Minimum value of A1 and minimum value of A2

Each of these criteria is represented by a straight line whose slope is determined by the A1 and A2 values.

Therefore, if we compare C1 and C2, their lines will most probably intersect, and the intersection point indicates the scores of A1 and A2.

If we now compare C1 and C3, they may or may not intersect, but if they do, there will be another optimal point that indicates the scores for A1 and A2.

Therefore, we will get different optimal points. Obviously, this cannot be the solution of the problem. Consequently, to solve the problem, all comparisons must be made simultaneously, and if the problem is feasible, we will get a unique intersection point, which will indicate the scores of A1 and A2.

Unfortunately, this cannot be done by SAW or by any other MCDM method. There is, however, one method that can do it: Linear Programming.
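
To make the suggestion concrete, here is a minimal sketch of what a simultaneous treatment could look like. It is not the paper's method, and not necessarily the exact LP formulation I have in mind; the coefficients and right-hand sides are purely hypothetical. Each criterion row is entered as one linear constraint on the two scores, and the LP solver returns a single optimum where the binding criteria intersect, instead of a different point for every pairwise comparison.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical criterion rows: coefficients = performance of A1 and A2 on each criterion
    A_ub = np.array([
        [8.0, 2.0],   # C1: high value for A1, low for A2
        [2.0, 9.0],   # C2: low value for A1, high for A2
        [7.0, 8.0],   # C3: high values for both
        [3.0, 2.0],   # C4: low values for both
    ])
    b_ub = np.array([10.0, 12.0, 15.0, 6.0])  # hypothetical right-hand sides

    # Maximize x1 + x2 subject to ALL criterion constraints at once
    # (linprog minimizes, so the objective is negated)
    res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print("Scores for A1, A2:", res.x)

The point of the sketch is only that the solver treats the four criteria simultaneously and delivers one solution, rather than a separate "optimal point" for each pair of lines.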

2- On page 2 you say: “However, SAW, like all other MCDM methods, has one thing in common that is if, after the ranking of options has ended, one/several options are added to the list, such ranking is required to be started over again. This is seen as a common limitation of all existing MCDM methods.”

This is true. As far as I know, there is no MCDM method that does not require starting over again when an alternative is added or deleted, and in my opinion it is impossible to avoid this, because by adding or deleting alternatives you change the entropy of all the criteria that evaluate them.
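
A short sketch of this point, with a hypothetical 3 x 3 decision matrix: when one alternative is appended, the column proportions and the ln(m) term both change, so every entropy-based weight changes and the ranking has to be recomputed.

    import numpy as np

    def entropy_weights(X):
        """Shannon-entropy criteria weights for a decision matrix X (alternatives x criteria)."""
        P = X / X.sum(axis=0)                          # column-wise proportions
        m = X.shape[0]                                 # number of alternatives
        E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion
        d = 1.0 - E                                    # degree of divergence
        return d / d.sum()

    # Hypothetical data: 3 alternatives x 3 criteria
    X = np.array([[7.0, 2.0, 5.0],
                  [4.0, 6.0, 3.0],
                  [5.0, 5.0, 8.0]])
    print("weights before:", entropy_weights(X))

    # Adding one alternative changes the entropy, and hence the weight, of every criterion
    X_new = np.vstack([X, [6.0, 1.0, 9.0]])
    print("weights after :", entropy_weights(X_new))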

3- Table – I don’t understand how you can add up normalized criteria values when three of them call for minimization and the fourth for maximization. Could you please explain the fundamentals of this procedure? You can have a criterion with minimum and maximum values, but for that you need dedicated rows for the criterion, one for the minimum and the other for the maximum of the whole criterion. In other words, a criterion, with all of its values, may be maximized or minimized, but not some of its values individually.
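
For reference, the usual SAW convention is the one sketched below: every whole column is declared either benefit (to be maximized) or cost (to be minimized), normalized accordingly, and only then weighted and summed. The matrix, the benefit/cost flags, and the equal weights here are hypothetical and only illustrate that the max/min direction applies to a full criterion column, not to individual values.

    import numpy as np

    # Hypothetical decision matrix: rows = alternatives, columns = criteria
    X = np.array([[7.0, 2.0, 5.0, 30.0],
                  [4.0, 6.0, 3.0, 20.0],
                  [5.0, 5.0, 8.0, 25.0]])
    # Each WHOLE column is either benefit (maximize) or cost (minimize)
    is_benefit = np.array([False, False, False, True])

    R = np.empty_like(X)
    R[:, is_benefit]  = X[:, is_benefit]  / X[:, is_benefit].max(axis=0)   # benefit: x / max
    R[:, ~is_benefit] = X[:, ~is_benefit].min(axis=0) / X[:, ~is_benefit]  # cost: min / x

    weights = np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical equal weights
    scores = (R * weights).sum(axis=1)             # SAW score of each alternative
    print(scores)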

4- On page 6, Table 9: what is the purpose of using different MCDM methods and getting different rankings? Even if, out of seven different methods, you get, say, three or four coincidences, what does that tell you? NOTHING.

The fact that two or more methods have a high correlation does not mean that they give the right answer to your problem.

5- I don’t think that Table 10 is correct, because comparing entropy and no weights with MEREC is, in my opinion, incorrect: with entropy and no weights you do not change the matrix, but with MEREC you do.

6- In formula (5), why do you apply multiple linear regression to this problem?

I hope these few comments may help you.

Nolberto Munier
