# 145
Dear Muhammad Junaid, Uzair Khaleeq uz Zaman, Afshan Naseem, Yasir Ahmad, Anas Bin Aqeel
I have read your article:
Material Selection in Additive Manufacturing for Aerospace Applications using Multi-Criteria Decision Making
These are my comments:
1- In page 2 you say “The objective of MCDM is to determine the right choice of alternatives by considering the weights and priorities given by subject matter experts or decision-makers”
This is too general a definition, for there are methods that do not use weights from experts but generate them mathematically from the data.
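As an illustration of such a data-driven method (my own sketch, not something the article proposes), Shannon entropy weighting derives criteria weights purely from the dispersion in the decision matrix, with no expert judgement at all; the matrix values below are hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from a decision matrix X
    (rows = alternatives, columns = criteria) via Shannon entropy.
    No expert input: the data alone determine the weights."""
    P = X / X.sum(axis=0)                      # normalize each criterion column
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        # entropy per criterion; 0*log(0) terms become NaN and are dropped
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)
    d = 1.0 - E                                # degree of divergence per criterion
    return d / d.sum()                         # weights sum to 1

# hypothetical decision matrix: 4 AM materials x 3 criteria
X = np.array([[250., 12., 2.70],
              [310., 10., 4.43],
              [200., 25., 1.55],
              [400.,  8., 8.00]])
w = entropy_weights(X)
print(w)
```

Criteria whose values vary little across alternatives receive low weight, and vice versa; the weights reflect the data, not anyone's intuition.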
2- In page 3 “A participant involved in judgment needs to ensure that the significance of each criterion has been well assessed and that alternatives are well chosen”
And how do you gauge the quality of an assessment? If other experts give different values, which of the two is right? Therefore, that assessment quality is only utopian. This is one of the main points on which most researchers agree: pair-wise comparisons are extremely subjective and thus unreliable. In addition, they cannot be used to evaluate alternatives the way objective weights can.
3- In page 5 “After several brainstorming sessions and meetings with aerospace experts, the essential criteria and related sub-criteria in accordance with the goal were identified”
This is an excellent procedure. The selection of criteria must rest mainly on people who have experience in each area.
4-Page 6 “To check the reliability and consistency of judgements, it is important to calculate the overall consistency ratio (CR)”
Why must there be consistency of judgments? Who says that judgements are rational?
It is of course rational to judge based on the physical characteristics of elements, but where is the rationality in asserting, for instance, that economics is 3 times more important than the environment? Where does that number come from? From intuition? Maybe, but in that case what is its value? I am not asking for your approval of what I say; I am only suggesting reasoning, thinking and common sense.
Pair-wise comparison is a good example: it works for comparing things, but not for establishing the supremacy of one concept over another. Most of us have different tastes regarding the same things. You can say 'In general, I prefer apples to pears', and this is correct, but you cannot put a number to that preference. Many people obviously do not think about it: if the method says 'express your preference with a number', they just do it! Thinking about the logic of it? No… what for?
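For concreteness, here is a minimal sketch (my own, with a hypothetical 3x3 judgement matrix) of how the CR mentioned in point 4 is computed, using Saaty's published random indices. Note that even a CR below Saaty's 0.10 threshold only certifies that the judgements are internally coherent; it says nothing about whether the numbers themselves are rational:

```python
import numpy as np

# Saaty's random consistency indices for matrices of order n = 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()   # principal eigenvalue
    CI = (lam_max - n) / (n - 1)                # consistency index
    return CI / RI[n]

# hypothetical judgement matrix: "economics is 3 times more important
# than environment", and so on (reciprocal entries below the diagonal)
A = np.array([[1.0,   3.0, 5.0],
              [1/3., 1.0,  2.0],
              [1/5., 1/2., 1.0]])
cr = consistency_ratio(A)
print(cr)  # a nearly consistent matrix yields a small CR
```

The arithmetic is trivial; the question raised in point 4, where the 3 and the 5 come from in the first place, is not answered by it.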
5- Among the technical criteria you enumerate in Table 3 there are very good ones, which shows the participation of experts in AM. If one asks some of them about the difference between tensile strength and elongation at break, for instance, he or she will be happy to explain what each one means, and most probably will add that the two characteristics, albeit different, are mutually related, for you can stretch an Al test specimen until it breaks, and its elongation at break will be a certain percentage.
That is, both concepts are related. The same holds, for instance, between fatigue strength and durability.
And here is your problem: due to the lack of independence among criteria YOU CANNOT USE AHP, which requires independence, as Saaty clearly stated. Needless to say, because of this your result may be invalid.
6- In page 14 “To check the robustness of the methodology, sensitivity analysis was performed to check how much the proposed model is sensitive to any weight change in selected criteria”
Even if the definition is correct, SA using AHP is useless. First, because all intervening criteria should be varied, not only one selected according to its weight value; there is no mathematical support for that practice. Second, because subjective weights are not good for evaluating alternatives; they are only trade-offs.
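A sketch of what I mean (my own, with a hypothetical normalized decision matrix and weights): a defensible sensitivity analysis perturbs every criterion's weight in turn, renormalizing the rest, rather than varying only the criterion with the largest weight:

```python
import numpy as np

def rank(scores):
    """Indices of alternatives from best to worst."""
    return np.argsort(-scores)

# hypothetical normalized matrix (benefit criteria): 3 alternatives x 3 criteria
X = np.array([[0.8, 0.2, 0.5],
              [0.6, 0.9, 0.4],
              [0.3, 0.5, 0.9]])
w0 = np.array([0.5, 0.3, 0.2])          # hypothetical baseline weights

base_rank = rank(X @ w0)

# vary EVERY criterion's weight in turn (not only the largest one),
# renormalize, and report where the ranking flips
for j in range(len(w0)):
    for delta in (-0.1, +0.1):
        w = w0.copy()
        w[j] = max(w[j] + delta, 0.0)
        w = w / w.sum()
        r = rank(X @ w)
        if not np.array_equal(r, base_rank):
            print(f"criterion {j}, delta {delta:+}: ranking changes to {r}")
```

Even so, since the weights being perturbed are themselves subjective trade-offs, such an exercise measures the stability of the model, not the validity of the ranking.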
These are my comments.
I hope they can help.
Nolberto Munier