Dear Reda M. S. Abdulaal and Omer A. Bafai
I have read your paper
Two New Approaches (RAMS-RATMI) in Multi-Criteria Decision-Making Tactics
These are my comments
1- On page 1 you say “MCDM methods have demonstrated usefulness in finding the optimal solutions”
This is inexact. MCDM methods are useful in determining compromise solutions. Optimality very rarely exists in multi-criteria problems, because you cannot maximize benefits and minimize costs at the same time; it is one or the other, or you aim for a compromise.
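To make the point concrete, here is a minimal sketch with invented data (the alternative names and scores are mine, purely for illustration): with one benefit criterion and one cost criterion in conflict, no alternative dominates the others, so there is no optimum, only non-dominated compromises.

```python
# Hypothetical data: three alternatives scored on a benefit criterion
# (to maximize) and a cost criterion (to minimize).
alternatives = {
    "A": (9.0, 8.0),   # (benefit, cost): best benefit, worst cost
    "B": (7.0, 5.0),
    "C": (4.0, 2.0),   # worst benefit, best cost
}

def dominates(x, y):
    """x dominates y if it is at least as good on both criteria
    (higher benefit, lower cost) and strictly better on at least one."""
    return (x[0] >= y[0] and x[1] <= y[1]) and (x[0] > y[0] or x[1] < y[1])

# Collect the non-dominated (Pareto) alternatives.
pareto = [a for a, sa in alternatives.items()
          if not any(dominates(sb, sa) for b, sb in alternatives.items() if b != a)]
print(pareto)  # ['A', 'B', 'C'] — every alternative is a compromise
```

Since all three alternatives survive, no "optimal" choice exists; the method can only help the DM pick a compromise among them.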
2- “In this context, this paper presents new MCDM tools which ranks alternatives based on median similarity (RAMS) between optimal alternatives and other alternatives”.
And how do you determine optimal alternatives?
3- On page 2: “This assertion manifests that the potential of MCDM techniques is evident in terms of demonstrating capabilities in evaluating as well as comparing different results”
There is a widespread belief that near-coincidence of rankings for the same problem, using different MCDM methods, is proof of reliability. In my opinion this is an intuitive assumption that does not have any mathematical support. As an example, assume that for a problem you use an elementary method such as SAW and a sophisticated method that considers many more characteristics of the problem, like PROMETHEE, and the results are quite similar, say A>B>C>D for SAW and A>C>B>D for PROMETHEE.
The fact that one method considers only performance values, while the other simultaneously takes into account resources, minimums and maximums for the criteria, a mix of crisp numbers and a binary system, multiple scenarios, etc., and that both reach almost the same ranking, may simply be a coincidence, since one is solving a simple problem while the other is solving a complex one.
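For what it is worth, the closeness of the two rankings in my example can even be quantified. A short sketch (the rankings are the illustrative ones above; the function is my own) computes the Kendall rank correlation between them:

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall rank correlation between two rankings given as ordered
    lists of the same alternatives (best first, no ties)."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    concordant = discordant = 0
    for a, b in combinations(r1, 2):
        # A pair is concordant if both rankings order a and b the same way.
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) > 0:
            concordant += 1
        else:
            discordant += 1
    n = len(r1)
    return (concordant - discordant) / (n * (n - 1) / 2)

saw = ["A", "B", "C", "D"]
promethee = ["A", "C", "B", "D"]
print(kendall_tau(saw, promethee))  # 0.6666666666666666
```

A correlation of 2/3 looks reassuring, yet it says nothing about whether either model actually captured the problem; that is precisely my objection.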
4- On page 2 you speak about validating an MCDM method. This is another mistaken assumption, because no MCDM method can be validated, simply because we do not know how close its result is to the true result, since we do not know it. In other words, there is no true, existing ranking to use as a yardstick.
5- In the excellent and concise description you give of the different methods, you omit the crucial fact that in all of them the criteria weights are subjective, and thus depend on the DM who analyzes the problem.
Therefore, using a given method there could be as many solutions as there are DMs intervening.
Needless to say, in the real world solutions should not come from opinions, intuitions or assumptions. We are not talking here about designing a skyscraper, where there could be many projects, all of them good, and where subjectivity is very important, as are the opinions and aesthetic points of view of different architects; or about medicine, where doctors may consider different options for a patient. But where mathematics enters, as in designing the building itself, a tunnel or a bridge, or in selecting the best location for an industry or the best type of engine for a ship, this is pure engineering. There is no room for subjectivity.
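The dependence on the DM is easy to exhibit. A minimal SAW (Simple Additive Weighting) sketch with invented, already-normalized data shows the same performance matrix producing two different rankings under two different subjective weight sets:

```python
# Hypothetical data: three alternatives scored on two benefit criteria,
# normalized to [0, 1]. The data and names are illustrative only.
scores = {
    "A": (0.9, 0.2),
    "B": (0.5, 0.6),
    "C": (0.1, 0.9),
}

def saw_ranking(weights):
    """Rank alternatives (best first) by the weighted sum of their scores."""
    totals = {a: sum(w * s for w, s in zip(weights, vals))
              for a, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(saw_ranking((0.8, 0.2)))  # DM 1 favors criterion 1: ['A', 'B', 'C']
print(saw_ranking((0.2, 0.8)))  # DM 2 favors criterion 2: ['C', 'B', 'A']
```

Same method, same matrix, two DMs, two opposite rankings: as many solutions as there are DMs.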
6- On page 5: what is the perimeter of an alternative? You talk about methods like RAPS and MCRAT, but you do not explain what they are based on (the translation of the acronym is not enough), let alone how they work. I do not think a reader can follow the text.
7- On page 5: “The problem data is multidimensional since each criterion is described by its dimensions”
This is incorrect. The problem is indeed multidimensional, but not because of the criteria; it is multidimensional because of the 16 alternatives, which correspond to a 16-dimensional space, that is, 16 coordinate axes.
I hope that these comments can help you
Nolberto Munier