In my opinion, it can be difficult. The AHP method requires decomposing the problem into a hierarchy, which leaves us with many matrices for the pairwise comparisons. I cannot imagine how an AI would carry out those pairwise evaluations. Of course, we can use some database, but how, for example, can costs of $2.54 and $1.25 be compared automatically? Another example, which may sound more like a joke: which is better, a banana or an apple, and by how much? Sometimes we need to know the preferences of the expert/decision-maker.
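For readers following the mechanics under discussion, here is a minimal sketch of the pairwise-comparison step in Python. The matrix entries and criteria names are invented, and the geometric-mean weighting is a common approximation to Saaty's principal-eigenvector method, not his exact procedure:

```python
import math

# Invented 3x3 pairwise-comparison matrix on Saaty's 1-9 scale.
# Criteria (hypothetical): cost, quality, delivery time.
# A[i][j] answers "how strongly is criterion i preferred to criterion j?"
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Priority weights via the geometric mean of each row -- a common
# approximation to the principal-eigenvector priorities.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]
print([round(w, 3) for w in weights])  # -> [0.648, 0.23, 0.122]
```

The point of the objection stands: every entry of `A` is a human judgment, and it is unclear where an automated system would get those judgments from.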
Thank you, Wojciech Sałabun! My proposal concerned difficult tasks such as identifying trends in order to build a list of disruptive technologies, etc. In such cases, AI can use the results of Big Data analysis better than human experts can. In my opinion, the combination of experts with AI capabilities will be more efficient than an expert's experience alone.
Thank you, Nolberto Munier, for your answer. But I know that current AI algorithms already work better than humans at understanding and recognizing voice and pictures. AI will be the winner against any human expert in such tasks.
And welcome to read the article that was proposed for your attention, Shafagat Mahmudova. Here is a citation from the abstract of this publication: "The validation results show the (AI) expert system gives a highly satisfactory performance when compared to human experts."
Yes, most probably AI is more 'exact' than any human being, simply because it can handle thousands of different data, a feat that for us is impossible.
Considering that for AI you need a database that is enriched with every new input, I wonder how you could feed it data from hundreds or even thousands of 'solved' AHP scenarios.
First, because those may yield a result; however, there is no guarantee that these results really correspond to an actual scenario, because they are obtained from invented weights based on intuition. Therefore you would be building a fake database from the intuitions of thousands of DMs.
Regarding the article you mention, just by reading the title it is possible to realize that it is not very well oriented, since it claims: "The validation results show the (AI) expert system gives a highly satisfactory performance when compared to human experts."
Apparently, the author did not take into account that at present there is no way, whatever the MCDM method, to validate results, since we don't know what the 'true' result is. Consequently, how can you validate a result if you don't have any yardstick to compare it to?
We have a lively and interesting discussion. As a positive result, I think we can note full consensus with Nolberto Munier that in many cases AI can be "more 'exact' than any human being, simply because it can handle thousands of different data, a feat that for us is impossible". It is a good starting point for the future evolution of these concepts and other contributions.
Of course, Nolberto Munier and Wojciech Sałabun, together with Antoni Wilinski, have rightly reasoned that the combination of AHP and AI is not an easy technology.
My proposal was to use AI only as some of the experts inside a common team of human experts. It could be a combination of 20% AI experts and 80% human experts, or 80% AI experts and 20% human experts, or only human experts with different AI systems supporting the decision-making.
Dear Nolberto Munier, I fully agree that validating the results of AI experts' work on some tasks is very difficult. But we have the same issues in the case of a team of human experts only.
As an example of such a difficult task, I suggest for your attention the task of building a prioritization list of all COVID-19 vaccines from different countries. I think that for such tasks AI experts will have more authority than humans, who can be corrupted.
I wonder how we can prioritize COVID vaccines if the only information we have at present is about vaccines from the UK, USA, Russia and maybe China, and the only thing we know is that their TEST efficiency is about 95%.
I believe that until the different vaccines have been applied to millions of people, we can't make any prediction. Don't forget that on top of preventing COVID-19, vaccines are also analyzed for their side effects. I sincerely hope there are none, but it could be that the remedy is worse than the disease.
In addition, it may sound very simplistic on my part, but when we know the results in maybe a couple of months and have statistical data on efficiency, why do we need AHP or experts? What for?
If statistics say, for instance, that vaccine X offered a protection of .....% per million cases with no noticeable side effects, what else do we need?
Sorry, perhaps I did not interpret rightly what you propose.
In my humble opinion, you can't mix a sound technology, as AI is, with AHP, which is completely subjective and not based on anything tangible.
Dear Nolberto Munier, regarding the parameters of COVID-19 vaccines: for the choice we need to use not only statistical data but also price, transportation and storage temperature, effectiveness at producing antibodies, the design principle (synthetic matrix or natural), etc. I started a special questionnaire concerning this here:
I suggest learning the mathematics of ranking things. First of all, you cannot rank things except by a real scalar measure--you need to have a well-ordered set. That means that MCDM is not a valid concept: all things (AHP included) are ranked by real scalar measures, and the only question is whether it is the right real scalar measure. Second, a preference must be deterministic, i.e., it must be clear and distinct. That means you don't need AHP, or any other such aid. Third, AHP is based on faulty mathematics--mistakes--so it won't rank things properly anyway. The necessary mathematics is well developed and well documented; you just need to learn it. Dumping AHP has nothing to do with how old it is. It should be dumped because it is wrong.
George Hazelrigg, interesting point. But I know many cases where using AHP was better than nothing; it was a good starting point for evolving a decision-making process. As an example, you can read the article with the results of an evaluation of the directions for improving armored personnel carrier characteristics, based on a questionnaire using the method of paired comparisons. 88 people participated in the questionnaire. Twelve questions characterizing the main determinants of an armored carrier (protection, mobility, fire power) were chosen for the paired comparisons. An additional point is that the experts were also asked questions concerning the general characteristics of an armored personnel carrier.
Article: Оцінка вагомості показників бронетранспортера за даними опит... ("Evaluation of the significance of armored personnel carrier indicators according to survey da...")
Vadym, how do you know it was better than nothing? To actually "know" that, you would have had to do the whole exercise in groups, with a control group using nothing and another using AHP. Is a method that gives wrong answers, leading you astray, really better than nothing?
Several years ago, I said that AHP was no better than random numbers. Some folks decided to prove me wrong. They ran a series of test cases, some with AHP and some with random numbers, and found the performance of random numbers was as good as that of AHP.
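That anecdote can be illustrated (though not reproduced -- the original test cases are not given here) with a seeded toy simulation in Python: score an invented decision matrix under many random weight vectors and count how often the winner matches the one chosen by a fixed stand-in for AHP-derived weights:

```python
import random

random.seed(42)  # seeded so the sketch is repeatable

# Invented decision matrix: 4 alternatives x 3 benefit criteria in [0, 1]
X = [
    [0.9, 0.2, 0.5],
    [0.4, 0.8, 0.6],
    [0.6, 0.6, 0.4],
    [0.3, 0.5, 0.9],
]

def winner(weights):
    """Index of the alternative with the highest weighted-sum score."""
    scores = [sum(w * x for w, x in zip(weights, row)) for row in X]
    return scores.index(max(scores))

expert_top = winner([0.5, 0.3, 0.2])  # stand-in for AHP-derived weights

# Fraction of random (normalized) weight vectors that pick the same winner
trials = 10_000
agree = 0
for _ in range(trials):
    raw = [random.random() for _ in range(3)]
    total = sum(raw)
    agree += winner([r / total for r in raw]) == expert_top

print(agree / trials)
```

Whatever the fraction turns out to be for a given matrix, the exercise shows why comparing a method against random weights is a meaningful control.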
Dear Nolberto Munier, AHP was a historic event and will be in the textbooks as an early concept of MCDM. I think that for the evolution of AI to the expert level, AHP techniques will be useful as a first step toward understanding the general issues. Regarding alternatives to AHP, you can read this questionnaire:
Effectively, AHP may be better than nothing, but this is restricted to personal and trivial corporate problems, not to real-world, serious scenarios. You can't decide the fate of a problem simply by intuition; again, this could be done only for personal matters. It is like trying to determine the value of the hypotenuse of a right triangle by intuition, instead of using the Pythagorean theorem.
Remember, AHP was developed by Saaty when he was working for the military, perhaps as an answer to some questions from them, and it was correct, because the military worked, or used to work, in a linear hierarchy -- but the real world does not work in that way. The real world at the time of Saaty's creation, in the 50s, had already adopted, since the beginning of the 20th century, the linear hierarchy used for thousands of years by the military; consequently, AHP fit like a glove.
But since then, everything has changed. Any book on industrial organization will tell you how corporate and industrial structures were forced to change, driven by the rapid evolution of the world after WWII.
The world evolved in many aspects, including new social structures and demands, and scientific advances in communications, computers, health, etc., so that the linear or military structure was no longer representative, as it once was. Organizations changed to new structures such as linear, functional, line-and-staff, project-based and matrix. Even Saaty recognized this by creating ANP, which is network-based.
Consequently, the linear hierarchical structure became obsolete, and new tools were needed to analyze the new ones.
Your example of armored carriers confirms what I said. Remember that when you applied pair-wise comparisons you used experts who knew what they were talking about; they were guided not by intuition but by experience. They could reason about and support what they said, something that does not happen in AHP or in ANP.
Just to finish this long answer: you probably know that in WWII Russia won the war against the German invasion in part because the country was able to organize and move its immense resources, thanks to Leonid Kantorovich, who created Linear Programming to optimize resources; this took place around 1939. He was later awarded the Nobel Prize for this creation. That is not exactly a task to be solved with intuition and pair-wise comparisons.
Dear Nolberto Munier, thank you for your expanded answer. We have a consensus that an expert guided by experience is better than one guided only by intuition. The next step along this line will be an expert with knowledge: a human with an AI decision-maker system, or, in the future, AI alone.
If I understand you correctly, only experts with intuitions are used in AHP or ANP? And if they are guided by experience and can reason about and support what they say, then it is no longer AHP or ANP? Which MCDM is more adequate for experts at that level?
The only entity that can validly express a preference is the person who is making the decision. Unless an AI device can read the decision maker's mind, it cannot produce a valid result. Furthermore, the preference must be clear and distinct to the decision maker. If it is not, then the decision maker is somehow being led astray (probably by the software or other method being used). If it is clear and distinct, then the decision maker doesn't need AHP or any other such method. These conclusions are a simple consequence of the mathematics of preferences. We don't have to guess about these things. The theory is well developed with a history spanning 300 years. The earlier work is good, the later work goes astray.
The purpose of a rating method is to assign a "score", R, to each alternative such that the alternatives are ranked by the scores they receive. Thus, if A is better than B, then R(A)>R(B). And, if A and B are equivalent, A~B, R(A)=R(B). Focus on this equality. AHP fails to provide a rating that assures R(A)=R(B) in the case of indifference between A and B. It's that simple. From this condition alone, it is easy to show that the weights assigned in a typical MCDM algorithm have NOTHING to do with their importance. Rather, they define an indifference surface.
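The indifference condition above can be made concrete with a tiny weighted-sum sketch in Python; the weights and criterion scores are invented so that the two alternatives land on the same indifference surface:

```python
# A weighted-sum rating R(x) = sum(w_i * x_i); the weights are invented.
w = [0.5, 0.3, 0.2]

def R(x):
    return sum(wi * xi for wi, xi in zip(w, x))

A = [0.8, 0.4, 0.5]
B = [0.6, 0.6, 0.7]  # constructed so that R(A) == R(B)

# Both score 0.62, so A ~ B: they lie on the same indifference surface
# w . x = 0.62 even though every criterion value differs. The weights
# encode trade-off rates along that surface, not "importance".
print(round(R(A), 6), round(R(B), 6))  # -> 0.62 0.62
```

Raising a weight does not make a criterion "more important"; it changes the rate at which the decision maker is willing to trade that criterion against the others while staying indifferent.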
I fully agree with George Hazelrigg regarding the purpose of a rating method. I think that AI as an expert can, in a few cases, assign a "score" better than humans. That will be the basis for a more adequate choice and decision-making, particularly for personal use, military applications, etc. As an example, China has introduced social experiments that rank people by "scores" with the help of Big Data and some AI algorithms. This trend will expand to other countries.
This is not a process of assigning scores, it's a matter of stating preferences and converting them to a numerical score. Preferences belong to individuals, not groups. It's generally not possible to rank outcomes for a group (Arrow's Impossibility Theorem) and, indeed, such rankings most often do not exist (they violate transitivity). Preference orderings exist (only--transitivity again) for individuals. AI is not an expert on my preferences, only I am that. That's why AI cannot do it correctly. The validity test for a scoring method is to see whether it rank orders ALL outcomes in precisely the order as the decision maker. If not, the method is not valid.
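The group-intransitivity point is easy to demonstrate with the classic three-voter Condorcet cycle (voters and alternatives invented for illustration): each voter's individual ranking is transitive, yet the majority preference is cyclic, so no group ranking exists:

```python
# Classic Condorcet cycle: three voters, three alternatives (invented).
voters = [
    ["A", "B", "C"],  # each list is one voter's transitive ranking, best first
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

# A beats B, B beats C, yet C beats A: the group preference is cyclic,
# so no transitive group ranking exists (Arrow/Condorcet).
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```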
The integration of AHP and TOPSIS is not new, and as far as I know it is not a trend.
In reality, it worsens the performance of an excellent method such as TOPSIS, because it introduces subjective weights derived from the first (AHP) stage, thereby corrupting the original data.
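To see how imported weights drive the outcome, here is a minimal TOPSIS sketch in Python (benefit criteria only; the decision matrix and both weight vectors are invented). Swapping the weight vector reverses the ranking, which is exactly the sensitivity being criticized:

```python
import math

def topsis(X, w):
    """Minimal TOPSIS for benefit criteria: X is alternatives x criteria."""
    ncols = len(X[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(ncols)]
    V = [[w[j] * row[j] / norms[j] for j in range(ncols)] for row in X]
    ideal = [max(col) for col in zip(*V)]  # positive-ideal point
    anti = [min(col) for col in zip(*V)]   # negative-ideal point
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    # Closeness coefficient: higher means nearer the ideal
    return [dist(v, anti) / (dist(v, ideal) + dist(v, anti)) for v in V]

X = [[7.0, 3.0],
     [3.0, 7.0]]

print(topsis(X, [0.8, 0.2]))  # alternative 1 ranks first
print(topsis(X, [0.2, 0.8]))  # same data, swapped weights: alternative 2 wins
```

With the raw data unchanged, the ranking is decided entirely by wherever the weights came from, which is why the provenance of an AHP-derived weight vector matters.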
Regarding your comment about 'renewing AHP', sorry, I don't share your opinion. AHP is structurally as well as mathematically faulty, and so there is no cosmetic that can improve it.
I have not read the article you mention, therefore, I can't make any comment on it.
Regarding AI and MCDM, I don't know, but I think that in previous discussions George Hazelrigg and I expressed our doubts about it, and also gave our reasons for those doubts. Most notable was George's remark that AI would have to be able to read the minds of the DMs.
Thank you, Nolberto Munier! Your comment was very important. In any case, we can give AI technologies some chance to evolve current DM concepts, particularly for personal and management tasks.
In my opinion your idea might have some future, but not by making comparisons. It appears that for your idea to work we would need thousands of results from different DMs on the same problem, which is impracticable.
In addition, I share George's comment, which poses a real concern. Do we need an updated 1984 (George Orwell)?
Dear Nolberto Munier and George Hazelrigg, I think that you are very pessimistic. The last point in the DM process will be a human wherever that is possible.
Do you think that AHP, an MCDM method, cares for people?
If you do, please explain to me why a person may decide on the wellbeing of thousands of persons, ignoring what they want and think -- something that Arrow called 'dictatorship', and that I would call 'subjugation' of people, by denying them their rights.
I also had the opportunity to discuss this same issue with Saaty, years ago.
Out of respect for his memory, I prefer not to repeat his answer.
AHP is a method for eliciting preferences (e.g., I like vanilla ice cream better than pumpkin ice cream). I don't see AI telling me what I like. Secondly, in order that preferences exist in the sense that they can be well ordered, they must be clear and distinct, i.e., deterministic. And that means you don't need AHP to tell you what they are. If you don't know your preference between A and B, it's because you've been asked to express a preference over something you don't directly care about. E.g., which of three metals would I prefer for the camshaft of my car? I don't know and I don't care. I care about how my car performs and its reliability and lifetime. To get from reliability to the choice of metals takes a discipline-specific model. But, the people who were touting decision theory didn't want to become disciplinarians, so they invented a scheme to avoid the disciplinary stuff--AHP (and other such methods). The only problem is that these methods don't yield correct results because they are flawed at their core. And since they don't reflect the (result of the) correct preference, they aren't useful in guiding such choices. In fact, they can very easily lead us to make rather poor decisions.
As to Nolberto's question: people care for people; software cares for no one. Second question: I contend that people always make decisions based on what they want. I don't see how it could be any other way. Maybe I want to help others as much as I can. Then I make altruistic decisions. Bottom line, I'm making decisions based on my preferences, no one else's. Try to think of an example where you made a choice that opposes your preference. I cannot think of any such that I know of, or even how I could do it.
On a different note, would you consider designing a system, say a rocket, based on the notion that F=ma/2? I doubt it because you know it will fail if you did. Well, AHP is equally flawed, so why would anyone want to use it?
I have never seen a collection of sentences that in so few words expresses so many truths!
‘you've been asked to express a preference over something you don't directly care about'.
Any doubt about this?
AHP makes comparisons under a unique goal or objective. What if the DM does not agree with it, or thinks that it is not relevant? He will be making comparisons between two criteria regarding something that he does not know or disagrees with.
Is this logical?
‘But, the people who were touting decision theory didn't want to become disciplinarians, so they invented a scheme to avoid the disciplinary stuff—AHP’
Exactly.
Now, regarding criteria, the DM is making decisions about something he may have no knowledge of.
'People care for people; software cares for no one'
And this is the reason why, for subjective issues -- such as, for instance, building a highway that will partition a city in two -- the DM NEEDS the opinion of the people; the DM is NOBODY to express his preference on this, since he does not know the people's needs or the problems that said project may cause.
‘ I'm making decisions based on my preferences, no one else's’
Based on my 35 years experience in government, I would suggest that the person who has to decide about a highway will indeed base the decision on his/her own preference, and that preference is likely to be, "I want to keep my job." Government employees do indeed have other preferences and may use them in certain cases. Leaders may choose to protect the nation, leave a legacy, cure a disease, emphasize science. E.g., in funding research, I wanted to get as much benefit for the nation as I could. But, any time there was any question about it, my key preference was to keep my job.
I think you are misinterpreting what I am saying. I never said the people aren't honest, most are very honest. I never said they don't have to justify what they do. All these things are necessary to keep one's job. What keeping your job requires is doing the "safe" things a bit more than taking risks. It requires compromise more than having your own way. Real preferences are rather straight forward and clear. Mapping preferences into decisions is the real problem. In decision theory and optimization, all uncertainty relates to prediction of the outcomes of decisions. Preferences and choices are deterministic.
Probably I misinterpreted you. In my book, a person who works for somebody and, being afraid of being fired, accepts what his boss says against his own principles, is not honest. Of course an employee has a commitment to his employer, but that does not mean being converted into a yes-man.
Real preferences are straightforward?
In my opinion you can map your preferences into decisions, and we do that every day. The problem is trying to apply those preferences to other people.
AI and AHP don't do the same thing. AHP is (allegedly) about "discovering" your preferences; AI is about intelligence, whatever that means. They are not the same thing. In any event, why would you need a mathematical procedure that is incorrect?
The purpose of AHP is to assist you in the elicitation of your preferences. For example, suppose you are trying to select an ice cream flavor for an ice cream cone. Please explain how AI would help you to make this selection.
Take this example one step further. Suppose there is a list of ice cream flavors from which you may choose. Tell me how you would choose the one you don't want. I would contend that the only way to do this is to have a preference that overrides your desire for the flavor you like. E.g., "I want to punish myself." Hence, in choosing the one you "don't want," you are actually choosing the one you want most. This is why decision making is always optimization.
Dear George Hazelrigg and Nolberto Munier, I agree that this example is good, but you used a very easy approach. In this example, an AI expert can make a very useful contribution to DM based on big-data analysis of the user's preference history, the user's health, price comparisons in nearby stores, the ice cream's production date (storage time), the ingredients and calories in one ice cream, the manufacturer's name, the manufacturer's country, etc. Such a large amount of information can help in making a decision. But this is just ice cream; for more important purposes, AI experts can provide even more benefits.
Whether it's ice cream or the International Space Station, preferences are simple and straight forward, and they belong to an individual. In fact, the validity test of a preference is whether it ranks outcomes precisely the same as the decision maker. Please explain how you would check the validity of an AI-produced preference if not by asking the decision maker. And, if you ask the decision maker, why would you need AI? Note that the decision maker can change his/her mind at will. I might not want the same flavor ice cream every time. I vary from day to day. How would AI know what I want today? But I know.
We must recognize that George's comparisons always hit the nail on the head.
The classical question is the validity of the theory: you can only validate it by comparing its result to what the individual wants. But if you knew this, what would you need AI for? Why bother if you already have the answer?
This is similar to people saying that they were able to validate a result in MCDM, which would mean they can compare their result with the 'true' result.
However, if you knew that true result, what would you need MCDM for?
Thank you, George Hazelrigg and Nolberto Munier, for continuing this discussion.
In my opinion, the use of personal preferences is not possible in certain areas, such as the government and defence sectors, because it would lead to corruption. In this regard, implementing AI elements in the decision-making process will be productive.
Please explain how you could use someone else's preference. You would only do so because you want to, and that would mean you are acting on your own preference. This is a big problem in systems engineering: how should you incentivize engineers to make their decisions based on a common preference rather than strictly their own preference, and what is the net benefit of doing so? In my experience in industry, the engineers were going with their own preferences much to the disbenefit of their employer. And, in 35 years with the federal government, I saw the same behavior. Altruists are what they want to be--it's their preference.
Remember that the military has an incentive system to get troops to obey orders: they punish you, somewhere between busting your rank and execution. In Russia during WW-II, troops that didn't obey orders were summarily shot. That's an incentive such that your effective preference is to follow orders. Deep down, however, incentives don't change preferences, rather they offer alternatives whose outcomes are preferred under your existing preferences. E.g., in the Russian case, the soldiers preferred not getting shot over the outcome of not obeying orders (namely, getting shot). Preferences are quite stable, which is why design to preference is preferred to design to requirements. Requirements are quite unstable, making design goals a moving target and failing to account properly for uncertainty. Many years ago, I asked the CEO of Boeing, "What is your preference?" He responded immediately, "To make money, more is better." It would seem that this preference hasn't changed over the past half century.
Software often allows us to make mistakes much faster than we could manually. In the case of preferences, however, it's considerably more ridiculous. A preference is something known only to the person whose preference it is (unless that person tells others). Asking AI to tell me what my preference is is equivalent to asking a computer to read my mind. Suppose I had a preference order over 5 flavors of ice cream. There are 5!=120 possible rankings of which mine is one. Ergo, the computer has about a 0.833% chance of getting it right...unless it could read my mind. Of course, I could tell the computer what my preference is. Then it could tell me, in return, what my preference is. Does this make sense?
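The arithmetic in that last paragraph can be checked in two lines (the five flavors stand in for any five alternatives):

```python
from math import factorial

flavors = 5
rankings = factorial(flavors)  # strict preference orderings over 5 flavors
print(rankings)                # 120
print(f"{1 / rankings:.3%}")   # 0.833% -- the blind-guess success rate
```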