An expert system is a special kind of software that operates on a knowledge base. It is designed for a specific purpose, reasons logically like a human, and can work much faster than a human.
This question has been around for almost 30 years, the entire time I've been involved in the industry, and it has remained the panacea and 'holy grail' that whole time. Expert systems, when limited in their area of expertise, can perform exceptionally well. Even IBM's famous Watson is only as good as the data loaded into it.
Do I want a robot/AI operating on me, as suggested by Mohammad Bhuiyan? Absolutely not. The robot/AI is only as good as the information loaded into it. If a difficulty is encountered during surgery, will it be one the AI has "seen" before, i.e. one that is in its database? If it is, things might turn out alright. If it isn't, what surgeon's "intuition" will it fall back on to infer that problem A is caused by B? Are human surgeons always correct and foolproof? No. But would you trust code *you* wrote with delicate eye surgery, knowing that one tiny logic error would be enough to lose your sight forever?
I think expert systems can replace humans. For example, a robot surgeon cannot make a wrong move during a critical operation, because every relevant rule is loaded in its memory. A human being, on the other hand, faces limitations: he can get confused, can be tired, his hands can shake, etc.
It might, and it might be more efficient and more cost-effective, but it will not be able to practice fairness and justice the way a good human being can!
Yes, for low-risk activities an expert system might replace humans. It is, however, my strong personal belief that expert systems will never be able to completely replace skilled "workers", e.g. a surgeon or a pilot. Jerry stated the reasons: the programming might be insufficient, and the programming might be faulty.
@Marc, yes, programming might be faulty; you are absolutely right. But do you believe that humans are not faulty? We can rectify programs, but how would you rectify a person? Correcting a program doesn't take much time, but upgrading a human takes a great deal of time and money. Next: do you believe that even the most capable person can work as fast as a machine, or do a job 24/7 without any problem? And the final truth: humans die, but software never dies. It is immortal and replicable, which means you can make software as efficient as you like. So, for expert jobs, I don't think a human can hold out against expert systems.
While expert systems are extremely helpful for precision and efficiency, and can be cost-effective, they can never replace a human being's work. Our work involves a lot of communication, compassion, and relationships that are humanistic in nature.
What does an expert system do, in essence? It scores some function against entries in a database and makes decisions using a predefined algorithm based on the calculated scores. It can be much more knowledgeable than an individual human expert because of its database capacity, and AI will probably be able to replace human experts in most situations. The major difference from humans is that a sane, stable AI cannot and should not improvise or make responsible moral decisions when there is not enough information. I strongly believe that we will be able to develop AI that can make such moral choices. But nobody will entrust such decisions to an AI unless it is considered human. So this is not a question of AI development but of porting humans to a different hardware platform. For example, combat robotic systems are currently prohibited from making a kill decision unless it is approved by a human operator, despite the radical decrease in combat efficiency this causes.
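The scoring-and-decision loop described above can be sketched in a few lines. This is a minimal illustration, not any particular system; the facts, rule conditions, and weights are all invented:

```python
# Minimal sketch of an expert system's decision loop: a rule base is
# matched against the known facts, and the "predefined algorithm" is
# simply to rank the satisfied conclusions by score. All rules and
# weights here are hypothetical.

RULES = [
    # (required facts, conclusion, weight)
    ({"fever", "cough"}, "flu", 0.7),
    ({"fever", "rash"}, "measles", 0.8),
    ({"cough"}, "common cold", 0.4),
]

def decide(facts):
    """Score every rule whose conditions are satisfied; rank conclusions."""
    scores = {}
    for conditions, conclusion, weight in RULES:
        if conditions <= facts:  # all required facts are present
            scores[conclusion] = max(scores.get(conclusion, 0.0), weight)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(decide({"fever", "cough"}))
# → [('flu', 0.7), ('common cold', 0.4)]
```

The point of the sketch is that the system's "intelligence" lives entirely in the database of rules: with more facts than any single expert could hold, the same trivial loop can outperform an individual, but it has nothing to fall back on when no rule matches.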
So AI will always be only an adviser to a human decision maker in all problems related to human life and wellness. There is one problem, though: if AI becomes responsible for all decisions where there is enough information, human experts will only make decisions when the information is insufficient, with a much smaller chance of success. So in a critical situation they might panic or simply fail to make a decision at all. There are alarming examples of such behavior among pilots when the autopilot requires human assistance.
@Rizvi: Certainly not. Expert systems are a supplement to experts, not a replacement. They will be useful to some extent, but blindly following an expert system's decision is not advisable. The interpretation of experts, i.e. the involvement of experts, is absolutely necessary for any solid decision, I strongly believe.
Dear Dr. Rizvi, most of us are of the view that replacement is not possible. However, it occurs to me: why not continue our debate on the question "Can it be equivalent to a human, or a substitute for a human?"
Everything has some limitations, and that is quite natural; otherwise we would get the best possible thing on the first attempt, which is not realistic. We live within limitations, and whenever we move one step ahead, it is called an invention. So we should consider this a new path to success.
I will give here some example(s) in detail soon to support my point.
Qaim - yes everything has limitations, no question about that. An automobile shouldn't be used to cross a lake, and a boat shouldn't be used to cross dry land. Each is an 'expert' in their own area.
We are now over 40 years into the era of 'expert systems' (http://en.wikipedia.org/wiki/Expert_systems), and the more we learn, the less we understand. This is not to say that things won't get better, but do not expect 2001's HAL or Asimov's "I, Robot" anytime soon.
We are simply hesitant to accept the ability of expert systems, and the other issue is faith: we fear expert systems even though we have very positive results. Here is the story of one of the oldest expert systems, which in practice was never used:
MYCIN was developed over five or six years in the early 1970s at Stanford University. It was written in Lisp as the doctoral dissertation of Edward Shortliffe under the direction of Bruce Buchanan, Stanley N. Cohen, and others. It arose in the laboratory that had created the earlier Dendral expert system.
MYCIN operated using a fairly simple inference engine, and a knowledge base of ~600 rules. It would query the physician running the program via a long series of simple yes/no or textual questions. At the end, it provided a list of possible culprit bacteria ranked from high to low based on the probability of each diagnosis, its confidence in each diagnosis' probability, the reasoning behind each diagnosis (that is, MYCIN would also list the questions and rules which led it to rank a diagnosis a particular way), and its recommended course of drug treatment.
MYCIN was never actually used in practice. This wasn't because of any weakness in its performance; in tests it outperformed members of the Stanford medical school. It was as much because of ethical and legal issues related to the use of computers in medicine; if it gives the wrong diagnosis, who can be held responsible? Issues with whether human experts would find it acceptable to use arose as well.
Research indicated that it proposed an acceptable therapy in about 69% of cases, which was better than the performance of infectious disease experts who were judged using the same criteria.
One possible explanation of the apparent ability of MYCIN-like systems to produce reasonable results, in spite of all their weaknesses, is that they can be tuned, by adding and deleting rules and by changing rule weights, together with the emphasis on comparative rather than quantitative results.
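MYCIN's ranking rested on certainty factors: each fired rule contributes a weight, and multiple rules supporting the same hypothesis are combined so that evidence accumulates but never exceeds full certainty. A minimal sketch of the combination rule for positive factors (the rule weights here are invented for illustration):

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors:
    the second rule closes part of the remaining gap to full certainty,
    so the result grows monotonically but never exceeds 1.0."""
    return cf1 + cf2 * (1 - cf1)

# Two hypothetical rules both support the same culprit organism:
cf = combine_cf(0.6, 0.4)   # 0.6 + 0.4 * (1 - 0.6) = 0.76
print(round(cf, 2))          # → 0.76
```

Tuning a system like this, as described above, amounts to adding or deleting rules and adjusting these weights until the comparative ranking of diagnoses looks right, rather than chasing exact probabilities.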
Expert systems are good for areas where knowledge can be formalized easily and reasoning under uncertainty is required. They can make superior decisions in such areas compared to humans, although I agree that they cannot absolve us of the ultimate moral responsibility.
Actually, it's a two-way road: expert systems can help domain experts formalize their information better, and to some extent avoid the problem of "unknown unknowns". This reminds me of an example from the book "AI: A Modern Approach": an experiment with the PATHFINDER diagnostic system from the 1980s failed to produce a valid diagnosis in 10% of cases because the experts had assigned 0% probability to an unlikely, but possible, event.
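The PATHFINDER pitfall follows directly from Bayes' rule: a prior of exactly zero can never be revised upward, no matter how strong the evidence. A small numeric illustration (all probabilities invented):

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) by Bayes' rule for a binary hypothesis H."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# A tiny but nonzero prior can still be updated by strong evidence:
print(posterior(0.01, 0.9, 0.1))  # ≈ 0.083
# A prior of exactly 0 stays 0 forever, whatever the evidence:
print(posterior(0.0, 0.9, 0.1))   # → 0.0
```

This is why elicitation practice favors assigning rare events a very small probability rather than zero: the expert's "impossible" becomes the system's blind spot.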
Dr. Coyle, none of your examples require reasoning under uncertainty, therefore they are not well suited for expert systems in the first place. The point of a probabilistic reasoning-based expert system is not so much to conclusively "solve" a problem, but to emulate the behavior of a rational, but resource-bounded agent. One cannot reasonably expect to "solve" the problem of e.g. driving a car in live traffic.
Like other man-made systems, expert systems exist to help humans in areas where human performance is inefficient, or at least considerably so; they are no exception to this rule and generally work alongside human experts. As Atis stated, they help experts formalize their knowledge, which can help overcome the difficulty of tacit-to-explicit knowledge conversion. But when it comes to understanding the domain knowledge and systematizing the procedure for applying that knowledge to upgrade the system, there is no replacing a human: this is an exclusively human capability and needs a human mind.
The canonical 'expert system' paradigm can, in restricted areas of knowledge, equal or outperform a human. This is surely no longer a point of contention. I suggest, furthermore, that another truism is that the wider and less well-defined the field, the less easily the problem can be captured and automated.
I used to work as a trajectory analyst for an aerospace firm. We had a TSTO vehicle modelled, and the computer (a microVax II running F77) could trivially find better flight paths than the seasoned aeronautical engineers. And this was 20 years ago.
Would I want an automated eye surgeon? Yes.
Just not yet.
Maybe in a few years time.
I'd certainly want a Google car right now if I had the choice when undertaking a cross-country journey.
As to the wider question of whether a human-made system can outperform a human?
Of course it can.
I can't poke atoms - but an AFM can.
I can play chess but my phone plays it better.
We are slow fragile meat and operate in very narrow regions of environmental conditions.
Our creations will be faster and better in any number of ways than we.
We should embrace this opportunity to give rise to new life, allowing it to recognize its heritage, share our wisdom and treat us with compassion.
Expert systems are now widely regarded as obsolete, as they are widely believed to have failed in their main purpose of mimicking human expertise. In construction they have not achieved significant success and have been all but abandoned. I tried to track whether new ES efforts are underway, specifically in construction and civil engineering, but found none over the past few years, which is very indicative of a paradigm shift away from ESs. Nevertheless, my personal belief is that ESs ought to have been the natural extension of any automation effort in any field, and in my PhD work I am trying to suggest a new role for them.
In areas of knowledge and experience where there is a huge amount of information, or where it is hard to find direct connections, expert systems (e.g. based on data mining) may in the future be a useful source of "second opinions" (an ES may simply be quicker than a human). But a human expert rarely makes mistakes, even when using not only knowledge and experience but also professional intuition, and is still much better than current computational intelligence.
If we look back to when we first started hearing about computers (the 1970s), books were rare and hands-on practice was difficult (due to the lack of computing machines), but our teachers and books always reminded us: "the computer is a slave."
May I remind you that the combined autonomous car fleet of Google has racked up over 500,000km of driving? On open public roads, and on private tracks. Only two accidents have been reported, one when the car was being manually driven, the other when the car was rear-ended.
I don't drive this well. Few people do.
You may also find that the majority of airline crashes arise from either mechanical failure, or more probably, human error. Rarely is it a bug in software. Naturally, human experts don't make mistakes often - this is surely a good working definition of 'expert'. But already we rely heavily on autonomous systems to route Net traffic, airplanes, trains, electricity, gas, etc. This trend will continue - of that I am sure - and accelerate.
To imagine that humans will continue to be the pinnacle of creativity and intelligence is hubris of the worst kind. And rather than a hackneyed 'master/slave' relationship can we not instead look to a mutually supportive role?
As a clinician, I have written about the clinical decision-making process under conditions of limited and incomplete knowledge, continuous observation, and many constraints, where the skills, experience, and intuition of the responsible specialist play a significant role. Every case/patient may be different and without a previous similar case, especially in patient-tailored therapy. If we have (almost) complete information in near-routine situations, such flexible automation (as in car driving) will be enough, but I am aware that in cases that are not fully understood, a human specialist will always be responsible for the ultimate decision.
Some human actions, once triggered and already in progress, are sometimes quickly re-judged and reversed mid-course. This situation is difficult to simulate in a machine.
A general note to all, may I advise a little caution in one's pronouncements.
Saying that something *cannot* be done is very different from saying that something is presently not possible, has no obvious path to a solution, and is unlikely in the short term to be achieved.
We presently have no understanding of how a synthetic intelligence (with or without emotions) might be built.
Is it somehow ruled out by the present understanding of physics?
No.
Obviously not.
I would have expected a more optimistic 'can do' attitude from folk here on ResearchGate. Aren't engineers and scientists trying to do the seemingly impossible? Aren't we a forward-looking species?
Don't we look to overcome obstacles?
Or do we forget the pronouncements of earlier years that heavier-than-air flight was *obviously* impossible - and many similar statements?
For those of you who are reading this thread and want to go beyond the level of superficial "a machine will never be able to do this or that" level of comments, this Quora topic offers great insights: http://www.quora.com/Why-is-machine-learning-not-more-widely-used-for-medical-diagnosis
The replies, which are based on first-hand experiences, show that the real-world adoption of automated medical diagnosis systems is hampered not so much by technical problems, but by politics, traditions, moral and legal issues, and the amount of initial work required to introduce any new practices.
Interesting question and replies. Many seem to favour human analysis instead of a computer (ES). A question that can be asked is how well our own brains work.
As James Garry states, computers might be better in performing tasks than humans.
In clinical practice (physical therapy) it is not uncommon for the therapist to make choices on an emotional or empathic basis rather than a logical one. It can be questioned whether that leads to the best result for the patient. (It is my opinion that making more 'rational' choices is often better....)
Our deep fear however is that a machine will stick to logical choices even when there is this single exception, where man would have made another choice. I wonder whether there is any literature/study on a comparison between human/machine choice and see which gets the best score?
I agree with Jan-Paul... indeed, the machine will stick to the choices it is programmed for, and there will be no room to maneuver around this... which sometimes might bring a disastrous end!
To answer the original question: Sure they can, especially if the data-need is much larger than a person's personal memory. And in particular for what they can "sense" on their own, which is mostly quantifiable, discrete data points, classifications and numerically specified values, probabilities, etc.
So medical diagnoses and tests, the finding of legal precedents, figuring out what is wrong with an engine or vehicle, perhaps chemical testing of items.
The problem requires humans when novel things or ambiguities are introduced. Is a new "investment" in a new kind of business a deceptive con game or an actual new opportunity? Is Uber an employer of drivers that owes the government 3 billion in employment taxes, or is it a marketplace for independent drivers to find passengers (and vice versa), so that it does not owe the government any employment taxes?
It still takes humans to figure out whether some things are dangerous, or to anticipate the future implications of new situations, or often to react (like a surgeon) to novel or unexpected conditions outside the normal domain. Humans are still the champion generalists.
Machines can never replace human clinical skill in diagnosis, as it requires several soft skills used simultaneously, highly individualised to suit a variety of patients. It is always one human being interacting with another human being, never with an unthinking machine; 90% of a diagnosis comes from individualised interrogation and physical examination, and investigations are done only to tie up the loose ends.
What is intriguing is not the distrust in machines, but the trust in human decision making. Do we really think we are flawless?
As for input needed to base decisions on: we assume we need a lot of information, but how about advanced machines that may be able to analyse our body, bone structure, metabolic and neurophysiological systems within seconds?
Such comprehensive systems don't exist yet, but I assume we can wait for them.