I would like to understand the core difference in intelligence between machines and humans in the near future. Many claim machines will become more intelligent than humans. If so, on what basis?
One of the major issues in cyber security is the unintended side effects and risks-by-mistake that arise after a security model/method/strategy is enforced.
I am trying to understand the abilities and limitations of AI so that we may better manage or engage in scenarios created by machines and humans together.
I would intuitively say that, as with feelings, there are no bounds to intelligence, so it is difficult to "cluster" it in order to say that it is different. At its core, the comparison could be made by resources - being able to do more with the same resources, e.g., intelligence per cubic meter of space... :) In the end, intelligence could be seen as the ability to jointly and efficiently apply various computational approaches while consuming as few resources as possible (including time):
* https://www.w3.org/community/kiss/2017/04/24/five-schools-of-thought-to-build-knowledge-driven-systems/
AI may start to beat humans in the efficient use of corresponding algorithms and resources. But then, what prevents humans from redefining themselves, e.g., amplifying and extending their cognitive capabilities?... :) My guess is that it would be difficult to separate us from our product - the AI - as it would be constantly slipping away from us. So, it is somewhat a rhetorical question.
The question "What are (or going to be) the main differences between AI and Human Intelligence?" is a rather complex one. It is fundamentally a teleological question, that is, one about the ultimate purpose/end associated with them. Dr. Kralj's reply underlines the importance of such a teleological consideration. Nevertheless, the initial question can be reduced to two subordinate questions: 'What will (and will not) artificial intelligence research be able to reproduce from human intelligence?' and 'What will artificial intelligence research be able to produce if it does not follow the principles and manifestations of human intelligence?' As far as the first question is concerned, I tend to agree with those who claim that a fully fledged implementation of anything that is deeply abstract, emotional, and/or spiritual may prove rather challenging, and may remain unsolved. The last part of Dr. Todorovic's reply calls attention to this (at least in my reading). The second question needs a deep philosophical speculation or discussion in its own right.
From a different angle, namely a computational viewpoint, I believe what differentiates natural intelligence from any currently known or conceivable implementation of artificial intelligence is its strongly stratified and, nevertheless, interwoven-across-strata nature. As a concise explanation of the latter, let me offer this: the fact of the matter is that human intelligence is not one thing. It is accepted by many researchers that human intelligence manifests itself in four different forms, which constitute the intelligence strata: (i) the problem-solving and behavioral intelligence of human individuals, (ii) the specialized collective intelligence of human (professional) teams/groups, (iii) the aggregated (social and cultural) intelligence of human communities/populations, and (iv) the all-embracing intelligence of mankind. In relation to a different topic/question discussed on RG, I argued that if we want to talk about the relationship between natural intelligence and artificial intelligence, we have to take into consideration not only stratum (manifestation) (i), but also strata (ii), (iii) and (iv). From a computational perspective, the last three manifestations are hardly considered or investigated in the literature.
Some interesting quotes about this subject can be found at https://www.cs.cmu.edu/~eugene/quotes/ai.html, including: "We call programs intelligent if they exhibit behaviors that would be regarded intelligent if they were exhibited by human beings" (Herbert Simon)
Technologies enable computing devices to be connected and embedded in everyday objects, such as smartphones, thermostats, and internet refrigerators, which perform a set of given pre-programmed actions. By integrating artificial intelligence (AI) into these devices, real-time decisions from AI and automation from the IoT will eliminate human-to-human or human-to-computer interaction. As a result, AI has become a popular topic again.
Ref: https://www.wired.com/insights/2014/11/iot-wont-work-without-artificial-intelligence/
Jiwan also mentions issues with applying AI in cybersecurity models. I think a hybrid system is a better approach, combining AI with human intelligence that can re-adjust (override) actions if required. Ensuring machines think like humans is a challenging proposition.
The difference is simple: artificial intelligence is basic in its foundations and cannot lead to an understanding of existence... human intelligence has the capacity to understand the universe, to grasp existence, and even to destroy itself.
Human intelligence, in its highest manifestations, is a gift of synthesis, combination, and imagination. The best chess-playing programs have proved that computers are better than humans at exploiting synthesis and combination skills. Imagination is the last separation line, where humans still prevail over AI.
Aleš Kralj mentioned above that physical experimentation is a barrier in exploring imagination processes, whereas humans are naturally adapted to do it.
Any AI apparatus or system will be able to perform non-intelligent tasks better than any human. Its advantages will be speed, the absence of fatigue, blind obedience to orders given by algorithms, and so forth. But it will be unable to understand any joke, because this is an exclusively human attribute. Please refer to Weizenbaum's book:
https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason
I posted the same text in another (very similar) question; I'm doing it again because it is an unusual but thought-provoking source:
This may be an unusual source, but there is a video game called "The Turing Test" where you, as the player, interact and talk to a machine and, among other things, talk about creativity. I will post the original video game conversation:
Player: "To get to the next level I have to be creative."
Machine (called T.O.M.): "Well, I contend that problem solving is creativity."
Player: "I don't see how problem solving is creative."
Machine: "Think back to the beginning of these tests. It required you to throw a box through a totally closed window, something that is not possible in real life. I simply had never thought to throw a box just through a closed window. That is creativity. Thinking outside of the box."
Player: "Can a computer ever be creative?"
Machine: "They can. But a computer’s method of creativity is to try everything until something works. Think of Nature. People consider Nature as creative. The process of evolution by natural selection. It perhaps started with one organism. From there it essentially tried to create every organism it could. Those organisms that did not survive perished. So nature’s creative force is to try every conceivable idea. Those ideas that work, survive."
Player: "If you weren't restricted, do you think you could be creative?"
Machine: "As creative as a human? Certainly. You believe yourself to be creative, but in mathematical terms creativity is merely constrained chaos. I have discerned that creativity is divergent thinking. Creating an organic solution to a problem. In the human mind divergent thoughts are created and then curated by the frontal lobe. I can create divergent thoughts and moderate them. So I am creative."
Player: "Organic solutions?"
Machine: "Organic in that it is developed through a biological process. Whether that is the process of evolution or a computed process. Creativity is logic."
Quite an interesting conversation for a video game, isn't it?
0- AI does not focus on Intelligence (unmeasurable, anonymous or emergent, unbounded) but rather on intelligent processes: these are processes whose realization would require some Intelligence. Intelligence itself would emerge from suitable interactions among basic intelligent processes such as memorization, learning, perception, reasoning, attention, ... also called cognitive processes.
1- No one except AI itself has ever tried to precisely define and replicate intelligent processes. E.g., if I asked what an (intelligent) spoken dialog system is, I would get thousands of answers, all useless for replication. In this section, I only invite people not to confuse aspects of Intelligence with a definition of Intelligence.
2- At least for now, the only scientific way to evaluate intelligent processes is through behavior.
3- Human-scale intelligent processes are just one particular class that AI aims at. There is not only Human Intelligence, and Human Intelligence would certainly not be the highest.
4- AI theories of intelligent processes were for decades more predictive than explanatory, in the sense that only the behavior is relevant for scientific evaluation, no matter how the behavior emerges. But because of the limitations of the approaches leading to these predictive theories, and since explanatory theories under a deterministic framework are also predictive, an attachment to the explanatory side as a route to behavior has been growing and can be noticed in AI's new architectures. Consequently, AI with this explanatory power, Psychology, the Neurosciences and others become in some way complementary.
5- As for the science-fiction idea that machines will become more intelligent than humans, I insist that this view remains science fiction.
I recommend the book 'Shadows of the Mind' by Roger Penrose, who has studied this specific question in some depth. It is difficult to paraphrase the whole book, but it would be an essential starting point for anyone seriously interested in the difference between human and AI intelligence.
Dear Jiwan,
AI is man-made, while our intelligence is not, and we do not really understand what it is. If we did, maybe we could design an AI equivalent, but that is far from certain. We don't even remotely understand our own intelligence. So answering your question
"the main differences between AI and Human Intelligence?" is impossible: it assumes that we understand our own intelligence, and we don't.
The question is:
„What are (or going to be) the main differences between AI and Human Intelligence?”
This question has two implicit questions:
- „What are the main differences between AI and Human Intelligence?” This question was discussed in depth previously, so I would like to deal with the second one here.
- „What are going to be the main differences between AI and Human Intelligence?” This question has one answer: the question is not relevant. Why? Because there will be no human intelligence, only AI.
I have a laconism:
The biological evolution is replaced by the artificial evolution.
Justification:
A. AI is not similar to the atom bomb, because it will have its own intelligence (consciousness, aims, wants, etc.), so humans cannot limit AI with rules, laws, etc. In the first stage (100 years?) organizations of force (police, military) may work, but their time is limited.
B. Andrei Lobov says that humans can improve themselves. Yes. Purely biological methods, like gene modification, are limited and need time to take effect; biological changes take place much more slowly than artificial ones. The other possibility for improving humans is the integration of limited AI with the human body. That results in a cyborg that will have fewer and fewer biological and more and more artificial parts as time goes on. So the biological part will disappear. (Another aspect is that a cyborg is not a true human.)
Thus, if I consider a longer time period (200+ years):
The biological evolution will be replaced by the artificial evolution.
Maybe this change will take place peacefully, and the AI will keep humans in reserves, or in a situation similar to that between humans and cats now (and you don't consider a cat intelligent, do you?). We can't imagine the IQ level of future AI; they will communicate and think at the speed of light…
Please excuse interpreting the “nearest future” so freely.
László,
What you call justifications are assumptions that are the same as what they are supposed to prove.
I do not see any evidence for the assumption, very popular in science fiction, that machines are the next step in the evolution of life on this planet. I see more chance that humans destroy themselves and leave their place to insects than to machines. All forms of life are at least self-evolving, while machines are not self-evolving; like all our tools, they are made by us. I know that in science fiction, at some point there is a miraculous glitch, and one machine magically evolves a will of its own and rebels, frees itself, etc. A totally anthropomorphic tale in which we project ourselves into machines! A return to animism, but a machine animism. We have all seen the plot a thousand times. I have never watched a movie where the miracle is explained. Since I am not a believer in miracles, or in the machine-animist ideology, I need to be convinced by arguments other than the inevitability of this miracle. This futuristic tale is similar to the ancient tale of the rebellion of humans against their creator: a transformation of the oldest myths into a modern tale.
I suggest keeping rigorous science apart from science fiction... The latter should be discussed elsewhere ... Transcendental thinking will not solve anything ...
Dear Ales,
I think that the evolution of life on this planet is mostly a cultural one, the cultural evolution of humanity. It has proceeded so far for hundreds of thousands of years without any significant biological modifications since the first Homo sapiens sapiens walked this earth; but this evolution has been characterized by significant cultural modifications, and I think this trend will remain so for quite a while: our evolution will be a cultural one.
Machines, and among them Universal Machines (or computers), have had major impacts on the type of culture we live in and will continue to do so. The advances in communication interfaces have been major cultural events. It has been so since we started chipping rocks to make them useful for cutting, started expressing ourselves with words, inscribed them on material, and began expressing ourselves through material modifications. I am not worried that our tools will take over. There are no ghosts in our machines. I am worried when people confuse us with our tools, though!
You assume that we are machines. Would you assume that an electron is a machine? I don't think it is one, nor do I assume that anything in Nature that is not a man-made machine is a machine. What is this machinistic assumption? On what basis is everything assumed to be a machine? I think this modern myth is related to confusing an explanation with the reality it is pointing to.
I find it very interesting to keep Nicholas Negroponte's 1960s/1970s Architecture Machine work in mind with regard to machine intelligence. He was developing some of the first computerised design tools, and early on came to the view that if a machine is to help design (rather than just be an "idiot place draftsman", which is a good description of contemporary design programs...) then it will need to be intelligent in some sense, as it will have to recognise context and deal with missing information (design questions are characterised by ill-definition; indeed designers are suspicious of well-defined problems, as they may hide further issues, or problem solving may create new problems elsewhere because of interdependencies that are not obvious). Negroponte's strategy was to develop partnerships between machines and designers.
One of his books contains a chapter by the cybernetician Gordon Pask, who acted as a consultant for his research group. Pask's chapter disputes the moniker "artificial". He defines intelligence as "a property that is ascribed by an external observer to a conversation between participants if, and only if, their dialogue manifests understanding". Notice how this shifts from a concern with the qualities of agents/technologies individually to a concern with their relationships/interactions, a move which reveals the important difference between intelligence and mere automation (one can then bring this back to internal conversations, e.g. to whether one's dialogue with oneself manifests understanding).
Soft Architecture Machines / Nicholas Negroponte
All programs of behavior are innate in both cases, but the main difference is this: in artificial systems these programs are stored explicitly, while in humans and animals they are stored implicitly.
Dear Louis,
Thank you very much for the reflection. I am especially grateful for mentioning that there is "more chance that humans destroy themselves". My previous - in my opinion moderately provocative - remark is closely connected with today's happenings.
Yes, my justifications are assumptions. But strong assumptions.
For A: AI is intelligent by definition; the second letter of the acronym means exactly this. The level of today's artificial intelligence may be questionable, but the tendency of AI's improvement relative to the growth of human capabilities is ascending (e.g., today human chess grandmasters lose against chess computers, and humans lose against deep learning algorithms, as happened in the case of Go; moreover, the deep learning algorithm is not a dedicated learning algorithm but a general one). Yes, my opinion that this ascending tendency will not change is an assumption. In our thinking almost everything is based on assumptions and beliefs. Our scientific certainty is based on the amount and strength of previous experiments, on the certainty and probability of successful repeatability. But the truth of my assumption is strongly supported by the history of science.
For justification B I can say the same. The growth of a new, genetically modified human generation needs about 20 years. In contrast, the improvement of a robot (AI) can be made in years (by humans, or later by the robots themselves). Yes, this is also an assumption, but consider that we have to learn a new operating system every five years...
Concerning the remark "while machines are not self-evolving but are like all our tools, made by us", I have to mention that I discussed the second implicit original question, so I dealt with situations in the future.
You wrote: "I do not see any evidence for the assumption, very popular in science fiction, that machines are the next step in the evolution of life on this planet." I checked the Internet for such popular science fiction and found somebody who holds an opinion similar to mine:
“Once humans develop artificial intelligence it could take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
This somebody is Stephen Hawking (http://www.globalfuturist.org/2017/01/norwegian-robot-learns-to-self-evolve-and-3d-print-itself-in-the-lab/).
I have held an opinion for about five years similar to this: "Robots (AI) will believe in a sort of God." But this is off-topic and I did not want to justify it here.
I hope the chips remain in the computers and humanity will live for thousands of years.
Dear Jiwan Ninglekhu
No truly good or useful definition of "intelligence" is at all divorced from ADAPTED-NESS (how adaptive behaviors (PATTERNS of them) are with respect to the environment and for action IN the environment). Thus, you should always clearly be thinking that way (in such terms). (If humans go extinct -- which is not at all unlikely -- then they are not intelligent at all, overall.)
P.S. Your question seems much related to some of mine:
https://www.researchgate.net/post/How_would_an_AI_robot_with_all_useful_human_abilities_and_human_capacities_differ_from_a_real_human_and_how_need_it_not_differ
and
https://www.researchgate.net/post/Since_I_have_had_to_add_a_lot_of_behavioral_specifications_I_am_compelled_to_ask_How_bad_is_true_full_artificial_intelligence_today_how_is_it_bad
Also see:
https://www.researchgate.net/post/Will_AI_people_successfully_simulate_a_continuously-learning_developing_human_before_psychologists
and
https://www.researchgate.net/post/What_could_be_the_identifiable-and-definable_components_of_Operational_AI
--------------------------------------
Dear Louis Brassard
I would like you to specify some particular ways "culture" impacts an individual: each cultural 'cause' as an extremely well-defined, directly observed, particularly observed proximate cause which is then clearly and invariably followed by definite, extremely well-defined, directly observed aspect(s) of the behavior of each and all humans (as actually occurs -- and the ONLY way it can actually occur -- ONE HUMAN AT A TIME). EMPIRICISM ITSELF relies on the ability to at least specify some KEY proximate cause(s) and effects for everything, using the proper unit of analysis: the individual. All the rest, to me, is sloppy thinking, and a lot of the sloppy thinking seems needless -- at any stage in the development of our thought ****.
It seems to me that when "cultural influences" are roughly observed or posited, the individual human vanishes as a clear unit. BUT THE INDIVIDUAL ** IS ** THE BIOLOGICAL UNIT, i.e. ** THE ** UNIT (PERIOD) (and behavior is completely of the biological unit, completely consistent with its biology and its environment, particularly and specifically and totally to some understandable degree -- or the "game" is over, and "the game" is lost). THE INDIVIDUAL IS THE UNIT FOR ALL BEHAVIOR SPECIFICATION AND ALL OF PSYCHOLOGY -- if psychology be well-defined (which it is NOT in several areas).
**** FOOTNOTE: While I say "the rest is sloppy thinking" and a lot of it is "needless", I am indicating that not all 'sloppy thinking' is needless: I do understand that at some points in our understanding, the best we can do is point to a sort of proximate factor (and responses) we have not yet specifically discovered. There, "pointing to them" may be the best one can do -- but it still should be clear we are INDEED crucially looking for proximate causes and their direct effects, BOTH involving the individual human.
First, feeling is a main factor in human intelligence, so we have different solutions for the same problem.
Second, humans have a huge, complex, parallel network of wisdom.
Finally, generalisation and scalability are advantages of human intelligence.
I think you need to start by defining WHAT intelligence IS before you try to find differences between intelligences. For my part, I usually define intelligence as the conscious ability to appraise one's environment, whatever that may be, and to adjust one's behavior to ensure survival under the environmental conditions of the moment. If you look at it from that perspective, if a machine is capable of doing that, then it is an intelligent machine, and there would be no difference between AI and human intelligence. After all, the physical matrix in which an intelligent entity arises (be it carbon-based, iron-based, or silicon-based) really does not matter. What matters is the behavior it displays. For an interesting take on this point, I would suggest watching the episode "The Measure of a Man" from the Star Trek: The Next Generation TV series. Cheers, Vanine. P.S. Yep, proud trekkie. :-D
Dear Gloriam,
None of them will give you such a definition. And if one were given, the definition would be somehow completely useless. I already mentioned this in my post, and I keep insisting on behavior as what matters.
When it comes to the literature, you always get nice definitions with nice concepts. But when it comes to serious work, the proposed definitions suddenly make no sense.
I gave a simple example with a dialog system. Only by trying to design and replicate a process that would require intelligence can you start giving meaningful definitions.
I guess I would want to ask: what theory allows THE DATA and context (and helpful concepts) to guide an AI person to _DO_ FULL ACTUAL ARTIFICIAL INTELLIGENCE? (Obviously this would address the issue of intelligence.)
Here (LINKED TO BELOW) is an answer (a thorough, complete answer, with high utility -- because it addresses the question of human development and learning from A STRICTLY EMPIRICAL PERSPECTIVE, with everything (thanks to modern technology) TESTABLE (verifiable)). IT IS JUST THIS KIND OF EMPIRICAL UNDERSTANDING OF BEHAVIOR THAT WOULD BE THE KIND THAT COULD BE TRANSLATED INTO PROGRAMMING:
How can good, true empirical psychology, alone, make it more than plausible (and very likely) that FULL, true artificial intelligence is possible?:
https://www.researchgate.net/post/How_can_good_true_empirical_psychology_alone_make_it_more_than_plausible_and_very_likely_that_FULL_true_artificial_intellegence_is_possible
Dear Dibakar Pal
You can build emotions into AI. They are relatively simple (though some emerge only with development, for example shame and guilt -- you also have to build the progressive hierarchical types of learning into AI, and these developments yield the "secondary emotions", along with new ways of thinking); emotions are also rather highly patterned; they are variable in people (somewhat in nature and in amount). BUT they do have a typical TYPE of adaptive function, aiding in proper response (e.g. surprise, joy, anger, fear, even guilt), so they should be there in the AI robot, and I do not see why they couldn't be.
Conscience and repentance involve reflectivity (thinking about your thinking, or thinking about what you have done); an AI robot would have to have reflectivity to properly learn and develop. Conscience and repentance also typically involve emotion; again, no big deal.
See: http://atlasofemotions.org/#states:anger ETC.
Dear Dibakar Pal
You would like me to indicate "what is the difference between AI & its creator man"; this is something I do not know and cannot fully imagine. BUT the AI robot would be programmed not to BE exactly like a human (with errors, mistakes, and irrationality) but to HAVE all the capacities and abilities OF a human; it should be quite instructive for us to see and learn from that.
Dear Matthieu Vergne
I assume you were not addressing me, because nothing I imagine "comes from movies" (it was not expressly clear that anyone's view came from movies -- unless I missed something).
I take your statement, "one should first start from a basis, especially how do you define intelligence", as a positive (optimistic) reference to my views.
One should "define" only based on clear observations and after much research (mostly observational); little but basic assumptions and a general orientation are needed BEFORE -- unfortunately, "Western man" loves definitions in advance, but in MANY, if not MOST, ways this is improper. I like to say the subject matter (observed behaviors and corresponding clear environmental aspects) should DO ALL THE DEFINING FOR US (certainly for the most part). In good classical ethology, it is very, very clear how this is true.
Dear Matthieu Vergne
All the points in your last post are well-taken. Thanks for the thoughtful response.
Dear Jiwan Ninglekhu
In order to understand the differences between human and machine intelligence, we need to understand their relationships and how both rely on and interact with one another.
Essentially, machines are created to solve or support tasks that are deemed menial, trivial, difficult for humans to do, etc. For example, collecting and sorting a huge amount of data is something machines thrive on, while humans are good at making sense of this information for certain needs or problems. In a way, humans and machines need each other, and understanding this relationship may highlight the importance of machine intelligence for humans and vice versa.
Maybe, to answer your question, we need to understand the composition of 'intelligence'. By understanding this composition, you can determine which aspects are associated with humans as well as with machines. These aspects may demonstrate the strengths, weaknesses, or threats for both machine and human.
(Another extreme direction that may instigate a new perspective is to explore the possibility for machines to exist without humans, and why such a situation could happen.)
I would like to thank everyone for your time and thoughtful comments.
Long story short: AI works with predefined learning methods (parametric or not) for a single purpose, calibrated with existing data and possibly re-calibrated over time. AI is particularly effective for solving problems with a large number of parameters. Human intelligence is not really task-specific; data is not stored reliably in our heads, and our decisions are dictated by our expectations (priors), past experiences (somewhat like data), common sense (norms), and current state of mind (usually a source of error).
@Franklin Kenghagho Kenfack, the definition I gave is MY definition of intelligence. Ultimately, all ineffable concepts that we humans try to measure are dependent on our own perceptions. We create in our minds a model of the universe. Obviously it is workable enough that we are surviving in it, but it is still just a model. Take the old psych question: "If a tree falls in the forest without anybody to hear it, is there a sound?" The answer is no, because sound is a construct of the human brain. If the question had been "A tree fell in the forest without anybody there. Was the air compressed?", the answer is yes: the compression of air is independent of the presence of any being. How that air compression is perceived varies with the being sensing it. So, you see, intelligence is defined by us through the behaviors we display. A New Yorker in the Sahara is as dumb as a door. A bushman in NYC is likewise completely helpless. Are they stupid? Well, it really depends on how fast they adapt to their new environment and are able to survive in it. That is my measurable, objective outcome of intelligence. I am sure others will have different ones. Which one is more valid? It depends on what each considers important. To me, the day a robot can differentiate the emotional quality between a commercial and a Star Trek episode, it will have ceased to be a robot and will have acquired sentient-being status. ;)
Alencar Xavier: The learning methods can be learnt, updated, modified, and improved by the AI, so they are not fully predefined.
Matthieu Vergne: If we can imagine different learning methods, and we can imagine different learning methods that are very similar to one another, then we can use a genetic algorithm in the AIs to randomly modify the existing learning algorithm; the "life" of the AI will then prove whether this modified - mutated - algorithm is more useful and advantageous or not. The selection - the contest - between AIs having different learning algorithms will result in the evolution of the learning methods. Of course, this needs many variants and a long time. But evolution is not a fast process. (It is similar to genetic programming.) Evolution - natural or artificial - needs only two things: variation and selection.
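As an illustration only, the variation-and-selection loop described above can be sketched in a few lines of Python. Everything here is a made-up assumption for the sketch (the toy regression task, treating a learning rate and its decay as the "learning method", the population size); it is not anyone's actual system:

```python
import random

random.seed(0)

# Toy target task: learn w, b such that y = w*x + b fits y = 2x + 1.
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

def train_and_score(method, steps=50):
    """Run gradient descent using the given 'learning method'
    (hyperparameters) and return the final mean squared error."""
    w, b = 0.0, 0.0
    lr = method["lr"]
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in DATA) / len(DATA)
        grad_b = sum(2 * (w * x + b - y) for x, y in DATA) / len(DATA)
        w -= lr * grad_w
        b -= lr * grad_b
        lr *= method["decay"]  # the method also controls how lr decays
    return sum((w * x + b - y) ** 2 for x, y in DATA) / len(DATA)

def mutate(method):
    """Variation: randomly perturb the hyperparameters of a method."""
    return {
        "lr": max(1e-4, method["lr"] * random.uniform(0.5, 1.5)),
        "decay": min(1.0, max(0.5, method["decay"] + random.uniform(-0.05, 0.05))),
    }

# Initial population of random learning methods.
population = [{"lr": random.uniform(1e-4, 0.05), "decay": random.uniform(0.8, 1.0)}
              for _ in range(10)]

for generation in range(20):
    # Selection: keep the half of the methods with the lowest error...
    population.sort(key=train_and_score)
    survivors = population[:5]
    # ...and refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = min(population, key=train_and_score)
print("best method:", best, "error:", train_and_score(best))
```

The point of the sketch is only that nothing beyond variation (mutate) and selection (keeping the lowest-error methods) is needed for the learning method itself to improve over generations.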
László Dudás, how would you envision a mutation from a population of learning methods? (that is a genuine question, not a critique).
I may be making this about ML, but some MCMC-based models and Bayesian model averaging already exploit the idea of transiting the learning process across various models, priors, sampling spaces, model assumptions, learning properties, ensemble learners, etc. As long as the set of parameters and (somewhat) the loss function remain the same, the models will walk towards entropy and not improve much beyond that. In addition, the machinery will be task-specific.
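The model-transition idea Alencar mentions can be illustrated with a toy Bayesian-model-averaging sketch (entirely my own illustration; the candidate "models" are fixed point predictions standing in for fitted learners): each model is weighted by its likelihood on observed data, so the averaged prediction shifts smoothly between models instead of committing to one.

```python
import math

observations = [2.9, 3.1, 3.0, 2.8, 3.2]

# Candidate "models": point predictions standing in for fitted learners.
models = {"low": 2.0, "mid": 3.0, "high": 4.0}

def log_likelihood(pred, data, sigma=0.5):
    """Gaussian log-likelihood of the data under a point prediction."""
    return sum(
        -0.5 * ((x - pred) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        for x in data
    )

# Posterior model weights (uniform prior), via a numerically stable softmax.
lls = {name: log_likelihood(p, observations) for name, p in models.items()}
m = max(lls.values())
weights = {name: math.exp(ll - m) for name, ll in lls.items()}
z = sum(weights.values())
weights = {name: w / z for name, w in weights.items()}

# Model-averaged prediction: dominated by the best-supported model,
# but never a hard commitment to it.
bma_prediction = sum(weights[name] * models[name] for name in models)
```

This also illustrates Alencar's caveat: the averaging only moves mass among the models you put in the set, so the machinery stays task-specific.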
Intelligence is (or is supposed to be) much more than (machine) learning, even in the case of artefactual systems that are equipped with this capability.
It is really simple: if a full account of behaviors (including behavioral development, learnings, changes in learnings, processes, changes in processes -- all the words about behavior/behavior change you like) is obtained through a completely empirical process, finding the clear, concrete aspects of the environment corresponding to each behavior and finding directly observable proximate causes of all behavior (response)/process change, THEN wouldn't this be exactly the same complete information needed to do full true AI? See https://www.researchgate.net/post/Can_someone_summarize_the_ethological_view_on_human_behavior
for a glimpse*. OF COURSE IT WOULD BE; just be logical and rational. Either you believe such a full empirical account is obtainable or you don't, and accordingly you either believe true full AI is possible or you do not. This is necessarily true for an empiricist (and don't forget: everything need not be "done" at once when reproducing all human behavior/behavior change; and, for some relief, think: proof-of-concept).
* FOOTNOTE: Recall that we now have new eye-tracking technology, etc., and can "see" more -- even, perhaps (LIKELY), the subtlest behaviors, aspects of the environment, and responses (though we have not yet really tried, obviously).
[ Rant removed; my apologies. ]
Maybe it is time to open a new track in this discussion? A more human-related discussion?
We are far from love/emotions/survival/... that are at the heart of mankind... the movie HER was interesting about how love can be afforded... Real Humans too... maybe we should try the experience of the movie Chappie: being transferred into a robot body and mind to know more?
What I like in science fiction is that authors often guess some part of the future... remember Jules Verne, for example?
regards, Eric
PS: Brad, I like your comment; it suggests a thought: do robots see what we see? Horses don't see exactly what we see, for example.
Is this true? Human intelligence will use artificial intelligence to create smart means (tools, systems, etc.) that increase general human potential (including perceptive, cognitive, etc., capabilities) and human quality of life. If so, it implies an irreversible relationship.
Dear Imre,
Before giving an answer to your question, I would like to clarify something:
Artificial Intelligence is a subfield of Computer Science. Computer Science is not about TOOLS; it is first a Science. Making tools is the Engineering (Software and Computer Engineering) based on Computer Science.
Now the answer to your question is NO ("If so, it implies an irreversible relationship"), and it does not only apply to Artificial Intelligence but rather to SCIENCE in general; the principle is stated as follows:
- Human Intelligence extends SCIENCE
- SCIENCE extends TECHNOLOGY
- TECHNOLOGY extends Human Intelligence
In the end, we get a CLOSED relationship between Human Intelligence and Technology.
The difference between AI Technology and other Technologies is this: AI Technology would be more active and take more initiative in the process of increasing Human Intelligence.
Dear Franklin,
Thanks for stating your position. I fully agree with the distinction between computer science and computer/software/knowledge engineering. In my view too, though they play different roles, they are inseparable in the overall process of exploring and utilizing (new) knowledge (also in the context of artificial intelligence). I think the transitive loop you sketched makes a lot of sense (and underpins the inseparability issue I mentioned above). This entails that I also second your last statement: the affordances of (problem-solving-oriented) AI are richer than those of conventional technologies, which do not penetrate so deep into the cognitive domain.
Best regards,
Imre .
Dear Imre,
I thank you for your reply. I agree with the inseparability too.
Best regards,
Franklin
Dear Franklin,
Yes, AI is part of the Computer Science department in many universities. They chose that name for many reasons that have nothing to do with the subject but everything to do with the realities of universities. Computing science is obviously a domain of engineering. Computers do not grow on trees; we make them. We make software. In computer science, what is taught are engineering techniques for making software for all kinds of systems. And like all forms of engineering, computer science makes use of all kinds of mathematics: probability, statistics, graph theory, matrices, linear algebra, algorithm analysis, Turing machines, logic, Boolean algebra, complexity theory, game theory, etc. This is a type of engineering. Calling it that would not have been so glorious, and most universities did not call it so. Engineering schools have created their departments of computer engineering and software engineering, and they teach the same stuff as the independent schools of Computer Science. Software is a type of tool, and software nowadays is not designed from scratch but made using toolboxes, huge software-design toolboxes. The word "design" is not used in science but is used in engineering and in all kinds of other domains of practice.
‘’ - Human Intelligence extends SCIENCE’’
You got it in reverse: SCIENCE extends Human Intelligence, as do all products of human intelligence.
'' -SCIENCE extends TECHNOLOGY'' The relationship between technology and science is bidirectional: better technology allows better science, and better science allows better technology.
'' -TECHNOLOGY extends Human Intelligence'' YES.
Dear Louis,
The definition of Computer Science is a very complex matter. What you present here is a very SHALLOW VIEW of what Computer Science actually is. Though Computer Science is not naively the Science of Computers, as the name may suggest, COMPUTER SCIENCE IS A SCIENCE.
Lorand,
Minsky and his fellow AI pioneers thought of making machine intelligence; their optimistic predictions failed, and their theories are not that useful and are now forgotten. In the same period, Douglas Engelbart instead thought about how computers could be made useful to our thinking, to make us more intelligent, and he came to focus on the importance of the interface. His legacy is with all of us today.
Intelligence should help people and society, whether through human or machine interventions.
https://www.partnershiponai.org/#s-goals
Human intelligence is something natural; no artificiality is involved in it. In every field, intelligence is perceived differently and acquired differently. More specifically, human intelligence is related to the adaptation of various cognitive processes to a specific environment. Intelligence gives humans all they need to cogitate and make a step-by-step plan for performing a given task. It is a natural blessing that humans have from birth, and no one can replace it except GOD.
Artificial intelligence is designed to add human-like qualities to robotic machines. Its major function is to make robots a good mimic of human beings; in short, it is basically working to make robots good copiers of humans. Researchers are busy nowadays trying to build a mind that can behave like a human mind. Weak AI is thinking focused on the development of technology capable of carrying out pre-planned moves based on rules and applying them to achieve a certain goal. Strong AI is an emerging technology meant to think and function just like humans, not merely imitating human behavior in a certain area.
Major differences surely lie in:
---Human intelligence is analogue, working in the form of signals; artificial intelligence is digital, working mainly in the form of numbers.
---Humans use their schema and content memory, whereas AI uses built-in memory designed by scientists.
---There is a hardware/software distinction in machines, but no such division in the working human mind; human intelligence is not based on that split.
---The human brain has a body, but an AI "brain" has none.
---Last but most important, human intelligence is bigger, while artificial intelligence, as the name suggests, is artificial, little and temporary.
---HI is reliable whereas AI is not, although some argue that humans make more mistakes than AI.
http://researchpedia.info/difference-between-artificial-intelligence-and-human-intelligence/
Is AI able (or will it be able) to fully capture and reconstruct human consciousness and unconsciousness as they manifest in human beings and communities? If you believe yes, why do we need it?
At best, AI can equal the intelligence of those who conceived the algorithm and wrote the software for the operation of the machine. Human intelligence has no bounds.
Dear Imre,
The biggest problem of most people is the lack of knowledge of what COMPUTER SCIENCE and its subfield ARTIFICIAL INTELLIGENCE are all about. That is the biggest problem. On the other hand, there are a lot of misinterpretations, no real philosophical considerations, almost no questioning.
If you just consider that those disciplines are first Sciences before offering Engineering, a lot of sub-questions and opinions will disappear from this wall.
When you read the opinions and positions here, you have the impression that people think AI is something static that was defined a priori.
AI, as part of Computer Science, is first of all an empirical Science. Knowledge discovery in AI, as in any other science, is an ongoing process. We try to come up with models that are close enough to our perceptible Reality.
Once AI as a Science has provided a good model, we try, as part of Engineering, to apply it in order to produce technologies (robots, language translators, self-driving cars, ...).
Exactly as the term COMPUTER SCIENCE has deceived a lot of people about its very meaning, the term ARTIFICIAL INTELLIGENCE is doing the same.
I think we should not be discussing the WORD but rather the CONCEPT. To get the CONCEPT, people have to study Artificial Intelligence in every sense, either scientifically or philosophically.
If I ask : WHAT IS THE DIFFERENCE BETWEEN SCIENCE AND REALITY?
Let us assume that a particular Science focuses on a phenomenon it calls Intelligence, and that this phenomenon has a particular support, say a Human Being.
1. That Science calls Intelligence a phenomenon because Intelligence is objectively perceptible.
2. That phenomenon of Intelligence has causes.
3. The system [phenomenon, causes] is a black box.
4. The phenomenon is the perceptible part of the system and is consequently the BEHAVIOR: Intelligence is behavioral Information (a relational concept; its meaning depends on observers).
5. That Science can face the phenomenon in three different ways with a particular LANGUAGE:
a- Through a descriptive theory: just tells what you perceive
b- Through an explanatory theory: just tells the causes
c- Through a predictive theory: just tells how to replicate
6. For any deterministic mechanism, the same Inputs or causes lead to the same Outputs or consequences (phenomena); however, the same Outputs or Phenomena do not always result from the same Inputs or causes, except for deterministic bijective mechanisms. Since we are dealing with an empirical Science, there is no way to know whether the underlying mechanism is bijective or not. Consequently, we cannot semantically equate the Inputs/causes and the Outputs/consequences/phenomena of the mechanism.
People should therefore stop with the following reasoning:
- Human Intelligence results from A
- Artificial Intelligence results from B
- Though AI's descriptive and predictive theories are confirmed, Human Intelligence differs from Artificial Intelligence because A differs from B. And that is even assuming that AI has anything to do with Intelligence.
Artificial Intelligence does not refer to Intelligence as we may think. "Intelligence" in AI refers to Cognition. Intelligence itself is just behavioral information; AI has become much more Computational Cognition: AI is no longer only about descriptive and predictive theory, but also about explanatory theory.
Now to answer the question about consciousness and unconsciousness:
Since Consciousness and Unconsciousness are also part of cognition, they too are under the focus of Artificial Intelligence, though research in these fields is not yet advanced enough. But let us see, since Science is an ongoing process.
Dear Franklin,
''The biggest problem of most people is the lack of knowledge of what COMPUTER SCIENCE and its subfield ARTIFICIAL INTELLIGENCE are all about. That is the biggest problem.'' It is certainly a problem that people outside of that field have. The biggest problem of people in that field is their more-than-average lack of general knowledge, which makes them believe their field can solve more problems than it can, and which prevents them from understanding criticism of their field by outsiders.
''When you read opinions, positions, you have the impression that people think that AI is something static and that it was a-priori defined.''
AI is one of the technological domains most popular with average people, because of science fiction and because of the cell-phone gadgets associated with it. Most people, even those not understanding this field, have the naive impression that it advances faster than others, while it does not. Hype, hype, hype has always been the landmark signature of AI. The first AI laboratory, created by Minsky, was based on hype: construction of AI in 25 years. That was said 59 years ago.
None. So far, AI programs, such as those built into Deep Blue, are not really intelligent on their own. Intelligence lies in the mind of the programmer and the huge computational power of the machine. We will have AI approaching human intelligence when it turns into an artificial mind with general artificial intelligence, capable of learning from data and experience. This is what human general intelligence allows: abstracting patterns, aligning them, and re-encoding them whenever needed to learn a new concept or solve a new problem. Machine learning based on Bayesian principles is already a big step in this direction. When we have enough computational power to let a machine (a robot) learn and think like a 5-, a 9-, a 13- or a 20-year-old human, then we are there: we have machines thinking just like humans. Of course, these may be used and connected, expanding human problem-solving and decision-making capabilities. In this latter case, I am not sure we can separate what is physical from what is artificial intelligence. I am aware of Stephen Hawking's and Elon Musk's concerns that when we get there, humanity may be in danger of being overtaken by machine intelligence, but I am more optimistic: they are enhancers rather than a danger of scrapping the human species!
Dr. Demetriou, I like very much your formulation that in the context of AI "Intelligence lies in the mind of the programmer and (what matters is) the huge computational power of the machine". Regards, I.H.
the main difference will always be the price of altruism... how one weighs 'self vs. alien vs. group'... closely aligned with the higher-order-logic domain of research?...
Dear Andreas Demetriou
Professor, I agree with you. But a big problem is how to come to rightly understand the following (as the related capabilities unfold with ontogeny):
(1) One big question: how is it, essentially, that we are "capable to learn from data and experience"? I submit that we (today's researchers/theorists) do not understand most learnings, and this is the situation because ultimately we do not understand the relevant key proximate observables in 'experience' -- BUT UNDERSTANDING THE LATTER DIRECTLY, EMPIRICALLY-BEHAVIORALLY, will help us or allow us to understand all the sorts of learnings (and then also to truly understand more about 'experience'). (We are long past the point where we should talk about 'learning' and 'reinforcement' as if the essential definitions of these are always obvious at any point in human development, WHEN THEY MOST CERTAINLY ARE _NOT_.)
(2) True, both humans and robots must [develop so as to] "abstract patterns, align them, and re-encode them whenever needed to learn a new concept". BUT I submit that we should not cogitate and cogitate and cogitate and thus "divine" what models are used -- yet this is precisely what we always do. What we are doing now is this cogitation, using our definitions and our notions of systems, and divining models -- which we then try to use in robots, and they work very poorly. (We are doing nothing with the comparable "sense" of what is done in other biological investigations!) The answer is to find the beginnings (a key set of behavior-patterns-and-corresponding-concrete-environmental-aspects for each stage/level) from which we can trace/"track" the paths in the behavior patterns (and from clear aspects of the environment) that the actual organism takes in coming to be "capable to learn from data and experience" _AND_ to "abstract patterns, align them, and re-encode them whenever needed to learn a new concept" [though I believe the sequence (experience-abstraction) is somewhat the other way around, in some real sense].
Let me present my view and refer you to a main paper ('attached' at the bottom), and to related Projects: First my overall statement:
What is really central in real thinking (its development)? I say: special and especially important PROXIMATE causes that are, at necessary times (points in development (ontogeny)), observable. ("Observable" both to the Subject and to the scientist. )
I submit that the real CORE (beginnings and THE BASES) of THINKING (itself) are certain (or a certain type of) PROXIMATE CAUSES and that, now with new eye-tracking technology, etc., these major directly observable proximate causes can be found with real-time study. THOSE THAT ARE ESPECIALLY IMPORTANT, during key points ("stages") in development (ages 1-18 y.o. +) (ontogeny): in rather "quick order" being obviously KEY in resulting (and realizing) new ways of categorizing and new ways to understand causation -- much of the point of THINKING. These would not only be proximate causes in the sense of something (here: environmental-aspects-and-associated-behavior-patterns) preceding something, that is, behavior[-pattern] change, BUT also in playing a distinct role in changing the nature of learning (actually: representation, memory, and learning). Thus, the great importance of likely then-OBSERVABLE (at that point in ontogeny) (via eye-tracking): perceptual/attentional shifts (indicated as much as possible in the major paper, "A Human Ethogram...") that usher in each new stage/level of representation (with memory changes) and new learnings, and soon shown through and/or with problem-solving
Any form of life has some consciousness and thus some form of intelligence, as demonstrated by its successful interactions. Nothing that is not living, thus not conscious, has intelligence. Inanimate objects cannot be intelligent. We can design very interesting action-reaction devices, such as calculators, that ease our lives and do operations we call intelligent when we do them ourselves; but since they do nothing and are merely reacting devices, they cannot do intelligent things. To do something, you need a self doing it. When there is nobody home, there is no intelligence. Reacting devices are reacting devices. No matter how cleverly we design them, the device is no more clever than a rock reacting to an impact on it. The designers of AI gadgets are clever, but not their gadgets. Even if a gadget has some adaptive action-reaction capabilities, that too has been entered into it, and it is just another level of action-reaction.
Here is something that I have intuitively felt to be correct for sometime (since 1990), and if I can get some spare time would like to prove formally ...
"Intelligence can never be instantiated; it must be acquired over time".
This says that a required element of truly being an intelligent agent in an environment is the prerequisite of having developed in that environment. The process of experiencing is as important as the experience itself. If correct, this may be a limiting factor on the speed at which we can automate and speed up intelligence. We can build great AI infants, but they must necessarily "live and learn" in the environment in which they are to survive if they are to be truly intelligent and considered intelligent. And this will always take time. Like c, the speed of light, there is a limiting constant on the speed of intelligence development.
.. Danny Silver
Dear Daniel,
1-
I agree with you only if we talk about general cognition. When philosophizing, it appears that the only way to reach general cognition is to build a complete artificial life that will grow up, experience, adapt to and transform its environments.
2-
However, if it is about showing Intelligent Behaviors in one particular environment, then the goal has been reached, and DEEP LEARNING has confirmed it. The problem with DEEP LEARNING MODELS is that they are not flexible enough to allow cross-environment learning.
3-
I don't like the general way the word Intelligence is used, because it is too vague to be a concept.
* AI does not focus on Intelligence but rather on Cognitive processes and processes involving Cognition
* I prefer the use of the word Intelligence when it is well situated and defined by:
- the Agent
- the function to compute
- the objective function to measure the optimality of the solution and judge the intelligence of the agent
- the environment
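This four-part framing (agent, function to compute, objective function, environment) can be sketched as a minimal interface; everything below is my own illustration, not an established API, and the "function" is deliberately trivial.

```python
class Environment:
    """Presents inputs and holds the ground truth used to score the agent."""
    def __init__(self, cases):
        self.cases = cases  # list of (input, expected output) pairs

class Agent:
    """Computes a function; here, a simple doubling rule."""
    def act(self, x):
        return 2 * x

def objective(agent, env):
    """The objective function: fraction of the environment's cases solved,
    i.e. the measure used to judge the intelligence of the agent."""
    solved = sum(1 for x, y in env.cases if agent.act(x) == y)
    return solved / len(env.cases)

# The agent is only "intelligent" relative to this environment and objective.
env = Environment([(1, 2), (2, 4), (3, 6), (4, 9)])
score = objective(Agent(), env)  # 3 of the 4 cases match the doubling rule
```

The point of situating the word this way is visible even in the toy: change the environment's cases or the objective, and the same agent's "intelligence" changes with them.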
* At the beginning, AI was more engineering; the ultimate goal was to show intelligent behaviors no matter how things worked internally: AI theories were more predictive.
* However, the Community quickly noticed that this approach would not lead anywhere. More serious science (explanatory models built by observing nature) was needed. That is how AI came to focus deeply on cognitive processes. AI is now more computational Cognition.
* The PARADOX now in pursuing general or true Intelligence is that IT DOES NOT GUARANTEE INTELLIGENCE, BECAUSE THE APPROACH DELEGATES MORE AND MORE "FREE WILL" TO THE AGENT. This reinforces the interest of AI in Cognition rather than in Intelligence.
4- Finally, since it often comes up in general discussions about AI: COMPUTATION SPEED DOES NOT INCREASE COMPUTATION POWER (the ability to compute a function).
Best regards,
Franklin
Intelligence and machines? Can I add a bit of whimsy:
Can a machine tell truth from falsehood and isn't that our limit too?
as Mark Twain said:
It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.
Wouldn't that taint any hope of an AI which could incorporate or access all that we know into its memory?
We live in a world where a great deal of information that we think is true is wrong. Here are just a few examples of well-known "facts" that are false or very likely to be overthrown:
"Over 90% of sex offenders repeat their crimes": it is closer to 20%, according to Scientific American.
"Coral bleaching is due to global warming": it is far more likely that corals need a pristine environment to survive and that human sewage or agricultural runoff is the culprit.
"The tar in cigarettes causes cancer": it is just as likely that the problem is all the metals in the tobacco and paper being injected into the bloodstream, providing "fuel" and increasing the risk of more rapidly reproducing cancer cells.
"Eating cholesterol causes arterial plaque": I would bet on the bacteria that cause dental plaque, which have been found inside arterial plaques.
Black holes, dark matter, dark energy, etc., even gluons, are not testable in the lab, have no real application, and could easily be imaginary.
I will stop here;-)) before the crucifixion begins.
If AI is ever to exist, we will have to overcome the garbage-in, garbage-out that pervades our knowledge base. If a logical machine bases its decisions on logic and facts, it will resemble a somewhat-to-severely autistic being. Study after study on decision making has highlighted the emotional, gut-based "Blink" (Malcolm Gladwell) style over, well, rationally laid-out planning. Why is that so? Is this world so absurd that machines will need to have absurdity programmed in... if that is even possible? You know, being able to tell who is lying seems to be detrimental to evolutionary success in humans; apparently that is why we are so bad at it.
We have to face that humans (whom most of us somehow deem to have intelligence) are inherently imperfect, a collection of mistakes and then corrections. No sane engineer would design the visual cells in our eyes, and in all other animals with a spinal cord, to point inward (unlike an octopus, whose cells point outward toward the incoming light), or attach the left side of our brains to the right side of our bodies, and so on.
Can machines evolve haphazardly to achieve excellence in our "real world", or, if practically perfect in design, would that make them next to useless?
This is just one of the odd problems we face in creating workable AI.
https://www.scientificamerican.com/article/misunderstood-crimes/
https://www.nature.com/articles/s41522-016-0009-7
https://books.google.fr/books?id=v21_AwAAQBAJ&pg=PA104&lpg=PA104&dq=eye+rods+and+cones+point+inward&source=bl&ots=2WJ8euULSg&sig=WsVwt5OerTpOfLlAID1IYswpvt0&hl=en&sa=X&redir_esc=y#v=onepage&q=eye%20rods%20and%20cones%20point%20inward&f=false
Dear Matthieu,
''The point is that such a law focuses on survival, not intelligence.''
The first forms of life succeeded at surviving and reproducing. Why would life forms evolve? Because they are always in an evolving environment made of other evolving life forms. Such dynamics predict that the life forms whose capacity for adaptation is greater, those that can learn, will be favored; thus those that are more intelligent will be favored (selected, and thus reproduce). So it is not EITHER one OR the other.
Frankly, AI is far less intelligent compared to the human brain, which was created by God. AI, created by the human brain, surely has flaws and laughable errors... but we cannot live without AI in the near future, just as we cannot live without WiFi, data plans, smartphones, the internet, online everything for now. You have to research both fields, human brain knowledge and AI knowledge, in order to find the truth... it will take you a year... God bless you.
If something can be programmed, then this is a demonstration that this something is stupid, just a recipe to be applied by a machine. Since in the human realm "intelligence" is the opposite of "stupidity", it cannot be programmed. Saying otherwise amounts to assuming that intelligence = stupidity, which would be stupid. Anyway, if something can be done through a stupid method, then it can be done by a machine, and this is good news, because we hate doing stupid tasks. The above reasoning is also the reason why AI is impossible in the literal sense of truly being "intelligence". AI should be called AS: Artificial Stupidity. The art of creating good AS needs a lot of intelligence.
NASA: the National Artificial Stupidity Association, created by computer scientist Arthur Boran. But his concept of AS is different from the one given above. Boran's goal is to generate a program that can accurately simulate the full variety of human stupidities.
http://perso.b2b2c.ca/~sarrazip/nasa.html
Dr. Brassard, you wrote: "My goal is to generate a program that can accurately simulate the full variety of human stupidities." What is the use of this 'program'? Regards, I.H.
Dear Imre,
I rephrased my last post in order to make it clearer. I provided in that post a concept of AS (Artificial Stupidity). I then googled "Artificial Stupidity" and ended up on the NASA page of Boran, http://perso.b2b2c.ca/~sarrazip/nasa.html, where you can read about his AS concept and goal. I do not personally take Boran's ideas very seriously; I simply found this NASA funny.
Regards
Dear Matthieu,
I commented on one sentence of your previous post
'' The point is that such a law focuses on survival, not intelligence.''
and the comments were to say that such laws select for intelligence in the long run. I did not say that humans are not intelligent, nor did I say that you said humans are not intelligent. Yes, intelligent beings can decide not to survive in some situations.
Dr. Sorli, I think AI = EL is an oversimplification. Among other things, AI (AGI, etc.) is also a form of learning and a new evolving asset for mankind... But I fully agree with you on your second statement. With regards, I.H.
Aleš ,
'' From evolutionary standpoint suicide happens when one recognizes that he/she can not contribute to society anymore. ''
Most people who commit suicide are not doing so because "I cannot contribute anymore". Usually their life is a nightmare, and not only is the nightmare painful, it is totally meaningless to them; they do not want it anymore, they hate it, hate themselves, and so want to end it, end themselves, and at that point LIFE has lost in that person. At that point they are far removed from the idea of contributing to society. All the other persons who end/sacrifice their life for a higher societal purpose do not commit suicide but sacrifice their life for others; they are heroes: LIFE has won.
P.S. Baby whose mother chose giving birth over chemotherapy has died
https://www.thestar.com/news/world/2017/09/21/baby-whose-mother-chose-giving-birth-over-chemotherapy-has-died.html